[CP2K-user] [CP2K:21783] Re: SCF converges for outer circle but SCF does not converge in general
Joshua Edzards
edzards.joshua at gmail.com
Wed Aug 27 11:58:22 UTC 2025
Dear Marcelle, dear Frederick,
thank you very much for the rapid response! This helps a lot!
All the best
Josh
Frederick Stein wrote on Wednesday, 27 August 2025 at 13:00:27 UTC+2:
> Dear Josh,
> You should use the same EPS_SCF for the OUTER_SCF (you did not provide it,
> so CP2K uses the default of 1.0E-5) as for the inner SCF iteration (in your
> case 5.0E-7). The calculation aborted because the inner SCF loop did not
> converge.
> HTH,
> Frederick
>
> Joshua Edzards wrote on Wednesday, 27 August 2025 at 12:48:34 UTC+2:
>
>> Dear CP2K community,
>>
>> I was running a single-point energy calculation with PBE0 on MOF5 with a
>> hydrogen molecule inside. Somehow, after the third outer SCF loop, the
>> calculation seems to have converged. But the next line clearly states that
>> it failed.
>>
>> outer SCF iter =    3 RMS gradient =  0.58E-05 energy =  -1195.0423806694
>> outer SCF loop converged in 3 iterations or 75 steps
>>
>>
>> *******************************************************************************
>> *   ___                                                                       *
>> *  /   \                                                                      *
>> * [ABORT]                                                                     *
>> *  \___/     SCF run NOT converged. To continue the calculation regardless,   *
>> *    |       please set the keyword IGNORE_CONVERGENCE_FAILURE.               *
>> *  O/|                                                                        *
>> * /| |                                                                        *
>> * / \                                                          qs_scf.F:611   *
>> *******************************************************************************
>>
>> I was wondering why this happens. Additionally, I get an error message
>> from Slurm, and I assume that this might cause the problem.
>>
>> --------------------------------------------------------------------------
>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>> with errorcode 1.
>>
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> --------------------------------------------------------------------------
>> [c0365:144409] 191 more processes have sent help message help-mpi-api.txt / mpi-abort
>> [c0365:144409] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
>>
>> The Slurm script, the error, and also CP2K in- and output files are
>> attached. Any help is appreciated. Also, if more information is needed, I
>> am happy to provide it.
>>
>> Thank you very much, and all the best
>> Josh
>>
>
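
Frederick's fix above amounts to setting the same EPS_SCF in the &OUTER_SCF subsection as in the enclosing &SCF section. A minimal sketch of the relevant input fragment, using the 5.0E-7 threshold mentioned in the thread (the MAX_SCF values here are illustrative, not taken from Josh's input):

```
&SCF
  EPS_SCF 5.0E-7        ! inner SCF convergence threshold
  MAX_SCF 25            ! illustrative: steps per inner SCF loop
  &OUTER_SCF
    EPS_SCF 5.0E-7      ! match the inner EPS_SCF; if omitted, CP2K defaults to 1.0E-5
    MAX_SCF 10          ! illustrative: number of outer iterations
  &END OUTER_SCF
&END SCF
```

With mismatched thresholds, the outer loop can report convergence at its looser 1.0E-5 criterion while the inner loop has never reached 5.0E-7, which is exactly the "converged ... NOT converged" contradiction seen in the output.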