Dear Marcelle, dear Frederick,

Thank you very much for the rapid response! This helps a lot!

All the best
Josh

Frederick Stein wrote on Wednesday, 27 August 2025 at 13:00:27 UTC+2:
> Dear Josh,
>
> You should use the same EPS_SCF for the OUTER_SCF (you did not provide it, so CP2K uses the default of 1.0E-5) as for the inner SCF iteration (in your case 5.0E-7). The outer iteration did not converge because the inner SCF loop did not converge.
>
> HTH,
> Frederick
>
> Joshua Edzards wrote on Wednesday, 27 August 2025 at 12:48:34 UTC+2:
>> Dear CP2K community,
>>
>> I was running a single-point energy calculation with PBE0 on MOF5 with a hydrogen molecule inside. Somehow, after the third outer SCF loop, the calculation seems to have converged, but the next lines clearly state that it failed.
>>
>>  outer SCF iter =   3 RMS gradient =   0.58E-05 energy =      -1195.0423806694
>>  outer SCF loop converged in   3 iterations or   75 steps
>>
>>  *******************************************************************************
>>  *   ___                                                                       *
>>  *  /   \                                                                      *
>>  * [ABORT]                                                                     *
>>  *  \___/     SCF run NOT converged. To continue the calculation regardless,   *
>>  *    |       please set the keyword IGNORE_CONVERGENCE_FAILURE.               *
>>  *  O/|                                                                        *
>>  * /| |                                                                        *
>>  * / \                                                          qs_scf.F:611   *
>>  *******************************************************************************
>>
>> I was wondering why this happens. Additionally, I get an error message from Slurm, and I assume that this might be the cause of the problem.
>>
>> --------------------------------------------------------------------------
>> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
>> with errorcode 1.
>>
>> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
>> You may or may not see output from other processes, depending on
>> exactly when Open MPI kills them.
>> --------------------------------------------------------------------------
>> [c0365:144409] 191 more processes have sent help message help-mpi-api.txt / mpi-abort
>> [c0365:144409] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
>>
>> The Slurm script, the error file, and the CP2K input and output files are attached. Any help is appreciated. If more information is needed, I am happy to provide it.
>>
>> Thank you very much, and all the best
>> Josh
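For reference, Frederick's suggestion translated into the &SCF section of a CP2K input might look like the sketch below. Only the EPS_SCF values come from this thread; the MAX_SCF numbers and the OT settings are placeholders, not taken from Josh's attached input. The essential point is that &OUTER_SCF gets an explicit EPS_SCF equal to the inner threshold (5.0E-7) instead of falling back to the 1.0E-5 default.

  &SCF
    EPS_SCF 5.0E-7            ! inner SCF convergence threshold (from the thread)
    MAX_SCF 25                ! placeholder: inner steps per outer iteration
    &OT
      MINIMIZER DIIS          ! placeholder OT settings, not from the attached input
      PRECONDITIONER FULL_ALL
    &END OT
    &OUTER_SCF
      EPS_SCF 5.0E-7          ! match the inner EPS_SCF; left unset, the default is 1.0E-5
      MAX_SCF 20              ! placeholder: maximum number of outer iterations
    &END OUTER_SCF
  &END SCF

With matching thresholds, the outer loop is only reported as converged once the inner loop actually reaches 5.0E-7. If the inner loop still cannot get there, IGNORE_CONVERGENCE_FAILURE (mentioned in the abort message) lets the run continue, but the resulting energy should then be treated with caution.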