Hi Matt,

Good catch, I thought I was running with ANALYTICAL for all my calculations. I've been testing three different dispersion corrections with SCAN: D3-BJ, D3, and rVV10. The D3 run, which does use the analytical stress tensor, nevertheless runs into the same issue as the other two using numerical. I'll retry the other two with ANALYTICAL and see whether that makes a difference.

Regarding the cutoff, I've followed the approach on the CP2K website (https://www.cp2k.org/howto:converging_cutoff) while also keeping an eye on the charge density reported on the r- and g-space grids. The total energy does not tend towards a converged value, while the charge density on the grids stays constant at ~1.0E-08 across cutoffs in the range 500-1200 Ry. I'll take a look at the forces to see whether they converge more smoothly.

Thanks for your advice,
Martin

On Friday, November 5, 2021 at 2:18:17 PM UTC mattwa...@gmail.com wrote:
> Hello,
> I'd suggest the problem is likely
>
>     STRESS_TENSOR NUMERICAL
>
> This is a finite-difference approximation to the stress tensor and is massively expensive. Use ANALYTICAL if possible.
>
> SCAN might also need a very high cutoff for sensible stress tensor calculations.
> Matt
>
> On Thursday, 4 November 2021 at 12:24:42 UTC martin....@gmail.com wrote:
>> Hello all,
>>
>> I am continuing some work a master's student previously did on a layered MOF material. Calculations run smoothly with PBE-D3 for cell optimisations and NVT MD, but NPT runs hang after a small number of steps (the job sits idle until the walltime runs out, without erroring out). Running with TRACE TRUE on these jobs consistently shows the last line as an MPI communication (17 905 mp_alltoall_z11v start Hostmem: 818 MB GPUmem: 0 MB).
>>
>> I have tested the same calculation on different HPCs and versions of CP2K (5.1 through to 8.1, albeit all using central installs of MPI) and run into the same issue; is this just an MPI issue, or is there anything I can try on CP2K's end?
>>
>> Interestingly, cell optimisations using SCAN (CP2K 8.2) also hang (same endpoint in the TRACE output) after the first SCF cycle runs to completion. I have attached inputs for the PBE-D3 NPT run and the SCAN geometry optimisation.
>>
>> Best regards,
>> Martin
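
[Editor's note: for readers following the thread, below is a minimal sketch of the &FORCE_EVAL fragment the settings above refer to. The cutoff values and section layout are illustrative assumptions, not taken from the attached inputs.]

&FORCE_EVAL
  METHOD Quickstep
  ! Analytical stress tensor, as suggested above; NUMERICAL is a finite-difference
  ! approximation and is massively more expensive.
  STRESS_TENSOR ANALYTICAL
  &DFT
    &MGRID
      ! Placeholder values: in the convergence test described above, CUTOFF was
      ! scanned over 500-1200 Ry while monitoring the total energy, forces, and
      ! the charge density reported on the r- and g-space grids.
      CUTOFF 900
      REL_CUTOFF 60
    &END MGRID
  &END DFT
&END FORCE_EVAL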