<div dir="ltr"><div>Dear Mauro,</div><div><br></div><div>yes, but I just meant a general way of reporting time in computations.</div><div><br></div><div>Anyway, as suggested by Fabian, using square number of cores helps. <br></div><div>And for my more than 1000 atoms system, scaling in xTB is as follows:</div><div></div><div>cores time_per_MD_step</div><div>100 4<br></div><div>144 3</div><div><br></div><div>Regards</div><div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Manjusha<br></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Feb 22, 2021 at 11:51 AM Mauro Sgroi <<a href="mailto:maurofran...@gmail.com">maurofran...@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Dear
Dear Manjusha,
If I understand correctly, the time per MD step should correspond to the times I am reporting divided by 100 (e.g. 7818 s / 100 steps ≈ 78 s per step on 28 cores).

Dear Fabian,
Thanks a lot for the information. I will try 25 cores instead of 28; I have been using 28 because each of our nodes has 28 cores.

Is this scaling related to the xTB method itself, or to its current implementation in CP2K? Could the linear-scaling approach improve the situation?

Thanks a lot and best regards,
Mauro.

On Monday, 22 February 2021 at 11:44:24 UTC+1 fa...@gmail.com wrote:

Dear Mauro,

This is consistent with my own observations of xTB scaling: because of the increasing cost of communication, more CPUs do not necessarily speed up the simulation. I don't use more than 25 CPUs with xTB unless I have well above 1000 atoms.

Please note that the number of MPI ranks should be a square number, so 25 CPUs are probably faster than 28 unless you are using k-points.

Cheers,
Fabian
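The square-rank rule is easy to honor automatically in a job script. A minimal Python sketch, assuming you simply want the largest perfect-square rank count that fits on your allocation (the function name is illustrative, not part of CP2K or any MPI launcher):

```python
import math

def largest_square_ranks(available_cores: int) -> int:
    """Largest perfect-square MPI rank count not exceeding available_cores."""
    root = math.isqrt(available_cores)  # integer square root (Python >= 3.8)
    return root * root

print(largest_square_ranks(28))   # 25 -> prefer 25 ranks on a 28-core node
print(largest_square_ranks(144))  # 144 -> already a perfect square
```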
On Monday, 22 February 2021 at 11:10:11 UTC+1 chu...@gmail.com wrote:

Hi Mauro,

It would be easier to look into your scaling issue if you reported the time per MD step. I am also running MD with xTB.

Regards,
Manjusha

On Mon, Feb 22, 2021 at 10:59 AM Mauro Sgroi <mauro...@gmail.com> wrote:

Dear all,
I'm testing the xTB code on a liquid electrolyte containing a Li ion, running MD on a cell of 728 atoms. I obtain disappointing scaling with the number of cores. The following are the times for 100 MD steps:

cores   time (s)
28      7818
56      7364
84      6529
112     7493

My input file can be downloaded here:

https://drive.google.com/file/d/1KvZz5x6FwgOP3dzH6fePghMQBineIlfE/view?usp=sharing

Is this the expected behaviour, or is something wrong with my compilation of the code or with my input file? The HPC facility has an InfiniBand network and a fast shared filesystem.

Thanks a lot in advance and best regards,
Mauro Sgroi.
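These timings translate directly into strong-scaling speedup and parallel efficiency relative to the 28-core run. A minimal Python sketch using the numbers from the table above (nothing here is CP2K-specific):

```python
# Speedup and parallel efficiency vs. the 28-core baseline,
# from the reported wall times for 100 MD steps.
timings = {28: 7818, 56: 7364, 84: 6529, 112: 7493}  # cores -> seconds

base_cores = 28
base_time = timings[base_cores]
for cores in sorted(timings):
    speedup = base_time / timings[cores]  # observed speedup
    ideal = cores / base_cores            # ideal linear speedup
    print(f"{cores:4d} cores: speedup {speedup:.2f}x "
          f"(ideal {ideal:.2f}x), efficiency {speedup / ideal:.0%}")
```

Efficiency drops to about 53% at 56 cores and 26% at 112 cores, which quantifies the disappointing scaling. By contrast, Manjusha's per-step times above (4 s on 100 cores, 3 s on 144 cores) correspond to a speedup of 4/3 ≈ 1.33 against an ideal of 1.44, i.e. roughly 93% efficiency, consistent with Fabian's advice about square rank counts.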