[CP2K-user] [CP2K:14808] Scaling with xTB

fa...@gmail.com fabia... at gmail.com
Mon Feb 22 10:59:25 UTC 2021


Strong scaling always hits a ceiling. Since xTB is a relatively cheap 
method (compared to, e.g., DFT), the number of CPUs one can effectively 
utilize is low. I suspect that this is inherent to xTB and has nothing to 
do with CP2K.
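
For example, from the timings you report below: going from 28 to 112 cores 
(4x more) only brings 100 MD steps from 7818 s down to 7493 s, a speedup of 
about 1.04, i.e. a parallel efficiency of roughly 26% relative to the 
28-core run.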

You can try different SCF methods to reduce the cost, but I doubt that the 
scaling will change substantially.
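
For what it's worth, the kind of &SCF block I have in mind looks roughly like 
the one below. This is only a sketch: the thresholds are illustrative, and 
whether OT actually beats the default diagonalization for your electrolyte is 
something you would have to test.

  &SCF
    EPS_SCF 1.0E-6
    MAX_SCF 30
    &OT                  ! orbital transformation instead of diagonalization
      MINIMIZER DIIS
      PRECONDITIONER FULL_SINGLE_INVERSE
    &END OT
    &OUTER_SCF
      EPS_SCF 1.0E-6
      MAX_SCF 10
    &END OUTER_SCF
  &END SCF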

Cheers,
Fabian

On Monday, 22 February 2021 at 11:51:07 UTC+1 mauro... at gmail.com wrote:

> Dear Manjusha,
> If I understand correctly, the time per MD step should correspond to the 
> times that I'm reporting divided by 100.
>
> Dear Fabian,
> Thanks a lot for the information. I will check the use of 25 cores instead 
> of 28. I'm using 28 since our nodes have 28 cores each.
>
> Is this scaling related to the xTB method itself, or is it linked to its 
> current implementation in CP2K? Can the linear-scaling approach improve the 
> situation?
>
> Thanks a lot and best regards,
> Mauro.
>
> On Monday, 22 February 2021 at 11:44:24 UTC+1 fa... at gmail.com 
> wrote:
>
>> Dear Mauro,
>>
>> This is consistent with my own observations of the scaling of xTB. 
>> Because of the increasing cost of communication, more CPUs don't necessarily 
>> speed up the simulation. I don't use more than 25 CPUs with xTB unless I 
>> have well above 1000 atoms.
>>
>> Please note that the number of MPI ranks should be a square number. 25 
>> CPUs are probably faster than 28 unless you are using k-points.
>>
>> Cheers,
>> Fabian
>>
>> On Monday, 22 February 2021 at 11:10:11 UTC+1 chu... at gmail.com wrote:
>>
>>> Hi Mauro,
>>>
>>> It will be easier to look into your scaling issue if you report the time 
>>> for each MD step. I am also doing MD with xTB.
>>>
>>> Regards
>>> Manjusha
>>>
>>>
>>> On Mon, Feb 22, 2021 at 10:59 AM Mauro Sgroi <mauro... at gmail.com> 
>>> wrote:
>>>
>>>> Dear all,
>>>> I'm testing the xTB code on a liquid electrolyte containing a Li ion.
>>>> I'm running MD on a cell containing 728 atoms.
>>>> I obtain disappointing scaling with the number of cores. The 
>>>> following are the times for 100 MD steps:
>>>>
>>>> cores  Time (s)
>>>> 28         7818 
>>>> 56         7364
>>>> 84         6529
>>>> 112       7493
>>>>
>>>> My input file can be downloaded here: 
>>>>
>>>>
>>>> https://drive.google.com/file/d/1KvZz5x6FwgOP3dzH6fePghMQBineIlfE/view?usp=sharing
>>>>
>>>> Is this the expected behaviour? Or is there something wrong with my 
>>>> compilation of the code or with the input file? 
>>>> The HPC facility has an InfiniBand network and a fast shared 
>>>> filesystem.
>>>>
>>>> Thanks a lot in advance and best regards,
>>>> Mauro Sgroi.
>>>>
>>>>
>>>>
>>>