[CP2K-user] [CP2K:11513] Re: number of g-vectors for grid

hut... at chem.uzh.ch hut... at chem.uzh.ch
Wed Apr 3 08:24:58 UTC 2019

As CP2K is not aware of the cluster-node geometry (it only knows
about MPI ranks) this cannot be the source of the problem.
I assume you are stretching the algorithm for g-vector distribution
until it breaks; in other words, there is a bug.
However, I'm not able to reproduce it with the hardware available here.


Juerg Hutter                         Phone : ++41 44 635 4491
Institut für Chemie C                FAX   : ++41 44 635 6838
Universität Zürich                   E-mail: hut... at chem.uzh.ch
Winterthurerstrasse 190
CH-8057 Zürich, Switzerland

-----cp... at googlegroups.com wrote: -----
To: "cp2k" <cp... at googlegroups.com>
From: "Hans Pabst" 
Sent by: cp... at googlegroups.com
Date: 04/02/2019 06:06PM
Subject: Re: [CP2K:11513] Re: number of g-vectors for grid

Sorry, my explanation was perhaps not clear enough.

The issue is with the number of cluster nodes, not the number of MPI ranks. Of course, 128 MPI ranks are fine. For instance, I tried 160x48x2, 108x12x8, and some other configurations (read as Nodes x RanksPerNode x OmpThreads). As a side note, 108x12x8 even yields a total rank count that CP2K typically prefers (108x12 == 36x36, i.e. a square number). Back to my problem: I found that 256 cluster nodes work fine, but none of the configurations between 128 and 256 cluster nodes do. For the end user this translates into an economic disadvantage, given that fewer than 256 nodes could do the job.
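To illustrate the notation above, here is a minimal sketch (not part of the original report) that computes the total MPI rank count for the configurations mentioned and checks whether it is a perfect square, which the author notes CP2K typically prefers. The configuration list is taken from the message; everything else is illustrative.

```python
import math

# Configurations from the report: (nodes, ranks_per_node, omp_threads)
configs = [(160, 48, 2), (108, 12, 8)]

for nodes, rpn, omp in configs:
    ranks = nodes * rpn  # total MPI ranks; OpenMP threads do not add ranks
    root = math.isqrt(ranks)
    square = "a perfect square" if root * root == ranks else "not a perfect square"
    print(f"{nodes}x{rpn}x{omp}: {ranks} MPI ranks ({square})")
```

For example, 108x12 gives 1296 ranks, which is 36x36, while 160x48 gives 7680 ranks, which has no integer square root.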

