flags for reducing memory consumption for quickstep and quickstep QM/MM

Axel akoh... at gmail.com
Tue Sep 11 17:08:23 UTC 2007


hi teo,

On Sep 11, 12:40 pm, Teodoro Laino <teodor... at gmail.com> wrote:
> Very good Axel,
>
> let me lay out the situation:
>
> there's an RS_GRID keyword in the EWALD section (both in MM and DFT),
> but that one affects only the EWALD calculations.
> In particular, EWALD in MM should be quite clear; EWALD in DFT is
> used for DFTB.
>
> the other place with an RS_GRID keyword is the &QS section, and
> that one should affect the memory in your case.

ok, so we should get into the habit of always using the full
"keyword-path" so that there are no misunderstandings.

> if you use RS_GRID in the EWALD, it is properly parsed but has no
> effect on your GPW calculation.

i was testing FIST, i.e. classical MD, against
FORCE_EVAL/MM/POISSON/EWALD/RS_GRID with all three
variants of ewald.
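
i.e. something like this (sketch only; the ALPHA/GMAX numbers are
made up and would need tuning for a real system):

  &FORCE_EVAL
    &MM
      &POISSON
        &EWALD
          EWALD_TYPE SPME       ! also tried EWALD and PME
          ALPHA 0.35
          GMAX 64
          RS_GRID DISTRIBUTED   ! and REPLICATED in a second run
        &END EWALD
      &END POISSON
    &END MM
  &END FORCE_EVAL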

i now also tested FORCE_EVAL/DFT/QS/RS_GRID and found that
using DISTRIBUTED actually increases(!) memory usage (and
since linux does lazy memory allocation, RSS shows the
actually used/touched memory pages, not just the reserved
address space, so the increase is real).

i have been trying the H2O-32.inp from tests/QS/benchmarks,
explicitly added the RS_GRID flag with both values, and then
ran each case across 6 cpus. this time there actually were
(very small) differences in total energies (as is to be expected,
since a different grid distribution changes the floating-point
summation order) and also different numbers of calls to the
various MP_xxx subroutines.
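
for the record, the change to H2O-32.inp was just this (sketch;
everything else left untouched, each run started with something
like 'mpirun -np 6 cp2k.popt H2O-32.inp'):

  &FORCE_EVAL
    &DFT
      &QS
        RS_GRID DISTRIBUTED   ! REPLICATED in the other run
      &END QS
    &END DFT
  &END FORCE_EVAL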

cheers,
   axel.

>
> Teo
>
> On 11 Sep 2007, at 18:24, Axel wrote:
>
> > i can confirm this on x86_64 using intel 10, OpenMPI, and MKL.
> > i tested with FIST. i noticed, however, that there are two entries
> > for ewald: one in /FORCE_EVAL/DFT/POISSON/EWALD and one in
> > /FORCE_EVAL/MM/POISSON/EWALD, and both claim to be applicable only
> > to classical atoms. it would be nice if somebody could clarify this.
>
> > out of the three EWALD options, SPME (which i have been using
> > already) seems to be the least memory-hungry, followed by plain
> > EWALD and PME.
>
> > what strikes me as odd is that in the communication summary, the
> > number of calls to the MP_xxx subroutines is exactly the same in
> > both cases. i would have expected that in the distributed case
> > there is a (slightly?) different communication pattern than with
> > replicated. could it be that the flag is not correctly handed down?
> > it appears in the restart files, so i assume it is parsed ok.



