[CP2K:259] Re: flags for reducing memory consumption for quickstep and quickstep QM/MM
Teodoro Laino
teodor... at gmail.com
Tue Sep 11 17:11:48 UTC 2007
>
> ok, so we should get into the habit of always using the full
> "keyword-path" so that there are no misunderstandings.
>
yep!
>
> i now also tested FORCE_EVAL/DFT/QS/RS_GRID and found
> that using DISTRIBUTED actually increases(!) memory usage
> (and since linux does lazy memory allocation, RSS shows
> actual used/touched memory pages and not only the reserved
> address space).
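(on linux, the touched pages can be checked directly, e.g. with

  grep VmRSS /proc/<pid>/status

which reports resident pages rather than reserved address space.)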
Yep, I hope people who know that part better than I do will look into
that.
It's a little strange that you observe an increased memory usage.
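For anyone following along, the section in question should look roughly
like this (a sketch reconstructed from the keyword path above; the exact
syntax may differ between versions):

  &FORCE_EVAL
    &DFT
      &QS
        ! switch between replicated and distributed realspace grids
        RS_GRID DISTRIBUTED  ! or REPLICATED
      &END QS
    &END DFT
  &END FORCE_EVAL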
Teo
>
> i have been trying the H2O-32.inp from tests/QS/benchmarks
> and explicitly added the RS_GRID flag with both values,
> then ran each across 6 cpus. in this case there were
> actually (very small) differences in total energies (as is
> to be expected) and also different numbers of calls to
> different MP_xxx subroutines.
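For reference, reproducing this should amount to something like the
following (the binary name and MPI launcher depend on your build):

  cd tests/QS/benchmarks
  mpirun -np 6 cp2k.popt H2O-32.inp > H2O-32.out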
>
> cheers,
> axel.
>
>>
>> Teo
>>
>> On 11 Sep 2007, at 18:24, Axel wrote:
>>
>>> i can confirm this on x86_64 using intel 10, OpenMPI, and MKL.
>>> i tested with FIST. i noticed, however, that there are two entries
>>> for ewald: one in /FORCE_EVAL/DFT/POISSON/EWALD and one in
>>> /FORCE_EVAL/MM/POISSON/EWALD, and both claim to be applicable only
>>> to classical atoms. it would be nice if somebody could clarify this.
>>
>>> out of the three EWALD options, SPME (which i have been using
>>> already) seems to be the least memory hungry, followed by plain
>>> EWALD and PME.
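For concreteness, the classical setting under discussion sits in the MM
section; a minimal sketch (the ALPHA and GMAX values are placeholders,
not taken from these runs):

  &FORCE_EVAL
    &MM
      &POISSON
        &EWALD
          EWALD_TYPE SPME  ! alternatives: EWALD, PME
          ALPHA 0.35       ! placeholder splitting parameter
          GMAX 25          ! placeholder reciprocal-space grid size
        &END EWALD
      &END POISSON
    &END MM
  &END FORCE_EVAL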
>>
>>> what strikes me as odd is that in the communication summary, there
>>> are the exact same number of calls to the MP_xxx subroutines in
>>> both cases. i would have expected that in the distributed case,
>>> there is a (slightly?) different communication pattern than with
>>> replicated. could it be that the flag is not correctly handed down?
>>> it appears in the restart files, so i assume it is parsed ok.
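(a quick way to confirm that the flag survives parsing is to look for
it in the restart file, e.g.

  grep RS_GRID H2O-32-1.restart

where the restart file name follows from the PROJECT keyword.)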