GPW calculations with hybrid functionals
Simone Piccinin
piccini... at gmail.com
Mon Dec 7 14:17:02 UTC 2009
Dear Manuel,
thanks a lot for your comments.
Could you please explain to me why the MOLOPT basis sets are very
expensive when used together with HFX?
Are they more extended than the corresponding non-molecularly
optimized basis sets?
Best wishes,
Simone
On 3 Dec, 12:23, mguidon <manuel... at gmail.com> wrote:
> Hi Simone.
>
> > I would like to ask your help in understanding if the calculation is
> > setup properly. I have no previous experience in the use of hybrid
> > functionals and any suggestion would be very helpful.
>
> The input seems to be fine.
>
> > I find that this calculation takes (roughly) 1 hour per ionic step,
> > using 128 CPUs on an IBM SP6, and I'd like to know if this sounds
> > reasonable to you or not. Later, I'd like to increase the size of the
> > basis set and include diffuse functions, since I have an anion, and
> > experiment a bit with different hybrids, so I'd be happy to know if
> > there is any way I can reduce the computation time (other than
> > increasing the number of processors).
>
> MOLOPT basis sets are very expensive when used together with HFX. If
> you do not have a large supercomputer at hand, I would suggest you go
> for non-molecularly optimized basis sets. You can expect at least a
> tenfold speed-up in that case.
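>
> As a rough sketch, that change amounts to editing the BASIS_SET keyword
> in the &KIND sections of your input (the element, basis, and potential
> names below are only examples; use whatever matches your system and the
> basis-set files you load):
>
>   &KIND O
>     ! before: BASIS_SET DZVP-MOLOPT-SR-GTH
>     BASIS_SET DZVP-GTH
>     POTENTIAL GTH-PBE
>   &END KIND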
>
> > For example, to exploit the "Screen on an initial density matrix"
> > option, can I use the restart file of the PBE calculation, or do I
> > have to do one iteration with the current setup and then use its
> > restart file?
>
> You can provide a converged PBE wave function as a restart, because
> its density matrix is typically an upper bound for the hybrid one.
> Just uncomment SCREEN_ON_INITIAL_P TRUE (don't forget to provide the
> wave function, otherwise you screen on the initial guess!).
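>
> Assuming the converged PBE wave function was written to a file named
> my_pbe-RESTART.wfn (the file name is just a placeholder), the relevant
> part of the input would look roughly like this:
>
>   &DFT
>     WFN_RESTART_FILE_NAME my_pbe-RESTART.wfn
>     &SCF
>       SCF_GUESS RESTART
>     &END SCF
>     &XC
>       &HF
>         &SCREENING
>           ! screen two-electron integrals on the restarted density
>           SCREEN_ON_INITIAL_P TRUE
>         &END SCREENING
>       &END HF
>     &END XC
>   &END DFT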
>
> > Among other things, I do not understand how to set the variable
> > MAX_MEMORY. The machine I'm using has 128 GB of shared memory per
> > node, where each node contains 32 processors, so I have up to
> > 4 x 128 GB of memory when asking for 128 processors. What is the
> > rationale for choosing the value of MAX_MEMORY? According to the
> > online description of this variable, it "defines the maximum amount
> > of memory [MB] to be consumed by the full HFX module". If I set it
> > too small, will the code dump to disk whatever doesn't fit in
> > MAX_MEMORY, thus slowing down the calculation?
>
> MAX_MEMORY defines the total amount of memory that the HFX module can
> use per MPI process. In your case, you have 128/32 = 4 GB of memory
> per process that can be consumed by the full CP2K program (and the
> OS). I suggest you start with 2.5 GB for the HFX module (i.e. the rest
> of CP2K has some 1 GB left). If you run out of memory, i.e. if not all
> integrals fit into that amount, CP2K stops storing them and calculates
> the remaining ones on the fly, so the in-core steps become slower.
> Alternatively, you can also use disk storage, but this may slow down
> the calculation, depending on the I/O capabilities of your hardware.
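>
> In the input, that suggestion would translate into something like the
> following (MAX_MEMORY is given in MB, so 2.5 GB is roughly 2500; the
> disk keyword is optional and shown only for illustration):
>
>   &XC
>     &HF
>       &MEMORY
>         ! per-MPI-process memory budget for stored integrals, in MB
>         MAX_MEMORY 2500
>         ! optionally allow spilling integrals to disk, in MB:
>         ! MAX_DISK_SPACE 50000
>       &END MEMORY
>     &END HF
>   &END XC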
>
> You can get an impression of how much memory the integrals need by
> inspecting the HFX_MEM_INFO printouts in the output file.
>
> Cheers
>
> Manuel