[CP2K:10742] Re: How to accelerate the calculation of pdos?
Tianshu Jiang in Beijing
jts2t... at gmail.com
Wed Sep 19 11:25:16 UTC 2018
Hi Krack,
I submitted a job as you suggested. Now the output information has changed a
little, but the run time of the calculation is still nearly the same.
The output of the non-parallel run:

GLOBAL| Total number of message passing processes      1
GLOBAL| Number of threads for this process            36

And the output of the parallel calculation:

GLOBAL| Total number of message passing processes      4
GLOBAL| Number of threads for this process             1
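
(For comparison, launch lines along the following sketch would typically
produce the two headers above; mpiexec and the file names are assumptions to
adapt to the local installation:)

    # 1 MPI process x 36 OpenMP threads:
    export OMP_NUM_THREADS=36
    mpiexec -n 1 cp2k.psmp -i input.inp -o output.out

    # 4 MPI processes x 1 OpenMP thread each:
    export OMP_NUM_THREADS=1
    mpiexec -n 4 cp2k.psmp -i input.inp -o output.out
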
On Monday, 17 September 2018 at 23:29:41 UTC+8, Matthias Krack wrote:
>
> Yes, you may set that environment variable in your .bashrc file or each
> time you launch a CP2K job (which would override the definition in
> .bashrc), as you like. Finally, check the CP2K output file to see if it
> has had the desired effect.
>
>
>
> *From:* cp... at googlegroups.com <cp... at googlegroups.com> *On Behalf
> Of* Tianshu Jiang in Beijing
> *Sent:* Monday, 17 September 2018 10:00
> *To:* cp2k <cp... at googlegroups.com>
> *Subject:* Re: [CP2K:10742] Re: How to accelerate the calculation of pdos?
>
>
>
> Hi Krack,
>
>
>
> Do you mean I should add the statement export OMP_NUM_THREADS=1 to my
> .bashrc or to some other file?
>
>
> On Sunday, 16 September 2018 at 23:15:20 UTC+8, Matthias Krack wrote:
>
> Hi
>
>
>
> I suggest that you ask one of your sysadmins or an experienced user of
> your cluster system how to launch a CP2K job properly using the desired
> resources, especially if you did not understand my hints.
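>
> If your cluster happens to use a batch system such as SLURM (an
> assumption; your sysadmins will know the actual setup), a job script
> could look roughly like this sketch:
>
>     #!/bin/bash
>     #SBATCH --ntasks=12          # number of MPI ranks
>     #SBATCH --cpus-per-task=1    # OpenMP threads per rank
>     export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
>     srun cp2k.psmp -i input.inp -o output.out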
>
>
>
> Matthias
>
>
>
> *From:* cp... at googlegroups.com <cp... at googlegroups.com> *On Behalf
> Of* Tianshu Jiang in Beijing
> *Sent:* Sunday, 16 September 2018 08:58
> *To:* cp2k <cp... at googlegroups.com>
> *Subject:* Re: [CP2K:10738] Re: How to accelerate the calculation of pdos?
>
>
>
> Hi Krack,
>
> Thanks for your patience!
>
> How should I use more cores to accelerate the calculation? Should I add
> some statements to the CP2K input file?
>
> On Friday, 14 September 2018 at 15:27:21 UTC+8, Matthias Krack wrote:
>
> Hi
>
>
>
> As reported in the CP2K output headers
>
>
>
> GLOBAL| Total number of message passing processes      1
> GLOBAL| Number of threads for this process             36
>
>
>
> both runs were not MPI parallel but only OpenMP parallel, using the same
> resources, i.e. one MPI process and 36 OpenMP threads. Thus it is not
> surprising that you observed no acceleration, since you used the same
> resources. Moreover, the use of more than 8 threads per (MPI) process is
> rarely beneficial for CP2K runs and will rather result in a slowdown than
> a speedup. You have to launch the runs properly, e.g. using something like
> "mpiexec -n 12 cp2k.popt", which depends, of course, on your installation.
> If you are using a cp2k.psmp executable, then you should additionally set
> "export OMP_NUM_THREADS=1".
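>
> As a minimal sketch (the mpiexec syntax and the file names are
> assumptions to adapt to your installation):
>
>     # pure MPI on a 36-core node, one rank per core:
>     export OMP_NUM_THREADS=1
>     mpiexec -n 36 cp2k.psmp -i input.inp -o output.out
>
>     # hybrid alternative: 6 MPI ranks x 6 OpenMP threads each
>     export OMP_NUM_THREADS=6
>     mpiexec -n 6 cp2k.psmp -i input.inp -o output.out
>
> Both variants occupy all 36 cores; which split is faster has to be tested
> for the given system.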
>
>
>
> Matthias
>
>
>
> *From:* cp... at googlegroups.com <cp... at googlegroups.com> *On Behalf
> Of* Tianshu Jiang in Beijing
> *Sent:* Friday, 14 September 2018 03:50
> *To:* cp2k <cp... at googlegroups.com>
> *Subject:* [CP2K:10729] Re: How to accelerate the calculation of pdos?
>
>
>
> Hi Krack, thanks for your reply.
>
> In the attachment, bilayerIso.out is the file output by the 1-core run and
> bilayerIsoPara.out is the file output by the 12-core run.
>
> I have no idea where the problem is.
>
> On Wednesday, 12 September 2018 at 19:58:50 UTC+8, Matthias Krack wrote:
>
> Hi Tianshu Jiang
>
> Without providing the CP2K output files of your 1-core and 12-core runs,
> it is quite unlikely that you will get any reasonable hint from someone in
> this forum.
>
> Matthias
>
> On Wednesday, 12 September 2018 04:41:07 UTC+2, Tianshu Jiang in Beijing
> wrote:
>
> Hi everyone in the CP2K community,
>
>
>
> I am using CP2K to calculate the PDOS of graphene, but the time spent
> completing the calculation is the same whether I use 1 core or 12 cores.
>
> The version I compiled is Linux-x86-64-gfortran, and I use cp2k.psmp to
> run the job.
>
> From the *.out files I see that in both situations (1 core and 12 cores)
> the job finished about half an hour after it began.
>
> My question is: how can I accelerate the calculation using parallel
> computing?
>
>
>
> The following is my input file. Thanks for your reply!
>
> &GLOBAL
>   PROJECT trilayerABCIso
>   RUN_TYPE ENERGY
>   PRINT_LEVEL MEDIUM
> &END GLOBAL
>
> &FORCE_EVAL
>   METHOD Quickstep
>   &DFT
>     BASIS_SET_FILE_NAME BASIS_MOLOPT
>     POTENTIAL_FILE_NAME POTENTIAL
>     &POISSON
>       PERIODIC XYZ
>     &END POISSON
>     &SCF
>       SCF_GUESS ATOMIC
>       EPS_SCF 1.0E-6
>       MAX_SCF 300
>       # The following settings help with convergence:
>       ADDED_MOS 100
>       CHOLESKY INVERSE
>       &SMEAR ON
>         METHOD FERMI_DIRAC
>         ELECTRONIC_TEMPERATURE [K] 300
>       &END SMEAR
>       &DIAGONALIZATION
>         ALGORITHM STANDARD
>         EPS_ADAPT 0.01
>       &END DIAGONALIZATION
>       &MIXING
>         METHOD BROYDEN_MIXING
>         ALPHA 0.2
>         BETA 1.5
>         NBROYDEN 8
>       &END MIXING
>     &END SCF
>     &XC
>       &XC_FUNCTIONAL PBE
>       &END XC_FUNCTIONAL
>     &END XC
>     &PRINT
>       &PDOS
>         # print all projected DOS available:
>         NLUMO -1
>         # split the density by quantum number:
>         COMPONENTS
>       &END PDOS
>       &E_DENSITY_CUBE ON
>         STRIDE 1 1 1
>       &END E_DENSITY_CUBE
>     &END PRINT
>   &END DFT
>
>   &SUBSYS
>     &CELL
>       # create a hexagonal unit cell:
>       ABC [angstrom] 2.4612 2.4612 26.72
>       ALPHA_BETA_GAMMA 90. 90. 60.
>       SYMMETRY HEXAGONAL
>       PERIODIC XYZ
>       # and replicate this cell (see text):
>       MULTIPLE_UNIT_CELL 6 6 1
>     &END CELL
>     &TOPOLOGY
>       # also replicate the topology (see text):
>       MULTIPLE_UNIT_CELL 6 6 1
>     &END TOPOLOGY
>     &COORD
>       SCALED
>       # ABC stacked
>       C 1./3 1./3 0.
>       C 0.   3./3 0.
>       C 1./3 1./3 1./8
>       C 2./3 2./3 1./8
>       C 2./3 2./3 2./8
>       C 3./3 0.   2./8
>     &END COORD
>     &KIND C
>       ELEMENT C
>       BASIS_SET TZVP-MOLOPT-GTH
>       POTENTIAL GTH-PADE-q4
>     &END KIND
>   &END SUBSYS
> &END FORCE_EVAL