[CP2K:10729] Re: How to accelerate the calculation of pdos ?

Tianshu Jiang in Beijing jts2t... at gmail.com
Sun Sep 16 07:04:52 UTC 2018


Hi jgh,
Thanks for the reminder; now I see where the problem is.
Maybe I did not specify the 12 threads; I only told the cluster to use 12
cores to run the job.
How can I specify that the job should use 12 threads? In the CP2K
input file?
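For reference, the MPI rank / OpenMP thread split is chosen by the launch environment and command line, not inside the CP2K input file. A minimal sketch, assuming a generic `mpirun` and the `cp2k.psmp` binary; the input file name follows this thread, and the actual launch lines are shown as comments since they depend on the cluster:

```shell
# Pure MPI: 12 ranks with 1 OpenMP thread each.
export OMP_NUM_THREADS=1
# mpirun -np 12 cp2k.psmp -i trilayerABCIso.inp -o trilayerABCIso.out

# Hybrid MPI/OpenMP: 1 rank with 12 threads.
export OMP_NUM_THREADS=12
# mpirun -np 1 cp2k.psmp -i trilayerABCIso.inp -o trilayerABCIso.out

# Either way, the layout actually used can be verified from the GLOBAL
# banner in the output file.  A stand-in output file illustrates the check:
cat > sample.out <<'EOF'
 GLOBAL| Total number of message passing processes                            12
 GLOBAL| Number of threads for this process                                    1
EOF
grep "GLOBAL|" sample.out
```

If the banner reports 1 message passing process and many threads, the job was launched as a single MPI rank, whatever the scheduler reservation said.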

On Friday, 14 September 2018 at 15:28:07 UTC+8, jgh wrote:
>
> Hi 
>
> I see in both of your outputs that you are running with 
> 1 MPI process and 36 OpenMP threads 
>
>  GLOBAL| Total number of message passing processes                            1 
>  GLOBAL| Number of threads for this process                                  36 
>
>  GLOBAL| Total number of message passing processes                            1 
>  GLOBAL| Number of threads for this process                                  36 
>
> With this setting you will get identical outputs. I don't see where your 
> claim of 12 threads for the second run comes from. 
>
> regards 
>
> Juerg Hutter 
> -------------------------------------------------------------- 
> Juerg Hutter                         Phone : ++41 44 635 4491 
> Institut für Chemie C                FAX   : ++41 44 635 6838 
> Universität Zürich                   E-mail: hut... at chem.uzh.ch 
> Winterthurerstrasse 190 
> CH-8057 Zürich, Switzerland 
> --------------------------------------------------------------- 
>
> -----cp... at googlegroups.com wrote: ----- 
> To: "cp2k" <cp... at googlegroups.com> 
> From: "Tianshu Jiang in Beijing" 
> Sent by: cp... at googlegroups.com 
> Date: 09/14/2018 03:49AM 
> Subject: [CP2K:10729] Re: How to accelerate the calculation of pdos ? 
>
> Hi Krack, thanks for your reply. 
> In the attachment, bilayerIso.out is the output from the 1-core run and 
> bilayerIsoPara.out is the output from the 12-core run. 
> I have no idea where the problem is. 
>
> On Wednesday, 12 September 2018 at 19:58:50 UTC+8, Matthias Krack wrote: 
> Hi Tianshu Jiang 
>
> without providing the CP2K output files of your 1-core and 12-cores runs, 
> it is quite unlikely that you will get any reasonable hint from someone in 
> this forum. 
>
> Matthias 
>
> On Wednesday, 12 September 2018 04:41:07 UTC+2, Tianshu Jiang in Beijing 
>  wrote: 
> Hi everyone in cp2k community, 
>
> I am using CP2K to calculate the PDOS of graphene, but the time spent 
> completing the calculation is the same whether I use 1 core or 12 cores. 
> The version I compiled is Linux-x86-64-gfortran, and I use cp2k.psmp to 
> run the job. 
> In both situations (1 core and 12 cores), the *.out file shows that the 
> job finished half an hour after it began. 
> My question is: how can I accelerate the calculation using parallel 
> computing? 
>
> The following is my inputfile. Thanks for your reply ! 
> &GLOBAL 
>   PROJECT trilayerABCIso 
>   RUN_TYPE ENERGY 
>   PRINT_LEVEL MEDIUM 
> &END GLOBAL 
>
> &FORCE_EVAL 
>   METHOD Quickstep 
>   &DFT 
>     BASIS_SET_FILE_NAME  BASIS_MOLOPT 
>     POTENTIAL_FILE_NAME  POTENTIAL 
>
>     &POISSON 
>       PERIODIC XYZ 
>     &END POISSON 
>     &SCF 
>       SCF_GUESS ATOMIC 
>       EPS_SCF 1.0E-6 
>       MAX_SCF 300 
>
>       # The following settings help with convergence: 
>       ADDED_MOS 100 
>       CHOLESKY INVERSE 
>       &SMEAR ON 
>         METHOD FERMI_DIRAC 
>         ELECTRONIC_TEMPERATURE [K] 300 
>       &END SMEAR 
>       &DIAGONALIZATION 
>         ALGORITHM STANDARD 
>         EPS_ADAPT 0.01 
>       &END DIAGONALIZATION 
>       &MIXING 
>         METHOD BROYDEN_MIXING 
>         ALPHA 0.2 
>         BETA 1.5 
>         NBROYDEN 8 
>       &END MIXING 
>     &END SCF 
>     &XC 
>       &XC_FUNCTIONAL PBE 
>       &END XC_FUNCTIONAL 
>     &END XC 
>     &PRINT 
>       &PDOS 
>         # print all projected DOS available: 
>         NLUMO -1 
>         # split the density by quantum number: 
>         COMPONENTS 
>       &END 
>       &E_DENSITY_CUBE ON 
>           STRIDE 1 1 1 
>       &END E_DENSITY_CUBE 
>     &END PRINT 
>   &END DFT 
>
>   &SUBSYS 
>     &CELL 
>       # create a hexagonal unit cell: 
>       ABC  [angstrom] 2.4612 2.4612 26.72 
>       ALPHA_BETA_GAMMA 90. 90. 60. 
>       SYMMETRY HEXAGONAL 
>       PERIODIC XYZ 
>       # and replicate this cell (see text): 
>       MULTIPLE_UNIT_CELL 6 6 1 
>     &END CELL 
>     &TOPOLOGY 
>       # also replicate the topology (see text): 
>       MULTIPLE_UNIT_CELL 6 6 1 
>     &END TOPOLOGY 
>     &COORD 
>       SCALED 
>       # ABC stacked 
>       C 1./3  1./3  0. 
>       C 0.    3./3  0. 
>       C 1./3  1./3  1./8 
>       C 2./3  2./3  1./8 
>       C 2./3  2./3  2./8 
>       C 3./3  0.    2./8 
>     &END 
>     &KIND C 
>       ELEMENT C 
>       BASIS_SET TZVP-MOLOPT-GTH 
>       POTENTIAL GTH-PADE-q4 
>     &END KIND 
>   &END SUBSYS 
>
> &END FORCE_EVAL 
>
>
> [attachment "bilayerIso.out" removed by Jürg Hutter/at/UZH] 
> [attachment "bilayerIsoPara.out" removed by Jürg Hutter/at/UZH] 
>
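Since the original poster "only told the cluster to use 12 cores", the missing piece is likely the batch script. A hypothetical sketch for a Slurm-managed cluster (the `#SBATCH` directives and `srun` launch assume Slurm; other schedulers use different keywords):

```shell
#!/bin/bash
#SBATCH --ntasks=12          # 12 MPI ranks
#SBATCH --cpus-per-task=1    # 1 OpenMP thread per rank

# Tie the OpenMP thread count to the scheduler's per-task allocation,
# defaulting to 1 outside a Slurm job.
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
echo "ranks requested: 12, threads per rank: $OMP_NUM_THREADS"

# Launch line (commented out here, since cp2k.psmp is cluster-specific):
# srun cp2k.psmp -i trilayerABCIso.inp -o trilayerABCIso.out
```

Reserving 12 cores without matching `--ntasks`/`OMP_NUM_THREADS` settings can leave CP2K running on a single rank, which matches the GLOBAL banner shown above.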

