[CP2K:10729] Re: How to accelerate the calculation of pdos ?
hut... at chem.uzh.ch
Fri Sep 14 07:28:01 UTC 2018
Hi
I see in both of your outputs that you are running with 1 MPI process and 36 OpenMP threads:
GLOBAL| Total number of message passing processes 1
GLOBAL| Number of threads for this process 36
GLOBAL| Total number of message passing processes 1
GLOBAL| Number of threads for this process 36
With these settings you will get identical outputs. I don't see where your claim of 12 cores
for the second run comes from.
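For reference, a minimal launch sketch (an assumption on my part, since your job script is not shown; the launcher name, core counts and the input file name "bilayerIso.inp" are placeholders to adapt to your system). With the hybrid cp2k.psmp binary, the number of MPI processes is set by the MPI launcher and the number of OpenMP threads per process by OMP_NUM_THREADS:

  # sketch only: use mpirun, mpiexec or srun as provided by your MPI installation
  export OMP_NUM_THREADS=1                                        # 1 OpenMP thread per MPI rank
  mpirun -np 12 cp2k.psmp -i bilayerIso.inp -o bilayerIso.out     # 12 MPI ranks

If cp2k.psmp is started without an MPI launcher and without limiting OMP_NUM_THREADS, it runs as 1 MPI process with as many OpenMP threads as the node has cores, which matches both of the outputs above.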
regards
Juerg Hutter
--------------------------------------------------------------
Juerg Hutter Phone : ++41 44 635 4491
Institut für Chemie C FAX : ++41 44 635 6838
Universität Zürich E-mail: hut... at chem.uzh.ch
Winterthurerstrasse 190
CH-8057 Zürich, Switzerland
---------------------------------------------------------------
-----cp... at googlegroups.com wrote: -----
To: "cp2k" <cp... at googlegroups.com>
From: "Tianshu Jiang in Beijing"
Sent by: cp... at googlegroups.com
Date: 09/14/2018 03:49AM
Subject: [CP2K:10729] Re: How to accelerate the calculation of pdos ?
Hi Krack, thanks for your reply.
In the attachments, bilayerIso.out is the output of the 1-core run and bilayerIsoPara.out is the output of the 12-core run.
I have no idea where the problem is.
On Wednesday, 12 September 2018 at 19:58:50 UTC+8, Matthias Krack wrote:
Hi Tianshu Jiang
Without providing the CP2K output files of your 1-core and 12-core runs, it is quite unlikely that you will get any reasonable hint from someone in this forum.
Matthias
On Wednesday, 12 September 2018 04:41:07 UTC+2, Tianshu Jiang in Beijing wrote:
Hi everyone in cp2k community,
I am using CP2K to calculate the PDOS of graphene, but the calculation takes the same amount of time whether I use 1 core or 12 cores.
The version I compiled is Linux-x86-64-gfortran, and I use cp2k.psmp to run the job.
In both cases (1 core and 12 cores) the *.out file shows that the job finished about half an hour after it started.
My question is: how can I accelerate the calculation using parallel computing?
My input file is below. Thanks for your reply!
&GLOBAL
  PROJECT trilayerABCIso
  RUN_TYPE ENERGY
  PRINT_LEVEL MEDIUM
&END GLOBAL
&FORCE_EVAL
  METHOD Quickstep
  &DFT
    BASIS_SET_FILE_NAME BASIS_MOLOPT
    POTENTIAL_FILE_NAME POTENTIAL
    &POISSON
      PERIODIC XYZ
    &END POISSON
    &SCF
      SCF_GUESS ATOMIC
      EPS_SCF 1.0E-6
      MAX_SCF 300
      # The following settings help with convergence:
      ADDED_MOS 100
      CHOLESKY INVERSE
      &SMEAR ON
        METHOD FERMI_DIRAC
        ELECTRONIC_TEMPERATURE [K] 300
      &END SMEAR
      &DIAGONALIZATION
        ALGORITHM STANDARD
        EPS_ADAPT 0.01
      &END DIAGONALIZATION
      &MIXING
        METHOD BROYDEN_MIXING
        ALPHA 0.2
        BETA 1.5
        NBROYDEN 8
      &END MIXING
    &END SCF
    &XC
      &XC_FUNCTIONAL PBE
      &END XC_FUNCTIONAL
    &END XC
    &PRINT
      &PDOS
        # print all projected DOS available:
        NLUMO -1
        # split the density by quantum number:
        COMPONENTS
      &END PDOS
      &E_DENSITY_CUBE ON
        STRIDE 1 1 1
      &END E_DENSITY_CUBE
    &END PRINT
  &END DFT
  &SUBSYS
    &CELL
      # create a hexagonal unit cell:
      ABC [angstrom] 2.4612 2.4612 26.72
      ALPHA_BETA_GAMMA 90. 90. 60.
      SYMMETRY HEXAGONAL
      PERIODIC XYZ
      # and replicate this cell (see text):
      MULTIPLE_UNIT_CELL 6 6 1
    &END CELL
    &TOPOLOGY
      # also replicate the topology (see text):
      MULTIPLE_UNIT_CELL 6 6 1
    &END TOPOLOGY
    &COORD
      SCALED
      # ABC stacked
      C 1./3 1./3 0.
      C 0.   3./3 0.
      C 1./3 1./3 1./8
      C 2./3 2./3 1./8
      C 2./3 2./3 2./8
      C 3./3 0.   2./8
    &END COORD
    &KIND C
      ELEMENT C
      BASIS_SET TZVP-MOLOPT-GTH
      POTENTIAL GTH-PADE-q4
    &END KIND
  &END SUBSYS
&END FORCE_EVAL
[attachment "bilayerIso.out" removed by Jürg Hutter/at/UZH]
[attachment "bilayerIsoPara.out" removed by Jürg Hutter/at/UZH]