[CP2K-user] [CP2K:12283] Running Cp2k in parallel using thread in a PC
Chn
chen... at gmail.com
Fri Oct 4 07:07:20 UTC 2019
Hi,
I have solved this problem: the environment variables I tried to set with the
'export' command were not taking effect. It may be a problem with my OS.
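A quick check for anyone who hits the same symptom: verify the variable in the
same shell before launching, e.g.

export OMP_NUM_THREADS=8
echo $OMP_NUM_THREADS   # empty output means the export did not take effect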
Thanks a lot for your reply!
regards,
chn
On Monday, September 30, 2019 at 10:34:57 PM UTC+8, Pierre-André Cazade wrote:
>
> Hi,
>
> I find the error you get strange; CP2K with OMP+MPI works fine for me. Just
> for the sake of validation, could you try:
>
> export OMP_NUM_THREADS=X
>
> mpirun -np Y cp2k.psmp -i input > output
>
> rather than
>
> mpirun -np Y cp2k.psmp -i input -o output
>
> Just to check.
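> If the export takes effect, the run banner will confirm it; a quick check
> against the output file (using the GLOBAL| lines CP2K prints at startup,
> quoted later in this thread):
>
> grep "GLOBAL|" output | grep -E "message passing|Number of threads"
>
> The reported process and thread counts should match Y and X above.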
>
> Regards,
> Pierre
>
> On Monday, September 30, 2019 at 4:38:32 AM UTC+1, Chn wrote:
>>
>> Hi,
>> The keyword NPROC_REP has a default value of 1, so there should be only one
>> *-r.out output file per job. Can I conclude that I got six files just
>> because I used six MPI processes? Is that normal in a parallel job?
>> regards,
>> chn
>>
>> On Sunday, September 29, 2019 at 3:08:34 AM UTC+8, Ari Paavo Seitsonen wrote:
>>>
>>> Hello,
>>>
>>> Maybe you received the six files because of this:
>>>
>>> <<
>>>
>>> NPROC_REP {Integer}
>>>
>>> *Specify the number of processors to be used per replica environment
>>> (for parallel runs). In case of mode selective calculations more than one
>>> replica will start a block Davidson algorithm to track more than only one
>>> frequency*
>>>
>>>
>>> This keyword cannot be repeated and it expects precisely one integer.
>>>
>>> Default value: 1
>>>
>>> >>
>>>
>>>
>>> https://manual.cp2k.org/trunk/CP2K_INPUT/VIBRATIONAL_ANALYSIS.html#list_NPROC_REP
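>>>
>>> In other words, with the default NPROC_REP 1, a run launched on six MPI
>>> processes is split into six single-process replicas, each writing its own
>>> *-r-N.out file. A minimal input sketch (the value 6 is only an example,
>>> assuming you want all six processes to work on a single replica):
>>>
>>> &VIBRATIONAL_ANALYSIS
>>>   NPROC_REP 6  ! MPI processes per replica; the default is 1
>>> &END VIBRATIONAL_ANALYSIS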
>>>
>>> Greetings from Paris,
>>>
>>> apsi
>>>
>>> On Sat, 28 Sep 2019 at 14:14, Pierre-André Cazade <
>>> pie... at gmail.com> wrote:
>>>
>>>> Hi Nikhil,
>>>>
>>>> As you are using a mix of MPI and OpenMP, you have to use the
>>>> executable with the extension psmp.
>>>>
>>>> You can find a table describing all the executables in section 3 of
>>>> the "how to compile" page:
>>>>
>>>> https://www.cp2k.org/howto:compile
>>>>
>>>> Yet, it does not explain why your calculation behaved as if there were
>>>> 6 independent calculations.
>>>>
>>>> Please try the same calculation with the psmp executable and let me
>>>> know how it goes.
>>>>
>>>> Regards,
>>>> Pierre
>>>>
>>>>
>>>> ------------------------------
>>>> *From:* c... at googlegroups.com on behalf of Chn <ch... at gmail.com>
>>>> *Sent:* Saturday, September 28, 2019 3:03 a.m.
>>>> *To:* cp2k
>>>> *Subject:* Re: [CP2K:12283] Running Cp2k in parallel using thread in a
>>>> PC
>>>>
>>>> Hi Pierre,
>>>> I tried to combine OpenMP with MPI as you mentioned above for a
>>>> vibrational analysis. I requested 6 MPI processes and got 6 output files
>>>> named *-r-number.out; however, each file reported:
>>>> GLOBAL| Total number of message passing processes    1
>>>> GLOBAL| Number of threads for this process           1
>>>> GLOBAL| This output is from process                  0
>>>> I also checked with the 'top' command and found that only 6 cores were
>>>> busy. I am on a machine with two 24-core processors, and I set:
>>>> export OMP_NUM_THREADS=8
>>>> mpirun -n 6 /lib/CP2K/cp2k/exe/local/cp2k.popt -i project.inp -o output.out
>>>> Any suggestion would be greatly appreciated.
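>>>>
>>>> (Note: the popt binary is MPI-only, so OMP_NUM_THREADS has no effect on
>>>> it, which is consistent with exactly 6 busy cores. A sketch of the same
>>>> launch with the mixed MPI+OpenMP binary, assuming a psmp build exists at
>>>> the same path:
>>>>
>>>> export OMP_NUM_THREADS=8
>>>> mpirun -n 6 /lib/CP2K/cp2k/exe/local/cp2k.psmp -i project.inp -o output.out
>>>>
>>>> This would use 6 x 8 = 48 cores.)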
>>>>
>>>>
>>>>
>>>> On Friday, September 20, 2019 at 10:45:55 PM UTC+8, Pierre Cazade wrote:
>>>>>
>>>>> Hello Nikhil,
>>>>>
>>>>> With the command "mpirun -n 42 cp2k.popt -i inp.inp -o out.out", you are
>>>>> requesting 42 MPI processes, not 42 OpenMP threads. MPI usually relies on
>>>>> replicated data, which means that, for poorly programmed software, it
>>>>> will request a total amount of memory equal to the memory required by a
>>>>> scalar execution times the number of processes. This can very quickly
>>>>> become problematic, in particular for QM calculations. OpenMP, however,
>>>>> relies on shared memory: the data is normally not replicated but shared
>>>>> between threads, and therefore, in an ideal scenario, the amount of
>>>>> memory needed for 42 OpenMP threads is the same as for a single one.
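>>>>>
>>>>> To put illustrative numbers on it (these are made up): if a scalar run
>>>>> needs about 3 GB, 42 MPI processes could demand up to 42 x 3 GB = 126 GB,
>>>>> essentially all of a 128 GB machine, whereas 42 OpenMP threads would
>>>>> ideally still need only about 3 GB.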
>>>>>
>>>>> This might explain why your calculation freezes: you are running out of
>>>>> memory. On your workstation, you should just use the executable
>>>>> "cp2k.ssmp", which is the OpenMP-only version. Then you don't need the
>>>>> mpirun command:
>>>>>
>>>>> cp2k.ssmp -i inp.inp -o out.out
>>>>>
>>>>> To control the number of OpenMP threads, set the environment variable
>>>>> OMP_NUM_THREADS, e.g. in bash: export OMP_NUM_THREADS=48
>>>>>
>>>>> Now, if you need to balance between MPI and OpenMP, you should use the
>>>>> executable named cp2k.psmp. Here is such an example:
>>>>>
>>>>> export OMP_NUM_THREADS=24
>>>>> mpirun -n 2 cp2k.psmp -i inp.inp -o out.out
>>>>>
>>>>> In this example, I am requesting two MPI processes, and each of them can
>>>>> use up to 24 OpenMP threads.
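>>>>>
>>>>> Note that some MPI launchers bind each rank to a single core by default,
>>>>> which would pin all 24 OpenMP threads of a rank to that core. With Open
>>>>> MPI, for example, you can relax the binding (the flag below is an Open
>>>>> MPI option, not a CP2K one):
>>>>>
>>>>> export OMP_NUM_THREADS=24
>>>>> mpirun -n 2 --bind-to none cp2k.psmp -i inp.inp -o out.out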
>>>>>
>>>>> Hope this clarifies things for you.
>>>>>
>>>>> Regards,
>>>>> Pierre
>>>>>
>>>>> On 20/09/2019 14:09, Nikhil Maroli wrote:
>>>>>
>>>>> Dear all,
>>>>>
>>>>> I have installed all the versions of CP2K on my workstation with two
>>>>> 12-core processors, 48 threads in total.
>>>>>
>>>>> I want to run CP2K in parallel using 42 threads; can anyone share the
>>>>> commands I can use?
>>>>>
>>>>> I have tried
>>>>>
>>>>> mpirun -n 42 cp2k.popt -i inp.inp -o out.out
>>>>>
>>>>> After this command, memory usage rises to 100% and the whole system
>>>>> freezes (I have 128 GB of RAM).
>>>>>
>>>>> Any suggestion will be greatly appreciated.
>>>>>
>>>>>
>>>>> --
>>>>> Dr Pierre Cazade, PhD
>>>>> AD3-023, Bernal Institute,
>>>>> University of Limerick,
>>>>> Plassey Park Road,
>>>>> Castletroy, co. Limerick,
>>>>> Ireland
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> -=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-
>>> Ari Paavo Seitsonen / Ar... at iki.fi / http://www.iki.fi/~apsi/
>>> Ecole Normale Supérieure (ENS), Département de Chimie, Paris
>>> Mobile (F) : +33 789 37 24 25 (CH) : +41 79 71 90 935
>>>
>>