[CP2K-user] [CP2K:15190] Re: Does CP2K allow a multi-GPU run?
Lenard Carroll
lenardc... at gmail.com
Thu Apr 22 11:27:18 UTC 2021
Shall do. I already set it up, but it's in a long queue.
On Thu, Apr 22, 2021 at 1:22 PM Alfio Lazzaro <alfio.... at gmail.com>
wrote:
> Could you try what I suggested:
>
> export OMP_NUM_THREADS=10
> mpirun -np 4 ./cp2k.psmp -i gold.inp -o gold_pbc.out
>
> Please check the corresponding log.
>
> As I said above, you need one MPI rank per GPU, and you told us that you
> have 4 GPUs, so you need 4 ranks (or a multiple of 4). With 10 ranks you get an imbalance.
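(A minimal way to double-check this once the job runs, assuming the output file is gold_pbc.out as in the commands above, is to grep the log for the counters quoted further down in this thread:

grep "GLOBAL| Total number of message passing processes" gold_pbc.out
grep "GLOBAL| Number of threads for this process" gold_pbc.out
grep "DBCSR| ACC: Number of devices/node" gold_pbc.out

With 4 ranks and 10 threads each, the first two lines should report 4 and 10.)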
>
>
> On Thursday, April 22, 2021 at 10:17:27 UTC+2, ASSIDUO Network wrote:
>
>> Correction, he told me to use:
>>
>> mpirun -np 10 cp2k.psmp -i gold.inp -o gold_pbc.out
>>
>> but it didn't run correctly.
>>
>> On Thu, Apr 22, 2021 at 9:51 AM Lenard Carroll <len... at gmail.com>
>> wrote:
>>
>>> He suggested I try out:
>>> mpirun -n 10 cp2k.psmp -i gold.inp -o gold_pbc.out
>>>
>>> as he is hoping that will make the run use 10 CPUs spread over the
>>> selected 4 GPUs.
>>>
>>>
>>> On Thu, Apr 22, 2021 at 9:48 AM Alfio Lazzaro <al... at gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>> Your command to run CP2K doesn't mention MPI (mpirun, mpiexec, ...). Are
>>>> you running with multiple ranks?
>>>>
>>>> You can check those lines in the output:
>>>>
>>>> GLOBAL| Total number of message passing processes                        32
>>>> GLOBAL| Number of threads for this process                                4
>>>>
>>>> And check your numbers.
>>>> I can guess you have 1 rank and 40 threads.
>>>> To use 4 GPUs you need 4 ranks (and fewer threads per rank), i.e.
>>>> something like
>>>>
>>>> export OMP_NUM_THREADS=10
>>>> mpiexec -n 4 ./cp2k.psmp -i gold.inp -o gold_pbc.out
>>>>
>>>> Please check with your sysadmin on how to run with multiple MPI ranks.
>>>>
>>>> Hope it helps.
>>>>
>>>> Alfio
>>>>
>>>>
>>>>
>>>> On Wednesday, April 21, 2021 at 09:26:53 UTC+2, ASSIDUO Network wrote:
>>>>
>>>>> This is what my PBS file looks like:
>>>>>
>>>>> #!/bin/bash
>>>>> #PBS -P <PROJECT>
>>>>> #PBS -N <JOBNAME>
>>>>> #PBS -l select=1:ncpus=40:ngpus=4
>>>>> #PBS -l walltime=08:00:00
>>>>> #PBS -q gpu_4
>>>>> #PBS -m be
>>>>> #PBS -M none
>>>>>
>>>>> module purge
>>>>> module load chpc/cp2k/8.1.0/cuda10.1/openmpi-4.0.0/gcc-7.3.0
>>>>> source $SETUP
>>>>> cd $PBS_O_WORKDIR
>>>>>
>>>>> cp2k.psmp -i gold.inp -o gold_pbc.out
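(For comparison, a minimal sketch of the same PBS script with Alfio's suggestion applied, i.e. 4 MPI ranks with 10 OpenMP threads each; it assumes the mpirun shipped with the loaded openmpi-4.0.0 module is the correct launcher on this cluster, which the sysadmin should confirm:

#!/bin/bash
#PBS -P <PROJECT>
#PBS -N <JOBNAME>
#PBS -l select=1:ncpus=40:ngpus=4
#PBS -l walltime=08:00:00
#PBS -q gpu_4

module purge
module load chpc/cp2k/8.1.0/cuda10.1/openmpi-4.0.0/gcc-7.3.0
source $SETUP
cd $PBS_O_WORKDIR

# one MPI rank per GPU, 10 OpenMP threads per rank (4 x 10 = 40 cores)
export OMP_NUM_THREADS=10
mpirun -np 4 cp2k.psmp -i gold.inp -o gold_pbc.out
)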
>>>>>
>>>>>
>>>>> On Wed, Apr 21, 2021 at 9:22 AM Alfio Lazzaro <al... at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> The way to use 4 GPUs per node is to use 4 MPI ranks. How many ranks
>>>>>> are you using?
>>>>>>
>>>>>> On Tuesday, April 20, 2021 at 19:44:15 UTC+2, ASSIDUO Network wrote:
>>>>>>
>>>>>>> I'm asking, since the administrator running my country's HPC is
>>>>>>> saying that although I'm requesting access to 4 GPUs, CP2K is only using 1.
>>>>>>> I checked the following output:
>>>>>>> DBCSR| ACC: Number of devices/node                                     4
>>>>>>>
>>>>>>> And it shows that CP2K is picking up 4 GPUs.
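(Note that this DBCSR counter only reports how many devices per node are detected, not how many are actually busy. A quick way to check live per-GPU usage during the run, assuming nvidia-smi is available on the compute node, is:

nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv -l 5

With a single MPI rank, only one of the four GPUs would be expected to show any load.)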
>>>>>>>
>>>>>>> On Tuesday, April 20, 2021 at 3:00:17 PM UTC+2 ASSIDUO Network wrote:
>>>>>>>
>>>>>>>> I currently have access to 4 GPUs to run an AIMD simulation, but
>>>>>>>> only one of the GPUs is being used. Is there a way to use the other 3, and
>>>>>>>> if so, can you tell me how to set it up with a PBS job?
>>>>>>>