[CP2K-user] [CP2K:19391] Help with running cp2k in parallel with Slurm

Ava Rajh ava.rajh at gmail.com
Thu Oct 19 10:16:53 UTC 2023


Sorry for the confusion, I read the output wrong: if I put the ntasks in 
the header of the batch file, I get the same result as running just with 
srun, so the number of tasks determines the number of copies of the 
program that run, with one MPI rank each. 

On Thursday, October 19, 2023 at 10:20:39 AM UTC+2 Ava Rajh wrote:

>
> Hi Matthias, 
>
> Thank you for the reply
>
> I tried both, first just with the srun command and then as part of the 
> #SBATCH batch file. 
> If I run the "srun" command and specify the number of tasks, I get the 
> previously mentioned multiple copies of the same program with 1 MPI rank 
> each. The same happens if I run the command outside the container, 
> without srun, and specify the number of tasks with just: mpiexec -n 4 
>
> If I put the ntasks in the header of the batch file instead, I get just 
> one copy of the program with 1 MPI rank, no matter how many tasks are 
> requested with --ntasks. 
>
> Any idea about how to get my calculation started properly would be very 
> helpful, thank you. 
> Kind regards, Ava
>
>
> On Wednesday, October 18, 2023 at 7:21:22 PM UTC+2 Krack Matthias wrote:
>
>> Hi Ava
>>
>>  
>>
>> Do you run the “srun” command as part of a SLURM batch job file with a 
>> #SBATCH header section or interactively?
>>
>> Your guess is right, the --ntasks flag defines the number of MPI ranks.
>>
>>  
>>
>> Best
>>
>>  
>>
>> Matthias
>>
>>  
>>
>> From: cp... at googlegroups.com <cp... at googlegroups.com> on behalf of Ava 
>> Rajh <ava.... at gmail.com>
>> Date: Wednesday, 18 October 2023 at 14:17
>> To: cp2k <cp... at googlegroups.com>
>> Subject: [CP2K:19379] Help with running cp2k in parallel with Slurm
>>
>> Dear all, 
>>
>> I am trying to run CP2K on our HPC cluster. I am new to parallel 
>> computing and to working on a cluster, so I would appreciate some help, 
>> and I apologize if I am missing something obvious. 
>>
>> I am trying to use CP2K in combination with Apptainer, and I followed the 
>> instructions at https://github.com/cp2k/cp2k/tree/master/tools/apptainer  
>>
>>
>>  
>>
>> I have a .sif file in my work directory, and if I work within it (running 
>> MPI within the container), everything works perfectly and I can set the 
>> number of MPI processes. 
>>
>>  
>>
>> But when trying to run it through Slurm, I can't seem to set the number 
>> of MPI processes per node. If I start a command like: 
>>
>> srun --ntasks=2 apptainer run -B $PWD cp2k-2023.2_mpich_generic_psmp.sif 
>> cp2k -i H2O-32.in
>>
>> it just starts 2 instances of the program that run at the same time, and 
>> for each the total number of message passing processes is 1. I am, 
>> however, able to set and change the number of OpenMP threads. 
>>
>> So my question would be: first, am I wrong to assume that --ntasks=X 
>> should correspond to the number of MPI ranks? And if I am, how would I 
>> set it? 
>>
>>  
>>
>> Please let me know if I need to provide any more information to diagnose 
>> the issue. 
>>
>> Thank you very much for the help and kind regards, 
>>
>> Ava Rajh
>>
>
