[CP2K-user] [CP2K:16885] Re: CP2K scaling with Intel ONEAPI MPI + ethernet

abin abbott.cn at gmail.com
Mon Apr 25 10:41:03 UTC 2022


Try switching to an InfiniBand network.
Forget about Ethernet adapters if you are running heavy MPI workloads; Gigabit Ethernet latency and bandwidth will dominate as soon as you go beyond a single node.
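If you have to stay on Ethernet for now, it is at least worth confirming which fabric/provider Intel MPI actually selects at startup. A minimal sketch, assuming Intel MPI 2021.x (as used for this build); the exact debug output format varies by version:

```shell
# Print the selected fabric/provider and pinning info at startup.
export I_MPI_DEBUG=5

# Optionally force the transport explicitly: shm within a node, OFI between
# nodes. On plain Ethernet the OFI provider is "tcp"; on InfiniBand you would
# expect "verbs" or "mlx" instead.
export I_MPI_FABRICS=shm:ofi
export FI_PROVIDER=tcp

# Then launch as before; rank 0 will report the libfabric provider in use.
mpirun -np 16 -genv OMP_NUM_THREADS=4 ~/cp2k-8.2/exe/Linux-x86-64-intelx/cp2k.psmp job.inp
```

If the reported provider is `tcp`, all inter-node traffic goes over the Gigabit link, which is consistent with the slowdown described below.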

On Tuesday, February 1, 2022 at 12:37:03 AM UTC+8 Tat wrote:

> Dear all,
> we are trying to improve the suboptimal scaling of CP2K that we're experiencing 
> on a Linux cluster with several physical nodes: execution on 2 or more 
> nodes is significantly slower than on a single one. 
> The nodes have 32-core Xeon Silver processors with hyperthreading and 
> Gigabit Ethernet, and the runs use the parameters suggested by the 
> plan.sh script, i.e.
>
> for 1 node:
> mpirun -np 16 -genv I_MPI_PIN_DOMAIN=auto -genv I_MPI_PIN_ORDER=bunch \
>   -genv OMP_PLACES=threads -genv OMP_PROC_BIND=SPREAD -genv OMP_NUM_THREADS=4 \
>   ~/cp2k-8.2/exe/Linux-x86-64-intelx/cp2k.psmp job.inp
>
> for 2 nodes:
>
> mpirun -r ssh -perhost 16 -host linux1,linux2 -genv I_MPI_PIN_DOMAIN=auto \
>   -genv I_MPI_PIN_ORDER=bunch -genv OMP_PLACES=threads \
>   -genv OMP_PROC_BIND=SPREAD -genv OMP_NUM_THREADS=4 \
>   ~/cp2k-8.2/exe/Linux-x86-64-intelx/cp2k.psmp job.inp
>
> The CP2K psmp binary was compiled with Intel oneAPI mpiifort 2021.3.0.
>
> What could be done to improve the performance? Can network communication 
> or SSH cause the bottleneck? 
> Any suggestions or references would be much appreciated.
> Thanks & regards,
>
> Attila
>


