MPI_wait problem in cp2k 4.1 with openmpi_2.0.0

Alfio Lazzaro alfio.... at
Wed Mar 29 08:14:08 UTC 2017

OK, I replied to another email related to your problem, where I said that 
Intel Xeon is an x86-64 architecture; IA64 is the Intel Itanium. Therefore, 
please use the x86-64 arch file as a template. Anyway, this is not really 
related to your problem with OpenMPI (I hope!)...
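[Editor's note, not part of the original message: if in doubt which arch file applies, the machine type can be checked directly with standard POSIX tooling; nothing here is CP2K-specific.]

```shell
# Prints the machine hardware name: "x86_64" on an Intel Xeon node,
# "ia64" on Itanium. Pick the matching CP2K arch file as a template.
uname -m
```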

Concerning your last email, yes, please attach the CP2K logs.
Then, have you tried compiling CP2K 4.1 against the same libraries you used 
for CP2K 2.1 (or vice versa)?


Il giorno lunedì 27 marzo 2017 11:38:52 UTC+2, jim wang ha scritto:
> Hi, everybody!
> I am using cp2k 4.1 for testing on our new cluster. But strangely, the 
> results showed that the cp2k 4.1 version is 3 to 4 times slower than the 
> cp2k 2.1 version built on the same cluster. After examining the output 
> files generated by both binaries running the same job, I found that the 
> MPI_wait function may be the key problem.
> Here is the result of time consumed by MPI_wait function:
> 1. cp2k 4.1: MPI_wait time: 1131 s, total run time: 1779 s
> 2. cp2k 2.1: MPI_wait time: 68 s, total run time: 616 s
> How can I determine whether the problem lies with our cluster or with the 
> compilation?
> Hope you guys can give me some hints on the version comparison.
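[Editor's note, not part of the original message: the timings quoted above already point at the issue. As a quick sanity check, the share of wall time spent in MPI_wait can be computed directly from the two pairs of numbers; this throwaway awk sketch uses only the figures from the email.]

```shell
# (MPI_wait seconds, total seconds) quoted above for each version;
# prints what fraction of the run was spent waiting.
awk 'BEGIN {
  printf "cp2k 4.1: %.0f%% of runtime in MPI_wait\n", 100 * 1131 / 1779
  printf "cp2k 2.1: %.0f%% of runtime in MPI_wait\n", 100 * 68 / 616
}'
```

Roughly 64% of the 4.1 run is spent waiting versus about 11% for 2.1, which is why the communication setup (OpenMPI build, interconnect) is the first suspect rather than compute performance.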

More information about the CP2K-user mailing list