[CP2K-user] [CP2K:19613] Re: Error: Parameter ‘mpi_comm_null’ at (1) has not been declared
Frederick Stein
f.stein at hzdr.de
Fri Dec 1 08:15:02 UTC 2023
Dear Mikhail,
If diagonalizations are not too expensive, you may turn off ELPA by setting
`PREFERRED_DIAG_LIBRARY SCALAPACK` in the &GLOBAL section. Otherwise, you
should try a different kernel (keyword ELPA_KERNEL in the &GLOBAL section,
see the manual and check the output file for the kernel in use).
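
For reference, a minimal &GLOBAL snippet (the kernel name below is only an
illustration; pick one that your build reports as available):

  &GLOBAL
    PREFERRED_DIAG_LIBRARY SCALAPACK
    ! or keep ELPA and select a kernel explicitly, e.g.:
    ! PREFERRED_DIAG_LIBRARY ELPA
    ! ELPA_KERNEL AVX2_BLOCK2
  &END GLOBAL
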
Best regards,
Frederick
Mikhail Povarnitsyn wrote on Friday, 1 December 2023 at 01:21:09 UTC+1:
> Dear Frederick,
>
> Thank you for the message. I've read the manual and determined the
> processor type: Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz. Using gcc-9.3
> and the command gcc -march=native -E -v - </dev/null 2>&1 | grep cc1, I
> found that on this particular Intel-based node the actual value of
> 'native' is 'broadwell'.
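>
> For completeness, the full check on that node was (output abbreviated; the
> exact cc1 line varies with the GCC version):
>
>   gcc -march=native -E -v - </dev/null 2>&1 | grep cc1
>   # ... cc1 ... -march=broadwell ... -mtune=broadwell ...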
>
> Then I ran the toolchain with the option --target-cpu=broadwell. The
> installation of the libraries and the CP2K compilation completed
> successfully; however, I received a warning message when running
> VIBRATIONAL_ANALYSIS:
>
>   *** WARNING in fm/cp_fm_elpa.F:522 :: Setting real_kernel for ELPA failed ***
>
> It seems that ELPA is not working for me with the option
> --target-cpu=broadwell.
> What steps can I take to resolve this issue?
>
> Best regards,
> Mikhail
>
> On Wednesday, November 29, 2023 at 8:36:23 PM UTC+1 Frederick Stein wrote:
>
>> Dear Mikhail,
>>
>> It is to be expected that both arch files have the -march=native option.
>> My idea is mostly about ensuring that the compilation of the toolchain and
>> CP2K was performed for the correct architecture. The actual values of
>> -march and -mtune are determined by the compiler automatically. What
>> options work in your case depends on your compiler and the CPU. Have you
>> already consulted the manual of your supercomputing center? If it is about
>> fine-tuning, ask your supercomputing center for help, consult the CPU
>> specs and the manual of your compiler, or just leave it as it is, since
>> the main part of the performance comes from the libraries. In case of doubt,
>> you may also check the timing report of CP2K at the very end of the output
>> file or ask here for further help regarding performance tuning (in a
>> different thread as that is feature-dependent).
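>>
>> One way to see what the compiler actually resolves -march and -mtune to is
>> GCC's option dump (a sketch; the output format differs between versions):
>>
>>   gcc -march=native -Q --help=target | grep -E '^\s*-m(arch|tune)='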
>>
>> HTH
>> Frederick
>>
>> Mikhail Povarnitsyn wrote on Wednesday, 29 November 2023 at 19:14:25
>> UTC+1:
>>
>>> Dear Frederick,
>>>
>>> Thank you for this. Following your suggestion, I ran the toolchain job
>>> on a node of interest, and upon successful completion, I observed that the
>>> 'local.psmp' arch file is identical to the one obtained by compiling on the
>>> head node. In both cases, the -march=native option is present.
>>>
>>> Did I correctly understand your idea of achieving node-dependent
>>> compilation? Where can I find the actual values of -march, -mtune, and
>>> other relevant parameters during the toolchain step?
>>>
>>> Best regards,
>>> Mikhail
>>>
>>> On Wednesday, November 29, 2023 at 1:05:39 PM UTC+1 Frederick Stein
>>> wrote:
>>>
>>>> Dear Mikhail,
>>>>
>>>> Did you try to compile the code as part of a job on the given machine?
>>>> Then, the compiler should be able to pick up the correct flags.
>>>>
>>>> Best,
>>>> Frederick
>>>>
>>>> Mikhail Povarnitsyn wrote on Wednesday, 29 November 2023 at 13:00:52
>>>> UTC+1:
>>>>
>>>>> Dear Frederick,
>>>>>
>>>>> I appreciate your continued assistance. Given that we have a mixture
>>>>> of processor types (Intel Xeon and AMD EPYC), determining the optimal
>>>>> -march and -mtune options ('native' by default) is currently not
>>>>> straightforward.
>>>>>
>>>>> Best regards,
>>>>> Mikhail
>>>>> On Wednesday, November 29, 2023 at 9:52:35 AM UTC+1 Frederick Stein
>>>>> wrote:
>>>>>
>>>>>> Dear Mikhail,
>>>>>>
>>>>>> I am not quite the expert on these machine-dependent options; I
>>>>>> personally ignore this message. Machine-related optimizations
>>>>>> depend on the actual setup (CPU, cross-compilation, ...) and the
>>>>>> compiler. If you compile for a supercomputing cluster, it is recommended
>>>>>> to use the center-provided compiler wrappers, as they may come with
>>>>>> better-optimized MPI libraries or settings for their machine. You may
>>>>>> check the manual of your compiler for further information on
>>>>>> machine-dependent options if you know the CPU and its instruction set
>>>>>> (for gfortran: https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html).
>>>>>>
>>>>>> Maybe, some of the other folks are of better help here.
>>>>>>
>>>>>> Best,
>>>>>> Frederick
>>>>>> Mikhail Povarnitsyn wrote on Wednesday, 29 November 2023 at
>>>>>> 00:41:41 UTC+1:
>>>>>>
>>>>>>> Dear Frederick,
>>>>>>>
>>>>>>> I wanted to express my gratitude for your advice on removing
>>>>>>> -D__MPI_F08; it was immensely helpful.
>>>>>>>
>>>>>>> Upon comparing the performance of the 'cp2k.popt' code across
>>>>>>> versions 2023.2, 7.1, and 9.1, I observed a consistent runtime of
>>>>>>> approximately 25 minutes, with minor variations within a few seconds for
>>>>>>> all versions. However, in the output of version 2023.2, I noticed a new
>>>>>>> message:
>>>>>>>
>>>>>>>   *** HINT in environment.F:904 :: The compiler target flags (generic) used ***
>>>>>>>   *** to build this binary cannot exploit all extensions of this CPU model  ***
>>>>>>>   *** (x86_avx2). Consider compiler target flags as part of FCFLAGS and     ***
>>>>>>>   *** CFLAGS (ARCH file).                                                   ***
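>>>>>>>
>>>>>>> If I understand the hint correctly, it asks for target flags in the ARCH
>>>>>>> file, along these lines (a sketch; the right values depend on the CPU):
>>>>>>>
>>>>>>>   FCFLAGS = <existing flags> -march=native -mtune=native
>>>>>>>   CFLAGS  = <existing flags> -march=native -mtune=native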
>>>>>>>
>>>>>>> I would greatly appreciate your advice on how I can enhance the
>>>>>>> performance of the parallel version.
>>>>>>>
>>>>>>> Thank you in advance for your assistance.
>>>>>>>
>>>>>>> Best regards,
>>>>>>> Mikhail
>>>>>>>
>>>>>>>
>>>>>>> On Tuesday, November 28, 2023 at 9:43:14 PM UTC+1 Frederick Stein
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Dear Mikhail,
>>>>>>>>
>>>>>>>> Can you remove the -D__MPI_F08 flag and recompile? I think it might be
>>>>>>>> related to insufficient support of the mpi_f08 module, which CP2K uses
>>>>>>>> by default with OpenMPI and which is not tested with older versions of
>>>>>>>> the compiler and library. Alternatively, install a later version of the
>>>>>>>> library, also possible with the CP2K toolchain: add the flag
>>>>>>>> --with-openmpi=install or --with-mpich=install, optionally together
>>>>>>>> with --with-gcc=install to install GCC 13.
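>>>>>>>>
>>>>>>>> For example (a sketch; choose one of the two MPI flags):
>>>>>>>>
>>>>>>>>   ./install_cp2k_toolchain.sh --with-gcc=install --with-openmpi=install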
>>>>>>>>
>>>>>>>> Best regards,
>>>>>>>> Frederick
>>>>>>>>
>>>>>>>> Mikhail Povarnitsyn wrote on Tuesday, 28 November 2023 at
>>>>>>>> 18:45:51 UTC+1:
>>>>>>>>
>>>>>>>>> Dear Frederick,
>>>>>>>>>
>>>>>>>>> Thank you very much for the reply!
>>>>>>>>>
>>>>>>>>> 1) Yes, I mean OpenMPI 3.1.6.
>>>>>>>>>
>>>>>>>>> 2) The 'local.psmp' file is attached; I hope that is what you asked for.
>>>>>>>>>
>>>>>>>>> 3) Yes, I did the command 'source
>>>>>>>>> /user/povar/cp2k-2023.2/tools/toolchain/install/setup' after the toolchain.
>>>>>>>>>
>>>>>>>>> 4) mpifort --version
>>>>>>>>> GNU Fortran (GCC) 9.3.0
>>>>>>>>> Copyright (C) 2019 Free Software Foundation, Inc.
>>>>>>>>> This is free software; see the source for copying conditions.  There is NO
>>>>>>>>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>>>>>>>>>
>>>>>>>>> Best regards,
>>>>>>>>> Mikhail
>>>>>>>>>
>>>>>>>>> On Tuesday, November 28, 2023 at 1:08:25 PM UTC+1 Frederick Stein
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Dear Mikhail,
>>>>>>>>>>
>>>>>>>>>> I suppose you mean OpenMPI 3.1.6 (MPI is just the standard
>>>>>>>>>> defining an interface for parallel programming). Could you post your arch
>>>>>>>>>> file of the parallel build? Did you source the setup file after the
>>>>>>>>>> toolchain script finished? Could you also post the output of `mpifort
>>>>>>>>>> --version`?
>>>>>>>>>>
>>>>>>>>>> Best,
>>>>>>>>>> Frederick
>>>>>>>>>>
>>>>>>>>>> Mikhail Povarnitsyn wrote on Tuesday, 28 November 2023 at
>>>>>>>>>> 11:06:14 UTC+1:
>>>>>>>>>>
>>>>>>>>>>> Dear Developers and Users,
>>>>>>>>>>>
>>>>>>>>>>> I am attempting to install the latest version, 2023.2, using the
>>>>>>>>>>> GNU compiler (gcc 9.3.0) along with MPI 3.1.6. I employed the toolchain
>>>>>>>>>>> script as follows: './install_cp2k_toolchain.sh'.
>>>>>>>>>>>
>>>>>>>>>>> The serial version 'ssmp' has been successfully compiled.
>>>>>>>>>>> However, the compilation of the parallel version 'psmp' failed with the
>>>>>>>>>>> following error:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> /user/povar/cp2k-2023.2/exts/dbcsr/src/mpi/dbcsr_mpiwrap.F:106:53:
>>>>>>>>>>>
>>>>>>>>>>>   106 | MPI_COMM_TYPE, PARAMETER :: mp_comm_null_handle = MPI_COMM_NULL
>>>>>>>>>>>       |                                                   1
>>>>>>>>>>> Error: Parameter ‘mpi_comm_null’ at (1) has not been declared or
>>>>>>>>>>> is a variable, which does not reduce to a constant expression
>>>>>>>>>>>
>>>>>>>>>>> /user/povar/cp2k-2023.2/exts/dbcsr/src/mpi/dbcsr_mpiwrap.F:107:53:
>>>>>>>>>>>
>>>>>>>>>>>   107 | MPI_COMM_TYPE, PARAMETER :: mp_comm_self_handle = MPI_COMM_SELF
>>>>>>>>>>>       |                                                   1
>>>>>>>>>>> Error: Parameter ‘mpi_comm_self’ at (1) has not been declared or
>>>>>>>>>>> is a variable, which does not reduce to a constant expression
>>>>>>>>>>>
>>>>>>>>>>> /user/povar/cp2k-2023.2/exts/dbcsr/src/mpi/dbcsr_mpiwrap.F:108:54:
>>>>>>>>>>>
>>>>>>>>>>>   108 | MPI_COMM_TYPE, PARAMETER :: mp_comm_world_handle = MPI_COMM_WORLD
>>>>>>>>>>>
>>>>>>>>>>> and other similar errors.
>>>>>>>>>>>
>>>>>>>>>>> Could you please help?
>>>>>>>>>>>
>>>>>>>>>>> Best regards
>>>>>>>>>>> Mikhail
>>>>>>>>>>>
>>>>>>>>>>>