[CP2K:1155] Re: compiling with HP MPI & Open MPI
Csilla Varnai
cv... at cam.ac.uk
Wed Jul 9 11:10:43 UTC 2008
Dear Axel,
Thanks very much. I'll contact them and go through the manuals.
Csilla
Csilla Varnai
cv... at cam.ac.uk
Engineering Department
University of Cambridge
Trumpington Street, CB2 1PZ
United Kingdom
On 7 Jul 2008, at 18:33, Axel wrote:
>
> On Jul 7, 10:04 am, "C. Va'rnai" <cv... at cam.ac.uk> wrote:
>> Dear All,
>
> dear csilla,
>
>> I am trying to compile CP2K on a supercomputer
>> (http://www.rz.uni-karlsruhe.de/ssck/hpxc). This is my first
>> experience with supercomputers; I have only used clusters so far.
>>
>> The default mpif90 wrapper is HP MPI's. When I compile CP2K with HP
>> MPI, the compilation finishes without error messages (I know that
>> doesn't mean it will run fine), but the binary gives a segmentation
>> fault. I don't really know much about HP MPI, and after fiddling
>> around with the compiler options, I decided to use the Open MPI that
>> I use on a cluster machine with the same Linux-x86-64-intel
>> architecture. Running CP2K in interactive mode was
>
> please note that the machine is _not_ an x86_64 machine but an
> itanium2 (a.k.a. IA64) machine.
>
> using Open MPI needs a lot of care, since you have to make sure
> that you have working support for the quadrics network.
> building on such an unusual machine can be very tricky, so i
> strongly suggest getting in contact with the user support staff
> and having them give you explicit instructions on how to use/link
> scalapack/blacs/mpi etc.
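>
> (as a rough first check -- assuming a standard Open MPI install is
> on your path -- something like the following lists the network
> components compiled into your Open MPI build; a quadrics/elan
> component has to show up there, otherwise the build cannot use the
> interconnect:)
>
>   # list the MCA network (btl) components known to this Open MPI
>   # build and look for a quadrics/elan entry in the output.
>   ompi_info | grep -i btl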
>
> please also make sure that you do _not_ compile for OpenMP and
> link with the multithreaded MKL unless you are 100% certain that
> you know what you are doing. both are explained in the intel
> compiler and MKL manuals (it may need a little digging).
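>
> (for illustration only: with the intel tool chain, one way to stay
> on the safe side is to drop -openmp from FCFLAGS, link
> -lmkl_sequential instead of the threaded MKL, and pin the thread
> counts at run time, e.g.:)
>
>   # force MKL and the intel OpenMP runtime to a single thread;
>   # MKL honors MKL_NUM_THREADS, the OpenMP runtime honors
>   # OMP_NUM_THREADS.
>   export OMP_NUM_THREADS=1
>   export MKL_NUM_THREADS=1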
>
> cheers,
> axel.
>
>> successful. However, I couldn't get CP2K to run properly through
>> the batch system. For the command
>>
>> job_submit -t 10 -T 60 -m 500 -p 4/1 -c d
>> "/home/ssck/wkhu00f/wkhu23e/openmpi/bin/mpirun -np 4 cp2k.popt
>> H2O.inp"
>>
>> I got the following error message:
>> cannot allocate memory for thread-local data: ABORT
>> cannot allocate memory for thread-local data: ABORT
>> cannot allocate memory for thread-local data: ABORT
>> cannot allocate memory for thread-local data: ABORT
>>
>> When I decreased the requested memory from 500 to 300 megabytes and
>> ran
>>
>> job_submit -t 10 -T 60 -m 300 -p 4/1 -c d
>> "/home/ssck/wkhu00f/wkhu23e/openmpi/bin/mpirun -np 4 cp2k.popt
>> H2O.inp"
>>
>> the run got stuck when opening the BASIS_MOLOPT file:
>>
>> GENERATE| Preliminary Number of Bonds generated: 0
>> GENERATE| Achieved consistency in connectivity generation.
>> GENERATE| Number of Bonds generated: 0
>> GENERATE| Preliminary Number of Bends generated: 0
>> GENERATE| Number of Bends generated: 0
>> GENERATE| Number of UB generated: 0
>> GENERATE| Preliminary Number of Torsions generated: 0
>> GENERATE| Number of Torsions generated: 0
>> GENERATE| Number of Impropers generated: 0
>> GENERATE| Number of 1-4 interactions generated: 0
>> CP2K: An error occurred opening the file <BASIS_MOLOPT> with the unit
>> number
>> 1 (IOSTAT = 41)
>>
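>> (for reference, a quick way to see the limits that are actually in
>> effect inside the batch job would be to run a small check in place
>> of cp2k, e.g.:)
>>
>>   # print the virtual-memory and stack limits the job really gets;
>>   # IOSTAT = 41 from the intel fortran runtime apparently means
>>   # "insufficient virtual memory", so these are worth checking.
>>   ulimit -v
>>   ulimit -s
>>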
>> I would be glad if you had any idea what is wrong with the HP MPI
>> compilation or what I am doing wrong with the Open MPI-compiled
>> version.
>> Thanks,
>>
>> Csilla
>>
>> The test file is the H2O.inp in the tests/QS/ directory.
>> Here is my arch file for the Open MPI run:
>>
>> INTEL_INC=/opt/intel/mkl/10.0.011/include
>> INTEL_LIB=/opt/intel/mkl/10.0.011/lib/em64t
>>
>> FFTW3_INC=/software/all/fftw/include
>> FFTW3_LIB=/software/all/fftw/lib
>>
>> ACML_INC=/software/all/acml/acml4.0/ifort64_mp/include
>> ACML_LIB=/software/all/acml/acml4.0/ifort64_mp/lib
>>
>> CC       = icc
>> CPP      =
>> FC       = mpif90
>> LD       = mpif90
>> AR       = ar -r
>> DFLAGS   = -D__INTEL -D__FFTSG -D__parallel -D__BLACS -D__SCALAPACK \
>>            -D__FFTW3 -D__FFTACML
>> CPPFLAGS = -C -traditional $(DFLAGS) -I$(INTEL_INC) #-I$(FFTW3_INC) -I$(ACML_INC)
>> FCFLAGS  = $(DFLAGS) -I$(INTEL_INC) -I$(FFTW3_INC) -I$(ACML_INC) \
>>            -O0 -openmp -xW -heap-arrays 64 -fpp -free
>> LDFLAGS  = $(FCFLAGS)
>> LIBS     = -L$(INTEL_LIB) -L$(FFTW3_LIB) -L$(ACML_LIB) \
>>            -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 -lmkl_intel_lp64 \
>>            -lmkl_sequential -lmkl_core -lfftw3 -lacml_mp -lpthread
>> #          $(ACML_LIB)/libacml_mp.a
>>
>> OBJECTS_ARCHITECTURE = machine_intel.o
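>>
>> (assuming the usual cp2k source layout, this arch file would be
>> saved under cp2k/arch -- the file name below is just a placeholder
>> -- and the build started as:)
>>
>>   # build the parallel popt binary; the ARCH/VERSION pair must
>>   # match the arch file name, i.e. arch/Linux-ia64-intel-openmpi.popt
>>   cd cp2k/makefiles
>>   make ARCH=Linux-ia64-intel-openmpi VERSION=popt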
>>
>> The arch file for HP MPI differs only in linking -lmkl_blacs_lp64
>> instead of the openmpi version.