Re: Reply: [CP2K:3247] status of hybrid OpenMP+MPI version
Axel
akoh... at gmail.com
Thu May 12 14:00:05 UTC 2011
On Wednesday, May 11, 2011 11:05:03 PM UTC-4, Ross, Sun wrote:
>
> Hi,
> 1. For openmpi (1.4.2):
> ./configure --prefix=.../openmpi142 --enable-mpi-threads --enable-shared
> --with-threads=posix --enable-mpi-f90 CC=cc cpp=cpp CXX=c++ FC=ifort
>
there's no need to hook ifort permanently into the OpenMPI installation.
the more elegant way is to create a symlink named, e.g., mpiifort pointing at
opal_wrapper and then add a file mpiifort-wrapper-data.txt to $OMPI_HOME/share/openmpi:
project=Open MPI
project_short=OMPI
version=1.4.3
language=Fortran 77
compiler_env=F77
compiler_flags_env=FFLAGS
compiler=ifort
extra_includes=
preprocessor_flags=
compiler_flags=
linker_flags= -static-intel -threads
libs=-lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl -Wl,--export-dynamic -lnsl -lutil -lm -ldl
required_file=
includedir=${includedir}
libdir=${libdir}
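for reference, the setup then boils down to something like this (a sketch; adjust
$OMPI_HOME to your OpenMPI install prefix):

cd $OMPI_HOME/bin
ln -s opal_wrapper mpiifort
# then place the file above as $OMPI_HOME/share/openmpi/mpiifort-wrapper-data.txt

opal_wrapper picks the wrapper-data file based on the name it is invoked as, so the
symlink is all it takes to get an mpiifort command.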
> 2. for cp2k.popt:
>
we're after getting cp2k.psmp working, not cp2k.popt.
> INTEL_INC=/opt/intel/Compiler/11.1/072/mkl/include
> FFTW3_INC=.../fftw322/include
>
> CC = cc
> CPP =
> FC = mpif90 -FR
> #FC = mpif90
> LD = mpif90 -i_dynamic -openmp
> AR = ar -r
> #DFLAGS = -D__INTEL -D__FFTSG -D__parallel -D__BLACS -D__SCALAPACK -D__FFTW3
> DFLAGS = -D__INTEL -D__FFTSG -D__parallel -D__BLACS -D__SCALAPACK -D__FFTW3 -D__LIBINT
> CPPFLAGS = -C -traditional $(DFLAGS) -I$(INTEL_INC)
> FCFLAGS = $(DFLAGS) -I$(INTEL_INC) -I$(FFTW3_INC) -O2 -xW -heap-arrays 64 \
>           -funroll-loops -fpp -free
> FCFLAGS2 = $(DFLAGS) -I$(INTEL_INC) -I$(FFTW3_INC) -O1 -xW -heap-arrays 64 \
>            -funroll-loops -fpp -free
> LDFLAGS = $(FCFLAGS) -I$(INTEL_INC) -L/opt/intel/mkl/10.1.0.015/lib/em64t
> #
> LIBS = -L/opt/intel/mkl/10.1.0.015/lib/em64t -lmkl_scalapack -lmkl_em64t \
>        -lmkl_blacs_openmpi_lp64 -lguide -lpthread -lstdc++ \
>
linking the threaded mkl (and -lguide) like this may cause some issues unless you set OMP_NUM_THREADS=1 by default.
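if you do keep the threaded libraries, setting the variable in the job script is
enough, e.g. (process count and file names below are only placeholders):

export OMP_NUM_THREADS=1
mpirun -np 8 cp2k.popt input.inp > output.out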
an IMO better solution is to link all intel libraries statically (so you don't
have to mess with LD_LIBRARY_PATH after the compile) and use the sequential
mkl interface, e.g.:
LDFLAGS = $(FCFLAGS) -static-intel
LIBS = -L/opt/intel/Compiler/11.1/072/mkl/lib/em64t \
       -Wl,--start-group,-Bstatic \
       -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 \
       -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group,-Bdynamic \
       -lfftw3
this also works nicely for gfortran:
LDFLAGS = $(FCFLAGS)
LIBS = -L/opt/intel/Compiler/11.1/072/mkl/lib/em64t \
       -Wl,--start-group,-Bstatic \
       -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 \
       -lmkl_gf_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group,-Bdynamic \
       -lfftw3
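a quick way to check that the static linking worked (cp2k.popt is just the example
binary name) is to look at the runtime dependencies; neither mkl nor the intel
runtime should show up:

ldd cp2k.popt | grep -i -e mkl -e intel    # should print nothing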
the resulting executables work very well on our clusters, with and without
thread support in OpenMPI (n.b.: one of the really nice things about OpenMPI
is that i can swap the two MPI builds without having to relink cp2k).
now if only the cp2k.psmp binary would work, too, i would be a very happy
camper and my colleagues would have no more excuse not to run cp2k jobs fast.
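for completeness: because libmpi is a shared library (--enable-shared), swapping
between the two OpenMPI builds only means pointing the environment at the other
install; the paths below are placeholders for wherever the two builds live:

export PATH=/path/to/openmpi-with-threads/bin:$PATH
export LD_LIBRARY_PATH=/path/to/openmpi-with-threads/lib:$LD_LIBRARY_PATH
mpirun -np 8 cp2k.popt input.inp > output.out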
cheers,
axel.
> .../fftw322/lib/libfftw3.a \
> .../libint114/lib/libderiv.a \
> .../libint114/lib/libint.a
>
> OBJECTS_ARCHITECTURE = machine_intel.o
>
>
> graphcon.o: graphcon.F
> $(FC) -c $(FCFLAGS2) $<
> ----------------------------------------------------------------
> "..." stands for your own direction.
>
>
> Best regards,
>
>