OpenMPI problems after compilation

wei flamefl... at gmail.com
Thu Jul 7 09:02:48 UTC 2011


Hi,

Thanks for the reply, but I don't think that is the main problem: I also
compiled the reference BLAS, BLACS, LAPACK, and ScaLAPACK libraries from
netlib, and the same error still appears.
Also, the MKL shipped with the newest Intel compiler (Intel 12) no longer
distinguishes between em64t and intel64; earlier versions did require the
em64t libraries.
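For reference, a quick way to check which layout a given MKL installation
actually provides (the paths below simply reuse the MKL path from the arch
file further down; adjust them to your own installation):

    # list the library directories shipped with this MKL; Intel 12 typically
    # provides only intel64, with no separate em64t directory any more
    ls /opt/intel/Compiler/12.0/3.174/rwthlnk/mkl/lib/

    # confirm that the OpenMPI flavour of the BLACS library is present
    ls /opt/intel/Compiler/12.0/3.174/rwthlnk/mkl/lib/intel64/ | grep blacs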

On Jul 6, 9:04 am, "Huiqun Zhou" <hqz... at nju.edu.cn> wrote:
> Hi,
> I haven't used the version of MKL you are using. In the past, the libraries
> in ..../mkl/lib/intel64 were for the IA-64 (Itanium) architecture, and the
> right libraries for em64t (now called Intel 64) were in .../mkl/lib/em64t.
> Please check your MKL installation tree to see whether there is an em64t
> directory; if so, I think you should use the libraries in that folder.
>
> zhou huiqun
> @nanjing university, china
>
> ----- Original Message -----
> From: "wei" <flamefl... at gmail.com>
> To: "cp2k" <cp... at googlegroups.com>
> Sent: Monday, July 04, 2011 4:51 AM
> Subject: [CP2K:3353] OpenMPI problems after compilation
>
> > Dear cp2k developers and users,
>
> > I am trying to compile the most recent CP2K on our new cluster (Linux,
> > CentOS 5.6) with openmpi-1.4.3, ifort-12, and the MKL library that ships
> > with ifort-12 (no OpenMP). The compilation finishes without errors, but
> > the parallel jobs I submit are very often killed by the MPI runtime,
> > while jobs running on a single MPI rank work fine.
> > --------------------------------------------------------------------------
> > MPI_ABORT was invoked on rank 3 in communicator MPI_COMM_WORLD with
> > errorcode 1.
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > You may or may not see output from other processes, depending on
> > exactly when Open MPI kills them.
> > --------------------------------------------------------------------------
> > mpiexec has exited due to process rank 3 with PID 14821 on node
> > linuxbmc1218.rz.RWTH-Aachen.DE exiting without calling "finalize".
> > This may have caused other processes in the application to be
> > terminated by signals sent by mpiexec (as reported here).
> > --------------------------------------------------------------------------
> > Here is my ARCH file:
> > CC       = mpicc
> > CPP      =
> > FC       = mpif90
> > LD       = mpif90
> > AR       = ar -r
> > DFLAGS   = -D__INTEL -D__FFTSG -D__FFTW3 -D__parallel -D__BLACS -D__SCALAPACK
> > CPPFLAGS =
> > INTEL_INC= /opt/intel/Compiler/12.0/3.174/rwthlnk/mkl/include
> > MKLPATH  = /opt/intel/Compiler/12.0/3.174/rwthlnk/mkl/lib/intel64
> >
> > FCFLAGS  = $(DFLAGS) -I$(INTEL_INC) -I/home/wz160145/software/fftwmpi/include -O2 -msse2 -heap-arrays 64 -funroll-loops -fpp -free
> > FCFLAGS2 = $(DFLAGS) -I$(INTEL_INC) -I/home/wz160145/software/fftwmpi/include -O1 -msse2 -heap-arrays 64 -fpp -free
> > LDFLAGS  = $(FCFLAGS) -I$(INTEL_INC) -I/home/wz160145/software/fftwmpi/include
> >
> > LIBS     = /home/wz160145/software/fftwmpi/lib/libfftw3.a \
> >            $(MKLPATH)/libmkl_scalapack_lp64.a \
> >            $(MKLPATH)/libmkl_solver_lp64_sequential.a \
> >            -Wl,--start-group \
> >            $(MKLPATH)/libmkl_intel_lp64.a \
> >            $(MKLPATH)/libmkl_sequential.a \
> >            $(MKLPATH)/libmkl_core.a \
> >            $(MKLPATH)/libmkl_blacs_openmpi_lp64.a \
> >            -Wl,--end-group -lpthread
> >
> > OBJECTS_ARCHITECTURE = machine_intel.o
> >
> > graphcon.o: graphcon.F
> >         $(FC) -c $(FCFLAGS2) $<
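An arch file like this would typically be used from the makefiles directory
of the CP2K source tree, roughly as follows (the arch file name
"Linux-x86-64-intel" and the "popt" version suffix are assumptions, not taken
from the post above):

    cd cp2k/makefiles
    make ARCH=Linux-x86-64-intel VERSION=popt
    # the parallel binary would then appear under exe/Linux-x86-64-intel/cp2k.popt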
>
> > I compiled the fftw3 library myself in the same environment. Is there
> > any obvious error in my ARCH file, or is this just a problem with the
> > MPI software on the cluster? Any help would be highly appreciated!
> > Thanks in advance.
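Since the FFTW library was built by hand, it is also worth double-checking
that it was configured with the same Intel toolchain. A typical configure
line for this kind of setup might look like the following (the compiler
choices and flags are assumptions; only the install prefix matches the arch
file above):

    # hypothetical FFTW 3.x build matching the Intel 12 toolchain
    ./configure CC=icc F77=ifort --prefix=/home/wz160145/software/fftwmpi --disable-shared
    make && make install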
>
> > Best wishes,
> > Wei ZHANG
> > PhD student, Institute for Theoretical Solid State Physics
> > RWTH Aachen University
>

