[CP2K:4648] any known issues for cp2k/2.4 with Openmpi 1.6.5?

Jörg Saßmannshausen j.sassma... at ucl.ac.uk
Mon Sep 30 09:37:38 UTC 2013


Hi Cristiano,

I got version 2.4 compiled with OpenMPI 1.6.5 and gfortran 4.7.2 and ATLAS 
3.10.0.

However, I also ran into the cholesky_decompose problem you describe.

In the end, what I did was either simply change the number of 
processors/cores I was using, after which the problem vanished, or 
use 'random' for the initial guess; if I remember correctly it was 
one of the two.
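In input terms, the 'random' workaround is a one-keyword change in the &SCF 
section of your input (just a sketch of the relevant part; everything else 
stays as you have it):

```
&SCF
  CHOLESKY OFF
  MAX_SCF 40
  ! RANDOM instead of ATOMIC; this is what worked around the failure for me
  SCF_GUESS RANDOM
&END SCF
```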

In any case, yes, it is possible to use OpenMPI 1.6.5 and I am using it in 
production as well. I concluded that the cholesky_decompose failure had 
nothing to do with the OpenMPI version I was using.
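The other workaround, changing the core count, needs no input change at all, 
only a different launch line; something along these lines (the core count and 
file names are just examples, not a recommendation):

```
# try a different number of MPI ranks than the one that triggered the failure
mpirun -np 12 cp2k.popt test01.inp > test01.out
```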

I hope that helps a bit.

All the best from a mild but grey London

Jörg

On Monday 30 September 2013 09:41:44 cristia... at gmail.com wrote:
> Hi!
> thanks for your suggestion.
> I'll try the 1.7.2 release and then post whether everything works.
> Cristiano
> 
> On Friday, 27 September 2013 16:30:22 UTC+2, Ari Paavo Seitsonen wrote:
> > Dear Cristiano,
> > 
> >   This might not really help you, but I remember that on the "big"
> > computer at our university OpenMPI 1.6.5 gave some error (I don't
> > remember what the error was, though), and thus I still use version
> > 1.4.5 when compiling. Did you try, for example, version 1.7.2?
> > 
> >     Greetings,
> >     
> >        apsi
> > 
> > 2013/9/27 <crist... at gmail.com>
> > 
> >> Hi all,
> >> 
> >> I've successfully compiled cp2k.popt with the Intel compilers,
> >> openmpi/1.6.5, mkl/11.0.1, libint/1.1.4
> >> 
> >> my arch file (Linux-x86-64-intel.popt):
> >> 
> >> 
> >> INTEL_LIB = $(INTEL_MKL)/lib/intel64
> >> INTEL_INC = -I$(INTEL_MKL)/include -I$(INTEL_MKL)/include/fftw
> >> INTEL_SCALAPACK = -L$(INTEL_LIB) -lmkl_scalapack_lp64
> >> -lmkl_blacs_openmpi_lp64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
> >> 
> >> LIBINT_PATH = <my_path to libint-1.1.4>
> >> LIBINT_LIB  = <my_path to
> >> cp2k-2.4.0>/tools/hfx_tools/libint_tools/libint_cpp_wrapper.o
> >> $(LIBINT_PATH)/lib/libderiv.a $(LIBINT_PATH)/lib/libint.a -lstdc++
> >> 
> >> 
> >> CC       = icc
> >> CPP      =
> >> FC       = mpif90
> >> LD       = mpif90
> >> AR       = ar -r
> >> DFLAGS   = -D__HAS_NO_ISO_C_BINDING -D__INTEL -D__FFTSG -D__parallel
> >> -D__BLACS -D__SCALAPACK -D__FFTW3 -D__LIBINT -D__FFTMKL
> >> CPPFLAGS = -C -traditional $(DFLAGS) -I$(INTEL_INC)
> >> FCFLAGS  = $(DFLAGS) -I$(INTEL_INC) -O1 -heap-arrays 64 -fpp -free
> >> FCFLAGS2 = $(DFLAGS) -I$(INTEL_INC) -O1 -heap-arrays 64 -fpp -free
> >> LDFLAGS  = $(FCFLAGS)
> >> LIBS     = -L$(INTEL_LIB) -mkl $(LIBINT_LIB) $(INTEL_SCALAPACK)
> >> 
> >> OBJECTS_ARCHITECTURE = machine_intel.o
> >> 
> >> 
> >> graphcon.o: graphcon.F
> >> 
> >>         $(FC) -c $(FCFLAGS2) $<
> >> 
> >> but when I run my test with the following input file:
> >> 
> >> &GLOBAL
> >>   PROJECT test01
> >>   RUN_TYPE MD
> >>   PRINT_LEVEL LOW
> >> &END GLOBAL
> >> &FORCE_EVAL
> >>   METHOD QS
> >>   &DFT
> >>     BASIS_SET_FILE_NAME ../../BASIS_MOLOPT
> >>     POTENTIAL_FILE_NAME ../../GTH_POTENTIALS
> >>     &MGRID
> >>       CUTOFF 400
> >>       NGRIDS 5
> >>     &END MGRID
> >>     &QS
> >>     &END QS
> >>     &SCF
> >>       CHOLESKY OFF
> >>       MAX_SCF 40
> >>       SCF_GUESS ATOMIC
> >>       &OT
> >>         ENERGY_GAP 0.001
> >>         MINIMIZER CG
> >>         PRECONDITIONER FULL_ALL
> >>       &END OT
> >>       &OUTER_SCF ON
> >>         EPS_SCF 1.0E-6
> >>         MAX_SCF 10
> >>       &END OUTER_SCF
> >>       &PRINT
> >>         &RESTART
> >>         &END RESTART
> >>         &RESTART_HISTORY
> >>         &END RESTART_HISTORY
> >>       &END PRINT
> >>     &END SCF
> >>     &XC
> >>       &XC_FUNCTIONAL PBE
> >>       &END XC_FUNCTIONAL
> >>       &VDW_POTENTIAL
> >>         POTENTIAL_TYPE PAIR_POTENTIAL
> >>         &PAIR_POTENTIAL
> >>           REFERENCE_FUNCTIONAL PBE
> >>           PARAMETER_FILE_NAME my_parameter.dat
> >>           TYPE DFTD3
> >>           R_CUTOFF [angstrom] 30
> >>         &END PAIR_POTENTIAL
> >>       &END VDW_POTENTIAL
> >>     &END XC
> >>   &END DFT
> >>   &SUBSYS
> >>     &CELL
> >>       ABC 16.826 16.136 30.0
> >>     &END CELL
> >>     &TOPOLOGY
> >>       COORDINATE XYZ
> >>       COORD_FILE_NAME ./test01_ini.xyz
> >>       CONNECTIVITY OFF
> >>     &END TOPOLOGY
> >>     ...
> >>   &END SUBSYS
> >> &END FORCE_EVAL
> >> &MOTION
> >>   &MD
> >>     ENSEMBLE NVE
> >>     STEPS 10000
> >>     TIMESTEP 0.5
> >>     TEMPERATURE 310.0
> >>     TEMP_TOL 50.0
> >>   &END MD
> >>   &PRINT
> >>     &RESTART
> >>       &EACH
> >>         MD 1
> >>       &END EACH
> >>     &END RESTART
> >>     &TRAJECTORY
> >>       &EACH
> >>         MD 1
> >>       &END EACH
> >>     &END TRAJECTORY
> >>   &END PRINT
> >> &END MOTION
> >> 
> >> cp2k.popt exits with:
> >> 
> >>  *************************************************************************
> >>  *** 11:26:10 ERRORL2 in cp_dbcsr_cholesky:cp_dbcsr_cholesky_decompose ***
> >>  *** processor 0  ::  err=-300 condition FAILED at line 123             ***
> >>  *************************************************************************
> >>  
> >>  
> >>  ===== Routine Calling Stack =====
> >>  
> >>            11 cp_dbcsr_cholesky_decompose
> >>            10 qs_ot_get_derivative
> >>             9 ot_mini
> >>             8 ot_scf_mini
> >>             7 qs_scf_loop_do_ot
> >>             6 scf_env_do_scf_inner_loop
> >>             5 scf_env_do_scf
> >>             4 qs_energies_scf
> >>             3 qs_forces
> >>             2 qs_mol_dyn_low
> >>             1 CP2K
> >> 
> >> but the same run goes fine with a cp2k.popt compiled with the Intel
> >> compilers, openmpi/1.4.4, mkl/10.2.2, libint/1.1.4.
> >> 
> >> Does anyone know whether cp2k/2.4 doesn't work with openmpi/1.6.5?
> >> Any suggestions?
> >> 
> >> Thanks for the help!
> >> 
> >> Cristiano
> > 
> > -=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-=*=-
> > =*=-
> > 
> >   Ari P Seitsonen / Ari.P... at iki.fi / http://www.iki.fi/~apsi/
> > 
> >   Physikalisch-Chemisches Institut der Universität Zürich
> >   Tel: +41 44 63 55 44 97  /  Mobile: +41 79 71 90 935

-- 
*************************************************************
Jörg Saßmannshausen
University College London
Department of Chemistry
Gordon Street
London
WC1H 0AJ 

email: j.sassma... at ucl.ac.uk
web: http://sassy.formativ.net

Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html



