[CP2K:3300] regression?
Alin Marin Elena
alinm... at gmail.com
Sat Jun 18 15:31:03 UTC 2011
Hi Urban,
I have got it running, thank you!
Using openmpi instead of mpt seems to have done the trick... for both gcc
4.5.1 and intel 2011.1.107.
Since you mentioned bounds checking before: when using the Intel compilers
(both 2011.1.107 and 2011.4.191) it fails with a backtrace of the form (for
this example):
forrtl: severe (408): fort: (2): Subscript #1 of the array RECV_META has value 1 which is greater than the upper bound of 0
Image        PC                Routine            Line     Source
cp2k.p0      000000000897FCEA  Unknown            Unknown  Unknown
cp2k.p0      000000000897E865  Unknown            Unknown  Unknown
cp2k.p0      0000000008920C96  Unknown            Unknown  Unknown
cp2k.p0      00000000088D5005  Unknown            Unknown  Unknown
cp2k.p0      00000000088D5459  Unknown            Unknown  Unknown
cp2k.p0      000000000828160F  dbcsr_transformat  2771     dbcsr_transformations.F
cp2k.p0      0000000000429E94  cp_dbcsr_interfac  644      cp_dbcsr_interface.F
cp2k.p0      00000000038C28C8  cp_dbcsr_operatio  2170     cp_dbcsr_operations.F
cp2k.p0      0000000003891D09  cp_dbcsr_operatio  640      cp_dbcsr_operations.F
cp2k.p0      000000000584BA56  qs_initial_guess_  493      qs_initial_guess.F
cp2k.p0      00000000017EB7BC  qs_scf_mp_scf_env  2264     qs_scf.F
cp2k.p0      00000000017D857C  qs_scf_mp_init_sc  1791     qs_scf.F
cp2k.p0      00000000017B2DEC  qs_scf_mp_scf_     366      qs_scf.F
cp2k.p0      0000000001097D60  qs_energy_mp_qs_e  224      qs_energy.F
cp2k.p0      0000000001095561  qs_energy_mp_qs_e  115      qs_energy.F
cp2k.p0      00000000010D16B6  qs_force_mp_qs_fo  227      qs_force.F
cp2k.p0      000000000062FCEF  force_env_methods  219      force_env_methods.F
cp2k.p0      0000000000413642  cp2k_runs_mp_cp2k  331      cp2k_runs.F
cp2k.p0      00000000004271E7  cp2k_runs_mp_run_  1115     cp2k_runs.F
cp2k.p0      000000000040EEE4  MAIN__             291      cp2k.F
cp2k.p0      000000000040BCAC  Unknown            Unknown  Unknown
libc.so.6    00007F07F4EBFBC6  Unknown            Unknown  Unknown
cp2k.p0      000000000040BBA9  Unknown            Unknown  Unknown
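[For what it's worth, a minimal sketch that reproduces this class of runtime
error, assuming some array ends up allocated with zero size; the name
recv_meta is borrowed from the message above, not from CP2K's actual code.
Compiled with ifort -check bounds it aborts with the same forrtl severe (408)
report:]

  program bounds_demo
    implicit none
    integer, allocatable :: recv_meta(:)
    ! a zero-sized allocation is perfectly legal in Fortran...
    allocate(recv_meta(0))
    ! ...but reading element 1 exceeds the upper bound of 0, which
    ! ifort's -check bounds reports as forrtl: severe (408)
    print *, recv_meta(1)
  end program bounds_demo

So the question is presumably why one of DBCSR's receive buffers ends up with
extent 0 at this process count.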
Any ideas?
regards,
Alin
On Fri 17 Jun 2011 22:55:50 Urban Borštnik wrote:
> Dear Alin,
>
> I ran your input with a gfortran-compiled version of the latest cvs (the
> 2.2 development branch) with 24 and 48 processes and could not reproduce
> a crash (with bounds checking).
>
> The first thing I would try is to either reduce the optimization level
> for all files or selectively for "distribution_1d_types" and
> "distribution_methods.F" (in the same way as for "graphcon.F"--take a
> look at your arch file).
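[A minimal sketch of such a per-file override, following the existing
graphcon.F rule in the arch file quoted below; FCFLAGS2 is the -O1 variant
defined there, and the target names assume the sources are
distribution_1d_types.F and distribution_methods.F:]

  distribution_1d_types.o: distribution_1d_types.F
  	$(FC) -c $(FCFLAGS2) $<

  distribution_methods.o: distribution_methods.F
  	$(FC) -c $(FCFLAGS2) $<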
>
> The other thing is to get a stack trace to see where the segmentation
> fault occurs...
>
> Cheers,
> Urban
>
> On Fri, 2011-06-17 at 20:32 +0100, Alin Marin Elena wrote:
> > Hi All,
> >
> > I have just compiled cp2k, the latest CVS, and I discovered some very
> > strange behaviour...
> > I have a small system, 23 atoms, and I want to perform a simple DFT
> > calculation. I try to run it in parallel.
> > If I use 12 cores I get my numbers... (HF12good)
> > If I use 24 cores I get a segmentation fault... (HF24broken)
> > If I use 23 cores I get my numbers... (HF23good)
> >
> > When I use an older version of cp2k, from a few months ago:
> > If I use 24 cores I get my numbers... (HF24good)
> >
> > For all the successful runs I got the same energy:
> > alin at stokes2:~/playground/alin> grep "ENERGY| " HF*good/HF.out
> > HF12good/HF.out: ENERGY| Total FORCE_EVAL ( QS ) energy (a.u.): -146.062369186077234
> > HF23good/HF.out: ENERGY| Total FORCE_EVAL ( QS ) energy (a.u.): -146.062369186076523
> > HF24good/HF.out: ENERGY| Total FORCE_EVAL ( QS ) energy (a.u.): -146.062369186077234
> >
> > Inputs and outputs for all tests can be found in the attachment (or at
> > the link below, if the list rejects it)... to make the attachment
> > smaller I removed the .cube files.
> > https://rapidshare.com/files/1317606759/hftests.tar.bz2
> >
> > All the binaries are compiled with the same settings, compilers, and libs:
> > alin at stokes2:~/playground/alin/HF24broken> module list
> >
> > Currently Loaded Modulefiles:
> >   1) intel-mkl/10.2.6.038   2) intel-fc/2011.1.107   3) intel-cc/2011.1.107
> >   4) mpt/2.01  (sgi mpi implementation)
> >
> > alin at stokes2:~/playground/cp2k> cat arch/stokes-intel.popt
> > # this works on my setup on abbaton and baphomet
> > # intel toolchain/mkl/openmpi... parallel threaded and optimised
> > #
> > #
> > CC = icc
> > CPP =
> > FC = ifort
> > LD = ifort
> > AR = ar -r
> > DFLAGS = -D__INTEL -D__FFTSG -D__parallel -D__BLACS -D__SCALAPACK \
> >          -D__FFTMKL -I$(MPI_ROOT)/include
> > CPPFLAGS =
> > FCFLAGS = $(DFLAGS) -O3 -xSSE4.2 -heap-arrays -fpp -free
> > FCFLAGS2 = $(DFLAGS) -O1 -heap-arrays -fpp -free
> > LDFLAGS = $(FCFLAGS) -L$(MKLROOT)/lib/em64t
> > LIBS = -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_sequential \
> >        -lmkl_core -lmkl_blacs_sgimpt_lp64 -lmpi
> >
> > OBJECTS_ARCHITECTURE = machine_intel.o
> >
> >
> > graphcon.o: graphcon.F
> > 	$(FC) -c $(FCFLAGS2) $<
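[Given that switching from mpt to openmpi fixed the runs, the matching change
in this arch file would presumably be linking MKL's OpenMPI BLACS layer in
place of the SGI MPT one; a sketch, assuming MKL 10.2's em64t library naming:]

  LIBS = -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_sequential \
         -lmkl_core -lmkl_blacs_openmpi_lp64 -lmpi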
> >
> > Any suggestion much appreciated,
> >
> > regards,
> >
> > Alin
--
Without Questions there are no Answers!
_____________________________________________________________________
Alin Marin ELENA
Advanced Molecular Simulation Research Laboratory
School of Physics, University College Dublin
----
Ardionsamblú Móilíneach Saotharlann Taighde
Scoil na Fisice, An Coláiste Ollscoile, Baile Átha Cliath
-----------------------------------------------------------------------------------
http://alin.elenaworld.net
______________________________________________________________________