[CP2K:3300] regression?

Alin Marin Elena alinm... at gmail.com
Fri Jun 17 22:57:06 UTC 2011


I did not manage... sorry

On 17 June 2011 23:56, Alin Marin Elena <alinm... at gmail.com> wrote:
> Hi Urban,
>
> Thank you very much for your answer!
> I have tried with gcc 4.5.1, with all the rest of the environment the
> same... and I still get the same problem.
> What is your environment, please?
> I have not managed to get a backtrace yet... as the segfault seems
> to be somewhere in an external library... I will try later with -O0
> -g -fcheck=all -fbacktrace...
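>
> Something along these lines for the debug FCFLAGS (just a sketch; the
> flag names are the gfortran ones, and with ifort the rough equivalents
> would be -check all -traceback):
>
> FCFLAGS  = $(DFLAGS) -O0 -g -fcheck=all -fbacktrace -ffree-form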
>
> regards,
> Alin
>
>
> On 17 June 2011 21:55, Urban Borštnik <urban.b... at gmail.com> wrote:
>> Dear Alin,
>>
>> I ran your input with a gfortran-compiled version of the latest cvs (the
>> 2.2 development branch) with 24 and 48 processes and could not reproduce
>> a crash (with bounds checking).
>>
>> The first thing I would try is to reduce the optimization level, either
>> for all files or selectively for "distribution_1d_types.F" and
>> "distribution_methods.F" (in the same way as for "graphcon.F"--take a
>> look at your arch file).
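>>
>> For example, two extra rules appended to the arch file (a sketch that
>> simply mirrors the existing graphcon.o rule and reuses the
>> lower-optimization FCFLAGS2 already defined there):
>>
>> distribution_1d_types.o: distribution_1d_types.F
>>         $(FC) -c $(FCFLAGS2) $<
>>
>> distribution_methods.o: distribution_methods.F
>>         $(FC) -c $(FCFLAGS2) $<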
>>
>> The other thing is to get a stack trace to see where the segmentation
>> fault occurs...
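>>
>> One way to get it (a sketch; I am assuming an mpirun-style launcher and
>> input/output files named HF.inp / HF.out here):
>>
>> ulimit -c unlimited
>> mpirun -np 24 ./cp2k.popt -i HF.inp -o HF.out
>> gdb ./cp2k.popt core     # then "bt" at the gdb prompt shows the backtrace
>>
>> Alternatively, an ifort build with -g -traceback will usually print a
>> traceback on its own when the segmentation fault is hit.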
>>
>> Cheers,
>> Urban
>>
>> On Fri, 2011-06-17 at 20:32 +0100, Alin Marin Elena wrote:
>>> Hi All,
>>>
>>> I have just compiled cp2k from the latest CVS and discovered some very
>>> strange behaviour...
>>> I have a small system of 23 atoms on which I want to perform a simple DFT
>>> calculation, and I try to run it in parallel.
>>> If I use 12 cores I get my numbers... (HF12good)
>>> If I use 24 cores I get a segmentation fault... (HF24broken)
>>> If I use 23 cores I get my numbers... (HF23good)
>>>
>>> When I use an older version of cp2k, a few months old...
>>> If I use 24 cores I get my numbers... (HF24good)
>>>
>>> For all the successful runs I got the same energy:
>>> alin at stokes2:~/playground/alin> grep "ENERGY| " HF*good/HF.out
>>> HF12good/HF.out: ENERGY| Total FORCE_EVAL ( QS ) energy (a.u.):
>>> -146.062369186077234
>>> HF23good/HF.out: ENERGY| Total FORCE_EVAL ( QS ) energy (a.u.):
>>> -146.062369186076523
>>> HF24good/HF.out: ENERGY| Total FORCE_EVAL ( QS ) energy (a.u.):
>>> -146.062369186077234
>>>
>>> Inputs and outputs for all tests can be found in the attachment (or at the
>>> link, if the list rejects it)... to make the attachment smaller I removed
>>> the .cube files.
>>> https://rapidshare.com/files/1317606759/hftests.tar.bz2
>>>
>>> All the binaries are compiled with the same settings, compilers, and libs:
>>> alin at stokes2:~/playground/alin/HF24broken> module list
>>> Currently Loaded Modulefiles:
>>>   1) intel-mkl/10.2.6.038   2) intel-fc/2011.1.107    3) intel-cc/2011.1.107
>>> 4) mpt/2.01 (sgi mpi implementation)
>>>
>>> alin at stokes2:~/playground/cp2k> cat arch/stokes-intel.popt
>>> # this works on my setup on abbaton and baphomet
>>> # intel toolchain/mkl/openmpi... parallel threaded and optimised
>>> #
>>> #
>>> CC       = icc
>>> CPP      =
>>> FC       = ifort
>>> LD       = ifort
>>> AR       = ar -r
>>> DFLAGS   = -D__INTEL -D__FFTSG -D__parallel -D__BLACS -D__SCALAPACK -D__FFTMKL -I$(MPI_ROOT)/include
>>> CPPFLAGS =
>>> FCFLAGS  = $(DFLAGS)  -O3 -xSSE4.2 -heap-arrays  -fpp -free
>>> FCFLAGS2 = $(DFLAGS)  -O1  -heap-arrays  -fpp -free
>>> LDFLAGS  = $(FCFLAGS) -L${MKLROOT}/lib/em64t
>>> LIBS     = -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lmkl_blacs_sgimpt_lp64 -lmpi
>>>
>>> OBJECTS_ARCHITECTURE = machine_intel.o
>>>
>>>
>>> graphcon.o: graphcon.F
>>>         $(FC) -c $(FCFLAGS2) $<
>>>
>>>
>>>
>>> Any suggestions are much appreciated,
>>>
>>> regards,
>>>
>>> Alin
>>>
>>
>>
>>
>>
>


