Problem running MPI version of NPT
Rad
rad.... at arl.army.mil
Mon Dec 17 13:28:02 UTC 2007
Dear All,
This issue also popped up in an optimization run. I finally managed to
complete the run by bumping the value of MPI_GROUP_MAX up to 4096, so
this issue is closed.
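For reference, this just means setting MPI_GROUP_MAX=4096 in the
environment before the job starts. A rough, illustrative C sketch of
doing the equivalent from inside a program (assuming the MPI library
reads the variable during MPI_Init; in practice one usually just
exports it in the job script) would be:

    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        /* must be set before MPI_Init so the library can see it */
        setenv("MPI_GROUP_MAX", "4096", 1);
        MPI_Init(&argc, &argv);
        /* ... run the calculation ... */
        MPI_Finalize();
        return 0;
    }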
Thanks
Rad
On Nov 2, 9:13 am, Teodoro Laino <teodor... at gmail.com> wrote:
> Rad,
>
> sorry for being a little bit annoying, but unless you give us
> more information it's quite difficult to say anything.
>
> Can you answer the following questions:
>
> 1) After how many steps of NPT MD do you get that error message?
>
> 2) Even if it's a fairly large system, can you please post the input
> file?
>
> Thanks,
> teo
>
> On 2 Nov 2007, at 15:08, Rad wrote:
>
> > Thanks everybody. A few points to share: I have been running MPI
> > calculations (geometry optimization, NVE ensemble, etc.) successfully
> > on the same cluster without any issues (some of the systems are large,
> > with thousands of atoms in the unit cell). I am going to run the
> > graphite2 sample from the regtest with the MPI version and see what
> > happens. I am also compiling CP2K on other architectures, including a
> > Cray machine. Some time next week I will be able to run the same case
> > on all these machines. I have to run the MPI version because I am
> > getting ready to do NPT on a fairly large system, so please keep
> > sending suggestions to try and resolve this issue. Please also let me
> > know which compilers to try; we have pgi, g95, etc.
>
> > Rad
>
> > On Nov 2, 9:00 am, "Nichols A. Romero" <naro... at gmail.com> wrote:
> >> Rad,
>
> >> Is this NPT issue reproducible on other computer platforms?
>
> >> Please test that for us if you can.
>
> >> On 11/2/07, Juerg Hutter <hut... at pci.uzh.ch> wrote:
>
> >>> Hi
>
> >>> this could be a problem of CP2K or the compiler (or the
> >>> MPI installation).
> >>> If it is a problem of CP2K, the obvious question is why
> >>> it didn't show up before. Can you run a small system
> >>> with NPT in parallel? If the error persists please send the
> >>> input. Another thing to test would be if the error
> >>> depends on the number of CPUs.
> >>> CP2K generates and frees MPI groups during the calculation.
> >>> If the frees do not match the creations, it is possible that
> >>> the number of groups keeps increasing (similar to a
> >>> memory leak). It is possible that your input takes a
> >>> new route through the code where this happens.
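> >>> Just to illustrate the pattern (this is only a sketch, not
> >>> CP2K's actual code, and md_step is just a placeholder name):
> >>> each MPI_Comm_split creates a new internal group entry, and
> >>> without the matching MPI_Comm_free the entries accumulate.
> >>>
> >>>     #include <mpi.h>
> >>>
> >>>     static void md_step(MPI_Comm parent, int color, int key)
> >>>     {
> >>>         MPI_Comm sub;
> >>>         /* creates a new communicator and group entry */
> >>>         MPI_Comm_split(parent, color, key, &sub);
> >>>         /* ... work on the sub-communicator ... */
> >>>         /* if this free is missing, the group count grows
> >>>            at every step, like a memory leak */
> >>>         MPI_Comm_free(&sub);
> >>>     }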
>
> >>> Another possibility is that either the compiler or
> >>> the installed MPI library has a broken implementation of
> >>> the freeing of communicators.
>
> >>> regards
>
> >>> Juerg Hutter
>
> >>> ----------------------------------------------------------
> >>> Juerg Hutter Phone : ++41 44 635 4491
> >>> Physical Chemistry Institute FAX : ++41 44 635 6838
> >>> University of Zurich E-mail: hut... at pci.uzh.ch
> >>> Winterthurerstrasse 190
> >>> CH-8057 Zurich, Switzerland
> >>> ----------------------------------------------------------
>
> >>> On Thu, 1 Nov 2007, Rad wrote:
>
> >>>> Dear All,
>
> >>>> I am trying to perform an NPT ensemble run with an MPI-compiled
> >>>> code and run into the following error:
>
> >>>> Please set the environment variable MPI_GROUP_MAX for additional
> >>>> space.
> >>>> MPI has run out of internal group entries.
> >>>> Please set the environment variable MPI_GROUP_MAX for additional
> >>>> space.
> >>>> The current value of MPI_GROUP_MAX is 512
>
> >>>> I have no problem running the calculation with the serially
> >>>> compiled code (I tried both NPT_I and NPT_F). For the MPI run I
> >>>> tried a cell with 56 atoms, expanded it to a supercell with 224
> >>>> atoms, changed the number of ranks to 64, 32, 16, and 8, and varied
> >>>> the temperature (2.5 K, 200 K, 300 K) and pressure (1 bar, 50 bar),
> >>>> and I get the same error.
>
> >>>> The code is compiled on an IA64 Linux cluster using the Intel
> >>>> compiler (version 9.1).
>
> >>>> Please let me know if you have any suggestions. I would also like
> >>>> to know whether the NPT portion has been tested on different MPI
> >>>> architectures. If it has been tested on a particular architecture,
> >>>> let me know and I will run it on the same one.
>
> >>>> Thanks
> >>>> Rad
>
> >> --
> >> Nichols A. Romero, Ph.D.
> >> DoD User Productivity Enhancement and Technology Transfer (PET) Group
> >> High Performance Technologies, Inc.
> >> Reston, VA
> >> 443-567-8328 (C)
> >> 410-278-2692 (O)
>