<div dir="ltr">Hi David,<div>Thank you so much for the arch file, but I could not install it properly. I have asked our cluster administrator for the MPI path, and I will try to compile it. However, I have successfully installed cp2k-2.5 on our native cluster, where MKL and the MPI libraries are installed properly. Still, there is a problem with MPI: when I tried to run a test file, I got the following error</div><div>"********************************************</div><div> *** ERROR in open_file (MODULE cp_files) ***</div><div> ********************************************</div><div><br></div><div> *** The specified OLD file <H2O-3x3.inp> cannot be opened. It does not ***</div><div> *** exist. ***</div><div><br></div><div> *** Program stopped at line number 375 of MODULE cp_files ***</div><div><br></div><div> ===== Routine Calling Stack =====</div><div><br></div><div> 1 create_cp2k_input_reading</div><div> CP2K| Abnormal program termination, stopped by process number 0</div><div>--------------------------------------------------------------------------</div><div>MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD</div><div>with errorcode 1.</div><div><br></div><div>NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.</div><div>You may or may not see output from other processes, depending on</div><div>exactly when Open MPI kills them."</div><div><br></div><div>Can you kindly tell me why I got this error? </div><div>Thank you very much,</div><div>Kalyanashis Jana</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jul 28, 2015 at 3:05 PM, Rolf David <span dir="ltr"><<a href="mailto:rolf.d...@gmail.com" target="_blank">rolf.d...@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>
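A note on the abort above: despite the MPI_ABORT lines, this is a file error, not an MPI failure. CP2K simply could not find H2O-3x3.inp in the directory it was launched from. A minimal shell sketch of the check (the filename is taken from the error message; the mpirun line at the end is illustrative only):

```shell
# The "specified OLD file ... cannot be opened" abort means the input
# file is not in the run directory. Reproduce the situation and the fix:
workdir=$(mktemp -d)
cd "$workdir"

check() { [ -f H2O-3x3.inp ] && echo "input found" || echo "input missing"; }

check    # prints "input missing": this is what triggers the CP2K abort
printf '&GLOBAL\n&END GLOBAL\n' > H2O-3x3.inp   # put the real input here
check    # prints "input found"
# a launch such as: mpirun -np 4 cp2k.popt H2O-3x3.inp  would now open it
```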
<p>I don't know how to properly compile those two. But if you have <span style="color:rgb(136,136,136)">mpiicc/mpiicpc/</span><span style="color:rgb(136,136,136)">mpiifort </span> in the /home1/bganguly/intel/compilers_and_libraries_2016.0.079/linux/mpi/intel64/bin directory. </p><p><br></p><p>Try this arch file (hoping the folder layout didn't change much from 11 to 14):</p><p><br></p><p></p></div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">INTEL_DIR=Where icc is probably: /opt/intel/Compiler/11.1/073/ (find / -name "icc")<br>INTEL_INC=$(INTEL_DIR)/include<br> INTEL_LIB=$(INTEL_DIR)/lib/intel64<br>MKL_DIR=/home1/bganguly/intel/mkl (or where mkl is, even in opt; try find / -name "libfftw3xf_intel.a", you'll find the FFTW_LIB dir and you can go back to the mkl root)<br> MKL_INC=$(MKL_DIR)/include<br> MKL_LIB=$(MKL_DIR)/lib/intel64<br>FFTW_INC=$(MKL_INC)/fftw<br>FFTW_LIB=$(MKL_DIR)/interfaces/fftw3xf<br>LIBXC_DIR=/home1/bganguly/libxc-2.2.2<br>LIBXC_INC=$(LIBXC_DIR)/include<br>LIBXC_LIB=$(LIBXC_DIR)/lib<br>INC=-I$(INTEL_INC) -I$(MKL_INC) -I$(FFTW_INC) -I$(LIBXC_INC)<span class=""><br><br>CC = icc<br>CPP =<br></span>FC = mpiifort<br>LD = mpiifort<span class=""><br>AR = ar -r<br>DFLAGS = -D__INTEL -D__FFTSG -D__parallel -D__BLACS -D__SCALAPACK -D__FFTW3 -D__LIBXC2<br>CPPFLAGS =<br></span>FCFLAGS = $(DFLAGS) $(INC) -O3 -msse2 -heap-arrays 64 -funroll-loops -fpp -free<br>FCFLAGS2 = $(DFLAGS) $(INC) -O1 -msse2 -heap-arrays 64 -fpp -free<br>LDFLAGS = $(FCFLAGS)<br> <br>LIBS = -L$(MKL_LIB) -Wl,-rpath,$(MKL_LIB) \<br> -lmkl_scalapack_lp64 \<br> -lmkl_blacs_intelmpi_lp64 -lmkl_intel_lp64 \<br> -lmkl_sequential -lmkl_core \<br> -lstdc++ \<br> $(FFTW_LIB)/libfftw3xf_intel.a \<br> $(LIBXC_LIB)/libxcf90.a $(LIBXC_LIB)/libxc.a \<br> -lpthread -lm<span class=""><br>graphcon.o: graphcon.F<br> $(FC) -c $(FCFLAGS2) 
$<"</span></blockquote><br><div><div><br></div></div><div><br></div><div>Of course, I don't know which files need the O3/O1 split for Intel 11.</div><div>But for 14, it's the following (and it's built with libint, so qs_vxc behaves differently without it, I guess):</div><div><br></div><div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">FCFLAGS = $(DFLAGS) $(INC) -O2 -heap-arrays 64 -funroll-loops -fpp -free (+ specific processor vectorisation/opt like your -msse2)<br>FCFLAGS2 = $(DFLAGS) $(INC) -O1 -heap-arrays 64 -fpp -free (+ specific processor vectorisation/opt like your -msse2)<br>FCFLAGS3 = $(DFLAGS) $(INC) -O0 -heap-arrays 64 -fpp -free (+ specific processor vectorisation/opt like your -msse2)<br># In order to avoid segv when HF exchange for example<br>qs_vxc_atom.o: qs_vxc_atom.F<br> $(FC) -c $(FCFLAGS2) $<<br># <a href="https://groups.google.com/forum/#!topic/cp2k/G67XV-dyk5E" target="_blank">https://groups.google.com/forum/#!topic/cp2k/G67XV-dyk5E</a><br># -O1 on Intel Compiler<br>external_potential_types.o: external_potential_types.F<br> $(FC) -c $(FCFLAGS2) $<<br>qs_linres_current.o: qs_linres_current.F<br> $(FC) -c $(FCFLAGS2) $<<br># <a href="https://groups.google.com/forum/#!topic/cp2k/G67XV-dyk5E" target="_blank">https://groups.google.com/forum/#!topic/cp2k/G67XV-dyk5E</a><br># -O0 on Intel Compiler<br>mp2_optimize_ri_basis.o: mp2_optimize_ri_basis.F<br> $(FC) -c $(FCFLAGS3) $<</blockquote><span class="HOEnZb"><font color="#888888">
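The per-file rules above use a plain make mechanism: an explicit rule for a single object overrides the generic compile rule, so only the listed files drop to -O1/-O0 while everything else keeps the default flags. A toy sketch of that behaviour (echo stands in for the real compiler; the file names are borrowed from the rules above):

```shell
# Toy demo of per-file flag overrides, as used in the arch file above:
# an explicit rule for one object beats the %.o pattern rule.
tmp=$(mktemp -d); cd "$tmp"
cat > Makefile <<'EOF'
FCFLAGS  = -O3
FCFLAGS2 = -O1
%.o: %.F ; @echo "compiling $< with $(FCFLAGS)"
# this one file is compiled at -O1 to work around a compiler bug:
qs_vxc_atom.o: qs_vxc_atom.F ; @echo "compiling $< with $(FCFLAGS2)"
EOF
touch qs_vxc_atom.F qs_ks_methods.F
out=$(make -s qs_vxc_atom.o qs_ks_methods.o)
echo "$out"
```

Only qs_vxc_atom.o picks up FCFLAGS2; the other object still compiles with the default -O3 flags.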
<p>
</p></font></span></div><span class="HOEnZb"><font color="#888888"><div><br></div><div>Rolf</div></font></span><div><div class="h5"><br>On Tuesday, July 28, 2015 at 10:59:38 AM UTC+2, Kalyanashis Jana wrote:<blockquote class="gmail_quote" style="margin:0;margin-left:0.8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div>Hi David,<br><div class="gmail_quote"><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">If you have installed Intel MPI somewhere in /home1/bganguly/intel, there should be an impi folder (like there is a mkl folder).<div>By default it's like
<p>intel/impi/$IMPI_VERSION/intel64/bin</p><p><br></p></div></div></blockquote><div>Yeah, I have installed the MPI library in the /home1/bganguly/intel directory... </div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><p></p><p>And there you have: mpirun/mpiexec/mpiicc/mpiicpc/mpiifort.</p><p><br></p></div></div></blockquote><div>These commands are there in the /home1/bganguly/intel/compilers_and_libraries_2016.0.079/linux/mpi/intel64/bin directory. </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><p></p><p>The MPI you have (sgi/mpt/mpt-2.01/) is the SGI Message Passing Toolkit (SGI MPI, if I'm correct).</p><p><br></p><p>Which version of Intel MPI did you install? Part of a package (like Intel Parallel Studio XE Cluster) or as a separate bundle? What is your version of the Intel compilers (11? 12? 14? 15?)</p><p>Or are you trying to use an MPI (SGI or other) with the Intel compilers?</p></div></div></blockquote><div><br></div><div>I have installed the MKL library as a separate bundle, and the version was 11. But I installed the MKL library and the MPI library in the same directory, and after that I cannot find the MKL-related folders such as mkl. Can you please tell me the proper way to compile these two libraries?</div><div><br></div></div><div><div dir="ltr"><div><div>Thanks with regards,</div><div dir="ltr">Kalyanashis Jana<br></div></div></div></div>
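To sort out which MPI toolchain is actually being picked up, it can help to source Intel's environment scripts and then check where each wrapper resolves. The install root below is the path from this thread, and the script names/locations are an assumption based on a typical Intel 2016 layout, so adjust as needed:

```shell
# Hypothetical install root (path taken from this thread); adjust to yours.
INTEL_ROOT=/home1/bganguly/intel/compilers_and_libraries_2016.0.079/linux

# Source the environment scripts if present (locations assumed for this
# Intel version); they prepend the right bin/lib dirs to PATH and
# LD_LIBRARY_PATH.
[ -f "$INTEL_ROOT/bin/compilervars.sh" ] && . "$INTEL_ROOT/bin/compilervars.sh" intel64
[ -f "$INTEL_ROOT/mpi/intel64/bin/mpivars.sh" ] && . "$INTEL_ROOT/mpi/intel64/bin/mpivars.sh"

# Report where each wrapper resolves; an Intel-MPI-built cp2k launched
# with SGI MPT's mpirun is a classic source of startup failures.
report=$(for tool in mpiifort mpiicc mpirun; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $(command -v "$tool")"
  else
    echo "$tool: not on PATH"
  fi
done)
echo "$report"
```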
</div></div>
</blockquote></div></div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><div dir="ltr"><div><div>Thanks with regards</div><div dir="ltr">Kalyanashis Jana<br></div></div></div></div>
</div>