<div dir="ltr"><div>Hello,</div><div><br></div>I have been using GROMACS with GPU support since 2014. Running CP2K does not show any information about the GPU. I am using Ubuntu 16.04 LTS.</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Sep 20, 2019 at 9:17 PM Pierre Cazade <<a href="mailto:pierre.a...@gmail.com">pierre.a...@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
Hi Nikhil,<br>
<br>
This is an excellent question. I have not tried the GPU version of CP2K
yet; I am currently trying to compile it on the cluster that I am
using.<br>
<br>
Normally, you only need to install the CUDA libraries and set up the
environment variables properly. The executable then detects the
presence of the GPU automatically, provided you have installed the
NVIDIA driver. At least, this is how GROMACS behaves, for
example. Which Linux distribution are you using?<br>
<br>
If you use the GPU, avoid launching too many processes. Ideally, use
one per GPU.<br>
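For instance, here is a minimal sketch of checking GPU visibility before launching. It assumes the standard NVIDIA tools are on the PATH; the binary name and input/output file names are placeholders:<br>
<br>
```shell
# List the GPUs seen by the driver, if the NVIDIA tools are installed.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -L
fi

# Restrict the job to the first GPU and use one thread per GPU.
export CUDA_VISIBLE_DEVICES=0
export OMP_NUM_THREADS=1
# cp2k.psmp -i inp.inp -o out.out
```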
<br>
Regards,<br>
Pierre<br>
<br>
PS: Regarding your previous post: rather than "mpirun -n 2", try
"mpirun -np 2". Finally, for a multi-node calculation on a
cluster, you can use "mpirun -np 8 -ppn 2". The "-np" flag tells mpirun
the total number of MPI processes requested and "-ppn" tells it how
many processes per node you want. In the present example, I am using 4
nodes and I want 2 MPI processes on each of them, so a total of 8. Of
course, don't forget to set OMP_NUM_THREADS as well. <br>
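Putting those pieces together, a sketch of such a multi-node launch might look as follows (the node and per-node counts are the ones from this example; the cp2k.psmp binary and the file names are placeholders, and the threads-per-process count assumes 12 free cores per process):<br>
<br>
```shell
# 4 nodes x 2 MPI processes per node = 8 MPI processes in total.
NODES=4
PPN=2
NP=$((NODES * PPN))
echo "total MPI processes: $NP"

# OpenMP threads per MPI process (assumption: 12 cores available per process).
export OMP_NUM_THREADS=12

# mpirun -np $NP -ppn $PPN cp2k.psmp -i inp.inp -o out.out
```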
<br>
<br>
<br>
<div class="gmail-m_-7961939612517273423moz-cite-prefix">On 20/09/2019 16:29, Nikhil Maroli
wrote:<br>
</div>
<blockquote type="cite">
<div dir="auto">Thank you very much for your reply.
<div dir="auto">Could you please tell me how to use the GPU in CP2K?</div>
<div dir="auto">I have installed all the libraries and compiled
with CUDA, but I couldn't find any instructions on assigning the GPU
to the calculations.</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Fri, Sep 20, 2019, 8:15 PM
Pierre Cazade <<a href="mailto:pierre.a...@gmail.com" target="_blank">pierre.a...@gmail.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF"> Hello Nikhil,<br>
<br>
With the command "mpirun -n 42 cp2k.pop -i inp.inp -o
-out.out", you are requesting 42 MPI processes, not 42
OpenMP threads. MPI usually relies on replicated data, which
means that, for poorly programmed software, it will request a
total amount of memory equal to the amount of memory required
by a serial execution times the number of processes. This can
very quickly become problematic, in particular for QM
calculations. OpenMP, however, relies on shared memory: the
data is normally not replicated but shared between threads,
and therefore, in an ideal scenario, the amount of memory
needed for 42 OpenMP threads is the same as for a single one.<br>
<br>
This might explain why your calculation freezes: you are out
of memory. On your workstation, you should only use the
executable "cp2k.ssmp", which is the OpenMP version. Then you
don't need the mpirun command:<br>
<br>
cp2k.ssmp -i inp.inp -o out.out<br>
<br>
To control the number of OpenMP threads, set the environment
variable OMP_NUM_THREADS, e.g. in bash: export
OMP_NUM_THREADS=48<br>
<br>
Now, if you need to balance between MPI and OpenMP, you
should use the executable named cp2k.psmp. Here is such an
example:<br>
<br>
export OMP_NUM_THREADS=24<br>
mpirun -n 2 cp2k.psmp -i inp.inp -o out.out<br>
<br>
In this example, I am requesting two MPI processes, each of
which can use up to 24 OpenMP threads.<br>
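As a sanity check, the process and thread counts should multiply to no more than the available hardware threads. A small sketch, assuming the 48-thread workstation described below (the cp2k.psmp invocation is commented out as a placeholder):<br>
<br>
```shell
RANKS=2                    # MPI processes
export OMP_NUM_THREADS=24  # OpenMP threads per process
HW_THREADS=48              # 2 x 12-core CPUs with hyper-threading, as on this workstation
TOTAL=$((RANKS * OMP_NUM_THREADS))
echo "using $TOTAL of $HW_THREADS hardware threads"
# mpirun -n $RANKS cp2k.psmp -i inp.inp -o out.out
```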
<br>
Hope this clarifies things for you.<br>
<br>
Regards,<br>
Pierre<br>
<br>
<div class="gmail-m_-7961939612517273423m_-6364226107811981161moz-cite-prefix">On
20/09/2019 14:09, Nikhil Maroli wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Dear all,
<div><br>
</div>
<div>I have installed all the versions of CP2K on my
workstation with 2 x 12-core processors, 48 threads in
total.</div>
<div><br>
</div>
<div>I want to run CP2K in parallel using 42 threads.
Could anyone share the commands that I can use?</div>
<div><br>
</div>
<div>I have tried </div>
<div><br>
</div>
<div>mpirun -n 42 cp2k.pop -i inp.inp -o -out.out</div>
<div><br>
</div>
<div>After this command, memory usage rises to 100%
and the whole system freezes. (I have 128 GB of RAM.)</div>
<div><br>
</div>
<div>Any suggestion will be greatly appreciated,</div>
</div>
-- <br>
You received this message because you are subscribed to
the Google Groups "cp2k" group.<br>
To unsubscribe from this group and stop receiving emails
from it, send an email to <a href="mailto:cp...@googlegroups.com" rel="noreferrer" target="_blank">cp...@googlegroups.com</a>.<br>
To view this discussion on the web visit <a href="https://groups.google.com/d/msgid/cp2k/39284c57-f6eb-463e-81a6-3a123596a9f2%40googlegroups.com?utm_medium=email&utm_source=footer" rel="noreferrer" target="_blank">https://groups.google.com/d/msgid/cp2k/39284c57-f6eb-463e-81a6-3a123596a9f2%40googlegroups.com</a>.<br>
</blockquote>
<br>
<pre class="gmail-m_-7961939612517273423m_-6364226107811981161moz-signature" cols="72">--
Dr Pierre Cazade, PhD
AD3-023, Bernal Institute,
University of Limerick,
Plassey Park Road,
Castletroy, co. Limerick,
Ireland</pre>
</div>
</blockquote>
</div>
</blockquote>
<br>
</div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Regards,</div>Nikhil Maroli<div><br></div></div></div></div></div>