<p class="MsoNormal"><span style="mso-fareast-language:EN-US">Hi Matthew,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="mso-fareast-language:EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="mso-fareast-language:EN-US">Unfortunately, there’s no single way to determine the best MPI/OpenMP load. It is system, calculation type, and hardware dependant. I recommend testing the performance. The first thing you could
try is check if your CPUs are multithreaded. For example, if they are made of 34 cores and 2 virtual cores per physical core (68 virtual cores in total), you could try OMP_NUM_THREADS=2 and keep your mpirun -np (34*#nodes).<o:p></o:p></span></p>
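
As a minimal sketch of such a hybrid submission under Slurm, adapting your current script (quoted below) and assuming your 34-core nodes do expose two hardware threads per core (the node count and file names here are placeholders to adapt):

#!/bin/bash
# one MPI process per physical core, two OpenMP threads per process
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=34
#SBATCH --cpus-per-task=2

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
mpirun -np $SLURM_NTASKS cp2k.psmp -i input.inp -o output.out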
<p class="MsoNormal"><span style="mso-fareast-language:EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="mso-fareast-language:EN-US">Roughly speaking, MPI creates multiple replica of the calculation (called process), each replica dealing with part of the calculation. CP2K is efficiently parallelized with MPI. OpenMP generated
multiple threads on the fly, generally to parallelize a loop. OpenMP can be used in a MPI thread but not the other way around. Typically, having more MPI processed consumes more memory than the same number of OpenMP threads. To use multiple nodes, MPI is mandatory
and more efficient. These are generalities and, again, combining both is best but the ideal ratio varies. Testing is the best course of action, check which combination yields the largest number of ps/day with the minimum hardware resources. Doubling the hardware
does not double the output, so increasing the number of nodes becomes a waste of resources at some point. A rule of thumb, if the increase in output is less than 75-80% of the ideal case, then, it is not worth it.<o:p></o:p></span></p>
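
Since only measurement settles it, one way to proceed is a brute-force scan of the possible splits of a single node. A hypothetical sketch in bash (the 34 cores per node come from your message; the input name is a placeholder, and the test trajectory should be short):

# try every MPI x OpenMP split of a 34-core node (processes * threads = 34)
for omp in 1 2 17 34; do
    np=$((34 / omp))
    export OMP_NUM_THREADS=$omp
    mpirun -np $np cp2k.psmp -i input.inp -o bench_${np}x${omp}.out
done
# then compare the time per MD step (or ps/day) reported in each output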
<p class="MsoNormal"><span style="mso-fareast-language:EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="mso-fareast-language:EN-US">As you can see, there is a lot of try and error, no systematic rule I am afraid.<o:p></o:p></span></p>
<p class="MsoNormal"><span style="mso-fareast-language:EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="mso-fareast-language:EN-US">Regards,<o:p></o:p></span></p>
<p class="MsoNormal"><span style="mso-fareast-language:EN-US">Pierre<o:p></o:p></span></p>
<p class="MsoNormal"><span style="mso-fareast-language:EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="mso-fareast-language:EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="mso-fareast-language:EN-US"><o:p> </o:p></span></p>

From: cp2k@googlegroups.com <cp2k@googlegroups.com> on behalf of Matthew Graneri <mhvg1994@gmail.com>
Date: Wednesday, 18 May 2022 at 10:35
To: cp2k <cp2k@googlegroups.com>
Subject: Re: [CP2K:16997] Running Cp2k in parallel using thread in a PC
<p class="MsoNormal">Hi Pierre,<o:p></o:p></p>
<div>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
<div>
<p class="MsoNormal">I found this really valuable! Unfortunately, being very new to AIMD and very unfamiliar with computation in general, I was wondering if I might be able to get some advice? We have a HPC at my university where each node has 34 processors,
and ~750 GB RAM available for use. It runs on a slurm queuing system.<o:p></o:p></p>

Until now, I've run all my jobs using:

mpirun -np $SLURM_NTASKS cp2k.popt -i input.inp -o output.out

where $SLURM_NTASKS is whatever number of processors I've allocated to the job via the --ntasks=x flag.

So instead, I'm thinking it might be more appropriate to use the .psmp executable, but I'm not sure what the difference between OpenMP and MPI threads is, what ratio of OMP to MPI threads would be most effective for speeding up an AIMD job, or how many threads of each type you can add before the parallelisation becomes less efficient.

Do you (or anyone else) have any advice on the matter? Is it better to have more OMP or MPI threads? How many OMP threads per MPI thread would be appropriate? What kinds of ratios are most effective at speeding up calculations?

I would really appreciate any help I can get!

Regards,

Matthew

On Friday, September 20, 2019 at 10:45:55 PM UTC+8 pierre.an...@gmail.com wrote:

Hello Nikhil,

With the command "mpirun -n 42 cp2k.popt -i inp.inp -o out.out", you are requesting 42 MPI processes, not 42 OpenMP threads. MPI usually relies on replicated data, which means that, for a poorly programmed piece of software, it will request a total amount of memory equal to the memory required by a scalar execution times the number of processes; as an illustration, a calculation needing 3 GB in serial could then demand on the order of 126 GB across 42 processes. This can very quickly become problematic, in particular for QM calculations. OpenMP, however, relies on shared memory: the data is normally not replicated but shared between threads, so, in an ideal scenario, the amount of memory needed for 42 OpenMP threads is the same as for a single one.

This might explain why your calculation freezes: you are out of memory. On your workstation, you should only use the executable "cp2k.ssmp", which is the OpenMP version. Then you don't need the mpirun command:

cp2k.ssmp -i inp.inp -o out.out

To control the number of OpenMP threads, set the environment variable OMP_NUM_THREADS, e.g. in bash: export OMP_NUM_THREADS=48

Now, if you need to balance between MPI and OpenMP, you should use the executable named cp2k.psmp. Here is such an example:

export OMP_NUM_THREADS=24
mpirun -n 2 cp2k.psmp -i inp.inp -o out.out

In this example, I am requesting two MPI processes, each of which can use up to 24 OpenMP threads.
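
In case it helps to choose those two numbers so that processes times threads matches the hardware: on Linux, lscpu (a standard util-linux tool) reports the CPU topology, for example:

lscpu | grep -E 'Thread|Core|Socket'

On your 2 x 12-core workstation, this should confirm whether the 48 threads are 24 physical cores with 2 hardware threads each.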

Hope this clarifies things for you.

Regards,
Pierre

On 20/09/2019 14:09, Nikhil Maroli wrote:

Dear all,

I have installed all the versions of CP2K on my workstation, which has 2 x 12-core processors (48 threads in total).

I want to run CP2K in parallel using 42 threads; can anyone share the commands I should use?

I have tried

mpirun -n 42 cp2k.popt -i inp.inp -o out.out

After this command, memory usage rises to 100% and the whole system freezes (I have 128 GB of RAM).

Any suggestions will be greatly appreciated.

--
Dr Pierre Cazade, PhD
AD3-023, Bernal Institute,
University of Limerick,
Plassey Park Road,
Castletroy, co. Limerick,
Ireland
<p class="MsoNormal">-- <br>
You received this message because you are subscribed to the Google Groups "cp2k" group.<br>
To unsubscribe from this group and stop receiving emails from it, send an email to
<a href="mailto:cp2k+unsubscribe@googlegroups.com">cp2k+unsubscribe@googlegroups.com</a>.<br>
To view this discussion on the web visit <a href="https://groups.google.com/d/msgid/cp2k/010a2dd7-dc2c-4475-8a9b-17cdbb10d20dn%40googlegroups.com?utm_medium=email&utm_source=footer">
https://groups.google.com/d/msgid/cp2k/010a2dd7-dc2c-4475-8a9b-17cdbb10d20dn%40googlegroups.com</a>.<o:p></o:p></p>
</div>
</body>
</html>
<p></p>
-- <br />
You received this message because you are subscribed to the Google Groups "cp2k" group.<br />
To unsubscribe from this group and stop receiving emails from it, send an email to <a href="mailto:cp2k+unsubscribe@googlegroups.com">cp2k+unsubscribe@googlegroups.com</a>.<br />
To view this discussion on the web visit <a href="https://groups.google.com/d/msgid/cp2k/DB4P195MB19951191769BA8E5B2B536B4AED19%40DB4P195MB1995.EURP195.PROD.OUTLOOK.COM?utm_medium=email&utm_source=footer">https://groups.google.com/d/msgid/cp2k/DB4P195MB19951191769BA8E5B2B536B4AED19%40DB4P195MB1995.EURP195.PROD.OUTLOOK.COM</a>.<br />