<div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0pt 0pt 0pt 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>ok. it may be version specific, too. </div><div><br></div><div><div>[akohlmey@g002 input]$ rpm -qif /usr/lib64/libnuma.so.1 </div>
<div>Name : numactl Relocations: (not relocatable)</div><div>Version : 2.0.3 Vendor: Red Hat, Inc.</div><div>Release : 9.el6 Build Date: Thu Jun 17 10:46:17 2010</div>
</div><div class="im"><div></div></div></blockquote><div><br>The version that I use is the latest stable one. But, I don't believe that the error come from there. I still have to take a look on this libnuma support.<br>
> yes, this kind of behavior is what i would have expected.
> this should also help with the internal threading in OpenMPI.
The main goal is to avoid MPI processes allocating and accessing memory on remote NUMA nodes. But if you also want to pin the threads, you can try the Linear strategy, which pins both processes and threads.
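Just to illustrate the idea from the command line (this is not the CP2K-internal strategy, only a numactl sketch; node 0 and the binary name are placeholders): binding a process's CPUs and memory to one NUMA node keeps all of its allocations local.

    numactl --cpunodebind=0 --membind=0 ./cp2k.popt -i input.inp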
> please have a look at the attached file. you'll see that there
> are some entries that don't look right. particularly the node
> names are all that of MPI rank 0.

I made some changes to fix this. Could you try the latest version of CP2K?
> yes. our MPI installation is configured by default to have a 1:1 core to MPI
> rank mapping (since there is practically nobody yet using MPI+OpenMP)
> with memory affinity for giving people the best MPI-only performance.

OK. So even with this installation you cannot specify the cores for the threads?
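If the threaded binary is built with gcc/gfortran, one thing you could try is the GNU OpenMP affinity variable (Intel compilers use KMP_AFFINITY instead); the core list below is only an example:

    export OMP_NUM_THREADS=4
    export GOMP_CPU_AFFINITY="0 1 2 3"   # thread i is bound to the i-th core in this list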
> at the end of the attached file i include a copy of the wrapper script,
> that is OpenMPI specific (since that is the only MPI library installed).

Thanks for the script.
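For reference, such OpenMPI-specific wrappers usually follow a pattern like the sketch below (this is only my guess at the general shape, not your actual script; it assumes one MPI rank per NUMA node and that numactl is installed):

    #!/bin/sh
    # bind this rank's CPUs and memory to the NUMA node given by its
    # node-local rank, which OpenMPI exports in the environment
    NODE=$OMPI_COMM_WORLD_LOCAL_RANK
    exec numactl --cpunodebind=$NODE --membind=$NODE "$@"

It would then be launched as something like "mpirun -np 4 ./wrapper.sh ./cp2k.popt -i input.inp".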
> overall, it looks to me like that default settings are giving a desirable
> processor and memory affinity (which is great) that is consistent with
> the best settings i could get using my wrapper script, but the diagnostics
> seems to be off and may be confusing people, particularly technical
> support in computing centers, that are often too literal and assume
> that any software is always giving 100% correct information. ;-)
It should work now :) Let me know if you find new bugs.

As for your machine, the core-number problem came from the fact that I was using the numbers that the OS assigns to the cores; now I'm using the logical ones. BTW, is your machine Intel?
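If you want to cross-check the numbering on your side, the OS-assigned IDs versus the physical/core IDs can be read directly from /proc/cpuinfo (or, if hwloc is installed, lstopo draws the whole topology):

    grep -E 'processor|physical id|core id' /proc/cpuinfo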
> cheers,
>    axel.
cheers,

Christiane Pousa Ribeiro