The error is:

```
LIBXSMM_VERSION: develop-1.17-3834 (25693946)
CLX/DP TRY JIT STA COL
 0..13 2 2 0 0
 14..23 0 0 0 0
 24..64 0 0 0 0
Registry and code: 13 MB + 16 KB (gemm=2)
Command (PID=2607388): /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i H2O-9.inp -o H2O-9.out
Uptime: 5.288243 s

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 0 PID 2607388 RUNNING AT r21c01b10
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================

===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 1 PID 2607389 RUNNING AT r21c01b10
= KILLED BY SIGNAL: 9 (Killed)
===================================================================================
```

and the last 20 lines:

```
 000000:000002<< 13 76 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
 000000:000002>> 13 19 pw_derive start Hostmem: 693 MB GPUmem: 0 MB
 000000:000002<< 13 19 pw_derive 0.002 Hostmem: 693 MB GPUmem: 0 MB
 000000:000002>> 13 168 pw_pool_create_pw start Hostmem: 693 MB GPUmem: 0 MB
 000000:000002>> 14 97 pw_create_c1d start Hostmem: 693 MB GPUmem: 0 MB
 000000:000002<< 14 97 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB
 000000:000002<< 13 168 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB
 000000:000002>> 13 77 pw_copy start Hostmem: 693 MB GPUmem: 0 MB
 000000:000002<< 13 77 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
 000000:000002>> 13 20 pw_derive start Hostmem: 693 MB GPUmem: 0 MB
```

Thanks!

On Friday, 18 October 2024 at 17:18:39 UTC+2, Frederick Stein wrote:
> Please pick one of the failing tests. Then add the TRACE keyword to the &GLOBAL section and run the test manually. This increases the size of the output file dramatically (to a few million lines). Can you send me the last ~20 lines of the output?
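For concreteness, this is roughly what the suggested change looks like — a minimal sketch only, where everything except the TRACE keyword is an illustrative placeholder rather than the content of the actual H2O-9 input:

```
&GLOBAL
  PROJECT H2O-9      ! placeholder - keep whatever the test input already sets
  RUN_TYPE ENERGY    ! placeholder
  PRINT_LEVEL LOW    ! placeholder
  TRACE              ! log every routine entry/exit, producing lines like the pw_* trace above
&END GLOBAL
```

With TRACE set, every routine entry (`>>`) and exit (`<<`) is logged together with the current host/GPU memory, which is where the pw_* lines above come from.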
> On Friday, 18 October 2024 at 17:09:40 UTC+2, bartosz mazur wrote:
> > I'm using the do_regtests.py script, not `make regtesting`, but I assume that makes no difference. As I mentioned in my previous message, with `--ompthreads 1` all tests pass, for both ssmp and psmp. For ssmp with `--ompthreads 2` I observe errors similar to those for psmp with the same setting; I attach an example output.
> >
> > Thanks
> > Bartosz
> >
> > On Friday, 18 October 2024 at 16:24:16 UTC+2, Frederick Stein wrote:
> > > Dear Bartosz,
> > > What happens if you set the number of OpenMP threads to 1 (add '--ompthreads 1' to TESTOPTS)? What errors do you observe in the case of the ssmp?
> > > Best,
> > > Frederick
> > >
> > > On Friday, 18 October 2024 at 15:37:43 UTC+2, bartosz mazur wrote:
> > > > Hi Frederick,
> > > >
> > > > thanks again for the help. I have tested different simulation variants and now know that the problem occurs when using OMP: for MPI-only calculations (without OMP) all tests pass. I have also tested the effect of the `OMP_PROC_BIND` and `OMP_PLACES` parameters; apart from affecting the simulation time, they have no significant effect on the presence of errors. Below are the results for ssmp:
> > > >
> > > > ```
> > > > OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time
> > > > spread, threads, 3850, 4144, 4, 290, 186min
> > > > spread, cores, 3831, 4144, 3, 310, 183min
> > > > spread, sockets, 3864, 4144, 3, 277, 104min
> > > > close, threads, 3879, 4144, 3, 262, 171min
> > > > close, cores, 3854, 4144, 0, 290, 168min
> > > > close, sockets, 3865, 4144, 3, 276, 104min
> > > > master, threads, 4121, 4144, 0, 23, 1002min
> > > > master, cores, 4121, 4144, 0, 23, 986min
> > > > master, sockets, 3942, 4144, 3, 199, 219min
> > > > false, threads, 3918, 4144, 0, 226, 178min
> > > > false, cores, 3919, 4144, 3, 222, 176min
> > > > false, sockets, 3856, 4144, 4, 284, 104min
> > > > ```
> > > >
> > > > and for psmp:
> > > >
> > > > ```
> > > > OMP_PROC_BIND, OMP_PLACES, results
> > > > spread, threads, Summary: correct: 4097 / 4227; failed: 130; 495min
> > > > spread, cores, 26 / 362
> > > > spread, cores, 26 / 362
> > > > close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min
> > > > close, cores, 60 / 362
> > > > close, sockets, 13 / 362
> > > > master, threads, 13 / 362
> > > > master, cores, 79 / 362
> > > > master, sockets, Summary: correct: 4153 / 4227; failed: 74; 563min
> > > > false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min
> > > > false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min
> > > > false, sockets, 96 / 362
> > > > not specified, not specified, Summary: correct: 4129 / 4227; failed: 98; 263min
> > > > ```
> > > >
> > > > Any ideas what I could do next to get more information about the source of the problem, or do you perhaps already see a potential solution at this stage? I would appreciate any further help.
> > > >
> > > > Best
> > > > Bartosz
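For reference, a sketch of how one row of the tables above might be reproduced. The script name and the `--ompthreads` flag are the ones already mentioned in this thread; the arch/version arguments, the `--mpiranks` flag, and the log file name are assumptions about the local setup:

```
# Sketch: psmp regression tests with 2 OpenMP threads and one binding policy.
# Adjust the arch ("local") and version ("psmp") arguments to the actual build;
# --mpiranks is an assumption and may need to match the node layout.
export OMP_PROC_BIND=close
export OMP_PLACES=cores
./do_regtests.py local psmp --ompthreads 2 --mpiranks 2 2>&1 | tee regtest_close_cores.log
```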
> > > > On Friday, 11 October 2024 at 14:30:25 UTC+2, Frederick Stein wrote:
> > > > > Dear Bartosz,
> > > > > If I am not mistaken, you used 8 OpenMP threads. The tests do not run that efficiently with such a large number of threads; 2 should be sufficient.
> > > > > The test results suggest that most of the functionality may work, but due to the missing backtrace (or similar information) it is hard to tell why the failing tests fail. You could also try to run some of the single-node tests to assess the stability of CP2K.
> > > > > Best,
> > > > > Frederick
> > > > >
> > > > > On Friday, 11 October 2024 at 13:48:42 UTC+2, bartosz mazur wrote:
> > > > > > Sorry, forgot attachments.