<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Dear Matthias,</p>
<p>Thank you for the advice. It guided me towards localizing the
      problem.<br>
    </p>
<p>I have two test systems with the same atomic structure but
      different functionals, namely plain PBE and the hybrid PBE0. Both
      run in hybrid mode in CP2K v8 without changing OMP_STACKSIZE. In
      the new CP2K v2024.1 it becomes essential to set
      OMP_STACKSIZE=44m for the PBE model; any value below 44m raises a
      segfault. For the PBE0 model, however, no value lets the
      calculation finish: even 1024m raises a segfault.</p>
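    <p>For completeness, here is roughly how I launch the PBE runs (a
      minimal sketch; the rank and thread counts are just my test
      values, and the input file name is a placeholder):</p>
    <pre>
# Sketch of my hybrid launch; counts and file names are placeholders.
export OMP_NUM_THREADS=4   # OpenMP threads per MPI rank
export OMP_STACKSIZE=44m   # smallest value that survives with PBE
ulimit -s unlimited        # the master thread stack is set by ulimit, not OMP_STACKSIZE
mpirun -np 8 ./cp2k.psmp -i pbe.inp -o pbe.out
</pre>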
<p>According to htop statistics, the new 2024.1 version uses more
      SHM (the amount of shared memory used by a task) per process:
      roughly 1.3 times more, about 135 MB.</p>
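    <p>For the record, this is how I cross-checked htop's numbers
      (htop's SHM column comes from the shared-pages field of
      /proc/PID/statm; the 4 KiB page size is an assumption about my
      nodes):</p>
    <pre>
# Shared memory of the newest cp2k.psmp process, in MB
# (field 3 of /proc/PID/statm is resident shared pages; 4 KiB pages assumed).
pid=$(pgrep -n cp2k.psmp)
awk '{printf "SHM: %.0f MB\n", $3 * 4096 / 1048576}' "/proc/$pid/statm"
</pre>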
<p>At the same time, with the old CP2K v8 OMP_STACKSIZE has no
      influence at all: even very small values do not raise an error,
      and the calculations finish normally.</p>
    <p><span lang="EN-US">Is there something related to OMP_STACKSIZE in
        configuration of CP2K source code. Some kind of limit or -D
        parameter?</span></p>
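    <p>In case it matters, this is where I have been looking for such a
      setting (just guesses on my side; -heap-arrays is the Intel
      Fortran flag that moves automatic arrays from the thread stack to
      the heap, which would be relevant here):</p>
    <pre>
# Guesswork: search the build and source for stack-related options.
grep -n "heap-arrays" CMakeCache.txt        # ifort: automatic arrays on the heap?
grep -rn "OMP_STACKSIZE" src/ | head        # any hard-coded stack handling?
</pre>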
    <p><span lang="EN-US">I found related unresolved issue in mailing
        list:</span></p>
    <p><span lang="EN-US"><a class="moz-txt-link-freetext" href="https://groups.google.com/g/cp2k/c/40Ods3HYW5g">https://groups.google.com/g/cp2k/c/40Ods3HYW5g</a></span></p>
    <p><span lang="EN-US">I made some test with different cutoffs as
        well and in new 2024.1 it always crashed in PBE0.</span></p>
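    <p>(By "different cutoffs" I mean the usual &amp;MGRID settings;
      the numbers below are just one example of the values I varied:)</p>
    <pre>
&amp;MGRID
  CUTOFF 400        ! varied between runs; every value crashed with PBE0
  REL_CUTOFF 60
&amp;END MGRID
</pre>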
    <p><span lang="EN-US">Best regards,<br>
        Eugene<br>
      </span></p>
    <div class="moz-cite-prefix">On 3/19/24 17:02, Krack Matthias wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:ZRAP278MB08278F9087D15079913994C1F42C2@ZRAP278MB0827.CHEP278.PROD.OUTLOOK.COM">
      <meta name="Generator"
        content="Microsoft Word 15 (filtered medium)">
      <div class="WordSection1">
        <p class="MsoNormal"><span lang="DE-CH">HI Eugene</span></p>
        <p class="MsoNormal"><span lang="DE-CH"> </span></p>
        <p class="MsoNormal"><span lang="EN-US">If you haven’t tried
            yet, you can set the environment variable OMP_STACKSIZE=64m.</span></p>
        <p class="MsoNormal"><span lang="EN-US"> </span></p>
        <p class="MsoNormal"><span lang="EN-US">HTH</span></p>
        <p class="MsoNormal"><span lang="EN-US"> </span></p>
        <p class="MsoNormal"><span lang="EN-US">Matthias</span></p>
        <p class="MsoNormal"><span lang="EN-US"> </span></p>
        <div id="mail-editor-reference-message-container">
          <div>
            <div>
              <p class="MsoNormal">
                <b><span>From: </span></b><span><a class="moz-txt-link-abbreviated" href="mailto:cp2k@googlegroups.com">cp2k@googlegroups.com</a>
                  <a class="moz-txt-link-rfc2396E" href="mailto:cp2k@googlegroups.com"><cp2k@googlegroups.com></a> on behalf of Eugene
                  <a class="moz-txt-link-rfc2396E" href="mailto:roginovicci@gmail.com"><roginovicci@gmail.com></a><br>
                  <b>Date: </b>Tuesday, 19 March 2024 at 13:43<br>
                  <b>To: </b>cp2k <a class="moz-txt-link-rfc2396E" href="mailto:cp2k@googlegroups.com"><cp2k@googlegroups.com></a><br>
                  <b>Subject: </b>[CP2K:20043] Hybrid MPI+OpenMP is
                  broken in v2024.1?</span></p>
            </div>
            <p class="MsoNormal">Hi, I've finally compiled CP2K v2024.1
              using cmake build system (which was a long story
              accompanied with cmake modules fixing). Anyway I have two
              nodes for testing based on xeon 2011 v4 processors
              (-march=broadwell) running on Almalinux 9. I have the
              following library compiled and installed:</p>
            <div>
              <p class="MsoNormal">1. linint 2.6.0 (options are
                 --enable-fortran  --with-pic  --enable-shared as
                suggested in toolchain build script)</p>
            </div>
            <div>
              <p class="MsoNormal">2. libxsmm-1.17 (with option
                INTRINSICS=1)</p>
            </div>
            <div>
              <p class="MsoNormal">3. libxc-6.1.0</p>
            </div>
            <div>
              <p class="MsoNormal">4. dbcsr-2.6.0</p>
            </div>
            <div>
              <p class="MsoNormal">5. Elpa </p>
            </div>
            <div>
              <p class="MsoNormal"> </p>
            </div>
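            <div>
              <p class="MsoNormal">The configure step looked roughly
                like this (reconstructed from memory, so treat the
                option names and paths as approximate; the exact
                spellings are in the top-level CMakeLists.txt):</p>
              <pre>
# Approximate CMake configure line; option names and paths are from
# memory and may not match my actual build script exactly.
cmake -S . -B build \
  -DCMAKE_C_COMPILER=mpiicc \
  -DCMAKE_Fortran_COMPILER=mpiifort \
  -DCP2K_USE_LIBINT2=ON \
  -DCP2K_USE_LIBXC=ON \
  -DCP2K_USE_ELPA=ON \
  -DCMAKE_PREFIX_PATH="$HOME/libs"   # where libint, libxc, ELPA, etc. live
cmake --build build -j 16
</pre>
            </div>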
            <div>
              <p class="MsoNormal">Everything was build with Intel
                oneAPI 2023.0.0 compilator using MKL and intel MPI
                libraries. <br>
                The compiled binary cp2k.psmp works quite well in MPI
                mode (OMP_NUM_THREADS=1), but hybrid mode filed to run
                properly. A can see the general MPI processe do fire up
                OMP threads as necessary at the beginning, the
                calculations run and make initialization unless "SCF
                WAVEFUNCTION OPTIMIZATION" starts. There is no debug
                information except message about Segmentation fault
                which rise termination of mpi process on child node. I
                spent hours to localize the problem but I'm pretty sure
                this is not due to node configuration since old v8.2
                version do works in hybrid mode even being compiled with
                older intel compilator.</p>
            </div>
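            <div>
              <p class="MsoNormal">For reproducibility, this is roughly
                how I launch the hybrid runs and check that the OpenMP
                environment actually reaches the child node
                (OMP_DISPLAY_ENV is the standard OpenMP knob,
                I_MPI_DEBUG the Intel MPI one; the rank and thread
                counts are just my test values):</p>
              <pre>
# Sketch of the hybrid launch with extra diagnostics; counts are placeholders.
export OMP_NUM_THREADS=4
export OMP_DISPLAY_ENV=true   # each OpenMP runtime prints its effective settings
export I_MPI_DEBUG=5          # Intel MPI: print startup and pinning details
mpirun -np 4 -ppn 2 ./cp2k.psmp -i test.inp -o test.out
</pre>
            </div>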
            <div>
              <p class="MsoNormal"> </p>
            </div>
            <div>
              <p class="MsoNormal">Any hints are very welcome,<br>
                Eugene</p>
            </div>
            <div>
              <p class="MsoNormal"> </p>
            </div>
            <p class="MsoNormal">-- <br>
              You received this message because you are subscribed to
              the Google Groups "cp2k" group.<br>
              To unsubscribe from this group and stop receiving emails
              from it, send an email to
              <a href="mailto:cp2k+unsubscribe@googlegroups.com"
                moz-do-not-send="true" class="moz-txt-link-freetext">cp2k+unsubscribe@googlegroups.com</a>.<br>
              To view this discussion on the web visit <a
href="https://groups.google.com/d/msgid/cp2k/52bbc3d3-e857-4f21-a33f-aa42e30d106an%40googlegroups.com?utm_medium=email&utm_source=footer"
                moz-do-not-send="true">
https://groups.google.com/d/msgid/cp2k/52bbc3d3-e857-4f21-a33f-aa42e30d106an%40googlegroups.com</a>.</p>
          </div>
        </div>
      </div>
    </blockquote>
  </body>
</html>
