<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Dear <span lang="EN-US">Matthias,</span></p>
    <p><span lang="EN-US">I really appreciate your valuable
        comments. Your help has been very fruitful, thank you!</span></p>
    <p><span lang="EN-US">Actually, there is nothing special in the
        output file; the calculations simply freeze after the message:<br>
      </span></p>
    <div class="highlight highlight-source-shell notranslate" dir="auto">
      <pre class="notranslate">HFX_MEM_INFO| Est. max. program size before HFX [MiB]:                    1283</pre>
    </div>
    <p><span lang="EN-US">There is already a mess of different CP2K
        versions and supporting libraries installed here, but I'm trying
        to keep the situation under control...</span></p>
    <p><span lang="EN-US">So before sending a proper output, I ran a
        test using only MPI parallelization, and it always failed when a
        hybrid functional was used. But I swear it worked before. So I
        grabbed the dependency libraries from the version 8 set and fed
        them to version 2024. I found that libint causes the problem.
        According to the toolchain scripts, version 2.6.0 is recommended
        with LMAX=5. I took the source from the
        libint-v2.6.0-cp2k-lmax-4.tgz tarball and compiled it with the
        options --enable-fortran --with-pic --enable-shared, and this
        combination seems to work in hybrid MPI+OMP mode. However,
        </span><span lang="EN-US">OMP_STACKSIZE needs to be four times
        as large as with version 8.</span></p>
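    For reference, the libint build described above could be sketched roughly as follows. This is a hypothetical reconstruction: the tarball name and configure options come from the text, while the install prefix is illustrative, and the whole build is guarded so it is a no-op when the tarball is absent.

```shell
# Hypothetical sketch of the libint build described above; the tarball
# name and configure flags are from the text, the prefix is illustrative.
if [ -f libint-v2.6.0-cp2k-lmax-4.tgz ]; then
  tar -xzf libint-v2.6.0-cp2k-lmax-4.tgz
  cd libint-v2.6.0-cp2k-lmax-4
  ./configure --enable-fortran --with-pic --enable-shared \
              --prefix="$HOME/libs/libint-2.6.0" &&
    make -j"$(nproc)" && make install
else
  echo "libint tarball not found; skipping build"
fi
```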
    <p><span lang="EN-US">Why is CP2K so sensitive to libint in hybrid
        functional calculations?<br>
      </span></p>
    <p><span lang="EN-US"><br>
      </span></p>
    <p><span lang="EN-US">Thank you in advance,<br>
        Eugene<br>
      </span></p>
    <p><span lang="EN-US"><br>
      </span></p>
    <div class="moz-cite-prefix">On 3/20/24 16:49, Krack Matthias wrote:<br>
    </div>
    <blockquote type="cite"
cite="mid:ZRAP278MB0827C82560F5067265202C0BF4332@ZRAP278MB0827.CHEP278.PROD.OUTLOOK.COM">
      <div class="WordSection1">
        <p class="MsoNormal"><span lang="DE-CH">Dear Eugene</span></p>
        <p class="MsoNormal"><span lang="DE-CH"> </span></p>
        <p class="MsoNormal"><span lang="EN-US">The CP2K code has been
            increasingly parallelized with OpenMP directives since
            version 8, including the loops in the PW part that operate
            on large arrays. In contrast to the GNU compiler, the Intel
            compiler now requires in most cases an explicit increase of
            OMP_STACKSIZE. Your problem with the PBE0 run could also
            have other causes. Without an input/output example showing
            the problem, it is difficult to provide further hints.</span></p>
        <p class="MsoNormal"><span lang="EN-US"> </span></p>
        <p class="MsoNormal"><span lang="EN-US">Best</span></p>
        <p class="MsoNormal"><span lang="EN-US"> </span></p>
        <p class="MsoNormal"><span lang="EN-US">Matthias</span></p>
        <p class="MsoNormal"><span lang="EN-US"> </span></p>
        <div id="mail-editor-reference-message-container">
          <div>
            <div>
              <p class="MsoNormal">
                <b><span>From: </span></b><span><a class="moz-txt-link-abbreviated" href="mailto:cp2k@googlegroups.com">cp2k@googlegroups.com</a>
                  <a class="moz-txt-link-rfc2396E" href="mailto:cp2k@googlegroups.com"><cp2k@googlegroups.com></a> on behalf of Eugene
                  <a class="moz-txt-link-rfc2396E" href="mailto:roginovicci@gmail.com"><roginovicci@gmail.com></a><br>
                  <b>Date: </b>Wednesday, 20 March 2024 at 13:16<br>
                  <b>To: </b><a class="moz-txt-link-abbreviated" href="mailto:cp2k@googlegroups.com">cp2k@googlegroups.com</a>
                  <a class="moz-txt-link-rfc2396E" href="mailto:cp2k@googlegroups.com"><cp2k@googlegroups.com></a><br>
                  <b>Subject: </b>Re: [CP2K:20045] Hybrid MPI+OpenMP is
                  broken in v2024.1?</span></p>
            </div>
            <p>Dear Matthias,</p>
            <p>Thank you for the advice. It guided me towards localizing
              the problem.
            </p>
            <p>I have two test systems with the same atomic structure
              but different functionals, namely plain PBE and hybrid
              PBE0. Both work in hybrid mode in CP2K v8 without changing
              <span lang="EN-US">OMP_STACKSIZE. In the new CP2K v2024.1
                it has become very important to set OMP_STACKSIZE=44m
                for the PBE model; any value below 44m raises a
                segfault. But for the PBE0 model there is no value that
                lets the calculations finish: even 1024m raises a
                segfault.</span></p>
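            A quick way to probe this systematically (a hypothetical sketch; the input file name and process count are illustrative, and the actual cp2k.psmp launch is left commented out) is to sweep over candidate stack sizes:

```shell
# Hypothetical sweep over OMP_STACKSIZE values to find the smallest
# setting that avoids the segfault; the cp2k.psmp launch itself is
# commented out, and the values and file names are illustrative.
for s in 16m 32m 44m 64m 128m; do
  echo "trying OMP_STACKSIZE=$s"
  # OMP_STACKSIZE=$s mpirun -np 4 cp2k.psmp -i pbe.inp -o "pbe_${s}.out"
done
```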
            <p>According to htop statistics, I found that the new 2024.1
              version uses more SHM (shared memory used by a task) per
              process: approximately 1.3 times more, about 135 MB.</p>
            <p>At the same time, with the old CP2K v8 there is no
              influence of
              <span lang="EN-US">OMP_STACKSIZE at all. Even very small
                values do not raise an error, and the calculations
                finish normally.</span></p>
            <p><span lang="EN-US">Is there something related to
                OMP_STACKSIZE in the configuration of the CP2K source
                code, some kind of limit or -D parameter?</span></p>
            <p><span lang="EN-US">I found a related unresolved issue in
                the mailing list:</span></p>
            <p><span lang="EN-US"><a
                  href="https://groups.google.com/g/cp2k/c/40Ods3HYW5g"
                  moz-do-not-send="true" class="moz-txt-link-freetext">https://groups.google.com/g/cp2k/c/40Ods3HYW5g</a></span></p>
            <p><span lang="EN-US">I ran some tests with different
                cutoffs as well, and in the new 2024.1 it always crashed
                with PBE0.</span></p>
            <p><span lang="EN-US">Best regards,<br>
                Eugene</span></p>
            <div>
              <p class="MsoNormal">On 3/19/24 17:02, Krack Matthias
                wrote:</p>
            </div>
            <blockquote>
              <div>
                <p class="MsoNormal">
                  <span lang="DE-CH">Hi Eugene</span></p>
                <p class="MsoNormal">
                  <span lang="DE-CH"> </span></p>
                <p class="MsoNormal">
                  <span lang="EN-US">If you haven’t tried yet, you can
                    set the environment variable OMP_STACKSIZE=64m.</span></p>
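                A minimal launch sketch of this suggestion (the binary name, thread count, process count, and input file are illustrative assumptions; the mpirun line is commented out):

```shell
# Set a larger per-thread OpenMP stack before launching a hybrid
# MPI+OpenMP run; binary name, counts, and file names are illustrative.
export OMP_NUM_THREADS=4
export OMP_STACKSIZE=64m
echo "threads=$OMP_NUM_THREADS stacksize=$OMP_STACKSIZE"
# mpirun -np 8 cp2k.psmp -i input.inp -o output.out
```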
                <p class="MsoNormal">
                  <span lang="EN-US"> </span></p>
                <p class="MsoNormal">
                  <span lang="EN-US">HTH</span></p>
                <p class="MsoNormal">
                  <span lang="EN-US"> </span></p>
                <p class="MsoNormal">
                  <span lang="EN-US">Matthias</span></p>
                <p class="MsoNormal">
                  <span lang="EN-US"> </span></p>
                <div id="mail-editor-reference-message-container">
                  <div>
                    <div>
                      <p class="MsoNormal">
                        <b>From: </b><a
                          href="mailto:cp2k@googlegroups.com"
                          moz-do-not-send="true"
                          class="moz-txt-link-freetext">cp2k@googlegroups.com</a>
                        <a href="mailto:cp2k@googlegroups.com"
                          moz-do-not-send="true">
                          <cp2k@googlegroups.com></a> on behalf of
                        Eugene <a href="mailto:roginovicci@gmail.com"
                          moz-do-not-send="true">
                          <roginovicci@gmail.com></a><br>
                        <b>Date: </b>Tuesday, 19 March 2024 at 13:43<br>
                        <b>To: </b>cp2k <a
                          href="mailto:cp2k@googlegroups.com"
                          moz-do-not-send="true"><cp2k@googlegroups.com></a><br>
                        <b>Subject: </b>[CP2K:20043] Hybrid MPI+OpenMP
                        is broken in v2024.1?</p>
                    </div>
                    <p class="MsoNormal">
                      Hi, I've finally compiled CP2K v2024.1 using the
                      CMake build system (which was a long story that
                      involved fixing CMake modules). Anyway, I have two
                      nodes for testing based on Xeon 2011 v4 processors
                      (-march=broadwell) running AlmaLinux 9. I have the
                      following libraries compiled and installed:</p>
                    <div>
                      <p class="MsoNormal">
                        1. libint 2.6.0 (options: --enable-fortran
                        --with-pic --enable-shared, as suggested in the
                        toolchain build script)</p>
                    </div>
                    <div>
                      <p class="MsoNormal">
                        2. libxsmm-1.17 (with option INTRINSICS=1)</p>
                    </div>
                    <div>
                      <p class="MsoNormal">
                        3. libxc-6.1.0</p>
                    </div>
                    <div>
                      <p class="MsoNormal">
                        4. dbcsr-2.6.0</p>
                    </div>
                    <div>
                      <p class="MsoNormal">
                        5. ELPA</p>
                    </div>
                    <div>
                      <p class="MsoNormal">
                         </p>
                    </div>
                    <div>
                      <p class="MsoNormal">
                        Everything was built with the Intel oneAPI
                        2023.0.0 compiler, using MKL and the Intel MPI
                        libraries. <br>
                        The compiled binary cp2k.psmp works quite well
                        in MPI mode (OMP_NUM_THREADS=1), but hybrid mode
                        fails to run properly. I can see that the main
                        MPI processes do fire up OMP threads as expected
                        at the beginning; the calculation runs through
                        initialization until "SCF WAVEFUNCTION
                        OPTIMIZATION" starts. There is no debug
                        information except a Segmentation fault message,
                        which triggers termination of the MPI process on
                        the child node. I spent hours localizing the
                        problem, but I'm pretty sure it is not due to
                        the node configuration, since the old v8.2
                        version does work in hybrid mode even when
                        compiled with an older Intel compiler.</p>
                    </div>
                    <div>
                      <p class="MsoNormal">
                         </p>
                    </div>
                    <div>
                      <p class="MsoNormal">
                        Any hints are very welcome,<br>
                        Eugene</p>
                    </div>
                    <div>
                      <p class="MsoNormal">
                         </p>
                    </div>
                    <p class="MsoNormal">
                      -- <br>
                      You received this message because you are
                      subscribed to the Google Groups "cp2k" group.<br>
                      To unsubscribe from this group and stop receiving
                      emails from it, send an email to
                      <a href="mailto:cp2k+unsubscribe@googlegroups.com"
                        moz-do-not-send="true"
                        class="moz-txt-link-freetext">cp2k+unsubscribe@googlegroups.com</a>.<br>
                      To view this discussion on the web visit <a
href="https://groups.google.com/d/msgid/cp2k/52bbc3d3-e857-4f21-a33f-aa42e30d106an%40googlegroups.com?utm_medium=email&utm_source=footer"
                        moz-do-not-send="true">
https://groups.google.com/d/msgid/cp2k/52bbc3d3-e857-4f21-a33f-aa42e30d106an%40googlegroups.com</a>.</p>
                  </div>
                </div>
              </div>
              <p class="MsoNormal">-- <br>
                You received this message because you are subscribed to
                a topic in the Google Groups "cp2k" group.<br>
                To unsubscribe from this topic, visit <a
href="https://groups.google.com/d/topic/cp2k/TFgAsWkpnW0/unsubscribe"
                  moz-do-not-send="true" class="moz-txt-link-freetext">
https://groups.google.com/d/topic/cp2k/TFgAsWkpnW0/unsubscribe</a>.<br>
                To unsubscribe from this group and all its topics, send
                an email to <a
                  href="mailto:cp2k+unsubscribe@googlegroups.com"
                  moz-do-not-send="true" class="moz-txt-link-freetext">
                  cp2k+unsubscribe@googlegroups.com</a>.<br>
                To view this discussion on the web visit <a
href="https://groups.google.com/d/msgid/cp2k/ZRAP278MB08278F9087D15079913994C1F42C2%40ZRAP278MB0827.CHEP278.PROD.OUTLOOK.COM?utm_medium=email&utm_source=footer"
                  moz-do-not-send="true">
https://groups.google.com/d/msgid/cp2k/ZRAP278MB08278F9087D15079913994C1F42C2%40ZRAP278MB0827.CHEP278.PROD.OUTLOOK.COM</a>.</p>
            </blockquote>
            <p class="MsoNormal">-- <br>
              You received this message because you are subscribed to
              the Google Groups "cp2k" group.<br>
              To unsubscribe from this group and stop receiving emails
              from it, send an email to
              <a href="mailto:cp2k+unsubscribe@googlegroups.com"
                moz-do-not-send="true" class="moz-txt-link-freetext">cp2k+unsubscribe@googlegroups.com</a>.<br>
              To view this discussion on the web visit <a
href="https://groups.google.com/d/msgid/cp2k/b7c23615-29ca-474d-884e-f861eef46093%40gmail.com?utm_medium=email&utm_source=footer"
                moz-do-not-send="true">
https://groups.google.com/d/msgid/cp2k/b7c23615-29ca-474d-884e-f861eef46093%40gmail.com</a>.</p>
          </div>
        </div>
      </div>
      -- <br>
      You received this message because you are subscribed to a topic in
      the Google Groups "cp2k" group.<br>
      To unsubscribe from this topic, visit <a
href="https://groups.google.com/d/topic/cp2k/TFgAsWkpnW0/unsubscribe"
        moz-do-not-send="true" class="moz-txt-link-freetext">https://groups.google.com/d/topic/cp2k/TFgAsWkpnW0/unsubscribe</a>.<br>
      To unsubscribe from this group and all its topics, send an email
      to <a href="mailto:cp2k+unsubscribe@googlegroups.com"
        moz-do-not-send="true" class="moz-txt-link-freetext">cp2k+unsubscribe@googlegroups.com</a>.<br>
      To view this discussion on the web visit <a
href="https://groups.google.com/d/msgid/cp2k/ZRAP278MB0827C82560F5067265202C0BF4332%40ZRAP278MB0827.CHEP278.PROD.OUTLOOK.COM?utm_medium=email&utm_source=footer"
        moz-do-not-send="true">https://groups.google.com/d/msgid/cp2k/ZRAP278MB0827C82560F5067265202C0BF4332%40ZRAP278MB0827.CHEP278.PROD.OUTLOOK.COM</a>.<br>
    </blockquote>
  </body>
</html>


-- <br />
You received this message because you are subscribed to the Google Groups "cp2k" group.<br />
To unsubscribe from this group and stop receiving emails from it, send an email to <a href="mailto:cp2k+unsubscribe@googlegroups.com">cp2k+unsubscribe@googlegroups.com</a>.<br />
To view this discussion on the web visit <a href="https://groups.google.com/d/msgid/cp2k/7cdacb12-cb1e-4f56-9c1f-268b98e7b77c%40gmail.com?utm_medium=email&utm_source=footer">https://groups.google.com/d/msgid/cp2k/7cdacb12-cb1e-4f56-9c1f-268b98e7b77c%40gmail.com</a>.<br />