From matthias.krack at psi.ch Tue Oct 1 07:52:21 2024
From: matthias.krack at psi.ch (Krack Matthias)
Date: Tue, 1 Oct 2024 07:52:21 +0000
Subject: [CP2K-user] [CP2K:20734] Adsorption of organic molecules on metal surface
In-Reply-To: <29df6ea1-a061-4ee5-af17-aa4cd407dee0n@googlegroups.com>
References: <29df6ea1-a061-4ee5-af17-aa4cd407dee0n@googlegroups.com>
Message-ID: 

Hi

The input file "molecule.input" has a syntax error: the "&END KIND" for the "&KIND O" section is missing, as indicated by the indentation. Further issues are: (1) the pseudopotential for C should end with "q4", (2) EPS_DEFAULT should be smaller, e.g. 1.0E-12, (3) the cutoff should be larger, e.g. 400 Ry, and (4) CHO has an odd number of electrons, which requires LSD.

HTH

Matthias

From: cp2k at googlegroups.com on behalf of Hanaa Sari
Date: Tuesday, 1 October 2024 at 03:33
To: cp2k
Subject: [CP2K:20733] Adsorption of organic molecules on metal surface

Dear All,

I am a new user of CP2K. I am trying to study the adsorption of organic molecules on a metal surface. When I run the input file (in attachment) to optimize the molecule, I get this message:

  "invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or
  may not see output from other processes, depending on exactly when Open
  MPI kills them"

On the other hand, I am trying to optimize Ni(111) bulk and a multilayer slab, and the only calculation that converges is that of the elementary cell. As soon as I increase the number of atoms, the calculation does not converge. Could someone provide an example input file? Note that I am using CP2K version 2024.1.

Thank you.

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/29df6ea1-a061-4ee5-af17-aa4cd407dee0n%40googlegroups.com.
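Taken together, points (1)-(4) might look like the following sketch in an input file. The section layout follows the standard CP2K input structure; the basis set names are illustrative placeholders, not taken from the attached file:

```
&FORCE_EVAL
  &DFT
    LSD                        ! (4) CHO has an odd number of electrons
    &QS
      EPS_DEFAULT 1.0E-12      ! (2) tighter default accuracy
    &END QS
    &MGRID
      CUTOFF 400               ! (3) plane-wave cutoff in Ry
    &END MGRID
  &END DFT
  &SUBSYS
    &KIND C
      BASIS_SET DZVP-MOLOPT-SR-GTH
      POTENTIAL GTH-PBE-q4     ! (1) four valence electrons for C
    &END KIND
    &KIND O
      BASIS_SET DZVP-MOLOPT-SR-GTH
      POTENTIAL GTH-PBE-q6
    &END KIND                  ! the &END KIND that was missing
  &END SUBSYS
&END FORCE_EVAL
```

The 400 Ry value is a starting point; the cutoff should ultimately be converged for the actual system.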
From belizsertcan at gmail.com Tue Oct 1 15:51:55 2024
From: belizsertcan at gmail.com (Beliz Gökmen)
Date: Tue, 1 Oct 2024 08:51:55 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20736] Re: Polarizability calculation
In-Reply-To: <14b6a4b8-369f-4967-aaab-d99ac550305fn@googlegroups.com>
References: <14b6a4b8-369f-4967-aaab-d99ac550305fn@googlegroups.com>
Message-ID: <2e6aba77-0eda-41ff-bc73-41d896510047n@googlegroups.com>

Hi Simone,

Usually CP2K prints asterisks when the value is too large for the output format field. Could you please attach your input and output?

Best,
Beliz

On Wednesday 25 September 2024 at 00:18:24 UTC+2 Simone Ritarossi wrote:

> Hi, I'm calculating the Raman spectrum of a system, and in the output some
> values of the polarizability tensor are asterisks (divergences?). What
> problem could there be? Thank you for any help or suggestions.
>
> Simone

From bnzmichela at gmail.com Wed Oct 2 17:02:15 2024
From: bnzmichela at gmail.com (Michela Benazzi)
Date: Wed, 2 Oct 2024 10:02:15 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20737] 6-31+G(d) basis set?
  + anion questions
In-Reply-To: 
References: 
Message-ID: <7ece4ffc-0c3d-4bfc-92cb-9dd17b3f2de5n@googlegroups.com>

Thank you, Dr. Krack, I really appreciate it! This worked :)

On Monday, September 30, 2024 at 3:19:27 AM UTC-4 Krack Matthias wrote:

> Dear Michela
>
> You can download these basis sets from the Basis Set Exchange in CP2K
> format. If I am not mistaken, the only difference is that 6-31++G also
> includes a diffuse function for the first-row elements H and He, in
> contrast to 6-31+G.
>
> You need to set CHARGE -1 if you want to simulate a CH3O- anion. For a
> periodic calculation ("PERIODIC xyz", which is the default), this will
> automatically add a compensating background charge of +1. For a
> non-periodic calculation with "PERIODIC none" in the &CELL and &POISSON
> sections, select an appropriate POISSON_SOLVER like MT.
>
> HTH
>
> Matthias
>
> From: cp... at googlegroups.com on behalf of Michela Benazzi
> Date: Sunday, 29 September 2024 at 15:43
> To: cp2k
> Subject: [CP2K:20732] 6-31+G(d) basis set? + anion questions
>
> Good morning everyone,
>
> I was able to find the 6-31G and 6-31++G basis sets in
> data/EMSL_BASIS_SETS, but not the single-plus basis set (6-31+G(d)). Can
> anyone show me where to find it?
>
> Another question is about representing the methoxide anion for a geometry
> optimization job - is it enough to provide coordinates for CH3O? Do I have
> to add a CHARGE? I am generally confused about setting up simulations with
> ions and would love someone to verify.
>
> Thank you so much for your time - I appreciate the help!
>
> Michela
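Sketched as an input fragment, the non-periodic case Matthias describes might look like this (the cell size and everything not mentioned in his answer are placeholders):

```
&FORCE_EVAL
  &DFT
    CHARGE -1                  ! CH3O- anion
    &POISSON
      PERIODIC NONE            ! non-periodic Poisson solver
      POISSON_SOLVER MT        ! Martyna-Tuckerman
    &END POISSON
  &END DFT
  &SUBSYS
    &CELL
      ABC 15.0 15.0 15.0       ! placeholder; MT needs generous vacuum
      PERIODIC NONE            ! must match the &POISSON setting
    &END CELL
  &END SUBSYS
&END FORCE_EVAL
```

With the default PERIODIC xyz, CHARGE -1 alone is enough and the +1 background is added automatically. Note that the MT solver requires a cell roughly twice the extent of the charge density, so leave plenty of vacuum around the molecule.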
From bnzmichela at gmail.com Wed Oct 2 17:27:59 2024
From: bnzmichela at gmail.com (Michela Benazzi)
Date: Wed, 2 Oct 2024 10:27:59 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20738] Gallium + CO2 lack of convergence
Message-ID: <01922cb9-0de8-4887-bacb-da4a2ed78a28n@googlegroups.com>

Good morning dear CP2K community,

how are you? You may know me from previous posts on liquid Al (+CO2) MD troubleshooting. All of your responses have been super helpful so far, and I am coming here again for a different liquid metal.

My simulations with pure liquid gallium have been less troublesome than all of my liquid Al simulations, but the MD with one CO2 molecule won't converge. Can I please get some help troubleshooting?

Thank you,

Michela

-------------- next part --------------
A non-text attachment was scrubbed...
Name: Ga64C1.in
Type: application/octet-stream
Size: 2729 bytes

-------------- next part --------------
A non-text attachment was scrubbed...
Name: Ga64C1.xyz
Type: chemical/x-xyz
Size: 2574 bytes

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 29620709.out
Type: application/octet-stream
Size: 40985 bytes

From mayank.dodia at gmail.com Thu Oct 3 03:25:18 2024
From: mayank.dodia at gmail.com (mayank...@gmail.com)
Date: Wed, 2 Oct 2024 20:25:18 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20739] Setting up time dependent MM partial charges
Message-ID: 

Hi,

I have a solute-solvent system in which the solute is excited via external radiation, and I track the resulting solvent dynamics due to the evolution of the electronic density on the solute (~100 atoms). A simple and cost-effective way of mimicking this process is to transform the evolving electronic density into time-dependent partial atomic charges on the solute at a suitable timestep \delta_t (~25 a.u.).

The current implementation I have is:

1) set the partial charges in the cp2k input at t = n*\delta_t (n = 0, 1, 2, ...)
2) run the cp2k simulation for another \delta_t
3) stop the cp2k run
4) update the MM charges in the input file for t + (n+1)*\delta_t
5) repeat from 2) until the simulation is done

The main issue here is that I have to keep running and stopping cp2k every 5 steps to update the MM charges, which is a significant overhead on the calculation cost. Is there a more efficient way to do this within cp2k?

Best Regards,
Mayank
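As far as I know there is no input-file mechanism for time-dependent MM charges, but the stop/update/restart cycle above can at least be scripted so that only the repeated startup cost remains, not the manual work. A sketch; the template placeholders, file names, and cp2k binary name are all assumptions:

```python
import subprocess
from pathlib import Path

def render_input(template: str, charges: dict) -> str:
    """Fill per-atom charge placeholders such as {q_O1} in an input
    template with the current partial charges."""
    return template.format(**{k: f"{v:.6f}" for k, v in charges.items()})

def drive(template: str, charge_source, n_segments: int,
          cp2k: str = "cp2k.psmp") -> None:
    """Run n_segments short MD segments, regenerating the input with
    fresh charges before each one.  charge_source(n) must return the
    charge dict for segment n (user-supplied, e.g. read from a file)."""
    for n in range(n_segments):
        inp = Path(f"segment_{n}.inp")
        inp.write_text(render_input(template, charge_source(n)))
        subprocess.run([cp2k, "-i", str(inp), "-o", f"segment_{n}.out"],
                       check=True)
```

Positions and velocities can be kept continuous across segments by pointing CP2K's &EXT_RESTART section at the previous segment's restart file. One caveat: `str.format` will choke on literal braces such as CP2K's own ${...} preprocessor variables, so escape those or use a different placeholder scheme.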
From marci.akira at gmail.com Thu Oct 3 12:23:26 2024
From: marci.akira at gmail.com (Marcella Iannuzzi)
Date: Thu, 3 Oct 2024 05:23:26 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20740] Re: Gallium + CO2 lack of convergence
In-Reply-To: <01922cb9-0de8-4887-bacb-da4a2ed78a28n@googlegroups.com>
References: <01922cb9-0de8-4887-bacb-da4a2ed78a28n@googlegroups.com>
Message-ID: <6d229766-9b30-422f-8ce5-154ace7da6f8n@googlegroups.com>

Dear Michela,

The basis set you are using is of poor quality, and the coordinates you sent show a rather strange C-O bond length. The cell is very small, yet there is still vacuum space between the replicas in all directions; it is a rather odd choice of coordinates.

Is there a reason why you are using GAPW?

Regards
Marcella

On Wednesday, October 2, 2024 at 7:27:59 PM UTC+2 bnzmi... at gmail.com wrote:

> Good morning dear CP2K community,
>
> how are you? You may know me from previous posts on liquid Al (+CO2) MD
> troubleshooting. All of your responses have been super helpful so far, and
> I am coming here again for a different liquid metal.
>
> My simulations with pure liquid gallium have been less troublesome than
> all of my liquid Al simulations, but the MD with one CO2 molecule won't
> converge. Can I please get some help troubleshooting?
>
> Thank you,
>
> Michela
From bnzmichela at gmail.com Thu Oct 3 13:02:10 2024
From: bnzmichela at gmail.com (Michela Benazzi)
Date: Thu, 3 Oct 2024 06:02:10 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20741] Re: Gallium + CO2 lack of convergence
In-Reply-To: <6d229766-9b30-422f-8ce5-154ace7da6f8n@googlegroups.com>
References: <01922cb9-0de8-4887-bacb-da4a2ed78a28n@googlegroups.com> <6d229766-9b30-422f-8ce5-154ace7da6f8n@googlegroups.com>
Message-ID: <1342769e-8153-4387-b5a8-b18c6b084635n@googlegroups.com>

Hi Marcella,

thank you for your kind response and your time.

1) I just switched to double-zeta quality a few hours ago, but my MD just crashed: oddly, it converged for the first few SCF loops, but then it stopped converging (output file attached).

2) I am using GAPW because I found that the augmented-plane-wave method worked really well with my liquid Al systems before. That method is also reported in the DFT literature for liquid Ga. Rationalizing it, I think it works because it samples regions of space with different charge densities with more accuracy. Do you think I should consider something else?

3) I used a cell size that reproduces the density of liquid Ga with the number of atoms I have. I prepared my coordinates with a Python script, then relaxed the geometry in the Avogadro2 software and inserted CO2 at a distance of at least 2.5 A to minimize the initial repulsion with Ga atoms. Do you have any suggestions for preparing a structure? I am leaving 1-2 A on all sides from the unit cell boundaries because I have been worried about Ga atoms being too close to their neighbors across the periodic boundaries.

Michela

On Thursday, October 3, 2024 at 8:23:26 AM UTC-4 Marcella Iannuzzi wrote:

> Dear Michela,
>
> The basis set you are using is of poor quality, and the coordinates you
> sent show a rather strange C-O bond length. The cell is very small, yet
> there is still vacuum space between the replicas in all directions; it is
> a rather odd choice of coordinates.
>
> Is there a reason why you are using GAPW?
>
> Regards
> Marcella

-------------- next part --------------
A non-text attachment was scrubbed...
Name: 44553907.out
Type: application/octet-stream
Size: 167407 bytes

From marci.akira at gmail.com Thu Oct 3 14:20:25 2024
From: marci.akira at gmail.com (Marcella Iannuzzi)
Date: Thu, 3 Oct 2024 07:20:25 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20742] Re: Gallium + CO2 lack of convergence
In-Reply-To: <1342769e-8153-4387-b5a8-b18c6b084635n@googlegroups.com>
References: <01922cb9-0de8-4887-bacb-da4a2ed78a28n@googlegroups.com> <6d229766-9b30-422f-8ce5-154ace7da6f8n@googlegroups.com> <1342769e-8153-4387-b5a8-b18c6b084635n@googlegroups.com>
Message-ID: <30f8cec8-8b86-4dc1-a499-4f2af91e947an@googlegroups.com>

Dear Michela,

The procedure you describe does not sound very appropriate to me. You should first obtain a liquid system, without solute.
I suppose you should check the density and other properties and have a sufficiently large box. Then you can create a cavity in the equilibrated liquid and insert the solute, still with the right C-O bond length. If the SCF does not converge any more after a few steps, it is probably because of the coordinates. The concentration of CO2 seems rather high.

You can use GAPW, but it is more commonly used for all-electron calculations. With pseudopotentials, GPW is just as accurate.

Regards
Marcella

On Thursday, October 3, 2024 at 3:02:10 PM UTC+2 bnzmi... at gmail.com wrote:

> Hi Marcella,
>
> thank you for your kind response and your time.
>
> 1) I just switched to double-zeta quality a few hours ago, but my MD just
> crashed: oddly, it converged for the first few SCF loops, but then it
> stopped converging (output file attached).
>
> 2) I am using GAPW because I found that the augmented-plane-wave method
> worked really well with my liquid Al systems before. That method is also
> reported in the DFT literature for liquid Ga. Rationalizing it, I think it
> works because it samples regions of space with different charge densities
> with more accuracy. Do you think I should consider something else?
>
> 3) I used a cell size that reproduces the density of liquid Ga with the
> number of atoms I have. I prepared my coordinates with a Python script,
> then relaxed the geometry in the Avogadro2 software and inserted CO2 at a
> distance of at least 2.5 A to minimize the initial repulsion with Ga
> atoms. Do you have any suggestions for preparing a structure? I am leaving
> 1-2 A on all sides from the unit cell boundaries because I have been
> worried about Ga atoms being too close to their neighbors across the
> periodic boundaries.
>
> Michela

From bnzmichela at gmail.com Thu Oct 3 15:02:46 2024
From: bnzmichela at gmail.com (Michela Benazzi)
Date: Thu, 3 Oct 2024 08:02:46 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20743] Re: Gallium + CO2 lack of convergence
In-Reply-To: <30f8cec8-8b86-4dc1-a499-4f2af91e947an@googlegroups.com>
References: <01922cb9-0de8-4887-bacb-da4a2ed78a28n@googlegroups.com> <6d229766-9b30-422f-8ce5-154ace7da6f8n@googlegroups.com> <1342769e-8153-4387-b5a8-b18c6b084635n@googlegroups.com> <30f8cec8-8b86-4dc1-a499-4f2af91e947an@googlegroups.com>
Message-ID: <335efca4-9014-4d60-8c0e-42d4ab964000n@googlegroups.com>

Hi Marcella,

Thank you again!

1) Should I run GEO_OPT on the pure Ga liquid, then add CO2?
2) How should I form a cavity artificially without disrupting the newly equilibrated Ga structure?

3) I only have one molecule of CO2 in there - how should I go about lowering the concentration? Should I just increase the number of Ga atoms and add one CO2 molecule?

Best,

Michela

On Thursday, October 3, 2024 at 10:20:25 AM UTC-4 Marcella Iannuzzi wrote:

> Dear Michela,
>
> The procedure you describe does not sound very appropriate to me.
> You should first obtain a liquid system, without solute.
> I suppose you should check the density and other properties and have a
> sufficiently large box. Then you can create a cavity in the equilibrated
> liquid and insert the solute, still with the right C-O bond length.
> If the SCF does not converge any more after a few steps, it is probably
> because of the coordinates. The concentration of CO2 seems rather high.
>
> You can use GAPW, but it is more commonly used for all-electron
> calculations. With pseudopotentials, GPW is just as accurate.
>
> Regards
> Marcella
From marci.akira at gmail.com Thu Oct 3 17:22:47 2024
From: marci.akira at gmail.com (Marcella Iannuzzi)
Date: Thu, 3 Oct 2024 10:22:47 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20744] Re: Gallium + CO2 lack of convergence
In-Reply-To: <335efca4-9014-4d60-8c0e-42d4ab964000n@googlegroups.com>
References: <01922cb9-0de8-4887-bacb-da4a2ed78a28n@googlegroups.com> <6d229766-9b30-422f-8ce5-154ace7da6f8n@googlegroups.com> <1342769e-8153-4387-b5a8-b18c6b084635n@googlegroups.com> <30f8cec8-8b86-4dc1-a499-4f2af91e947an@googlegroups.com> <335efca4-9014-4d60-8c0e-42d4ab964000n@googlegroups.com>
Message-ID: <8cb7ec66-c7da-4798-b626-d9f93489be9en@googlegroups.com>

Hi

If you have a well-equilibrated liquid Ga box (right density and low stress tensor), I wouldn't run GEO_OPT. Obviously introducing CO2 is going to change the conditions, but I would anyway start with an NVT run for a first equilibration and then run NPT to re-equilibrate the volume.

I would add the CO2 molecule and remove all the Ga atoms within a certain radius of the center of mass of the molecule, and then run the equilibrations as described above.

The only way to lower the concentration is to increase the amount of Ga, i.e., increase the box.

Regards
Marcella

On Thursday, October 3, 2024 at 5:02:46 PM UTC+2 bnzmi... at gmail.com wrote:

> Hi Marcella,
>
> Thank you again!
>
> 1) Should I run GEO_OPT on the pure Ga liquid, then add CO2?
> 2) How should I form a cavity artificially without disrupting the newly
> equilibrated Ga structure?
> 3) I only have one molecule of CO2 in there - how should I go about
> lowering the concentration? Should I just increase the number of Ga atoms
> and add one CO2 molecule?
>
> Best,
>
> Michela
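The cavity step Marcella describes (delete every Ga atom within some radius of the inserted CO2) is easy to script. A sketch for an orthorhombic box using minimum-image distances; the 3.5 A radius is an arbitrary placeholder, and the geometric center of CO2 is used rather than a mass-weighted center of mass:

```python
import numpy as np

def carve_cavity(ga_pos, co2_pos, box, radius=3.5):
    """Drop every Ga atom closer than `radius` (Angstrom) to the center
    of the CO2 molecule, using minimum-image distances in an
    orthorhombic cell.  ga_pos: (N, 3), co2_pos: (3, 3), box: (3,).
    The geometric center is used here; a mass-weighted COM would move
    the point only slightly for CO2."""
    center = co2_pos.mean(axis=0)
    d = ga_pos - center
    d -= box * np.round(d / box)          # minimum-image convention
    keep = np.linalg.norm(d, axis=1) >= radius
    return ga_pos[keep]
```

After carving, the combined system still needs the NVT and then NPT re-equilibration described above, since removing atoms changes the local density.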
From l.cvitkovich at gmail.com Thu Oct 3 22:18:02 2024
From: l.cvitkovich at gmail.com (Lukas C)
Date: Thu, 3 Oct 2024 15:18:02 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20745] Re: cp2k V_XC_CUBE
In-Reply-To: <4323ee22-7159-48ed-a1c8-f8cba2ed5abcn@googlegroups.com>
References: <4323ee22-7159-48ed-a1c8-f8cba2ed5abcn@googlegroups.com>
Message-ID: <18dd0059-8b5a-41e1-af74-09a9f30cba27n@googlegroups.com>

Dear CP2K users,

I want to understand the XC contribution to the overall potential in detail. Therefore I would like to come back to Andres' question above. I have the same problem: there is no separate cube file for V_XC when using

  &V_XC_CUBE ON
    ADD_LAST NUMERIC
  &END V_XC_CUBE

The &V_HARTREE_CUBE section is set to OFF. If it is ON, *v_hartree-1_0.cube appears.

I use a PBE0 functional with the following setup:

  &XC
    &XC_FUNCTIONAL
      &PBE
        SCALE_X 1.0
        SCALE_C 1.0
      &END
      &PBE_HOLE_T_C_LR
        CUTOFF_RADIUS 2.0
        SCALE_X 0.0
      &END
    &END XC_FUNCTIONAL
    &HF
      &SCREENING
        EPS_SCHWARZ 1.0E-6
        SCREEN_ON_INITIAL_P Y
      &END
      &MEMORY
        MAX_MEMORY 1000
        EPS_STORAGE_SCALING 0.1
      &END
      &INTERACTION_POTENTIAL
        POTENTIAL_TYPE TRUNCATED
        CUTOFF_RADIUS 2.0
        T_C_G_DATA ${fdir}t_c_g.dat
      &END
      FRACTION 0.00
    &END
  &END XC

Am I missing something? Is there an alternative way to find out the XC potential?

Thanks for your help!

Best,
Lukas

oandr... at gmail.com wrote on Wednesday, 14 September 2022 at 17:51:22 UTC+2:

> Dear CP2K group,
>
> I am trying to print both the Hartree and XC potentials as cube files.
>
> I am using
>
>   &V_XC_CUBE
>     STRIDE 2 2 2
>   &END V_XC_CUBE
>
>   &V_HARTREE_CUBE
>     STRIDE 2 2 2
>   &END V_HARTREE_CUBE
>
> However, I only get the Hartree .cube file. I removed the V_HARTREE_CUBE
> section and left V_XC_CUBE, but I still get a Hartree file and not the XC
> cube file.
>
> Is this correct? Am I misunderstanding something? Should I obtain two
> independent cube files for Hartree and XC?
>
> I was wondering if you could give me advice on this.
> I am using cp2k 9.1 on daint.
>
> Best,
>
> Andres Ortega-Guerrero

From angus.gentles at gmail.com Fri Oct 4 13:33:12 2024
From: angus.gentles at gmail.com (Angus Gentles)
Date: Fri, 4 Oct 2024 06:33:12 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20745] cp2k Hybrids + Smearing
Message-ID: <81dcc10c-a50b-48d9-b60b-f98f0dfc933en@googlegroups.com>

Dear all,

Does anyone know if there are plans to implement smearing in hybrid functional calculations?

cheers
Angus Gentles

From hutter at chem.uzh.ch Sat Oct 5 08:31:25 2024
From: hutter at chem.uzh.ch (Jürg Hutter)
Date: Sat, 5 Oct 2024 08:31:25 +0000
Subject: [CP2K-user] [CP2K:20746] cp2k Hybrids + Smearing
In-Reply-To: <81dcc10c-a50b-48d9-b60b-f98f0dfc933en@googlegroups.com>
References: <81dcc10c-a50b-48d9-b60b-f98f0dfc933en@googlegroups.com>
Message-ID: 

Hi

Can you specify in more detail what combination of options you are missing? I have no problems running hybrid functionals together with the SMEAR option.
regards
JH
________________________________________
From: cp2k at googlegroups.com on behalf of Angus Gentles
Sent: Friday, October 4, 2024 3:33 PM
To: cp2k
Subject: [CP2K:20745] cp2k Hybrids + Smearing

Dear all,

Does anyone know if there are plans to implement smearing in hybrid functional calculations?

cheers
Angus Gentles

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/81dcc10c-a50b-48d9-b60b-f98f0dfc933en%40googlegroups.com.
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/ZR0P278MB0759365C1998B07B01C304A59F732%40ZR0P278MB0759.CHEP278.PROD.OUTLOOK.COM.

From hanaa.sarimohammed at gmail.com Sun Oct 6 22:20:33 2024
From: hanaa.sarimohammed at gmail.com (Hanaa Sari)
Date: Sun, 6 Oct 2024 15:20:33 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20747] Optimization of Bulk and slab
Message-ID: <8ceba3a5-8747-46b8-a307-3341f95c9adcn@googlegroups.com>

Dear All,

I am a new user of CP2K. I am trying to optimize Ru(111) bulk and a slab consisting of 4 layers. When I run the input file (in attachment), the only calculation that converges is that of the elementary cell. As soon as I increase the number of atoms (slab), the calculation does not converge. Could someone please point out my mistakes? Note that I am using CP2K version 2024.1.

Thank you.

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/8ceba3a5-8747-46b8-a307-3341f95c9adcn%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Ru-slab.inp Type: chemical/x-gamess-input Size: 11311 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Ru-Bulk.inp Type: chemical/x-gamess-input Size: 1820 bytes Desc: not available URL: From shivam-gupta at pharmafoods.co.jp Mon Oct 7 08:24:57 2024 From: shivam-gupta at pharmafoods.co.jp (Shivam Gupta) Date: Mon, 7 Oct 2024 01:24:57 -0700 (PDT) Subject: [CP2K-user] [CP2K:20748] start with CP2k Message-ID: Hello everyone, I'm new to CP2K and would like to use it to study antigen-antibody interactions. As I'm unsure how to begin, could anyone recommend a tutorial or beginner-friendly protocol to help me get started? Thank you! -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/bdfd896b-8d8b-4f7a-9354-f52d4e38e401n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthias.krack at psi.ch Mon Oct 7 08:46:30 2024 From: matthias.krack at psi.ch (Krack Matthias) Date: Mon, 7 Oct 2024 08:46:30 +0000 Subject: [CP2K-user] [CP2K:20749] Optimization of Bulk and slab In-Reply-To: <8ceba3a5-8747-46b8-a307-3341f95c9adcn@googlegroups.com> References: <8ceba3a5-8747-46b8-a307-3341f95c9adcn@googlegroups.com> Message-ID: Hi in Ru-slab.inp, cell size and atomic positions should match. The slab atoms do not fit into the defined simulation cell. 
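In CP2K, one way to keep cell and positions consistent is to give the positions as scaled (fractional) coordinates via the SCALED keyword of &COORD — a sketch with placeholder lattice parameters and atoms, not the actual slab from the attachment:

```
&SUBSYS
  &CELL
    ABC [angstrom] 5.42 5.42 25.0
    ALPHA_BETA_GAMMA 90.0 90.0 120.0
  &END CELL
  &COORD
    SCALED .TRUE.
    Ru  0.000  0.000  0.200
    Ru  0.333  0.667  0.300
  &END COORD
&END SUBSYS
```

With SCALED .TRUE., each coordinate is a fraction of the corresponding cell vector, so the atoms always end up inside the box defined in &CELL.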
Moreover, the atomic coordinates do not seem to be scaled coordinates, which typically range between 0 and 1.

HTH

Matthias

From: cp2k at googlegroups.com on behalf of Hanaa Sari
Date: Monday, 7 October 2024 at 07:55
To: cp2k
Subject: [CP2K:20747] Optimization of Bulk and slab

Dear All, I am a new user of CP2K. I am trying to optimize Ru(111) bulk and a slab consisting of 4 layers. When I run the input file (in attachment), the only calculation that converges is that of the elementary cell. As soon as I increase the number of atoms (slab), the calculation does not converge. Could someone please point out my mistakes? Note that I am using CP2K version 2024.1. Thank you.

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/8ceba3a5-8747-46b8-a307-3341f95c9adcn%40googlegroups.com.
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/ZRAP278MB0827ABE1A97DD65DB1A732E0F47D2%40ZRAP278MB0827.CHEP278.PROD.OUTLOOK.COM.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From bamaz.97 at gmail.com Mon Oct 7 13:10:09 2024
From: bamaz.97 at gmail.com (bartosz mazur)
Date: Mon, 7 Oct 2024 06:10:09 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20750] slow slurm regtests
Message-ID: 

Hi all,

I am trying to run regtests using the slurm sbatch script, but what I am observing is their extremely slow execution. After looking at the task, I can see that only 4 CPUs are being used (out of 48 set). It looks as if each task is run one after the other, i.e. 2 MPI x 2 OMP = 4 CPU.
I have already tried different `mpiexec` command settings and changed the `srun` command, but this did not help. When using 4 nodes the task also runs on only 4 CPU of a single node. I don't quite understand why the system reports 2 GPUs when the `nvidia-smi --query-gpu=gpu_name --format=csv,noheader | wc -l` command is called, so I modified do_regtest.py to force 0 GPUs, but that didn't change anything either. The instructions at https://www.cp2k.org/dev:regtesting#run_with_sbatch are out of date, so maybe something else needs to be changed in the script? I would appreciate any help!

Here is my sbatch script:

```
#!/bin/bash -l
#SBATCH --time=06:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --cpus-per-task=2
#SBATCH --ntasks-per-core=1
#SBATCH --mem=180G

set -o errexit
set -o nounset
set -o pipefail

export MPICH_OFI_STARTUP_CONNECT=1
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
# export OMP_PROC_BIND=close
# export OMP_PLACES=cores

module load intel/2022b
module load GCC/12.2.0

# Let the user see the currently loaded modules in the slurm log for completeness:
module list

CP2K_BASE_DIR="/lustre/pd01/hpc-kuchta-1716987452/software/cp2k"
CP2K_TEST_DIR=${TMPDIR}
CP2K_VERSION="psmp"
NTASKS_SINGLE_TEST=2
NNODES_SINGLE_TEST=1
SRUN_CMD="srun --cpu-bind=verbose,cores"

# to run tests across nodes (to check for communication effects), use:
# NNODES_SINGLE_TEST=4
# SRUN_CMD="srun --cpu-bind=verbose,cores --ntasks-per-node 2"

# the following should be sufficiently generic:
mkdir -p "${CP2K_TEST_DIR}"
cd "${CP2K_TEST_DIR}"
cp2k_rel_dir=$(realpath --relative-to="${CP2K_TEST_DIR}" "${CP2K_BASE_DIR}/exe/local")

# srun does not like `-np`, override the complete command instead:
export cp2k_run_prefix="${SRUN_CMD} -N ${NNODES_SINGLE_TEST} -n ${NTASKS_SINGLE_TEST}"

"${CP2K_REGEST_SCRIPT_DIR:-${CP2K_BASE_DIR}/tests}/do_regtest.py" \
  --mpiranks ${NTASKS_SINGLE_TEST} \
  --ompthreads ${OMP_NUM_THREADS} \
  --maxtasks ${SLURM_NTASKS} \
  --num_gpus 0 \
  --workbasedir "${CP2K_TEST_DIR}" \
  --mpiexec "mpiexec -n {N}" \
  --debug \
  "${cp2k_rel_dir}" \
  "${CP2K_VERSION}" \
  |& tee "${CP2K_TEST_DIR}/${CP2K_ARCH}.${CP2K_VERSION}.log"
```

and output after 1h of execution:

```
Loading intel/2022b
  Loading requirement: GCCcore/12.2.0 zlib/1.2.12-GCCcore-12.2.0
    binutils/2.39-GCCcore-12.2.0 intel-compilers/2022.2.1
    numactl/2.0.16-GCCcore-12.2.0 UCX/1.13.1-GCCcore-12.2.0
    impi/2021.7.1-intel-compilers-2022.2.1 imkl/2022.2.1 iimpi/2022b
    imkl-FFTW/2022.2.1-iimpi-2022b
Currently Loaded Modulefiles:
  1) GCCcore/12.2.0                 7) impi/2021.7.1-intel-compilers-2022.2.1
  2) zlib/1.2.12-GCCcore-12.2.0     8) imkl/2022.2.1
  3) binutils/2.39-GCCcore-12.2.0   9) iimpi/2022b
  4) intel-compilers/2022.2.1      10) imkl-FFTW/2022.2.1-iimpi-2022b
  5) numactl/2.0.16-GCCcore-12.2.0 11) intel/2022b
  6) UCX/1.13.1-GCCcore-12.2.0     12) GCC/12.2.0
*************************** Testing started ****************************
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] ('--version',)
----------------------------- Settings ---------------------------------
MPI ranks:      2
OpenMP threads: 2
GPU devices:    2
Workers:        6
Timeout [s]:    400
Work base dir:  /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41
MPI exec:       mpiexec -n {N}
Smoke test:     False
Valgrind:       False
Keepalive:      False
Flag slow:      False
Debug:          True
Binary dir:     /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local
VERSION:        psmp
Flags:          omp,libint,fftw3,libxc,libgrpp,pexsi,elpa,parallel,scalapack,mpi_f08,cosma,xsmm,plumed2,spglib,mkl,sirius,libvori,libbqb,libvdwxc,hdf5
------------------------------------------------------------------------
Copying test files ... done
Skipping UNIT/nequip_unittest because its requirements are not satisfied.
Skipping TMC/regtest_ana_on_the_fly because its requirements are not satisfied.
Skipping QS/regtest-cusolver because its requirements are not satisfied.
Skipping QS/regtest-dlaf because its requirements are not satisfied.
Skipping Fist/regtest-nequip because its requirements are not satisfied.
Skipping Fist/regtest-allegro because its requirements are not satisfied.
Skipping QS/regtest-dft-vdw-corr-4 because its requirements are not satisfied.
Skipping Fist/regtest-deepmd because its requirements are not satisfied.
Skipping Fist/regtest-quip because its requirements are not satisfied.
Launched 362 test directories and 6 worker...
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/dbt_tas_unittest.psmp'] ('/lustre/pd01/hpc-kuchta-1716987452/software/cp2k',)
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/dbt_unittest.psmp'] ('/lustre/pd01/hpc-kuchta-1716987452/software/cp2k',)
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/grid_unittest.psmp'] ('/lustre/pd01/hpc-kuchta-1716987452/software/cp2k',)
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/libcp2k_unittest.psmp'] ('/lustre/pd01/hpc-kuchta-1716987452/software/cp2k',)
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/memory_utilities_unittest.psmp'] ('/lustre/pd01/hpc-kuchta-1716987452/software/cp2k',)
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/parallel_rng_types_unittest.psmp'] ('/lustre/pd01/hpc-kuchta-1716987452/software/cp2k',)
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] ('RPA_SIGMA_H2O_clenshaw.inp',)
>>> /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/memory_utilities_unittest
    memory_utilities_unittest - OK ( 0.29 sec)
<<< /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/memory_utilities_unittest (1 of 362) done in 0.29 sec
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] ('H2O_ref.inp',)
>>> /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/dbt_unittest
    dbt_unittest - RUNTIME FAIL ( 1.61 sec)
<<< /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/dbt_unittest (2 of 362) done in 1.61 sec
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] ('h2o_f01_coulomb_only.inp',)
>>> /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/dbt_tas_unittest
    dbt_tas_unittest - RUNTIME FAIL ( 1.84 sec)
<<< /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/dbt_tas_unittest (3 of 362) done in 1.84 sec
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] ('test01.inp',)
>>> /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/parallel_rng_types_unittest
    parallel_rng_types_unittest - OK ( 2.04 sec)
<<< /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/parallel_rng_types_unittest (4 of 362) done in 2.04 sec
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] ('h2o_f21.inp',)
>>> /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/grid_unittest
    grid_unittest - OK ( 2.53 sec)
<<< /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/grid_unittest (5 of 362) done in 2.53 sec
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] ('h2o_dip12.inp',)
>>> /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/libcp2k_unittest
    libcp2k_unittest - OK ( 19.03 sec)
<<< /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/libcp2k_unittest (6 of 362) done in 19.03 sec
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] ('RPA_SIGMA_H2O_minimax.inp',)
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] ('RPA_SIGMA_H_minimax.inp',)
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] ('H2O_pao_exp.inp',)
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] ('RPA_SIGMA_H_clenshaw.inp',)
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] ('RPA_SIGMA_H2O_minimax_NUM_INTEG_GROUPS.inp',)
Creating subprocess: ['mpiexec', '-n', '2', '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] ('H2O-5.inp',)
>>> /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/QS/regtest-rpa-sigma
    RPA_SIGMA_H2O_clenshaw.inp                  -17.19226814   OK ( 83.42 sec)
    RPA_SIGMA_H2O_minimax.inp                   -17.18984039   OK ( 83.59 sec)
    RPA_SIGMA_H_minimax.inp                     -0.5150377917  OK ( 63.64 sec)
    RPA_SIGMA_H_clenshaw.inp                    -0.5150909069  OK ( 65.65 sec)
    RPA_SIGMA_H2O_minimax_NUM_INTEG_GROUPS.inp  -17.18984039   OK ( 86.54 sec)
<<< /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/QS/regtest-rpa-sigma (7 of 362) done in 382.84 sec
```

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/d9ed484a-b9aa-4b0a-89cc-138343328848n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
I will make the necessary adjustments. Le lundi 7 octobre 2024 ? 10:46:45 UTC+2, Krack Matthias a ?crit : > Hi > > > > in Ru-slab.inp, cell size and atomic positions should match. The slab > atoms do not fit into the defined simulation cell. Moreover, the atomic > coordinates seem to be not scaled coordinates which typically range between > 0 and 1. > > > > HTH > > > > Matthias > > > > *From: *cp... at googlegroups.com on behalf of > Hanaa Sari > *Date: *Monday, 7 October 2024 at 07:55 > *To: *cp2k > *Subject: *[CP2K:20747] Optimization of Bulk and slab > > Dear All, > > I am a new user of CP2K. > > I am trying to to optimize Ru(111) bulk and slab consisiting 4 layers. > > When I run the input file (in attachment) the only calculation that > converge is that of the elementary cell . As soon as I increase the number > of atoms (slab) the calculation do not converge. > > Could someone please point out my mistakes? > > knowing that I am using the version cp2k 2024.1 > > Thank you. > > -- > You received this message because you are subscribed to the Google Groups > "cp2k" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to cp2k+uns... at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/cp2k/8ceba3a5-8747-46b8-a307-3341f95c9adcn%40googlegroups.com > > . > -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/61a5a8ea-76a6-4fac-a4ba-245784cf5db1n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From xw97259 at gmail.com Tue Oct 8 06:22:46 2024
From: xw97259 at gmail.com (xuan wang)
Date: Mon, 7 Oct 2024 23:22:46 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20752] about molden file for MO visualization
Message-ID: 

Dear cp2k users,

I am trying to dump the information from the MO molden file for MO visualization and to transform it into a data grid in the form of a cube file with Python. I am new to this and want to learn how this process can be performed, together with the basic chemical principles behind it. Is there a manual that explains the transformation with the assistance of Python?

Thank you very much.

Best regards,
Xuan

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/ebc40d5d-52fa-489e-94b8-52e9daf81362n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From bamaz.97 at gmail.com Tue Oct 8 10:58:00 2024
From: bamaz.97 at gmail.com (bartosz mazur)
Date: Tue, 8 Oct 2024 03:58:00 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20753] compilation problems - LHS and RHS of an assignment statement have incompatible types
Message-ID: 

Hi all,

I recently managed to compile cp2k on our cluster, but regtests showed several errors. Most of the failures are due to the error `forrtl: severe (189): LHS and RHS of an assignment statement have incompatible types` or `forrtl: severe (153): allocatable array or pointer is not allocated`.
After looking at the output from `make` I noticed that there are quite a few similar warnings there: ``` /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exts/dbcsr/src/mpi/dbcsr_mpiwrap.F(1930): warning #8100: The actual argument is an array section or assumed-shape array, corresponding dummy argument that has either the VOLATILE or ASYNCHRONOUS attribute shall be an assumed-shape array. [MSGIN] CALL mpi_isend(msgin, msglen, MPI_LOGICAL, dest, my_tag, & ------------------------^ ``` For compilation I used GCC 12.2.0 and intel 2022.2.1. My toolchain command was `./install_cp2k_toolchain.sh --mpi-mode=intelmpi --with-intel --with-gcc=system --with-plumed --with-quip --with-pexsi --with-ptscotch --with-superlu --with-fftw=no --with-hdf5`. In the attachment I provide all outputs from toolchain, make, and regtests. I'm not sure what went wrong and how should I proceed so any help will be much appreciated! Best Bartosz -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/f23087e8-772f-4484-9f38-1f2aa0874058n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: bem_compilation_warnings.zip Type: application/x-zip Size: 351812 bytes Desc: not available URL: From f.stein at hzdr.de Tue Oct 8 12:07:14 2024 From: f.stein at hzdr.de (Frederick Stein) Date: Tue, 8 Oct 2024 05:07:14 -0700 (PDT) Subject: [CP2K-user] [CP2K:20755] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: References: Message-ID: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> Dear Bartosz, If you want to compile with Intel, then drop the "--with-gcc" flag. 
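Applied to the toolchain command quoted above, dropping that flag would give (the other flags kept as posted):

```
./install_cp2k_toolchain.sh --mpi-mode=intelmpi --with-intel \
  --with-plumed --with-quip --with-pexsi --with-ptscotch \
  --with-superlu --with-fftw=no --with-hdf5
```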
Regarding Intel, we do not test Intel 2022.2 anymore. You should try the IntelOneAPI containing more recent compilers instead. We are currently testing version 2024.2. The warnings can be ignored for now, but we are aware of that issue and will make adjustments later after dropping some older compilers. Regarding the runtime errors. The error "LHS and RHS of an assignment statement have incompatible types" could be a compiler bug (see https://community.intel.com/t5/Intel-Fortran-Compiler/Segmentation-fault-due-to-assignment-of-derived-type-variable/td-p/1489823). The allocation error may also be a compiler bug as the respective array is always allocated and the routine is left directly after deallocating the array earlier in the routine. Best, Frederick bartosz mazur schrieb am Dienstag, 8. Oktober 2024 um 13:17:08 UTC+2: > Hi all, > > I recently managed to compile cp2k on our cluster, but regtests showed > several errors. Most of the failures are due to the error `forrtl: severe > (189): LHS and RHS of an assignment statement have incompatible types` or `forrtl: > severe (153): allocatable array or pointer is not allocated`. After > looking at the output from `make` I noticed that there are quite a few > similar warnings there: > > ``` > /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exts/dbcsr/src/mpi/dbcsr_mpiwrap.F(1930): > warning #8100: The actual argument is an array section or assumed-shape > array, corresponding dummy argument that has either the VOLATILE or > ASYNCHRONOUS attribute shall be an assumed-shape array. [MSGIN] > CALL mpi_isend(msgin, msglen, MPI_LOGICAL, dest, my_tag, & > ------------------------^ > ``` > > For compilation I used GCC 12.2.0 and intel 2022.2.1. My toolchain command > was `./install_cp2k_toolchain.sh --mpi-mode=intelmpi --with-intel > --with-gcc=system --with-plumed --with-quip --with-pexsi --with-ptscotch > --with-superlu --with-fftw=no --with-hdf5`. 
In the attachment I provide > all outputs from toolchain, make, and regtests. > > I'm not sure what went wrong and how should I proceed so any help will be > much appreciated! > > Best > Bartosz > -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/17064317-728e-4164-b086-edd664bd8d28n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bamaz.97 at gmail.com Tue Oct 8 12:43:18 2024 From: bamaz.97 at gmail.com (bartosz mazur) Date: Tue, 8 Oct 2024 05:43:18 -0700 (PDT) Subject: [CP2K-user] [CP2K:20756] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> Message-ID: <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> Hi Frederick, Thank you for your quick response! Just to be sure, if I compile the latest version of cp2k using Intel 2021 (https://www.cp2k.org/dev:compiler_support), I should no longer have the problems described? I ask because I don't see a module with Intel OneAPI 2024 on our HPC, so I am considering using either an older module or asking the admins to provide a newer one. Best Bartosz wtorek, 8 pa?dziernika 2024 o 14:07:15 UTC+2 Frederick Stein napisa?(a): > Dear Bartosz, > If you want to compile with Intel, then drop the "--with-gcc" flag. > Regarding Intel, we do not test Intel 2022.2 anymore. You should try the > IntelOneAPI containing more recent compilers instead. We are currently > testing version 2024.2. > The warnings can be ignored for now, but we are aware of that issue and > will make adjustments later after dropping some older compilers. > Regarding the runtime errors. 
The error "LHS and RHS of an assignment > statement have incompatible types" could be a compiler bug (see > https://community.intel.com/t5/Intel-Fortran-Compiler/Segmentation-fault-due-to-assignment-of-derived-type-variable/td-p/1489823). > The allocation error may also be a compiler bug as the respective array is > always allocated and the routine is left directly after deallocating the > array earlier in the routine. > Best, > Frederick > > bartosz mazur schrieb am Dienstag, 8. Oktober 2024 um 13:17:08 UTC+2: > >> Hi all, >> >> I recently managed to compile cp2k on our cluster, but regtests showed >> several errors. Most of the failures are due to the error `forrtl: >> severe (189): LHS and RHS of an assignment statement have incompatible >> types` or `forrtl: severe (153): allocatable array or pointer is not >> allocated`. After looking at the output from `make` I noticed that there >> are quite a few similar warnings there: >> >> ``` >> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exts/dbcsr/src/mpi/dbcsr_mpiwrap.F(1930): >> warning #8100: The actual argument is an array section or assumed-shape >> array, corresponding dummy argument that has either the VOLATILE or >> ASYNCHRONOUS attribute shall be an assumed-shape array. [MSGIN] >> CALL mpi_isend(msgin, msglen, MPI_LOGICAL, dest, my_tag, & >> ------------------------^ >> ``` >> >> For compilation I used GCC 12.2.0 and intel 2022.2.1. My toolchain >> command was `./install_cp2k_toolchain.sh --mpi-mode=intelmpi >> --with-intel --with-gcc=system --with-plumed --with-quip --with-pexsi >> --with-ptscotch --with-superlu --with-fftw=no --with-hdf5`. In the >> attachment I provide all outputs from toolchain, make, and regtests. >> >> I'm not sure what went wrong and how should I proceed so any help will be >> much appreciated! >> >> Best >> Bartosz >> > -- You received this message because you are subscribed to the Google Groups "cp2k" group. 
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/21825b1f-9a66-4d63-a223-e958db74d714n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From cp2k at googlegroups.com Tue Oct 8 12:24:02 2024
From: cp2k at googlegroups.com ('daniel Storm' via cp2k)
Date: Tue, 8 Oct 2024 05:24:02 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20756] Inconsistent geometry optimisations
Message-ID: 

Hi everyone,

Just to provide some context: in our group we use CP2k to model molecular reactivity in the solid state, so slightly different from "standard" solid state chemistry. We run optimisations for minima and transition states (TS), whose character we confirm by running phonon calculations.

Now, to the issue. I am working with a crystal structure of an iridium organometallic salt, [Ir(PONOP)(H)Me][BArF4]; however, I am encountering issues with achieving consistent optimisations. Despite starting from the same experimental structure, I am getting very different SCF energies (up to 3 kcal/mol) and I have observed minor differences in the resulting geometries. I think that those geometries are close enough that they should not show that big of an SCF difference, but my main worry is the fact that the starting point is always the same. The convergence criterion that I am currently using is MAX_FORCE 1.0 x 10^-4. I have attempted to improve this by tightening the criteria (MAX_FORCE, MAX_DR, RMS_DR and RMS_FORCE) to 1.0 x 10^-6, but this didn't fix the issue. All the optimisations are done with the PBE-D3 functional and the DZVP-MOLOPT-SR-GTH basis set.

So, to sum up, I run optimisations with the same starting geometry that converge to different structures, and I am not sure why, or which criterion to use to select one structure over the other, etc.
I have worked with other organometallic salts in the past, and this is the first time I find this issue. I am using CP2k version 2023.2 in the HPC cluster Archer-2. Has anyone seen this before? Any advice on how to proceed? Thanks in advance, Daniel. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/d50cb0ef-42e9-465d-81c7-2fdacdab6d41n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthias.krack at psi.ch Tue Oct 8 13:34:46 2024 From: matthias.krack at psi.ch (Krack Matthias) Date: Tue, 8 Oct 2024 13:34:46 +0000 Subject: [CP2K-user] [CP2K:20757] Inconsistent geometry optimisations In-Reply-To: References: Message-ID: Hi Daniel Did you try to decrease EPS_DEFAULT and to increase the PW cutoff? These input parameters mainly determine the accuracy of the atomic forces rather than the convergence thresholds for the force calculation like MAX_FORCE or MAX_DR which can also show convergence for poor forces. HTH Matthias From: 'daniel Storm' via cp2k Date: Tuesday, 8 October 2024 at 14:50 To: cp2k Subject: [CP2K:20756] Inconsistent geometry optimisations Hi everyone, Just to provide some context, in our group we use CP2k to model molecular reactivity in the solid state, so slightly different than ?standard? solid state chemistry. We run optimisations for minima and transition states (TS), whose character we confirm by running phonon calculations. Now, to the issue. I am working with a crystal structure of an iridium organometallic salt, [Ir(PONOP)(H)Me][BArF4], however, I am encountering issues with achieving consistent optimisations. 
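The two settings Matthias mentions live in the &DFT section; a sketch of tightened values (the numbers are illustrative, not a recommendation for this particular system):

```
&DFT
  &QS
    EPS_DEFAULT 1.0E-12
  &END QS
  &MGRID
    CUTOFF 600
    REL_CUTOFF 60
  &END MGRID
&END DFT
```

A cutoff convergence test (total energy versus CUTOFF at fixed REL_CUTOFF) is the usual way to pick values that make the forces accurate enough for tight geometry-convergence thresholds.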
Despite starting from the same experimental structure, I am getting very different SCF energies (up to 3 kcal/mol) and I have observed minor differences in the resulting geometries. I think that those geometries are close enough to not show that big of an SCF difference, but my main worry is the fact that the starting point is always the same. The convergence criteria that I am currently using is MAX_FORCE 1.0 x 10-4. I have attempted to improve this by tightening the criteria (MAX_FORCE, MAX_DR, RMS_DR and RMS_FORCE) to 1.0 x 10-6, but this didn?t fix the issue. All the optimisations are done with the PBE-D3 functional and DZVP-MOLOPT-SR-GTH basis set. So, to sum up, I run optimisations with the same starting geometry that converge to different structures, and I am not sure why or which criterion to use to select one structure over the other, etc. I have worked with other organometallic salts in the past, and this is the first time I find this issue. I am using CP2k version 2023.2 in the HPC cluster Archer-2. Has anyone seen this before? Any advice on how to proceed? Thanks in advance, Daniel. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/d50cb0ef-42e9-465d-81c7-2fdacdab6d41n%40googlegroups.com. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/ZRAP278MB0827F79512B3329F4B818BCEF47E2%40ZRAP278MB0827.CHEP278.PROD.OUTLOOK.COM. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From f.stein at hzdr.de Tue Oct 8 13:46:14 2024 From: f.stein at hzdr.de (Frederick Stein) Date: Tue, 8 Oct 2024 06:46:14 -0700 (PDT) Subject: [CP2K-user] [CP2K:20759] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> Message-ID: Hi Bartosz, No, Intel 2021 will probably not work; it is older than Intel 2022. I meant something like Intel OneAPI 2023 or 2024. Best, Frederick bartosz mazur wrote on Tuesday, 8 October 2024 at 14:43:19 UTC+2: > Hi Frederick, > > Thank you for your quick response! Just to be sure, if I compile the > latest version of cp2k using Intel 2021 ( > https://www.cp2k.org/dev:compiler_support), I should no longer have the > problems described? I ask because I don't see a module with Intel OneAPI > 2024 on our HPC, so I am considering using either an older module or asking > the admins to provide a newer one. > > Best > Bartosz > > On Tuesday, 8 October 2024 at 14:07:15 UTC+2, Frederick Stein wrote: > >> Dear Bartosz, >> If you want to compile with Intel, then drop the "--with-gcc" flag. >> Regarding Intel, we do not test Intel 2022.2 anymore. You should try the >> Intel OneAPI releases containing more recent compilers instead. We are currently >> testing version 2024.2. >> The warnings can be ignored for now, but we are aware of that issue and >> will make adjustments later after dropping some older compilers. >> Regarding the runtime errors: the error "LHS and RHS of an assignment >> statement have incompatible types" could be a compiler bug (see >> https://community.intel.com/t5/Intel-Fortran-Compiler/Segmentation-fault-due-to-assignment-of-derived-type-variable/td-p/1489823). 
>> The allocation error may also be a compiler bug as the respective array is >> always allocated and the routine is left directly after deallocating the >> array earlier in the routine. >> Best, >> Frederick >> >> bartosz mazur schrieb am Dienstag, 8. Oktober 2024 um 13:17:08 UTC+2: >> >>> Hi all, >>> >>> I recently managed to compile cp2k on our cluster, but regtests showed >>> several errors. Most of the failures are due to the error `forrtl: >>> severe (189): LHS and RHS of an assignment statement have incompatible >>> types` or `forrtl: severe (153): allocatable array or pointer is not >>> allocated`. After looking at the output from `make` I noticed that >>> there are quite a few similar warnings there: >>> >>> ``` >>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exts/dbcsr/src/mpi/dbcsr_mpiwrap.F(1930): >>> warning #8100: The actual argument is an array section or assumed-shape >>> array, corresponding dummy argument that has either the VOLATILE or >>> ASYNCHRONOUS attribute shall be an assumed-shape array. [MSGIN] >>> CALL mpi_isend(msgin, msglen, MPI_LOGICAL, dest, my_tag, & >>> ------------------------^ >>> ``` >>> >>> For compilation I used GCC 12.2.0 and intel 2022.2.1. My toolchain >>> command was `./install_cp2k_toolchain.sh --mpi-mode=intelmpi >>> --with-intel --with-gcc=system --with-plumed --with-quip --with-pexsi >>> --with-ptscotch --with-superlu --with-fftw=no --with-hdf5`. In the >>> attachment I provide all outputs from toolchain, make, and regtests. >>> >>> I'm not sure what went wrong and how should I proceed so any help will >>> be much appreciated! >>> >>> Best >>> Bartosz >>> >> -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/f73fa281-30c2-445e-b0c1-3278a7b82f41n%40googlegroups.com. 
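[Editorial note: concretely, Frederick's advice amounts to re-running Bartosz's toolchain command without the `--with-gcc=system` flag, with a recent Intel OneAPI environment loaded first. A sketch, assuming the same optional packages as in Bartosz's original command; module names vary per cluster:]

```shell
#!/bin/bash
# Bartosz's original flags, minus --with-gcc=system so the Intel
# compilers are used throughout (per Frederick's advice).
TOOLCHAIN_FLAGS="--mpi-mode=intelmpi --with-intel --with-plumed \
--with-quip --with-pexsi --with-ptscotch --with-superlu \
--with-fftw=no --with-hdf5"

# Load a recent Intel OneAPI module first (name is an assumption),
# then run the toolchain script from the cp2k/tools/toolchain directory:
echo "./install_cp2k_toolchain.sh ${TOOLCHAIN_FLAGS}"
```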
-------------- next part -------------- An HTML attachment was scrubbed... URL: From pototschnig.johann at gmail.com Tue Oct 8 14:35:13 2024 From: pototschnig.johann at gmail.com (Johann Pototschnig) Date: Tue, 8 Oct 2024 07:35:13 -0700 (PDT) Subject: [CP2K-user] [CP2K:20760] Re: slow slurm regtests In-Reply-To: References: Message-ID: <58256d55-fe8c-47e3-b1ec-144a10c33982n@googlegroups.com> Hi, The combination of regtest with slurm leads to all workers being run on the same processors. (This, of course, leads to resource starvation if there are several workers.) The tests are rather small, so more than 2 MPI processes are not useful and can lead to failing tests. Regarding OpenMP threads, you can go a bit higher, but too many threads don't make sense as the tests are quite small. Due to these limitations the tests take time, but it should be a bit faster than your setup, where the workers are fighting for resources. They should finish within 10 h. I would suggest something like: #SBATCH --nodes=1 #SBATCH --ntasks-per-node=2 #SBATCH --cpus-per-task=8 The assignment of workers to different CPUs is not straightforward. best, Johann On Monday, October 7, 2024 at 3:11:45 PM UTC+2 bartosz mazur wrote: > Hi all, > > I am trying to run regtests using the slurm sbatch script, but what I am > observing is their extremely slow execution. After looking at the task, I > can see that only 4 CPUs are being used (out of 48 set). It looks as if > each task is run one after the other, i.e. 2 MPI x 2 OMP = 4 CPU. > > I have already tried different `mpiexec` command settings and changed the > `srun` command, but this did not help. When using 4 nodes the task also > runs on only 4 CPUs of a single node. I don't quite understand why the > system reports 2 GPUs when the `nvidia-smi --query-gpu=gpu_name > --format=csv,noheader | wc -l` command is called, so I modified > do_regtest.py to force 0 GPUs, but that didn't change anything either. 
The > instructions at https://www.cp2k.org/dev:regtesting#run_with_sbatch are > out of date, so maybe something else needs to be changed in the script? > > I would appreciate any help! > > Here is my sbatch script: > > ``` > > #!/bin/bash -l > > #SBATCH --time=06:00:00 > > #SBATCH --nodes=1 > > #SBATCH --ntasks-per-node=24 > > #SBATCH --cpus-per-task=2 > > #SBATCH --ntasks-per-core=1 > > #SBATCH --mem=180G > > > > set -o errexit > > set -o nounset > > set -o pipefail > > > > export MPICH_OFI_STARTUP_CONNECT=1 > > export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK} > > # export OMP_PROC_BIND=close > > # export OMP_PLACES=cores > > > > module load intel/2022b > > module load GCC/12.2.0 > > > > # Let the user see the currently loaded modules in the slurm log for > completeness: > > module list > > > > CP2K_BASE_DIR="/lustre/pd01/hpc-kuchta-1716987452/software/cp2k" > > CP2K_TEST_DIR=${TMPDIR} > > > > CP2K_VERSION="psmp" > > > > NTASKS_SINGLE_TEST=2 > > NNODES_SINGLE_TEST=1 > > SRUN_CMD="srun --cpu-bind=verbose,cores" > > > > # to run tests across nodes (to check for communication effects), use: > > # NNODES_SINGLE_TEST=4 > > # SRUN_CMD="srun --cpu-bind=verbose,cores --ntasks-per-node 2" > > > # the following should be sufficiently generic: > > > > mkdir -p "${CP2K_TEST_DIR}" > > cd "${CP2K_TEST_DIR}" > > > > cp2k_rel_dir=$(realpath --relative-to="${CP2K_TEST_DIR}" > "${CP2K_BASE_DIR}/exe/local") > > # srun does not like `-np`, override the complete command instead: > > export cp2k_run_prefix="${SRUN_CMD} -N ${NNODES_SINGLE_TEST} -n > ${NTASKS_SINGLE_TEST}" > > > > "${CP2K_REGEST_SCRIPT_DIR:-${CP2K_BASE_DIR}/tests}/do_regtest.py" \ > > --mpiranks ${NTASKS_SINGLE_TEST} \ > > --ompthreads ${OMP_NUM_THREADS} \ > > --maxtasks ${SLURM_NTASKS} \ > > --num_gpus 0 \ > > --workbasedir "${CP2K_TEST_DIR}" \ > > --mpiexec "mpiexec -n {N}" \ > > --debug \ > > "${cp2k_rel_dir}" \ > > "${CP2K_VERSION}" \ > > |& tee "${CP2K_TEST_DIR}/${CP2K_ARCH}.${CP2K_VERSION}.log" > > > ``` > > and 
output after 1h of execution: > > ``` > > Loading intel/2022b > > Loading requirement: GCCcore/12.2.0 zlib/1.2.12-GCCcore-12.2.0 > > binutils/2.39-GCCcore-12.2.0 intel-compilers/2022.2.1 > > numactl/2.0.16-GCCcore-12.2.0 UCX/1.13.1-GCCcore-12.2.0 > > impi/2021.7.1-intel-compilers-2022.2.1 imkl/2022.2.1 iimpi/2022b > > imkl-FFTW/2022.2.1-iimpi-2022b > > Currently Loaded Modulefiles: > > 1) GCCcore/12.2.0 7) > impi/2021.7.1-intel-compilers-2022.2.1 > > 2) zlib/1.2.12-GCCcore-12.2.0 8) imkl/2022.2.1 > > > 3) binutils/2.39-GCCcore-12.2.0 9) iimpi/2022b > > > 4) intel-compilers/2022.2.1 10) imkl-FFTW/2022.2.1-iimpi-2022b > > > 5) numactl/2.0.16-GCCcore-12.2.0 11) intel/2022b > > > 6) UCX/1.13.1-GCCcore-12.2.0 12) GCC/12.2.0 > > > *************************** Testing started **************************** > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] > ('--version',) > > > ----------------------------- Settings --------------------------------- > > MPI ranks: 2 > > OpenMP threads: 2 > > GPU devices: 2 > > Workers: 6 > > Timeout [s]: 400 > > Work base dir: /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41 > > MPI exec: mpiexec -n {N} > > Smoke test: False > > Valgrind: False > > Keepalive: False > > Flag slow: False > > Debug: True > > Binary dir: /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local > > VERSION: psmp > > Flags: > omp,libint,fftw3,libxc,libgrpp,pexsi,elpa,parallel,scalapack,mpi_f08,cosma,xsmm,plumed2,spglib,mkl,sirius,libvori,libbqb,libvdwxc,hdf5 > > ------------------------------------------------------------------------ > > Copying test files ... done > > Skipping UNIT/nequip_unittest because its requirements are not satisfied. > > Skipping TMC/regtest_ana_on_the_fly because its requirements are not > satisfied. > > Skipping QS/regtest-cusolver because its requirements are not satisfied. > > Skipping QS/regtest-dlaf because its requirements are not satisfied. 
> > Skipping Fist/regtest-nequip because its requirements are not satisfied. > > Skipping Fist/regtest-allegro because its requirements are not satisfied. > > Skipping QS/regtest-dft-vdw-corr-4 because its requirements are not > satisfied. > > Skipping Fist/regtest-deepmd because its requirements are not satisfied. > > Skipping Fist/regtest-quip because its requirements are not satisfied. > > Launched 362 test directories and 6 worker... > > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/dbt_tas_unittest.psmp'] > ('/lustre/pd01/hpc-kuchta-1716987452/software/cp2k',) > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/dbt_unittest.psmp'] > ('/lustre/pd01/hpc-kuchta-1716987452/software/cp2k',) > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/grid_unittest.psmp'] > ('/lustre/pd01/hpc-kuchta-1716987452/software/cp2k',) > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/libcp2k_unittest.psmp'] > ('/lustre/pd01/hpc-kuchta-1716987452/software/cp2k',) > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/memory_utilities_unittest.psmp'] > ('/lustre/pd01/hpc-kuchta-1716987452/software/cp2k',) > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/parallel_rng_types_unittest.psmp'] > ('/lustre/pd01/hpc-kuchta-1716987452/software/cp2k',) > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] > ('RPA_SIGMA_H2O_clenshaw.inp',) > > >>> > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/memory_utilities_unittest > > memory_utilities_unittest > - OK ( 0.29 sec) > > <<< > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/memory_utilities_unittest > 
(1 of 362) done in 0.29 sec > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] > ('H2O_ref.inp',) > > >>> > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/dbt_unittest > > dbt_unittest > - RUNTIME FAIL ( 1.61 sec) > > <<< > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/dbt_unittest > (2 of 362) done in 1.61 sec > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] > ('h2o_f01_coulomb_only.inp',) > > >>> > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/dbt_tas_unittest > > dbt_tas_unittest > - RUNTIME FAIL ( 1.84 sec) > > <<< > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/dbt_tas_unittest > (3 of 362) done in 1.84 sec > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] > ('test01.inp',) > > >>> > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/parallel_rng_types_unittest > > parallel_rng_types_unittest > - OK ( 2.04 sec) > > <<< > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/parallel_rng_types_unittest > (4 of 362) done in 2.04 sec > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] > ('h2o_f21.inp',) > > >>> > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/grid_unittest > > grid_unittest > - OK ( 2.53 sec) > > <<< > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/grid_unittest > (5 of 362) done in 2.53 sec > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] > ('h2o_dip12.inp',) > > >>> > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/libcp2k_unittest > > libcp2k_unittest > - OK ( 19.03 sec) > > <<< > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/UNIT/libcp2k_unittest > (6 of 362) done 
in 19.03 sec > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] > ('RPA_SIGMA_H2O_minimax.inp',) > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] > ('RPA_SIGMA_H_minimax.inp',) > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] > ('H2O_pao_exp.inp',) > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] > ('RPA_SIGMA_H_clenshaw.inp',) > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] > ('RPA_SIGMA_H2O_minimax_NUM_INTEG_GROUPS.inp',) > > Creating subprocess: ['mpiexec', '-n', '2', > '/lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp'] > ('H2O-5.inp',) > > >>> > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/QS/regtest-rpa-sigma > > RPA_SIGMA_H2O_clenshaw.inp > -17.19226814 OK ( 83.42 sec) > > RPA_SIGMA_H2O_minimax.inp > -17.18984039 OK ( 83.59 sec) > > RPA_SIGMA_H_minimax.inp > -0.5150377917 OK ( 63.64 sec) > > RPA_SIGMA_H_clenshaw.inp > -0.5150909069 OK ( 65.65 sec) > > RPA_SIGMA_H2O_minimax_NUM_INTEG_GROUPS.inp > -17.18984039 OK ( 86.54 sec) > > <<< > /lustre/tmp/slurm/3090305/TEST-psmp-2024-10-07_13-58-41/QS/regtest-rpa-sigma > (7 of 362) done in 382.84 sec > ``` > -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/58256d55-fe8c-47e3-b1ec-144a10c33982n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... 
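[Editorial note: Johann's suggested settings, dropped into the header of a script like Bartosz's, would look like the sketch below. The time limit follows his "within 10 h" estimate; everything else not shown (modules, test directories, the do_regtest.py call) would stay as in Bartosz's script. The `:-8` fallback is an addition for running outside slurm, not part of the original.]

```shell
#!/bin/bash -l
#SBATCH --time=10:00:00        # Johann: tests should finish within 10 h
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2    # 2 MPI ranks per test
#SBATCH --cpus-per-task=8      # 8 OpenMP threads per rank

# Give each rank the full cpus-per-task allocation as OpenMP threads
# (falls back to 8 when SLURM_CPUS_PER_TASK is unset).
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-8}
echo "OMP_NUM_THREADS=${OMP_NUM_THREADS}"
```

The do_regtest.py flags would then match these values, i.e. `--mpiranks 2 --ompthreads ${OMP_NUM_THREADS}` as in Bartosz's script.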
URL: From hsupright at gmail.com Wed Oct 9 03:49:22 2024 From: hsupright at gmail.com (sh X) Date: Tue, 8 Oct 2024 20:49:22 -0700 (PDT) Subject: [CP2K-user] [CP2K:20760] Dipole of PIMD in CP2K Message-ID: <81d9b16f-e86a-433e-8fa3-ab30e1bf6e91n@googlegroups.com> Dear all, The dipole moments output by CP2K during a PIMD simulation seem to be the dipole moments of each of the beads, defined individually. If I want to get the dipole moment of the centroid, how should I set it up? Thank you all in advance for any help. Best regards, hsuh -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/81d9b16f-e86a-433e-8fa3-ab30e1bf6e91n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cp2k at googlegroups.com Wed Oct 9 09:25:28 2024 From: cp2k at googlegroups.com ('daniel Storm' via cp2k) Date: Wed, 9 Oct 2024 02:25:28 -0700 (PDT) Subject: [CP2K-user] [CP2K:20762] Inconsistent geometry optimisations In-Reply-To: References: Message-ID: <07149c81-16d2-414a-9816-aeafe65595afn@googlegroups.com> Hi Matthias, I did have a look at the cutoff and EPS_DEFAULT at the start of my project, but that was for a different issue. So I haven't tried running consecutive optimisations with these parameters, but I will give that a go and get back to you. Thanks for your help, Daniel On Tuesday, October 8, 2024 at 2:35:01 PM UTC+1 Krack Matthias wrote: > Hi Daniel > > > > Did you try to decrease EPS_DEFAULT and to increase the PW cutoff? These > input parameters mainly determine the accuracy of the atomic forces rather > than the convergence thresholds for the force calculation like MAX_FORCE or > MAX_DR which can also show convergence for poor forces. 
> > > > HTH > > > > Matthias > > > > *From: *'daniel Storm' via cp2k > *Date: *Tuesday, 8 October 2024 at 14:50 > *To: *cp2k > *Subject: *[CP2K:20756] Inconsistent geometry optimisations > > Hi everyone, > > Just to provide some context, in our group we use CP2k to model molecular > reactivity in the solid state, so slightly different than "standard" solid > state chemistry. We run optimisations for minima and transition states > (TS), whose character we confirm by running phonon calculations. > > Now, to the issue. I am working with a crystal structure of an iridium > organometallic salt, [Ir(PONOP)(H)Me][BArF4], however, I am encountering > issues with achieving consistent optimisations. Despite starting from the > same experimental structure, I am getting very different SCF energies (up > to 3 kcal/mol) and I have observed minor differences in the resulting > geometries. I think that those geometries are close enough to not show that > big of an SCF difference, but my main worry is the fact that the starting > point is always the same. > > The convergence criterion that I am currently using is MAX_FORCE 1.0 x 10^-4. > I have attempted to improve this by tightening the criteria (MAX_FORCE, > MAX_DR, RMS_DR and RMS_FORCE) to 1.0 x 10^-6, but this didn't fix the > issue. All the optimisations are done with the PBE-D3 functional and > DZVP-MOLOPT-SR-GTH basis set. > > So, to sum up, I run optimisations with the same starting geometry that > converge to different structures, and I am not sure why or which criterion > to use to select one structure over the other, etc. > > I have worked with other organometallic salts in the past, and this is the > first time I find this issue. I am using CP2k version 2023.2 in the HPC > cluster Archer-2. > > Has anyone seen this before? Any advice on how to proceed? > > Thanks in advance, > > Daniel. > > -- > You received this message because you are subscribed to the Google Groups > "cp2k" group. 
> To unsubscribe from this group and stop receiving emails from it, send an > email to cp2k+uns... at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/cp2k/d50cb0ef-42e9-465d-81c7-2fdacdab6d41n%40googlegroups.com > > . > -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/07149c81-16d2-414a-9816-aeafe65595afn%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL:
Our pills are original from Britain and of high quality. Cytotec pills for sale, original Cytotec pills for sale in Kuwait and cash on delivery, abortion pills Kuwait and immediate delivery, Cytotec pills in Kuwait 30% discount hand-to-hand delivery Contact us via WhatsApp Original Cytotec pills are available for sale in Kuwait, abortion pills are available for sale. You can buy pills to perform an abortion from various governorates of Kuwait such as Jahra, the capital, Farwaniya and Hawalli. You can order from the rest of the Gulf countries as we have original British pills with guarantees for abortion. We offer original Cytotec pills with a dose of 200 mg from the international company Pfizer for sale, to facilitate the abortion process in Kuwait. Cytotec for sale at competitive and suitable prices in Kuwait and with guaranteed guarantees. Cytotec abortion pills for sale in Kuwait Sell original Cytotec pills Original Cytotec Kuwait and all Gulf countries Buy and sell Cytotec abortion pills in Jahra, Farwaniya and Hawalli. We have medical abortion pills available at home, including Cytotec and Misoprostol. > 00201026560416 > 00971507534596 Cytotec Misoprostol 200 mg of British (English) origin from Pfizer is available for delivery to all Gulf countries and the world. Sell original Cytotec pills in Kuwait. Cytotec pills for sale in Kuwait Cytotec pills for sale in Jahra Cytotec pills for sale in the capital Cytotec pills for sale in Farwaniya Cytotec pills for sale in Hawally Cytotec abortion pills for sale Mubarak Al-Kabeer Cytotec abortion pills for sale in Ahmadi Cytotec abortion pills in Kuwait Cytotec has some caps in Kuwait they go to buy a brother in white silk,?give them Sell and buy Cytotec abortion pills in Mubarak Al-Kabeer Ahmadi area. Cytotec abortion pills are currently available in Kuwait. Original British Cytotec pregnancy termination pills are available for purchase In Kuwait, we provide free same-day delivery to all areas of the country. 
For safe and side-effect-free pregnancy relief at home, original Cytotec pills are the best choice. Cytotec abortion pills for sale in Kuwait, in Jahra, Capital, and Farwaniya areas. Cytotec is the name of the abortion pills, which are available in Kuwait and are used for abortion. Cytotec pills are available in Hawally, Mubarak Al-Kabeer, Ahmadi, and Jahra. Original Cytotec abortion pills are available for sale in Jahra, Farwaniya, and Ahmadi areas in Kuwait. How do I get Cytotec abortion pills How to buy Cytotec pills easily in Kuwait. User experiences with Cytotec pills in Kuwait. Awareness campaigns about the use of Cytotec pills in Kuwait. Prices and availability of Cytotec pills in Kuwait. The potential benefits and risks of Cytotec pills in Kuwait. How to get the most out of Cytotec pills in Kuwait. Cytotec pills for sale Kuwait_ Special offer: Cytotec pills for sale in Kuwait. Now available: Buy Cytotec pills online in Kuwait. Learn about the benefits of Cytotec pills and how to use them in Kuwait. Cytotec pills: The solution to the crisis for women in Kuwait. How to get Cytotec pills safely in Kuwait? Frequently asked questions about Cytotec pills for sale in Kuwait. Get Cytotec pills at reasonable prices in Kuwait. Cytotec pills: The best choice for women in Kuwait. Exciting experience: Buy Cytotec pills online and have them delivered to Kuwait. Tips for the safety of using Cytotec pills in Kuwait. Abortion pills for sale Kuwait_ The best types of abortion pills for sale in Kuwait. Abortion pills are available for sale in Kuwait at reasonable prices. How to buy abortion pills the right way in Kuwait. Review of the best sites to buy abortion pills in Kuwait. The effect of abortion pills on health in Kuwait. Prices of abortion pills available for sale in Kuwait. Everything you need to know about abortion pills for sale in Kuwait. The truth about abortion pills available for sale in Kuwait. How to ensure the quality of the abortion pills you buy in Kuwait. 
Frequently asked questions about abortion pills and their answers in Kuwait. Where are abortion pills sold in Kuwait_ Discover places to sell abortion pills Introduction Overview of Cytotec Pills Cytotec, generically known as misoprostol, has gained attention in both medical and personal contexts. Initially designed to prevent stomach ulcers, it has since garnered a reputation for its use in various reproductive health situations. For many, understanding Cytotec means navigating its diverse applications and perceived implications. Common Uses: While most are aware of its role in treating ulcers, it is also prescribed for: Inducing labor Managing miscarriage As part of medical abortion regimens Personal Example: Jane, a 30-year-old from a small town, learned about Cytotec during a discussion at her clinic. While initially skeptical, she discovered its pivotal role in reproductive health and how it had helped many women navigate difficult circumstances. This multifaceted medication is more than meets the eye, and it's essential to delve deeper into its uses, mechanisms, and safety considerations. Uses of Cytotec Pills Medical Indications Cytotec is primarily recognized for its specific medical applications. They include: Inducing Labor: In many health facilities, misoprostol is used to help initiate labor in expecting mothers, particularly when the pregnancy has gone beyond term. Management of Miscarriage: For women experiencing early pregnancy loss, Cytotec can be administered to facilitate the natural process of miscarriage. Medical Abortion: Misoprostol is crucial in combination with mifepristone, providing a non-surgical option for terminating a pregnancy safely. These medical indications highlight its significance in women?s health care. Off-label Uses While Cytotec has well-known approved uses, healthcare providers also prescribe it off-label for various reasons. 
For instance: Gastrointestinal Conditions: Some providers use it to treat conditions like gastric ulcers or gastroesophageal reflux disease (GERD). > 00201026560416 > 00971507534596 Postpartum Hemorrhage: In certain cases, it may help manage severe bleeding following childbirth, saving lives in critical situations. Consider Emily, a nurse who witnessed Cytotec's impact on a patient dealing with complications. The versatility of this medication often leads to creative therapeutic applications, showing just how important it can be in various medical scenarios. How Cytotec Pills Work Understanding how Cytotec works provides insights into its varied applications and effectiveness. This medication primarily contains misoprostol, a synthetic prostaglandin E1 analog. It interacts with the body in several key ways: Cervical Ripening: Cytotec helps soften and dilate the cervix, making it easier for labor to commence. This is essential for expecting mothers preparing for delivery. Uterine Contractions: By stimulating muscle contractions in the uterus, Cytotec assists in the process of expelling tissue during abortion or miscarriage. Reducing Stomach Acid: In its original use, misoprostol works to limit stomach acid secretion, protecting the stomach lining from damage due to nonsteroidal anti-inflammatory drugs (NSAIDs). Consider Sarah, who used Cytotec during her labor. After her doctor explained how the pills would help with cervical dilation, she felt empowered knowing how the medication was aiding her journey to motherhood. This understanding can ease apprehensions and build trust in the treatment process. Safety and Effectiveness Side Effects While Cytotec can be beneficial, it?s crucial to be aware of potential side effects. Many who have taken the medication report experiencing: Gastrointestinal Issues: These may include nausea, diarrhea, and abdominal cramps. 
Uterine Hyperstimulation: In some cases, the contractions may become overly intense, posing risks during labor. Fever and Chills: These symptoms can occur as the body reacts to the medication, particularly in the context of a medical abortion. Maria, who took Cytotec during her miscarriage, mentioned that while the cramps were intense, her healthcare provider reassured her that they were a normal part of the process. Efficacy When it comes to effectiveness, Cytotec has proven to be a reliable option for the uses mentioned. Studies demonstrate that: Labor Induction: Around 80% of women respond positively, leading to successful labor in due time. Medical Abortion: Reports indicate an efficacy rate surpassing 95% when used as prescribed. In the end, understanding both the potential side effects and efficacy helps users make informed decisions, ensuring that they feel prepared and supported throughout their medical journey. Dosage and Administration Guidelines When it comes to Cytotec, understanding the dosage and administration is essential for achieving the desired outcomes safely. Each individual's medical situation can vary, so healthcare providers tailor the dosage accordingly. Here are some general guidelines: Standard Dosage Recommendations Labor Induction: Typically, a dosage of 25 mcg can be administered every 4 to 6 hours. Monitoring is crucial to ensure the proper progression of labor. Medical Abortion: The standard regimen often includes taking 200 mg of mifepristone followed by 800 mcg of Cytotec 24 to 48 hours later. It's vital to follow the healthcare provider's instructions precisely as they can adjust based on the individual?s response and needs. Administration Tips Route: Cytotec may be taken orally or vaginally, depending on the specific treatment plan. Timing: Adherence to the timing and frequency of doses helps maximize effectiveness and reduce risks. 
Emily shared her experience of following her doctor's guidance on administration, which ensured a smooth process during her labor induction. Clear communication and proper dosing make a significant difference in the outcomes of Cytotec treatment. Precautions before Taking Cytotec Pills Potential Risks Associated with Cytotec Pills Legal Status and Accessibility Obtaining Cytotec Safely For those interested, it?s crucial to obtain Cytotec through legitimate healthcare providers. Consulting a physician ensures safe usage and allows for discussions on concern, dosage, and monitoring. Ensuring legal and safe access protects individuals and helps to foster a responsible approach to reproductive health. Natural Alternatives Some individuals explore natural methods for labor induction, such as: Abortion Pills Kuwait_ "Types of Abortion Pills Available in Kuwait: A Comprehensive Guide". "Legal Aspects of Using Abortion Pills in Kuwait". "How Do Abortion Pills Work? Important Information for Kuwaiti Women". "Side Effects of Abortion Pills: What Every Woman Should Know". "Medical Consultation and Its Importance Before Using Abortion Pills in Kuwait". "Privacy Issues and Confidentiality of Women's Data When Ordering Abortion Pills". "Abortion Pills Online in Kuwait: Warnings and Risks". "The Impact of Social Pressures on the Decision to Have an Abortion in Kuwaiti Society". "Abortion Pills: Medical Guidelines and Support for Women After Abortion". "Access to Safe Abortion Services: Women's Rights in Kuwait". Abortion Pills in Kuwait_ "Where to Find Abortion Pills in Kuwait? Basic Coverage". "Abortion Pills Dominate Social Discussions in Kuwait". "How to Get Abortion Pills Safely in Kuwait". "Understanding the Legal Aspects of Abortion Pills in Kuwait". "Health Guidelines on Using Abortion Pills in Kuwait". "Health Policy Updates on Abortion Pills in Kuwait". "Community Discussions on Women's Rights and Abortion Pills in Kuwait". 
> 00201026560416 > 00971507534596 "Effects and Risks of Abortion Pills: A New Reality in Kuwait". "Limited Access to Abortion Pills and Its Impact on Women's Health in Kuwait". "Increasing Demand for Information on Abortion Pills Among Women in Kuwait". Cytotec Pills Kuwait_ Everything You Need to Know About Cytotec Pills in Kuwait. The Most Important Questions About Cytotec Pills and Their Answers. Uses and Benefits of Cytotec Pills in Kuwait. Women's Experience with Cytotec Pills in Kuwait. Possible Side Effects of Cytotec Pills. How to Get Cytotec Pills Legally in Kuwait. What You Need to Know Before Using Cytotec Pills in Kuwait. Important Facts About Cytotec Pills in Kuwait. Success Stories of Women Who Used Cytotec Pills in Kuwait. Steps to Avoid Problems When Using Cytotec Pills in Kuwait. Abortion Pills in Kuwait_ Important Information About Abortion Pills in Kuwait. How to Get Abortion Pills in Kuwait. Recommended dosage schedule for abortion pills in Kuwait. Effects of abortion pills on the body in Kuwait. How abortion pills work in Kuwait. Potential challenges and risks of using abortion pills in Kuwait. Doctors' opinions on using abortion pills in Kuwait. What you should know before using abortion pills in Kuwait. Availability of abortion pills in Kuwaiti pharmacies. Reports on women's experiences using abortion pills in Kuwait. Cytotec pills in Kuwait_ Everything you need to know about Cytotec pills in Kuwait. A comprehensive report on Cytotec pills and their use in Kuwait. The most important information about Cytotec pills and their benefits in Kuwait. How to get Cytotec pills in Kuwait. The effect of Cytotec pills on pregnancy in Kuwait. Consult a doctor before using Cytotec pills in Kuwait. Optimal use techniques for Cytotec pills in Kuwait. The difference between Cytotec pills and other abortion methods in Kuwait. Tips for safe use of Cytotec pills in Kuwait. What are the possible side effects of Cytotec pills in Kuwait? 
Cytotec pills in Kuwait_ Everything you need to know about Cytotec pills in Kuwait. The effects of Cytotec pills and their use in Kuwait. Cytotec pills: Directions and risks in Kuwait. A review of Cytotec pills available in Kuwait. How to buy Cytotec pills easily in Kuwait. User experiences with Cytotec pills in Kuwait. Awareness campaigns about the use of Cytotec pills in Kuwait. Prices and availability of Cytotec pills in Kuwait. The potential benefits and risks of Cytotec pills in Kuwait. How to get the most out of Cytotec pills in Kuwait. Cytotec pills for sale Kuwait_ Special offer: Cytotec pills for sale in Kuwait. Now available: Buy Cytotec pills online in Kuwait. Learn about the benefits of Cytotec pills and how to use them in Kuwait. Cytotec pills: The solution to the crisis for women in Kuwait. How to get Cytotec pills safely in Kuwait? Frequently asked questions about Cytotec pills for sale in Kuwait. Get Cytotec pills at reasonable prices in Kuwait. Cytotec pills: The best choice for women in Kuwait. Exciting experience: Buy Cytotec pills online and have them delivered to Kuwait. Tips for the safety of using Cytotec pills in Kuwait. Abortion pills for sale Kuwait_ The best types of abortion pills for sale in Kuwait. Abortion pills are available for sale in Kuwait at reasonable prices. How to buy abortion pills the right way in Kuwait. Review of the best sites to buy abortion pills in Kuwait. The effect of abortion pills on health in Kuwait. Prices of abortion pills available for sale in Kuwait. Everything you need to know about abortion pills for sale in Kuwait. The truth about abortion pills available for sale in Kuwait. How to ensure the quality of the abortion pills you buy in Kuwait. Frequently asked questions about abortion pills and their answers in Kuwait. Where are abortion pills sold in Kuwait_ > 00201026560416 > 00971507534596 -- You received this message because you are subscribed to the Google Groups "cp2k" group. 
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/cc68d37b-33b8-4dfe-963a-adb3eb640d26n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From souvikkondal03 at gmail.com Wed Oct 9 23:52:36 2024 From: souvikkondal03 at gmail.com (Souvik Mondal) Date: Wed, 9 Oct 2024 16:52:36 -0700 (PDT) Subject: [CP2K-user] [CP2K:20762] Segmentation Fault (SIGSEGV) when Using RESP in Geometry Optimization Message-ID: <8dbb754c-93a9-416c-847e-ee2fa9c75c4an@googlegroups.com> Hello CP2K community, I am encountering a *segmentation fault (SIGSEGV)* while performing a geometry optimization on a protein system with a catalytic Zn site using CP2K. The geometry optimization runs successfully without the *RESP* section, but when I include the RESP section, I receive the following error after the first geometry optimization step: Program received signal SIGSEGV: Segmentation fault - invalid memory reference. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/8dbb754c-93a9-416c-847e-ee2fa9c75c4an%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: qm-mm-en-resp.inp Type: chemical/x-gamess-input Size: 4691 bytes Desc: not available URL: From ucapca1 at ucl.ac.uk Thu Oct 10 12:11:03 2024 From: ucapca1 at ucl.ac.uk (uca...@ucl.ac.uk) Date: Thu, 10 Oct 2024 05:11:03 -0700 (PDT) Subject: [CP2K-user] [CP2K:20764] Re: Segmentation fault with Hirshfeld CDFT In-Reply-To: References: <184bd67e-d61f-4efc-9fe7-09e04d18cdcdn@googlegroups.com> Message-ID: <0224492e-965f-47d7-bd30-58dd66b63743n@googlegroups.com> For future reference, the bug causing a segmentation fault when calculating CDFT forces with a large number of constraints has been fixed in the latest version of CP2K. See https://github.com/cp2k/cp2k/pull/3711 On Friday 2 December 2022 at 23:49:13 UTC+8 Leili Rassouli wrote: > Dear Chris, > Thanks for your reply. > Finally, I need to apply 7 constraints. The energy calculation with seven > constraints has been tested, and it worked fine. However, the final spins > in Hirshfeld analysis are not quite what I have defined as constraints. The > files related to this calculation are attached. > I read the " Electron and Hole Mobilities in Bulk Hematite from Spin- > Constrained Density Functional Theory" paper by Christian S. Ahart, Kevin > M. Rosso, and Jochen Blumberger. And it seems they employed 2 constraints > for geometry optimization using cDFT. I am not sure why it doesn't work for > my case. > As you requested, I have attached the xyz file. > Thank you so much for your assistance, I really appreciate it. > Please let me know if you need any other information. > Lili > > On Friday, December 2, 2022 at 9:05:52 AM UTC-5 uca... at ucl.ac.uk wrote: > >> Hi Lili, >> >> I had a look at your input files and the issue is likely related to the >> use of 2 constraints (your unsuccessful job) instead of 1 constraint (your >> successful job). Optimising multiple constraints is very challenging, and >> therefore this has not been tested extensively. 
>> >> If you provide your structure 'ex1-geo.xyz' then I can check your job >> and perhaps implement a bug fix if needed. >> >> Regards, >> Chris >> >> On Thursday, 1 December 2022 at 23:13:27 UTC rassoul... at gmail.com wrote: >> >>> Dear all, >>> I am using the 2022.1 version of cp2k. But I have the same problem as Chris. >>> I want to optimize the geometry of a periodic system using cDFT. I received >>> a Segmentation fault error ("Program received signal SIGSEGV: Segmentation >>> fault - invalid memory reference.") with Hirshfeld constraints. Same as >>> Chris, it is sensitive to the combinations of atoms I use in &ATOM_GROUP. >>> In my case, it only works fine for one combination of atoms and not any >>> other combinations. The input and output files for the successful and >>> unsuccessful jobs are attached. Only the ATOM GROUP in the input files was >>> altered. >>> I changed the number of nodes, cores, and memory, but it didn't solve >>> the problem. >>> Any advice or tips on how to solve this issue would be greatly valued. >>> Best regards, >>> Lili >>> >>> On Tuesday, March 31, 2020 at 12:21:31 PM UTC-4 uca... at ucl.ac.uk wrote: >>> >>>> This bug has now been fixed in the latest development version of CP2K. >>>> See https://github.com/cp2k/cp2k/issues/847 >>>> >>>> >>>> >>>> On Thursday, 19 March 2020 18:31:58 UTC, Chris Ahart wrote: >>>>> >>>>> Dear all, >>>>> >>>>> When performing CDFT with Hirshfeld I am getting the following error: >>>>> "Program received signal SIGSEGV: Segmentation fault - invalid memory >>>>> reference." This appears to be an issue reported previously ( >>>>> https://github.com/cp2k/cp2k/issues/560), however I am encountering >>>>> it again in CP2K 7.1. This error occurs on both my local machine and on a >>>>> cluster. >>>>> >>>>> I have found this error to be sensitive to the system and to the atoms >>>>> included in &ATOM_GROUP, as for certain combinations of atoms CP2K runs >>>>> while with other combinations it crashes. 
The Becke constraint runs in all >>>>> cases, so this is isolated to Hirshfeld. I have attached two example input >>>>> and output files, one with an atom combination that runs and another >>>>> that fails. I have confirmed that Hirshfeld runs for the input files >>>>> included in the CP2K CDFT tutorial. >>>>> >>>>> Any guidance or insight to resolve this problem would be greatly >>>>> appreciated. >>>>> >>>>> Thank you for your help and time. >>>>> >>>>> Regards, >>>>> Chris >>>>> >>>> -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/0224492e-965f-47d7-bd30-58dd66b63743n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bamaz.97 at gmail.com Fri Oct 11 11:46:08 2024 From: bamaz.97 at gmail.com (bartosz mazur) Date: Fri, 11 Oct 2024 04:46:08 -0700 (PDT) Subject: [CP2K-user] [CP2K:20765] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> Message-ID: <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> Hi Frederick, I've used Intel OneAPI 2024.2 and it helped with the error we discussed. Thanks a lot for that! However, some tests still failed (correct: 4091 / 4227; failed: 136). 
Now most of the failed tests are killed without additional information with: ``` =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = RANK 0 PID 172367 RUNNING AT r23c03b11 = KILLED BY SIGNAL: 11 (Segmentation fault) =================================================================================== =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = RANK 1 PID 172368 RUNNING AT r23c03b11 = KILLED BY SIGNAL: 11 (Segmentation fault) =================================================================================== ``` and sometimes this message is also printed: ``` LIBXSMM_VERSION: develop-1.17-3834 (25693946) LIBXSMM_TARGET: clx [Intel(R) Xeon(R) Platinum 8268 CPU @ 2.90GHz] Registry and code: 13 MB Command (PID=172367): /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp 2H2O_t01.inp Uptime: 0.932725 s ``` or ``` LIBXSMM_VERSION: develop-1.17-3834 (25693946) CLX/DP TRY JIT STA COL 0..13 22 22 0 0 14..23 15 15 0 0 24..64 0 0 0 0 Registry and code: 13 MB + 320 KB (gemm=37) Command (PID=132831): /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp admm_dbcsr_thread_dist.inp Uptime: 1.272898 s ``` I was able to find a similar issue (here ) but I am not sure how I could fix it. I performed the regtests twice, and in some cases the tests finished without error the first time but failed the second time, or the opposite: for example, `QS/regtest-hfx/H2-ADMM-full.inp` was `OK` in the first run but finished with `RUNTIME FAIL` in the second run, while `QS/regtest-as-1/h2_gapw_pp_2-4.inp` finished with `OK` in the first run (as the only one in this set) but in the second run finished with `RUNTIME FAIL`. In the attachments I provide the outputs from the toolchain, make, and the 1st and 2nd regtest runs. 
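This kind of run-to-run flakiness is easiest to pin down by diffing the per-test statuses of the two runs. A minimal sketch (not part of the CP2K tooling; it assumes summary lines that end in one of the statuses `OK`, `RUNTIME FAIL`, or `WRONG` — adjust the parsing to the actual report format):

```python
# Compare two regtest summaries and report tests whose status changed
# between runs. The line format is an assumption: "<test name> ... <STATUS>"
# with the status at the end of the line.
STATUSES = ("OK", "RUNTIME FAIL", "WRONG")

def parse(lines):
    """Map test name -> status for every line ending in a known status."""
    results = {}
    for line in lines:
        line = line.strip()
        for status in STATUSES:
            if line.endswith(status):
                results[line[: -len(status)].strip()] = status
                break
    return results

def flaky(run1, run2):
    """Tests present in both runs whose status differs."""
    a, b = parse(run1), parse(run2)
    return {t: (a[t], b[t]) for t in sorted(a.keys() & b.keys()) if a[t] != b[t]}

run1 = ["QS/regtest-hfx/H2-ADMM-full.inp  OK",
        "QS/regtest-as-1/h2_gapw_pp_2-4.inp  OK"]
run2 = ["QS/regtest-hfx/H2-ADMM-full.inp  RUNTIME FAIL",
        "QS/regtest-as-1/h2_gapw_pp_2-4.inp  OK"]
print(flaky(run1, run2))
# -> {'QS/regtest-hfx/H2-ADMM-full.inp': ('OK', 'RUNTIME FAIL')}
```

A test that only appears in this diff intermittently is a good candidate for rerunning in isolation under a debugger.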
The thing I've noticed is that the toolchain is using ifort, which is an older version, `ifort (IFORT) 2021.13.0 20240602`. Do you think using ifx would be better and might help solve this issue? If yes, how can I force the toolchain to use ifx instead of ifort? Another question - none of the regtests ended in `WRONG`. Does this mean that I can assume that cp2k is safe to use and, if an error occurs, the job will be killed instead of producing an erroneous result? Best Bartosz On Tuesday, 8 October 2024 at 15:46:14 UTC+2, Frederick Stein wrote: > Hi Bartosz, > No, Intel 2021 will probably not work; it is older than Intel 2022. I > meant something like Intel OneAPI 2023 or 2024. > Best, > Frederick > > On Tuesday, 8 October 2024 at 14:43:19 UTC+2, bartosz mazur wrote: > >> Hi Frederick, >> >> Thank you for your quick response! Just to be sure, if I compile the >> latest version of cp2k using Intel 2021 ( >> https://www.cp2k.org/dev:compiler_support), I should no longer have the >> problems described? I ask because I don't see a module with Intel OneAPI >> 2024 on our HPC, so I am considering using either an older module or asking >> the admins to provide a newer one. >> >> Best >> Bartosz >> >> On Tuesday, 8 October 2024 at 14:07:15 UTC+2, Frederick Stein wrote: >> >>> Dear Bartosz, >>> If you want to compile with Intel, then drop the "--with-gcc" flag. >>> Regarding Intel, we do not test Intel 2022.2 anymore. You should try the >>> Intel OneAPI releases containing more recent compilers instead. We are currently >>> testing version 2024.2. >>> The warnings can be ignored for now, but we are aware of that issue and >>> will make adjustments later after dropping some older compilers. >>> Regarding the runtime errors: the error "LHS and RHS of an assignment >>> statement have incompatible types" could be a compiler bug (see >>> https://community.intel.com/t5/Intel-Fortran-Compiler/Segmentation-fault-due-to-assignment-of-derived-type-variable/td-p/1489823). 
>>> The allocation error may also be a compiler bug, as the respective array is >>> always allocated and the routine is left directly after deallocating the >>> array earlier in the routine. >>> Best, >>> Frederick >>> >>> On Tuesday, 8 October 2024 at 13:17:08 UTC+2, bartosz mazur wrote: >>> >>>> Hi all, >>>> >>>> I recently managed to compile cp2k on our cluster, but the regtests showed >>>> several errors. Most of the failures are due to the error `forrtl: >>>> severe (189): LHS and RHS of an assignment statement have incompatible >>>> types` or `forrtl: severe (153): allocatable array or pointer is not >>>> allocated`. After looking at the output from `make`, I noticed that >>>> there are quite a few similar warnings there: >>>> >>>> ``` >>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exts/dbcsr/src/mpi/dbcsr_mpiwrap.F(1930): >>>> warning #8100: The actual argument is an array section or assumed-shape >>>> array, corresponding dummy argument that has either the VOLATILE or >>>> ASYNCHRONOUS attribute shall be an assumed-shape array. [MSGIN] >>>> CALL mpi_isend(msgin, msglen, MPI_LOGICAL, dest, my_tag, & >>>> ------------------------^ >>>> ``` >>>> >>>> For compilation I used GCC 12.2.0 and Intel 2022.2.1. My toolchain >>>> command was `./install_cp2k_toolchain.sh --mpi-mode=intelmpi >>>> --with-intel --with-gcc=system --with-plumed --with-quip --with-pexsi >>>> --with-ptscotch --with-superlu --with-fftw=no --with-hdf5`. In the >>>> attachment I provide all outputs from the toolchain, make, and regtests. >>>> >>>> I'm not sure what went wrong or how I should proceed, so any help would >>>> be much appreciated! >>>> >>>> Best >>>> Bartosz >>>> >>> -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. 
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bamaz.97 at gmail.com Fri Oct 11 11:48:42 2024 From: bamaz.97 at gmail.com (bartosz mazur) Date: Fri, 11 Oct 2024 04:48:42 -0700 (PDT) Subject: [CP2K-user] [CP2K:20766] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> Message-ID: Sorry, forgot attachments. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/ecf2254a-c9d0-41b4-b83b-28d46f92630dn%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: intel2024.zip Type: application/x-zip Size: 514073 bytes Desc: not available URL: From f.stein at hzdr.de Fri Oct 11 12:30:25 2024 From: f.stein at hzdr.de (Frederick Stein) Date: Fri, 11 Oct 2024 05:30:25 -0700 (PDT) Subject: [CP2K-user] [CP2K:20767] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> Message-ID: <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> Dear Bartosz, If I am not mistaken, you used 8 OpenMP threads. 
The tests do not run that efficiently with such a large number of threads; 2 should be sufficient. The test result suggests that most of the functionality may work, but due to a missing backtrace (or similar information) it is hard to tell why they fail. You could also try to run some of the single-node tests to assess the stability of CP2K.
Best,
Frederick

On Friday, 11 October 2024 at 13:48:42 UTC+2, bartosz mazur wrote:
> Sorry, forgot attachments.
>
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/74d53224-f8e3-4c5f-9a23-90fd2b4e81edn%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guangshengyao36 at gmail.com Sun Oct 13 17:47:29 2024
From: guangshengyao36 at gmail.com (姚广笙)
Date: Sun, 13 Oct 2024 10:47:29 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20767] Request for Assistance with CP2K Calculation - Cholesky Decomposition Failure
Message-ID: <44575518-ca55-4288-be70-42fb92abbb65n@googlegroups.com>

Dear all,

I hope this message finds you well. I am currently working on the geometry optimization of a Ga- and Al-doped ZSM-5 zeolite using CP2K. However, I have encountered a persistent issue during the calculation that I have not been able to resolve, and I would like to kindly request your assistance. Here are the details of the issue:

1. During the geometry optimization run, despite increasing the SCF iteration limit, improving the energy cutoffs, and adjusting the precision settings, the calculation fails during the *SCF convergence* and *Cholesky decomposition* steps with the following error message: Cholesky decompose failed: the matrix is not positive definite or ill-conditioned.

2.
I have tried switching from OT (Orbital Transformation) to traditional diagonalization for SCF solving, using higher cutoff energies, and increasing grid precision, but the problem persists. Additionally, I have relaxed the geometry optimization conditions (RMS_FORCE and MAX_ITER), but the issue remains unresolved. 3. The error message suggests a numerical instability, likely related to a non-positive definite matrix or ill-conditioned system. I suspect that this may be caused by the initial geometry or my SCF settings, but I am unsure how to proceed. To help you better understand the issue, I have attached my input files to this email. I would greatly appreciate it if you could review the input and suggest any necessary modifications. Specifically, I would like to ask: - Could this issue be related to my initial geometry setup? - Are there any recommended numerical parameters or settings to avoid Cholesky decomposition failures? - Do you have any suggestions for debugging or resolving this issue? Thank you very much for your time and consideration. I look forward to any guidance or recommendations you may have. Best regards, Yao guangshengyao36 at gmail.com -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/44575518-ca55-4288-be70-42fb92abbb65n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Ga-MFI.inp Type: chemical/x-gamess-input Size: 4877 bytes Desc: not available URL: From guangshengyao36 at gmail.com Sun Oct 13 18:01:06 2024 From: guangshengyao36 at gmail.com (guangshengyao36) Date: Mon, 14 Oct 2024 02:01:06 +0800 Subject: [CP2K-user] [CP2K:20767] Request for Assistance with CP2K Calculation - Cholesky Decomposition Failure Message-ID: Dear all, I hope this message finds you well. I am currently working on the geometry optimization of a Ga- and Al-doped ZSM-5 zeolite using CP2K. However, I have encountered a persistent issue during the calculation that I have not been able to resolve, and I would like to kindly request your assistance. Here are the details of the issue: During the geometry optimization run, despite increasing the SCF iteration limit, improving the energy cutoffs, and adjusting the precision settings, the calculation fails during the SCF convergence and Cholesky decomposition steps with an error message indicating that the matrix is not positive definite or is ill-conditioned. I have tried switching from OT (Orbital Transformation) to traditional diagonalization for SCF solving, using higher cutoff energies, and increasing grid precision, but the problem persists. Additionally, I have relaxed the geometry optimization conditions, but the issue remains unresolved. The error message suggests a numerical instability, likely related to a non-positive definite matrix or ill-conditioned system. I suspect that this may be caused by the initial geometry or my SCF settings, but I am unsure how to proceed. To help you better understand the issue, I have attached my input files to this email. I would greatly appreciate it if you could review the input and suggest any necessary modifications. Specifically, I would like to ask: Could this issue be related to my initial geometry setup? Are there any recommended numerical parameters or settings to avoid Cholesky decomposition failures? 
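Regarding the question just above about settings that avoid Cholesky decomposition failures: one commonly tried knob is the CHOLESKY keyword of the &SCF section. The sketch below is hedged, not a verified fix for this particular zeolite; the keyword and its values come from the CP2K input reference, but whether it cures an ill-conditioned overlap matrix depends on the basis set and geometry.

```
&SCF
  SCF_GUESS ATOMIC
  EPS_SCF 1.0E-6
  MAX_SCF 100
  ! Bypass the Cholesky factorization of the overlap matrix;
  ! CHOLESKY INVERSE is an alternative value. Note that diffuse basis
  ! functions packed into a dense framework are often the root cause of
  ! the ill-conditioning, so a less diffuse (e.g. SR) basis set for the
  ! affected elements is also worth trying.
  CHOLESKY OFF
&END SCF
```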
Do you have any suggestions for debugging or resolving this issue? Thank you very much for your time and consideration. I look forward to any guidance or recommendations you may have. Best regards, Andy guangshengyao36 at gmail.com 2024.10.14 -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/B8989187-CC0D-4020-860D-8B7EC130860B%40gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Ga-MFI.inp Type: application/octet-stream Size: 5067 bytes Desc: not available URL: From hanaa.sarimohammed at gmail.com Sun Oct 13 19:20:35 2024 From: hanaa.sarimohammed at gmail.com (Hanaa Sari) Date: Sun, 13 Oct 2024 12:20:35 -0700 (PDT) Subject: [CP2K-user] [CP2K:20769] The pseudopotential and basis set Message-ID: <6188289e-5fd8-4829-a4be-e30e4b06bd50n@googlegroups.com> Dear CP2K users, I am currently running a cell optimization of Ruthenium using the DFT method with GTH pseudopotentials (GTH-PBE-q16) and the PBE functional, along with the DZVP-MOLOPT-SR-GTH basis set (input file attached). Could you kindly advise if these choices are appropriate for my calculations, or would you recommend a better alternative? Thank you in advance for your guidance. Best regards. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/6188289e-5fd8-4829-a4be-e30e4b06bd50n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... 
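For reference, the pairing named in the message above corresponds to this &KIND block (a sketch of the poster's stated choices, not a recommendation; both names ship with CP2K's BASIS_MOLOPT and GTH_POTENTIALS data files):

```
&KIND Ru
  BASIS_SET DZVP-MOLOPT-SR-GTH
  POTENTIAL GTH-PBE-q16   ! 16 valence electrons, matching the PBE functional
&END KIND
```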
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Rubulk.inp Type: chemical/x-gamess-input Size: 1820 bytes Desc: not available URL: From niamh97oneill at gmail.com Mon Oct 14 17:32:13 2024 From: niamh97oneill at gmail.com (Niamh O'Neill) Date: Mon, 14 Oct 2024 10:32:13 -0700 (PDT) Subject: [CP2K-user] [CP2K:20771] MP2 atomic energy Message-ID: Dear CP2K developers, I am having issues trying to compute the atomic energy of carbon with MP2 and get the error attached below (slurm-6136536.out). I attach below the MP2 output (MP2.out), MP2 error (slurm-6136536.out), MP2 input file (MP2.inp), initial configuration (init.xyz), the PBE restart file (PBE-TZ-RESTART.wfn) and the basis set file (BASIS). Thank you in advance for any help you can give. Best wishes, -Niamh -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/e3f1aad3-86e5-4944-8b94-a0a9267bf517n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: init.xyz Type: chemical/x-xyz Size: 17 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: MP2.out Type: application/octet-stream Size: 54475 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: MP2.inp Type: chemical/x-gamess-input Size: 2686 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PBE-TZ-RESTART.wfn Type: application/octet-stream Size: 1228 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: init.xyz Type: chemical/x-xyz Size: 17 bytes Desc: not available URL:
-------------- next part --------------
A non-text attachment was scrubbed... Name: BASIS Type: application/octet-stream Size: 23252 bytes Desc: not available URL:
-------------- next part --------------
A non-text attachment was scrubbed... Name: slurm-6136536.out Type: application/octet-stream Size: 3245 bytes Desc: not available URL:

From f.stein at hzdr.de Mon Oct 14 21:28:35 2024
From: f.stein at hzdr.de (Frederick Stein)
Date: Mon, 14 Oct 2024 14:28:35 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20772] Re: MP2 atomic energy
In-Reply-To: References: Message-ID: <1cedca25-33cf-45b0-b98d-c86509406f65n@googlegroups.com>

Dear Niamh,
This is a bug which was fixed in version 2024.2 (see https://github.com/cp2k/cp2k/commit/b19582577794ce6d375e9544ab82269d114d84ee).
Best,
Frederick

On Monday, 14 October 2024 at 19:32:13 UTC+2, Niamh O'Neill wrote:
> Dear CP2K developers,
>
> I am having issues trying to compute the atomic energy of carbon with MP2 and get the error attached below (slurm-6136536.out). I attach below the MP2 output (MP2.out), MP2 error (slurm-6136536.out), MP2 input file (MP2.inp), initial configuration (init.xyz), the PBE restart file (PBE-TZ-RESTART.wfn) and the basis set file (BASIS).
>
> Thank you in advance for any help you can give.
> Best wishes,
> -Niamh
>
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/1cedca25-33cf-45b0-b98d-c86509406f65n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From niamh97oneill at gmail.com Tue Oct 15 08:10:46 2024
From: niamh97oneill at gmail.com (Niamh O'Neill)
Date: Tue, 15 Oct 2024 01:10:46 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20773] Re: MP2 atomic energy
In-Reply-To: <1cedca25-33cf-45b0-b98d-c86509406f65n@googlegroups.com>
References: <1cedca25-33cf-45b0-b98d-c86509406f65n@googlegroups.com>
Message-ID:

Thank you very much for pointing this out Frederick, it works perfectly now!
Best,
-Niamh

On Monday 14 October 2024 at 22:28:36 UTC+1 Frederick Stein wrote:
> Dear Niamh,
> This is a bug which was fixed with version 2024.2 (see https://github.com/cp2k/cp2k/commit/b19582577794ce6d375e9544ab82269d114d84ee).
> Best,
> Frederick
>
> On Monday, 14 October 2024 at 19:32:13 UTC+2, Niamh O'Neill wrote:
>
>> Dear CP2K developers,
>>
>> I am having issues trying to compute the atomic energy of carbon with MP2 and get the error attached below (slurm-6136536.out). I attach below the MP2 output (MP2.out), MP2 error (slurm-6136536.out), MP2 input file (MP2.inp), initial configuration (init.xyz), the PBE restart file (PBE-TZ-RESTART.wfn) and the basis set file (BASIS).
>>
>> Thank you in advance for any help you can give.
>> Best wishes,
>> -Niamh
>>
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/cad9c674-a8bb-4243-a356-06ff953def3dn%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lheidari125 at gmail.com Wed Oct 16 17:32:46 2024
From: lheidari125 at gmail.com (L Heidarizadeh)
Date: Wed, 16 Oct 2024 10:32:46 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20773] SCF Convergence Issues with OT and Diagonalization for CZTS System (Vacuum, Sulfur Layer, and Hydrogen Passivation
Message-ID: <67815156-489d-4b4c-89c1-af4282adf5f1n@googlegroups.com>

Hello CP2K community,

I am running molecular dynamics (MD) simulations on a *Cu2ZnSnS4 (CZTS)* system using *DFT* in CP2K. Below is a detailed description of my system and the modifications I applied, followed by the SCF convergence issue I am facing.

Cu2ZnSnS4 (CZTS) system modeled in a periodic box. Unit cell dimensions: 10.8 × 10.8 × 10.8 Å. The goal is to study surface interactions and electronic properties with a vacuum layer.

A 20 Å vacuum layer was added in the Z direction to simulate surface effects: 10.8 × 10.8 × 30.8 Å

A layer of sulfur (S) atoms was added to the surface to stabilize the system and account for surface states.

I attempted hydrogen passivation by capping the dangling bonds with H atoms to further stabilize the surface. I tried running the SCF loop with and without hydrogen passivation, but both cases failed to converge.

SCF Settings and Methods Tried:
*Orbital Transformation (OT):*
MINIMIZER: DIIS
PRECONDITIONER: FULL_SINGLE_INVERSE
ENERGY_GAP: 0.001
N_HISTORY_VEC: 7
*Diagonalization:*
I disabled the OT section and enabled diagonalization as a fallback method, but the SCF still did not converge. (I tried different parameter settings.)
SCF Parameters:
SCF_GUESS: ATOMIC
EPS_SCF: 1.0E-6
MAX_SCF: 100

*The SCF loop exits after a few minutes, failing to converge under both OT and diagonalization methods.*
Are there specific SCF settings or preconditioners that can improve convergence for systems with large vacuum gaps?
Are there alternative strategies for handling surfaces and vacuum layers that could make the system more stable for electronic structure calculations?
Has anyone successfully applied hydrogen passivation to stabilize surfaces and improve SCF convergence in CP2K?

Any suggestions or advice would be greatly appreciated!

Thank you for your help and support.

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/67815156-489d-4b4c-89c1-af4282adf5f1n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marci.akira at gmail.com Wed Oct 16 18:15:52 2024
From: marci.akira at gmail.com (Marcella Iannuzzi)
Date: Wed, 16 Oct 2024 11:15:52 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20775] Re: SCF Convergence Issues with OT and Diagonalization for CZTS System (Vacuum, Sulfur Layer, and Hydrogen Passivation
In-Reply-To: <67815156-489d-4b4c-89c1-af4282adf5f1n@googlegroups.com>
References: <67815156-489d-4b4c-89c1-af4282adf5f1n@googlegroups.com>
Message-ID: <67fea74e-5fc0-4a47-8739-d8ae68961729n@googlegroups.com>

Hi ..

Maybe it simply needs to run for more iterations to converge. With the information you provide it is hard to guess. Is the electronic structure calculation of the bulk working fine? Can you reproduce with your settings (BS, PP, XC etc) the known bulk properties?

Regards
Marcella

On Wednesday, October 16, 2024 at 7:40:32 PM UTC+2 lheid... at gmail.com wrote:
> Hello CP2K community,
>
> I am running molecular dynamics (MD) simulations on a *Cu2ZnSnS4 (CZTS)* system using *DFT* in CP2K. Below is a detailed description of my system and the modifications I applied, followed by the SCF convergence issue I am facing.
>
> Cu2ZnSnS4 (CZTS) system modeled in a periodic box. Unit cell dimensions: 10.8 × 10.8 × 10.8 Å. The goal is to study surface interactions and electronic properties with a vacuum layer.
>
> A 20 Å
vacuum layer was added in the Z direction to simulate surface effects: 10.8 × 10.8 × 30.8 Å
>
> A layer of sulfur (S) atoms was added to the surface to stabilize the system and account for surface states.
>
> I attempted hydrogen passivation by capping the dangling bonds with H atoms to further stabilize the surface.
> I tried running the SCF loop with and without hydrogen passivation, but both cases failed to converge.
>
> SCF Settings and Methods Tried:
> *Orbital Transformation (OT):*
> MINIMIZER: DIIS
> PRECONDITIONER: FULL_SINGLE_INVERSE
> ENERGY_GAP: 0.001
> N_HISTORY_VEC: 7
> *Diagonalization:*
> I disabled the OT section and enabled diagonalization as a fallback method, but the SCF still did not converge. ( I tried different parameters setting)
> SCF Parameters:
> SCF_GUESS: ATOMIC
> EPS_SCF: 1.0E-6
> MAX_SCF: 100
>
> *The SCF loop exits after a few minutes, failing to converge under both OT and diagonalization methods.*
> Are there specific SCF settings or preconditioners that can improve convergence for systems with large vacuum gaps?
> Are there alternative strategies for handling surfaces and vacuum layers that could make the system more stable for electronic structure calculations?
> Has anyone successfully applied hydrogen passivation to stabilize surfaces and improve SCF convergence in CP2K?
>
> Any suggestions or advice would be greatly appreciated!
>
> Thank you for your help and support.
>
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/67fea74e-5fc0-4a47-8739-d8ae68961729n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lheidari125 at gmail.com Wed Oct 16 21:19:03 2024
From: lheidari125 at gmail.com (L Heidarizadeh)
Date: Wed, 16 Oct 2024 14:19:03 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20775] Re: SCF Convergence Issues with OT and Diagonalization for CZTS System (Vacuum, Sulfur Layer, and Hydrogen Passivation
In-Reply-To: <67fea74e-5fc0-4a47-8739-d8ae68961729n@googlegroups.com>
References: <67815156-489d-4b4c-89c1-af4282adf5f1n@googlegroups.com> <67fea74e-5fc0-4a47-8739-d8ae68961729n@googlegroups.com>
Message-ID: <81ab9c27-73c9-4a21-92c9-21944a37f8cdn@googlegroups.com>

Hi,

Thank you for the suggestion! I've already tested the *bulk CZTS system* with the same *basis sets, pseudopotentials, and exchange-correlation functional*, and it *converged successfully*. However, the *convergence problem arises only after introducing the 20 Å vacuum layer* and *surface modifications*. I've adjusted *MAX_SCF* values (up to 500 iterations) and tried various *mixing parameters*, including Broyden mixing, but no improvement. I have attached my input file for more clarification (before switching to OT).

Thank you again for your help and suggestions!
Best regards,
Layla

On Wednesday, October 16, 2024 at 2:15:52 PM UTC-4 Marcella Iannuzzi wrote:
>
> Hi ..
>
> Maybe it simply needs to run for more iterations to converge. With the information you provide it is hard to guess. Is the electronic structure calculation of the bulk working fine? Can you reproduce with your settings (BS, PP, XC etc) the known bulk properties?
>
> Regards
> Marcella
>
> On Wednesday, October 16, 2024 at 7:40:32 PM UTC+2 lheid... at gmail.com wrote:
>
>> Hello CP2K community,
>>
>> I am running molecular dynamics (MD) simulations on a *Cu2ZnSnS4 (CZTS)* system using *DFT* in CP2K. Below is a detailed description of my system and the modifications I applied, followed by the SCF convergence issue I am facing.
>>
>> Cu2ZnSnS4 (CZTS) system modeled in a periodic box. Unit cell dimensions: 10.8 ×
10.8 × 10.8 Å.
>> The goal is to study surface interactions and electronic properties with a vacuum layer.
>>
>> A 20 Å vacuum layer was added in the Z direction to simulate surface effects: 10.8 × 10.8 × 30.8 Å
>>
>> A layer of sulfur (S) atoms was added to the surface to stabilize the system and account for surface states.
>>
>> I attempted hydrogen passivation by capping the dangling bonds with H atoms to further stabilize the surface.
>> I tried running the SCF loop with and without hydrogen passivation, but both cases failed to converge.
>>
>> SCF Settings and Methods Tried:
>> *Orbital Transformation (OT):*
>> MINIMIZER: DIIS
>> PRECONDITIONER: FULL_SINGLE_INVERSE
>> ENERGY_GAP: 0.001
>> N_HISTORY_VEC: 7
>> *Diagonalization:*
>> I disabled the OT section and enabled diagonalization as a fallback method, but the SCF still did not converge. ( I tried different parameters setting)
>> SCF Parameters:
>> SCF_GUESS: ATOMIC
>> EPS_SCF: 1.0E-6
>> MAX_SCF: 100
>>
>> *The SCF loop exits after a few minutes, failing to converge under both OT and diagonalization methods.*
>> Are there specific SCF settings or preconditioners that can improve convergence for systems with large vacuum gaps?
>> Are there alternative strategies for handling surfaces and vacuum layers that could make the system more stable for electronic structure calculations?
>> Has anyone successfully applied hydrogen passivation to stabilize surfaces and improve SCF convergence in CP2K?
>>
>> Any suggestions or advice would be greatly appreciated!
>>
>> Thank you for your help and support.
>>
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/81ab9c27-73c9-4a21-92c9-21944a37f8cdn%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
&GLOBAL
  PRINT_LEVEL MEDIUM
  PROJECT_NAME CZTS-MD
  RUN_TYPE MD
&END GLOBAL
&FORCE_EVAL
  METHOD QS
  &DFT
    UKS .TRUE.
    BASIS_SET_FILE_NAME BASIS_MOLOPT
    POTENTIAL_FILE_NAME POTENTIAL
    &MGRID
      NGRIDS 12
      CUTOFF 300
      REL_CUTOFF 50
    &END MGRID
    &QS
      METHOD GPW
      EPS_DEFAULT 1.000E-14
    &END QS
    &POISSON
      PERIODIC XYZ
      PSOLVER PERIODIC
    &END POISSON
    &SCF
      SCF_GUESS ATOMIC
      EPS_SCF 1.0E-6
      MAX_SCF 500 ! Increased iterations
      &DIAGONALIZATION ON ! Using diagonalization method
      &END DIAGONALIZATION
      &MIXING
        ALPHA 0.1
        BETA 0.1
        METHOD BROYDEN_MIXING
      &END MIXING
    &END SCF
    &XC
      &XC_FUNCTIONAL
        &PBE
        &END PBE
      &END XC_FUNCTIONAL
      &VDW_POTENTIAL
        POTENTIAL_TYPE PAIR_POTENTIAL
        &PAIR_POTENTIAL
          PARAMETER_FILE_NAME dftd3.dat
          TYPE DFTD3
          REFERENCE_FUNCTIONAL PBE
          R_CUTOFF [angstrom] 16
        &END PAIR_POTENTIAL
      &END VDW_POTENTIAL
    &END XC
    &PRINT
      &MULLIKEN OFF
      &END MULLIKEN
      &HIRSHFELD OFF
      &END HIRSHFELD
    &END PRINT
  &END DFT
  &SUBSYS
    &CELL
      A 10.8683996201 0.0000000000 0.0000000000
      B 0.0000000000 10.8683996201 0.0000000000
      C 0.0000000000 0.0000000000 30.8495998383
      PERIODIC XYZ
    &END CELL
    &TOPOLOGY
      COORD_FILE_NAME coords.xyz ! Coordinate file name anonymized
      COORD_FILE_FORMAT XYZ
    &END TOPOLOGY
    &KIND Zn
      BASIS_SET DZVP-MOLOPT-SR-GTH
      POTENTIAL GTH-PBE-q12
    &END KIND
    &KIND Sn
      BASIS_SET DZVP-MOLOPT-SR-GTH
      POTENTIAL GTH-PBE-q4
    &END KIND
    &KIND S
      BASIS_SET DZVP-MOLOPT-SR-GTH
      POTENTIAL GTH-PBE-q6
    &END KIND
    &KIND Cu
      BASIS_SET DZVP-MOLOPT-SR-GTH
      POTENTIAL GTH-PBE-q11
    &END KIND
    ! &KIND H
    !   BASIS_SET DZVP-MOLOPT-GTH
    !   POTENTIAL GTH-PBE
    ! &END KIND
  &END SUBSYS
&END FORCE_EVAL
&MOTION
  &MD
    ENSEMBLE NVT
    TEMPERATURE [K] 300
    TIMESTEP [fs] 1.0
    STEPS 5000
    &THERMOSTAT
      REGION GLOBAL
      TYPE CSVR
      &CSVR
        TIMECON 20
      &END CSVR
    &END THERMOSTAT
  &END MD
  &PRINT
    &TRAJECTORY
      &EACH
        MD 1
      &END EACH
    &END TRAJECTORY
    &VELOCITIES ON
    &END VELOCITIES
    &FORCES ON
    &END FORCES
    &RESTART
      BACKUP_COPIES 1
      &EACH
        MD 1
      &END EACH
    &END RESTART
  &END PRINT
&END MOTION

From marci.akira at gmail.com Thu Oct 17 08:17:57 2024
From: marci.akira at gmail.com (Marcella Iannuzzi)
Date: Thu, 17 Oct 2024 01:17:57 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20777] Re: SCF Convergence Issues with OT and Diagonalization for CZTS System (Vacuum, Sulfur Layer, and Hydrogen Passivation
In-Reply-To: <81ab9c27-73c9-4a21-92c9-21944a37f8cdn@googlegroups.com>
References: <67815156-489d-4b4c-89c1-af4282adf5f1n@googlegroups.com> <67fea74e-5fc0-4a47-8739-d8ae68961729n@googlegroups.com> <81ab9c27-73c9-4a21-92c9-21944a37f8cdn@googlegroups.com>
Message-ID:

Hi ...

It is good that the bulk system converges. Do you also obtain the correct electronic structure? The energy cutoff seems very low. Are you using k-points for the bulk? Is the PBE functional good enough for this type of system? 12 grids are too many, just use 4 or 5.

I suppose that, depending on how you cleave the bulk, there might be dangling bonds that should be saturated. Maybe the surface needs to go through a reconstruction (just guessing); in this case it might help to adjust the coordinates to avoid too many dangling bonds.

Diagonalization is recommended if the energy gap is very small, which can be the case if you have unrelaxed dangling bonds at the surface. In this case, a smaller mixing ALPHA parameter might help, like 0.005.

Regards
Marcella

On Wednesday, October 16, 2024 at 11:57:41 PM UTC+2 lheid... at gmail.com wrote:
> Hi,
> Thank you for the suggestion!
I've already tested the *bulk CZTS system* with the same *basis sets, pseudopotentials, and exchange-correlation functional*, and it *converged successfully*.
> However, the *convergence problem arises only after introducing the 20 Å vacuum layer* and *surface modifications*. I've adjusted *MAX_SCF* values (up to 500 iterations) and tried various *mixing parameters*, including Broyden mixing, but no improvement. I have attached my input file for more clarification (before switching to OT).
> Thank you again for your help and suggestions!
> Best regards,
> Layla
>
> On Wednesday, October 16, 2024 at 2:15:52 PM UTC-4 Marcella Iannuzzi wrote:
>
>> Hi ..
>>
>> Maybe it simply needs to run for more iterations to converge. With the information you provide it is hard to guess. Is the electronic structure calculation of the bulk working fine? Can you reproduce with your settings (BS, PP, XC etc) the known bulk properties?
>>
>> Regards
>> Marcella
>>
>> On Wednesday, October 16, 2024 at 7:40:32 PM UTC+2 lheid... at gmail.com wrote:
>>
>>> Hello CP2K community,
>>>
>>> I am running molecular dynamics (MD) simulations on a *Cu2ZnSnS4 (CZTS)* system using *DFT* in CP2K. Below is a detailed description of my system and the modifications I applied, followed by the SCF convergence issue I am facing.
>>>
>>> Cu2ZnSnS4 (CZTS) system modeled in a periodic box. Unit cell dimensions: 10.8 × 10.8 × 10.8 Å. The goal is to study surface interactions and electronic properties with a vacuum layer.
>>>
>>> A 20 Å vacuum layer was added in the Z direction to simulate surface effects: 10.8 × 10.8 × 30.8 Å
>>>
>>> A layer of sulfur (S) atoms was added to the surface to stabilize the system and account for surface states.
>>>
>>> I attempted hydrogen passivation by capping the dangling bonds with H atoms to further stabilize the surface.
>>> I tried running the SCF loop with and without hydrogen passivation, but >>> both cases failed to converge. >>> >>> SCF Settings and Methods Tried: >>> *Orbital Transformation (OT):* >>> MINIMIZER: DIIS >>> PRECONDITIONER: FULL_SINGLE_INVERSE >>> ENERGY_GAP: 0.001 >>> N_HISTORY_VEC: 7 >>> *Diagonalization:* >>> I disabled the OT section and enabled diagonalization as a fallback >>> method, but the SCF still did not converge. ( I tried different parameters >>> setting) >>> SCF Parameters: >>> SCF_GUESS: ATOMIC >>> EPS_SCF: 1.0E-6 >>> MAX_SCF: 100 >>> >>> *The SCF loop exits after a few minutes, failing to converge under both >>> OT and diagonalization methods.* >>> Are there specific SCF settings or preconditioners that can improve >>> convergence for systems with large vacuum gaps? >>> Are there alternative strategies for handling surfaces and vacuum layers >>> that could make the system more stable for electronic structure >>> calculations? >>> Has anyone successfully applied hydrogen passivation to stabilize >>> surfaces and improve SCF convergence in CP2K? >>> >>> Any suggestions or advice would be greatly appreciated! >>> >>> Thank you for your help and support. >>> >>> >>> -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/ab96804f-797c-44e2-8ca9-d250e5205602n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... 
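The numerical suggestions made in this thread (fewer multigrids, a higher cutoff, a much smaller Broyden mixing step) would look roughly like this in the poster's input; treat it as a sketch assembled from the replies above and the values the poster reports having tried, not as tested settings for this system:

```
&MGRID
  NGRIDS 4        ! down from 12, as suggested in the replies
  CUTOFF 500      ! 300 Ry was called very low; 500 Ry was one value tried
  REL_CUTOFF 50
&END MGRID
! ... and inside &SCF, when using diagonalization:
&MIXING
  ALPHA 0.005     ! much smaller mixing step for a small-gap surface
  METHOD BROYDEN_MIXING
&END MIXING
```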
URL:

From lheidari125 at gmail.com Thu Oct 17 16:25:21 2024
From: lheidari125 at gmail.com (L Heidarizadeh)
Date: Thu, 17 Oct 2024 09:25:21 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20777] Re: SCF Convergence Issues with OT and Diagonalization for CZTS System (Vacuum, Sulfur Layer, and Hydrogen Passivation
In-Reply-To: References: <67815156-489d-4b4c-89c1-af4282adf5f1n@googlegroups.com> <67fea74e-5fc0-4a47-8739-d8ae68961729n@googlegroups.com> <81ab9c27-73c9-4a21-92c9-21944a37f8cdn@googlegroups.com>
Message-ID:

Hi Marcella,

Thank you for your feedback and suggestions. I wanted to update you on what I've tried so far and ask for some additional guidance.

Since I didn't include a &KPOINTS block, CP2K defaulted to Γ-point-only sampling. For my 2x2x1 periodic system, I realize this might not be sufficient, especially for accurate TDDFT calculations. Should I add a 2 × 2 × 1 k-point grid to sample the Brillouin zone more effectively?

For MD simulations, I've been using Γ-point-only sampling since the focus is on atomic forces. Would you recommend defining a small k-point grid (e.g., 2 × 2 × 1) for MD runs as well, or is Γ-point-only sufficient in this case?

I increased the energy cutoff to 500 Ry but still faced convergence issues with the surface model. I also adjusted the mixing parameter ALPHA down to 0.005, but the issue persisted. I also applied DFT+U. However, I'm still encountering issues with the surface convergence.

I tried adding a sulfur layer and hydrogen passivation to address dangling bonds, but the SCF still failed to converge. You mentioned surface reconstruction; would you suggest running a geometry optimization on the surface before attempting SCF calculations? Also, are there any specific techniques or guidelines for identifying and capping dangling bonds effectively to stabilize the surface?

I really appreciate your insights so far. If you have any further recommendations, I would be grateful.
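For reference, the k-point grid asked about above is requested with a &KPOINTS block inside &DFT. A minimal sketch (note that k-point sampling in CP2K does not combine with every feature, OT in particular, so check the input reference for the version in use):

```
&KPOINTS
  ! 2x2x1 Monkhorst-Pack grid, as discussed above; omitting this
  ! block falls back to Gamma-point-only sampling.
  SCHEME MONKHORST-PACK 2 2 1
&END KPOINTS
```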
Best regards,
Layla

On Thursday, October 17, 2024 at 4:17:58 AM UTC-4 Marcella Iannuzzi wrote:

> Hi ...
>
> It is good that the bulk system converges. Do you also obtain the correct electronic structure? The energy cutoff seems very low.
> Are you using k-points for the bulk? Is the PBE functional good enough for this type of systems? 12 grids are too many, just use 4 or 5.
>
> I suppose that depending on how you cleave the bulk, there might be dangling bonds that should be saturated.
> Maybe the surface needs to go through a reconstruction (just guessing), in this case it might help to adjust the coordinates to avoid too many dangling bonds.
>
> Diagonalization is recommended if the energy gap is very small, which can be the case if you have unrelaxed dangling bonds at the surface.
> In this case, a smaller mixing ALPHA parameter might help, like 0.005.
>
> Regards
> Marcella
>
> On Wednesday, October 16, 2024 at 11:57:41 PM UTC+2 lheid... at gmail.com wrote:
>
>> Hi,
>> Thank you for the suggestion! I've already tested the *bulk CZTS system* with the same *basis sets, pseudopotentials, and exchange-correlation functional*, and it *converged successfully*. However, the *convergence problem arises only after introducing the 20 Å vacuum layer* and *surface modifications*. I've adjusted *MAX_SCF* values (up to 500 iterations) and tried various *mixing parameters*, including Broyden mixing, but no improvement. I have attached my input file for more clarification (before switching to OT).
>> Thank you again for your help and suggestions!
>> Best regards,
>> Layla
>>
>> On Wednesday, October 16, 2024 at 2:15:52 PM UTC-4 Marcella Iannuzzi wrote:
>>
>>> Hi ..
>>>
>>> Maybe it simply needs to run for more iterations to converge. With the information you provide it is hard to guess.
>>> Is the electronic structure calculation of the bulk working fine?
>>> Can you reproduce with your settings (BS, PP, XC etc) the known bulk properties?
>>>
>>> Regards
>>> Marcella
>>>
>>> On Wednesday, October 16, 2024 at 7:40:32 PM UTC+2 lheid... at gmail.com wrote:
>>>
>>>> Hello CP2K community,
>>>>
>>>> I am running molecular dynamics (MD) simulations on a *Cu2ZnSnS4 (CZTS)* system using *DFT* in CP2K. Below is a detailed description of my system and the modifications I applied, followed by the SCF convergence issue I am facing.
>>>>
>>>> Cu2ZnSnS4 (CZTS) system modeled in a periodic box.
>>>> Unit cell dimensions: 10.8 × 10.8 × 10.8 Å.
>>>> The goal is to study surface interactions and electronic properties with a vacuum layer.
>>>>
>>>> A 20 Å vacuum layer was added in the Z direction to simulate surface effects: 10.8 × 10.8 × 30.8 Å
>>>>
>>>> A layer of sulfur (S) atoms was added to the surface to stabilize the system and account for surface states.
>>>>
>>>> I attempted hydrogen passivation by capping the dangling bonds with H atoms to further stabilize the surface.
>>>> I tried running the SCF loop with and without hydrogen passivation, but both cases failed to converge.
>>>>
>>>> SCF Settings and Methods Tried:
>>>> *Orbital Transformation (OT):*
>>>> MINIMIZER: DIIS
>>>> PRECONDITIONER: FULL_SINGLE_INVERSE
>>>> ENERGY_GAP: 0.001
>>>> N_HISTORY_VEC: 7
>>>> *Diagonalization:*
>>>> I disabled the OT section and enabled diagonalization as a fallback method, but the SCF still did not converge. (I tried different parameter settings.)
>>>> SCF Parameters:
>>>> SCF_GUESS: ATOMIC
>>>> EPS_SCF: 1.0E-6
>>>> MAX_SCF: 100
>>>>
>>>> *The SCF loop exits after a few minutes, failing to converge under both OT and diagonalization methods.*
>>>> Are there specific SCF settings or preconditioners that can improve convergence for systems with large vacuum gaps?
>>>> Are there alternative strategies for handling surfaces and vacuum >>>> layers that could make the system more stable for electronic structure >>>> calculations? >>>> Has anyone successfully applied hydrogen passivation to stabilize >>>> surfaces and improve SCF convergence in CP2K? >>>> >>>> Any suggestions or advice would be greatly appreciated! >>>> >>>> Thank you for your help and support. >>>> >>>> >>>> -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/d0ebcef1-d532-4e97-b40f-22c82452a5a1n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marci.akira at gmail.com Thu Oct 17 16:46:14 2024 From: marci.akira at gmail.com (Marcella Iannuzzi) Date: Thu, 17 Oct 2024 09:46:14 -0700 (PDT) Subject: [CP2K-user] [CP2K:20779] Re: SCF Convergence Issues with OT and Diagonalization for CZTS System (Vacuum, Sulfur Layer, and Hydrogen Passivation In-Reply-To: References: <67815156-489d-4b4c-89c1-af4282adf5f1n@googlegroups.com> <67fea74e-5fc0-4a47-8739-d8ae68961729n@googlegroups.com> <81ab9c27-73c9-4a21-92c9-21944a37f8cdn@googlegroups.com> Message-ID: <97af0a63-a7c3-4474-9891-42d99fadc529n@googlegroups.com> Hi Layla It should not be too difficult to verify whether your bulk calculations are accurate enough. You can compare some properties with the literature, for instance band gap and density of states. The k-point sampling can also be replaced by adding more replicas of the unit cell. If the 2x2x1 is not sufficient in bulk calculations, it is also not sufficient for slab models, and this for sure would affect the forces. But if I correctly understood, you did not manage to calculate forces yet. 
A geometry optimisation before running MD is for sure meaningful in this case, but this would not solve the convergence problem, I fear, since the SCF is exactly the same.
So the problem is still the single point calculation for the set of coordinates of your model.
Maybe you can share the output of the SCF, such that we get an idea of how bad the problem is.

Regards
Marcella

On Thursday, October 17, 2024 at 6:30:15 PM UTC+2 lheid... at gmail.com wrote:

> Hi Marcella,
> Thank you for your feedback and suggestions. I wanted to update you on what I've tried so far and ask for some additional guidance.
> Since I didn't include a &KPOINTS block, CP2K defaulted to Γ-point-only sampling. For my 2x2x1 periodic system, I realize this might not be sufficient, especially for accurate TDDFT calculations. Should I add a 2 × 2 × 1 k-point grid to sample the Brillouin zone more effectively?
>
> For MD simulations, I've been using Γ-point-only sampling since the focus is on atomic forces. Would you recommend defining a small k-point grid (e.g., 2 × 2 × 1) for MD runs as well, or is Γ-point-only sufficient in this case?
>
> I increased the energy cutoff to 500 Ry but still faced convergence issues with the surface model. I also adjusted the mixing parameter ALPHA down to 0.005, but the issue persisted. I also applied DFT+U; however, I'm still encountering issues with the surface convergence.
>
> I tried adding a sulfur layer and hydrogen passivation to address dangling bonds, but the SCF still failed to converge. You mentioned surface reconstruction: would you suggest running a geometry optimization on the surface before attempting SCF calculations? Also, are there any specific techniques or guidelines for identifying and capping dangling bonds effectively to stabilize the surface?
>
> I really appreciate your insights so far. If you have any further recommendations, I would be grateful.
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/97af0a63-a7c3-4474-9891-42d99fadc529n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From t.f.d.j99 at gmail.com Fri Oct 18 12:52:26 2024
From: t.f.d.j99 at gmail.com (T deJ)
Date: Fri, 18 Oct 2024 05:52:26 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20780] OVERLAP_DELTAT
Message-ID: 

Dear all,

I would like to inquire about the function of OVERLAP_DELTAT. It appears to disable most calculation steps. Am I correct in assuming that OVERLAP_DELTAT is meant to be used by a file-IO-based external code that feeds CP2K the two consecutive geometries in one coordinate file?

Best regards,
Tjeerd

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/a3cac84f-6bfb-413e-9e93-d60419ee28dbn%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
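[Editorial note on the CZTS slab thread above: a Monkhorst-Pack grid such as the 2 × 2 × 1 sampling under discussion is requested in CP2K via a &KPOINTS section inside &DFT, along these lines. This is a sketch; the grid is the one mentioned in the thread, not a converged choice.]

```
&DFT
  &KPOINTS
    SCHEME MONKHORST-PACK 2 2 1
  &END KPOINTS
&END DFT
```

Note that, to my knowledge, OT does not support k-points in CP2K, so a k-point run needs the traditional diagonalization SCF; for a slab, a single point along the vacuum direction (here the third index) is the usual choice.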
URL: 

From bamaz.97 at gmail.com Fri Oct 18 13:37:42 2024
From: bamaz.97 at gmail.com (bartosz mazur)
Date: Fri, 18 Oct 2024 06:37:42 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20781] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types
In-Reply-To: <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com>
References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com>
Message-ID: 

Hi Frederick,

thanks again for the help. So I have tested different simulation variants and I know that the problem occurs when using OMP. For MPI calculations without OMP, all tests pass. I have also tested the effect of the `OMP_PROC_BIND` and `OMP_PLACES` parameters and, apart from the effect on simulation time, they have no significant effect on the presence of errors. Below are the results for ssmp:

```
OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time
spread, threads, 3850, 4144, 4, 290, 186min
spread, cores, 3831, 4144, 3, 310, 183min
spread, sockets, 3864, 4144, 3, 277, 104min
close, threads, 3879, 4144, 3, 262, 171min
close, cores, 3854, 4144, 0, 290, 168min
close, sockets, 3865, 4144, 3, 276, 104min
master, threads, 4121, 4144, 0, 23, 1002min
master, cores, 4121, 4144, 0, 23, 986min
master, sockets, 3942, 4144, 3, 199, 219min
false, threads, 3918, 4144, 0, 226, 178min
false, cores, 3919, 4144, 3, 222, 176min
false, sockets, 3856, 4144, 4, 284, 104min
```

and psmp:

```
OMP_PROC_BIND, OMP_PLACES, results
spread, threads, Summary: correct: 4097 / 4227; failed: 130; 495min
spread, cores, 26 / 362
spread, cores, 26 / 362
close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min
close, cores, 60 / 362
close, sockets, 13 / 362
master, threads, 13 / 362
master, cores, 79 / 362
master, sockets, Summary: correct: 4153 / 4227; failed: 74; 563min
false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min
false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min
false, sockets, 96 / 362
not specified, not specified, Summary: correct: 4129 / 4227; failed: 98; 263min
```

Any ideas what I could do next to get more information about the source of the problem, or maybe you see a potential solution at this stage? I would appreciate any further help.

Best
Bartosz

On Friday, 11 October 2024 at 14:30:25 UTC+2 Frederick Stein wrote:

> Dear Bartosz,
> If I am not mistaken, you used 8 OpenMP threads. The tests do not run that efficiently with such a large number of threads; 2 should be sufficient. The test result suggests that most of the functionality may work, but due to a missing backtrace (or similar information), it is hard to tell why they fail. You could also try to run some of the single-node tests to assess the stability of CP2K.
> Best,
> Frederick
>
> bartosz mazur wrote on Friday, 11 October 2024 at 13:48:42 UTC+2:
>
>> Sorry, forgot attachments.

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/ae4f6638-f1e0-47a2-b335-75d8019ca6cbn%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
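[Editorial note: a reproducible way to test the threading dependence discussed above is to pin the OpenMP environment explicitly before rerunning the regtests. This is a sketch; the regtest script path and its arguments depend on your CP2K checkout, so the commented invocation is illustrative only.]

```shell
# Pin the OpenMP runtime explicitly so regtest results are reproducible.
export OMP_NUM_THREADS=2     # few threads, per the earlier advice in this thread
export OMP_PROC_BIND=close   # values tested above: spread | close | master | false
export OMP_PLACES=cores      # values tested above: threads | cores | sockets

echo "OMP_NUM_THREADS=$OMP_NUM_THREADS OMP_PROC_BIND=$OMP_PROC_BIND OMP_PLACES=$OMP_PLACES"
# ./tests/do_regtest.py local psmp   # illustrative; adjust path/flags to your tree
```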
URL: 

From f.stein at hzdr.de Fri Oct 18 14:24:16 2024
From: f.stein at hzdr.de (Frederick Stein)
Date: Fri, 18 Oct 2024 07:24:16 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20782] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types
In-Reply-To: 
References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com>
Message-ID: <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com>

Dear Bartosz,

What happens if you set the number of OpenMP threads to 1 (add '--ompthreads 1' to TESTOPTS)? What errors do you observe in the case of the ssmp?

Best,
Frederick

bartosz mazur wrote on Friday, 18 October 2024 at 15:37:43 UTC+2:

> Hi Frederick,
>
> thanks again for the help. So I have tested different simulation variants and I know that the problem occurs when using OMP. For MPI calculations without OMP, all tests pass. I have also tested the effect of the `OMP_PROC_BIND` and `OMP_PLACES` parameters and, apart from the effect on simulation time, they have no significant effect on the presence of errors.
>> You could also try to run some of the single-node tests to assess the stability of CP2K.
>> Best,
>> Frederick
>>
>> bartosz mazur wrote on Friday, 11 October 2024 at 13:48:42 UTC+2:
>>
>>> Sorry, forgot attachments.

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/791c7cac-d72c-4d79-b7ff-c9581366eed0n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emersonp90 at gmail.com Fri Oct 18 14:38:03 2024
From: emersonp90 at gmail.com (emerson p l)
Date: Fri, 18 Oct 2024 07:38:03 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20783] ab initio MD - necessary requirements
Message-ID: <0168b1f0-f6e4-477f-8422-34af3ad1a075n@googlegroups.com>

Dear friends,

I'm starting to study ab initio MD, and I am wondering whether it is viable to run Car-Parrinello molecular dynamics with the computational resources I have. I saw an example with 64 water molecules; on average, how much RAM would be needed to run a system of this size in CP2K? What would be the number of threads needed?

Thanks in advance.

Best,
Emerson.

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/0168b1f0-f6e4-477f-8422-34af3ad1a075n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
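[Editorial note: CP2K performs Born-Oppenheimer rather than classic Car-Parrinello MD. For orientation, a minimal skeleton of an ab initio MD input for a 64-water box looks roughly like this. All values here (cell size, cutoff, thermostat, file names such as h2o-64.xyz) are illustrative placeholders, not benchmarked settings.]

```
&GLOBAL
  PROJECT H2O-64
  RUN_TYPE MD
&END GLOBAL
&MOTION
  &MD
    ENSEMBLE NVT
    STEPS 1000
    TIMESTEP 0.5
    TEMPERATURE 300
    &THERMOSTAT
      TYPE NOSE
    &END THERMOSTAT
  &END MD
&END MOTION
&FORCE_EVAL
  METHOD Quickstep
  &DFT
    BASIS_SET_FILE_NAME BASIS_MOLOPT
    POTENTIAL_FILE_NAME POTENTIAL
    &QS
      EPS_DEFAULT 1.0E-12
    &END QS
    &MGRID
      CUTOFF 400
    &END MGRID
    &SCF
      SCF_GUESS ATOMIC
      EPS_SCF 1.0E-6
      &OT
        MINIMIZER DIIS
        PRECONDITIONER FULL_SINGLE_INVERSE
      &END OT
    &END SCF
    &XC
      &XC_FUNCTIONAL PBE
      &END XC_FUNCTIONAL
    &END XC
  &END DFT
  &SUBSYS
    &CELL
      ABC 12.42 12.42 12.42
    &END CELL
    &TOPOLOGY
      COORD_FILE_NAME h2o-64.xyz
      COORD_FILE_FORMAT XYZ
    &END TOPOLOGY
    &KIND H
      BASIS_SET DZVP-MOLOPT-SR-GTH
      POTENTIAL GTH-PBE-q1
    &END KIND
    &KIND O
      BASIS_SET DZVP-MOLOPT-SR-GTH
      POTENTIAL GTH-PBE-q6
    &END KIND
  &END SUBSYS
&END FORCE_EVAL
```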
URL: 

From lheidari125 at gmail.com Fri Oct 18 14:55:31 2024
From: lheidari125 at gmail.com (L Heidarizadeh)
Date: Fri, 18 Oct 2024 07:55:31 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20784] Re: SCF Convergence Issues with OT and Diagonalization for CZTS System (Vacuum, Sulfur Layer, and Hydrogen Passivation
In-Reply-To: <97af0a63-a7c3-4474-9891-42d99fadc529n@googlegroups.com>
References: <67815156-489d-4b4c-89c1-af4282adf5f1n@googlegroups.com> <67fea74e-5fc0-4a47-8739-d8ae68961729n@googlegroups.com> <81ab9c27-73c9-4a21-92c9-21944a37f8cdn@googlegroups.com> <97af0a63-a7c3-4474-9891-42d99fadc529n@googlegroups.com>
Message-ID: <2b0ba105-7c0b-4066-aafe-d310aa7ff4adn@googlegroups.com>

Hi Marcella,

Thanks so much again for the feedback. (I believe I hit reply wrongly when I responded to this message, sorry for that.)

I obtained a *band gap of about 1 eV*, whereas the *theoretical and experimental values* are around *1.5 eV*. My current system is a *2 × 2 × 1 supercell* with *64 atoms*. Do you think *increasing the size of the supercell* would help improve the accuracy, and should I try a larger system? And yes, I didn't calculate the forces.

Thank you again.

Best regards,
Layla

On Thursday, October 17, 2024 at 12:46:15 PM UTC-4 Marcella Iannuzzi wrote:

> Hi Layla
>
> It should not be too difficult to verify whether your bulk calculations are accurate enough. You can compare some properties with the literature, for instance band gap and density of states.
> The k-point sampling can also be replaced by adding more replicas of the unit cell. If the 2x2x1 is not sufficient in bulk calculations, it is also not sufficient for slab models, and this for sure would affect the forces.
> But if I correctly understood, you did not manage to calculate forces yet.
> A geometry optimisation before running MD is for sure meaningful in this case, but this would not solve the convergence problem, I fear, since the SCF is exactly the same.
>>>>>> Are there alternative strategies for handling surfaces and vacuum >>>>>> layers that could make the system more stable for electronic structure >>>>>> calculations? >>>>>> Has anyone successfully applied hydrogen passivation to stabilize >>>>>> surfaces and improve SCF convergence in CP2K? >>>>>> >>>>>> Any suggestions or advice would be greatly appreciated! >>>>>> >>>>>> Thank you for your help and support. >>>>>> >>>>>> >>>>>> -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/2b0ba105-7c0b-4066-aafe-d310aa7ff4adn%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- ******************************************************************************* * ___ * * / \ * * [ABORT] * * \___/ SCF run NOT converged. To continue the calculation regardless, * * | please set the keyword IGNORE_CONVERGENCE_FAILURE. 
* * O/| * * /| | * * / \ qs_scf.F:605 * ******************************************************************************* ===== Routine Calling Stack ===== 6 scf_env_do_scf 5 qs_energies 4 qs_forces 3 velocity_verlet 2 qs_mol_dyn_low 1 CP2K GLOBAL| Force Environment number 1 GLOBAL| Basis set file name BASIS_MOLOPT GLOBAL| Potential file name POTENTIAL GLOBAL| MM Potential file name MM_POTENTIAL GLOBAL| Coordinate file name cu2snzns4.xyz GLOBAL| Method name CP2K GLOBAL| Project name CZTS-MD GLOBAL| Run type MD GLOBAL| FFT library FFTW3 GLOBAL| Diagonalization library ELPA GLOBAL| DGEMM library BLAS GLOBAL| Minimum number of eigenvectors for ELPA usage 16 GLOBAL| Orthonormality check for eigenvectors DISABLED GLOBAL| Matrix multiplication library ScaLAPACK GLOBAL| All-to-all communication in single precision F GLOBAL| FFTs using library dependent lengths F GLOBAL| Grid backend AUTO GLOBAL| Global print level MEDIUM GLOBAL| MPI I/O enabled T GLOBAL| Total number of message passing processes 25 GLOBAL| Number of threads for this process 1 GLOBAL| This output is from process 0 GLOBAL| Stack size for threads created by OpenMP (OMP_STACKSIZE) default GLOBAL| CPU model name Intel(R) Xeon(R) Gold 6448Y GLOBAL| CPUID 1003 GLOBAL| Compiled for CPUID 0 *** HINT in environment.F:920 :: The compiler target flags (generic) used *** *** to build this binary cannot exploit all extensions of this CPU model *** *** (x86_avx512). Consider compiler target flags as part of FCFLAGS and *** *** CFLAGS (ARCH file). 
 MEMORY| system memory details [Kb]
 MEMORY|                  rank 0          min          max      average
 MEMORY| MemTotal      527848940    527848940    527848940    527848940
 MEMORY| MemFree       468809240    468809240    468809240    468809240
 MEMORY| Buffers          619512       619512       619512       619512
 MEMORY| Cached         53173808     53173808     53173808     53173808
 MEMORY| Slab            1083400      1083400      1083400      1083400
 MEMORY| SReclaimable     591012       591012       591012       591012
 MEMORY| MemLikelyFree 523193572    523193572    523193572    523193572

 *** Fundamental physical constants (SI units) ***
 *** Literature: B. J. Mohr and B. N. Taylor,
 ***             CODATA recommended values of the fundamental physical
 ***             constants: 2006, Web Version 5.1
 ***             http://physics.nist.gov/constants

 Speed of light in vacuum [m/s]                       2.99792458000000E+08
 Magnetic constant or permeability of vacuum [N/A**2] 1.25663706143592E-06
 Electric constant or permittivity of vacuum [F/m]    8.85418781762039E-12
 Planck constant (h) [J*s]                            6.62606896000000E-34
 Planck constant (h-bar) [J*s]                        1.05457162825177E-34
 Elementary charge [C]                                1.60217648700000E-19
 Electron mass [kg]                                   9.10938215000000E-31
 Electron g factor [ ]                               -2.00231930436220E+00
 Proton mass [kg]                                     1.67262163700000E-27
 Fine-structure constant                              7.29735253760000E-03
 Rydberg constant [1/m]                               1.09737315685270E+07
 Avogadro constant [1/mol]                            6.02214179000000E+23
 Boltzmann constant [J/K]                             1.38065040000000E-23
 Atomic mass unit [kg]                                1.66053878200000E-27
 Bohr radius [m]                                      5.29177208590000E-11

 *** Conversion factors ***

 [u] -> [a.u.]                                        1.82288848426455E+03
 [Angstrom] -> [Bohr] = [a.u.]                        1.88972613288564E+00
 [a.u.] = [Bohr] -> [Angstrom]                        5.29177208590000E-01
 [a.u.] -> [s]                                        2.41888432650478E-17
 [a.u.] -> [fs]                                       2.41888432650478E-02
 [a.u.] -> [J]                                        4.35974393937059E-18
 [a.u.] -> [N]                                        8.23872205491840E-08
 [a.u.] -> [K]                                        3.15774647902944E+05
 [a.u.] -> [kJ/mol]                                   2.62549961709828E+03
 [a.u.] -> [kcal/mol]                                 6.27509468713739E+02
 [a.u.] -> [Pa]                                       2.94210107994716E+13
 [a.u.] -> [bar]                                      2.94210107994716E+08
 [a.u.] -> [atm]                                      2.90362800883016E+08
 [a.u.] -> [eV]                                       2.72113838565563E+01
 [a.u.] -> [Hz]                                       6.57968392072181E+15
 [a.u.] -> [1/cm] (wave numbers)                      2.19474631370540E+05
 [a.u./Bohr**2] -> [1/cm]                             5.14048714338585E+03

 CELL_TOP| Volume [angstrom^3]:                                     3644.019835
 CELL_TOP| Vector a [angstrom]:     10.868    0.000    0.000   |a| = 10.868400
 CELL_TOP| Vector b [angstrom]:      0.000   10.868    0.000   |b| = 10.868400
 CELL_TOP| Vector c [angstrom]:      0.000    0.000   30.850   |c| = 30.849600
 CELL_TOP| Angle (b,c), alpha [degree]:                              90.000000
 CELL_TOP| Angle (a,c), beta [degree]:                               90.000000
 CELL_TOP| Angle (a,b), gamma [degree]:                              90.000000
 CELL_TOP| Numerically orthorhombic:                                       YES
 CELL_TOP| Periodicity                                                     XYZ

 GENERATE| Preliminary Number of Bonds generated:                            0
 GENERATE| Achieved consistency in connectivity generation.

 CELL| Volume [angstrom^3]:                                         3644.019835
 CELL| Vector a [angstrom]:         10.868    0.000    0.000   |a| = 10.868400
 CELL| Vector b [angstrom]:          0.000   10.868    0.000   |b| = 10.868400
 CELL| Vector c [angstrom]:          0.000    0.000   30.850   |c| = 30.849600
 CELL| Angle (b,c), alpha [degree]:                                  90.000000
 CELL| Angle (a,c), beta [degree]:                                   90.000000
 CELL| Angle (a,b), gamma [degree]:                                  90.000000
 CELL| Numerically orthorhombic:                                           YES
 CELL| Periodicity                                                         XYZ

 CELL_REF| Volume [angstrom^3]:                                     3644.019835
 CELL_REF| Vector a [angstrom]:     10.868    0.000    0.000   |a| = 10.868400
 CELL_REF| Vector b [angstrom]:      0.000   10.868    0.000   |b| = 10.868400
 CELL_REF| Vector c [angstrom]:      0.000    0.000   30.850   |c| = 30.849600
 CELL_REF| Angle (b,c), alpha [degree]:                              90.000000
 CELL_REF| Angle (a,c), beta [degree]:                               90.000000
 CELL_REF| Angle (a,b), gamma [degree]:                              90.000000
 CELL_REF| Numerically orthorhombic:                                       YES
 CELL_REF| Periodicity                                                     XYZ

 *******************************************************************************
 *******************************************************************************
 **                                                                           **
 **                                [CP2K logo]                                **
 **                                                                           **
 **                          ... make the atoms dance                         **
 **                                                                           **
 **            Copyright (C) by CP2K developers group (2000-2024)             **
 **                      J. Chem. Phys. 152, 194103 (2020)                    **
 **                                                                           **
 *******************************************************************************
 *******************************************************************************

 DFT| Spin unrestricted (spin-polarized) Kohn-Sham calculation              UKS
 DFT| Multiplicity                                                            1
 DFT| Number of spin states                                                   2
 DFT| Charge                                                                  0
 DFT| Self-interaction correction (SIC)                                      NO
 DFT| Cutoffs: density                                             1.000000E-10
 DFT|          gradient                                            1.000000E-10
 DFT|          tau                                                 1.000000E-10
 DFT|          cutoff_smoothing_range                              0.000000E+00
 DFT| XC density smoothing                                                 NONE
 DFT| XC derivatives                                                         PW

 FUNCTIONAL| PBE:
 FUNCTIONAL| J.P.Perdew, K.Burke, M.Ernzerhof, Phys. Rev. Letter, vol. 77, n 18,
 FUNCTIONAL| pp. 3865-3868, (1996) {spin polarized}

 vdW POTENTIAL| Pair Potential
 vdW POTENTIAL| DFT-D3 (Version 3.1)
 vdW POTENTIAL| Potential Form: S. Grimme et al, JCP 132: 154104 (2010)
 vdW POTENTIAL| Zero Damping
 vdW POTENTIAL| Cutoff Radius [Bohr]:                                     30.24
 vdW POTENTIAL| s6 Scaling Factor:                                       1.0000
 vdW POTENTIAL| sr6 Scaling Factor:                                      1.2170
 vdW POTENTIAL| s8 Scaling Factor:                                       0.7220
 vdW POTENTIAL| Cutoff for CN calculation:                           0.1000E-05

 QS| Method:                                                                GPW
 QS| Density plane wave grid type                       NON-SPHERICAL FULLSPACE
 QS| Number of grid levels:                                                  12
 QS| Density cutoff [a.u.]:                                               150.0
 QS| Multi grid cutoff [a.u.]: 1) grid level                              150.0
 QS|                           2) grid level                               50.0
 QS|                           3) grid level                               16.7
 QS|                           4) grid level                                5.6
 QS|                           5) grid level                                1.9
 QS|                           6) grid level                                0.6
 QS|                           7) grid level                                0.2
 QS|                           8) grid level                                0.1
 QS|                           9) grid level                                0.0
 QS|                          10) grid level                                0.0
 QS|                          11) grid level                                0.0
 QS|                          12) grid level                                0.0
 QS| Grid level progression factor:                                         3.0
 QS| Relative density cutoff [a.u.]:                                       25.0
 QS| Interaction thresholds: eps_pgf_orb:                               1.0E-07
 QS|                         eps_filter_matrix:                         0.0E+00
 QS|                         eps_core_charge:                           1.0E-16
 QS|                         eps_rho_gspace:                            1.0E-14
 QS|                         eps_rho_rspace:                            1.0E-14
 QS|                         eps_gvg_rspace:                            1.0E-07
 QS|                         eps_ppl:                                   1.0E-02
 QS|                         eps_ppnl:                                  1.0E-09

 ATOMIC KIND INFORMATION

 1.
Atomic kind: Cu Number of atoms: 16 Orbital Basis Set DZVP-MOLOPT-SR-GTH Number of orbital shell sets: 1 Number of orbital shells: 7 Number of primitive Cartesian functions: 6 Number of Cartesian basis functions: 30 Number of spherical basis functions: 25 Norm type: 2 Normalised Cartesian orbitals: Set Shell Orbital Exponent Coefficient 1 1 2s 5.804051 0.045596 2.947778 -0.139279 1.271621 0.214572 0.517174 0.085605 0.198007 -0.138200 0.061684 -0.053295 1 2 3s 5.804051 0.130403 2.947778 -0.161341 1.271621 -0.101039 0.517174 -0.345250 0.198007 0.499925 0.061684 -0.162382 1 3 3px 5.804051 -0.066551 2.947778 0.111915 1.271621 -0.204459 0.517174 0.040237 0.198007 0.099394 0.061684 0.024283 1 3 3py 5.804051 -0.066551 2.947778 0.111915 1.271621 -0.204459 0.517174 0.040237 0.198007 0.099394 0.061684 0.024283 1 3 3pz 5.804051 -0.066551 2.947778 0.111915 1.271621 -0.204459 0.517174 0.040237 0.198007 0.099394 0.061684 0.024283 1 4 4px 5.804051 -0.329266 2.947778 0.140173 1.271621 0.545791 0.517174 0.280404 0.198007 -0.321326 0.061684 0.055470 1 4 4py 5.804051 -0.329266 2.947778 0.140173 1.271621 0.545791 0.517174 0.280404 0.198007 -0.321326 0.061684 0.055470 1 4 4pz 5.804051 -0.329266 2.947778 0.140173 1.271621 0.545791 0.517174 0.280404 0.198007 -0.321326 0.061684 0.055470 1 5 4dx2 5.804051 9.381621 2.947778 3.660256 1.271621 0.792510 0.517174 0.128389 0.198007 0.013938 0.061684 0.000367 1 5 4dxy 5.804051 16.249444 2.947778 6.339749 1.271621 1.372667 0.517174 0.222377 0.198007 0.024141 0.061684 0.000636 1 5 4dxz 5.804051 16.249444 2.947778 6.339749 1.271621 1.372667 0.517174 0.222377 0.198007 0.024141 0.061684 0.000636 1 5 4dy2 5.804051 9.381621 2.947778 3.660256 1.271621 0.792510 0.517174 0.128389 0.198007 0.013938 0.061684 0.000367 1 5 4dyz 5.804051 16.249444 2.947778 6.339749 1.271621 1.372667 0.517174 0.222377 0.198007 0.024141 0.061684 0.000636 1 5 4dz2 5.804051 9.381621 2.947778 3.660256 1.271621 0.792510 0.517174 0.128389 0.198007 0.013938 0.061684 0.000367 1 6 5dx2 
5.804051 -3.189865 2.947778 -1.988097 1.271621 -0.492759 0.517174 0.080183 0.198007 0.017863 0.061684 0.009947 1 6 5dxy 5.804051 -5.525009 2.947778 -3.443486 1.271621 -0.853484 0.517174 0.138882 0.198007 0.030940 0.061684 0.017228 1 6 5dxz 5.804051 -5.525009 2.947778 -3.443486 1.271621 -0.853484 0.517174 0.138882 0.198007 0.030940 0.061684 0.017228 1 6 5dy2 5.804051 -3.189865 2.947778 -1.988097 1.271621 -0.492759 0.517174 0.080183 0.198007 0.017863 0.061684 0.009947 1 6 5dyz 5.804051 -5.525009 2.947778 -3.443486 1.271621 -0.853484 0.517174 0.138882 0.198007 0.030940 0.061684 0.017228 1 6 5dz2 5.804051 -3.189865 2.947778 -1.988097 1.271621 -0.492759 0.517174 0.080183 0.198007 0.017863 0.061684 0.009947 1 7 5fx3 5.804051 -1.183361 2.947778 0.859927 1.271621 -0.674185 0.517174 -0.156063 0.198007 -0.018231 0.061684 0.001772 1 7 5fx2y 5.804051 -2.646075 2.947778 1.922855 1.271621 -1.507524 0.517174 -0.348968 0.198007 -0.040766 0.061684 0.003961 1 7 5fx2z 5.804051 -2.646075 2.947778 1.922855 1.271621 -1.507524 0.517174 -0.348968 0.198007 -0.040766 0.061684 0.003961 1 7 5fxy2 5.804051 -2.646075 2.947778 1.922855 1.271621 -1.507524 0.517174 -0.348968 0.198007 -0.040766 0.061684 0.003961 1 7 5fxyz 5.804051 -4.583137 2.947778 3.330483 1.271621 -2.611108 0.517174 -0.604431 0.198007 -0.070608 0.061684 0.006861 1 7 5fxz2 5.804051 -2.646075 2.947778 1.922855 1.271621 -1.507524 0.517174 -0.348968 0.198007 -0.040766 0.061684 0.003961 1 7 5fy3 5.804051 -1.183361 2.947778 0.859927 1.271621 -0.674185 0.517174 -0.156063 0.198007 -0.018231 0.061684 0.001772 1 7 5fy2z 5.804051 -2.646075 2.947778 1.922855 1.271621 -1.507524 0.517174 -0.348968 0.198007 -0.040766 0.061684 0.003961 1 7 5fyz2 5.804051 -2.646075 2.947778 1.922855 1.271621 -1.507524 0.517174 -0.348968 0.198007 -0.040766 0.061684 0.003961 1 7 5fz3 5.804051 -1.183361 2.947778 0.859927 1.271621 -0.674185 0.517174 -0.156063 0.198007 -0.018231 0.061684 0.001772 Atomic covalent radius [Angstrom]: 1.170 Atomic van der Waals radius 
[Angstrom]: 1.400 GTH Potential information for GTH-PBE-q11 Description: Goedecker-Teter-Hutter pseudopotential Goedecker et al., PRB 54, 1703 (1996) Hartwigsen et al., PRB 58, 3641 (1998) Krack, TCA 114, 145 (2005) Gaussian exponent of the core charge distribution: 1.779993 Electronic configuration (s p d ...): 1 0 10 Parameters of the local part of the GTH pseudopotential: rloc C1 C2 C3 C4 0.530000 Parameters of the non-local part of the GTH pseudopotential: l r(l) h(i,j,l) 0 0.431355 9.693805 -6.470165 1.935952 -6.470165 11.501774 -4.998607 1.935952 -4.998607 3.967521 1 0.561392 2.545473 -0.784636 -0.784636 0.928394 2 0.264555 -12.828614 2. Atomic kind: Zn Number of atoms: 8 Orbital Basis Set DZVP-MOLOPT-SR-GTH Number of orbital shell sets: 1 Number of orbital shells: 7 Number of primitive Cartesian functions: 6 Number of Cartesian basis functions: 30 Number of spherical basis functions: 25 Norm type: 2 Normalised Cartesian orbitals: Set Shell Orbital Exponent Coefficient 1 1 2s 6.400813 0.070678 3.167793 -0.182901 1.341704 0.275787 0.545418 0.070077 0.222222 -0.163015 0.079830 -0.058340 1 2 3s 6.400813 0.159300 3.167793 -0.240359 1.341704 -0.028572 0.545418 -0.590243 0.222222 0.705234 0.079830 -0.232595 1 3 3px 6.400813 0.044197 3.167793 -0.065115 1.341704 0.233173 0.545418 -0.041743 0.222222 -0.119659 0.079830 -0.031199 1 3 3py 6.400813 0.044197 3.167793 -0.065115 1.341704 0.233173 0.545418 -0.041743 0.222222 -0.119659 0.079830 -0.031199 1 3 3pz 6.400813 0.044197 3.167793 -0.065115 1.341704 0.233173 0.545418 -0.041743 0.222222 -0.119659 0.079830 -0.031199 1 4 4px 6.400813 0.240176 3.167793 -0.408028 1.341704 -0.147214 0.545418 -0.102288 0.222222 0.357755 0.079830 -0.080257 1 4 4py 6.400813 0.240176 3.167793 -0.408028 1.341704 -0.147214 0.545418 -0.102288 0.222222 0.357755 0.079830 -0.080257 1 4 4pz 6.400813 0.240176 3.167793 -0.408028 1.341704 -0.147214 0.545418 -0.102288 0.222222 0.357755 0.079830 -0.080257 1 5 4dx2 6.400813 12.052935 3.167793 4.421864 
1.341704 0.893324 0.545418 0.127232 0.222222 0.011466 0.079830 0.000178 1 5 4dxy 6.400813 20.876296 3.167793 7.658893 1.341704 1.547283 0.545418 0.220373 0.222222 0.019859 0.079830 0.000308 1 5 4dxz 6.400813 20.876296 3.167793 7.658893 1.341704 1.547283 0.545418 0.220373 0.222222 0.019859 0.079830 0.000308 1 5 4dy2 6.400813 12.052935 3.167793 4.421864 1.341704 0.893324 0.545418 0.127232 0.222222 0.011466 0.079830 0.000178 1 5 4dyz 6.400813 20.876296 3.167793 7.658893 1.341704 1.547283 0.545418 0.220373 0.222222 0.019859 0.079830 0.000308 1 5 4dz2 6.400813 12.052935 3.167793 4.421864 1.341704 0.893324 0.545418 0.127232 0.222222 0.011466 0.079830 0.000178 1 6 5dx2 6.400813 -3.077244 3.167793 -1.397438 1.341704 -0.501565 0.545418 0.120936 0.222222 0.032489 0.079830 0.013923 1 6 5dxy 6.400813 -5.329943 3.167793 -2.420434 1.341704 -0.868735 0.545418 0.209468 0.222222 0.056272 0.079830 0.024116 1 6 5dxz 6.400813 -5.329943 3.167793 -2.420434 1.341704 -0.868735 0.545418 0.209468 0.222222 0.056272 0.079830 0.024116 1 6 5dy2 6.400813 -3.077244 3.167793 -1.397438 1.341704 -0.501565 0.545418 0.120936 0.222222 0.032489 0.079830 0.013923 1 6 5dyz 6.400813 -5.329943 3.167793 -2.420434 1.341704 -0.868735 0.545418 0.209468 0.222222 0.056272 0.079830 0.024116 1 6 5dz2 6.400813 -3.077244 3.167793 -1.397438 1.341704 -0.501565 0.545418 0.120936 0.222222 0.032489 0.079830 0.013923 1 7 5fx3 6.400813 -0.041001 3.167793 0.016340 1.341704 0.068339 0.545418 0.147998 0.222222 0.008656 0.079830 0.003475 1 7 5fx2y 6.400813 -0.091680 3.167793 0.036538 1.341704 0.152810 0.545418 0.330933 0.222222 0.019354 0.079830 0.007771 1 7 5fx2z 6.400813 -0.091680 3.167793 0.036538 1.341704 0.152810 0.545418 0.330933 0.222222 0.019354 0.079830 0.007771 1 7 5fxy2 6.400813 -0.091680 3.167793 0.036538 1.341704 0.152810 0.545418 0.330933 0.222222 0.019354 0.079830 0.007771 1 7 5fxyz 6.400813 -0.158795 3.167793 0.063286 1.341704 0.264674 0.545418 0.573193 0.222222 0.033523 0.079830 0.013460 1 7 5fxz2 6.400813 
-0.091680 3.167793 0.036538 1.341704 0.152810 0.545418 0.330933 0.222222 0.019354 0.079830 0.007771 1 7 5fy3 6.400813 -0.041001 3.167793 0.016340 1.341704 0.068339 0.545418 0.147998 0.222222 0.008656 0.079830 0.003475 1 7 5fy2z 6.400813 -0.091680 3.167793 0.036538 1.341704 0.152810 0.545418 0.330933 0.222222 0.019354 0.079830 0.007771 1 7 5fyz2 6.400813 -0.091680 3.167793 0.036538 1.341704 0.152810 0.545418 0.330933 0.222222 0.019354 0.079830 0.007771 1 7 5fz3 6.400813 -0.041001 3.167793 0.016340 1.341704 0.068339 0.545418 0.147998 0.222222 0.008656 0.079830 0.003475 Atomic covalent radius [Angstrom]: 1.250 Atomic van der Waals radius [Angstrom]: 1.390 GTH Potential information for GTH-PBE-q12 Description: Goedecker-Teter-Hutter pseudopotential Goedecker et al., PRB 54, 1703 (1996) Hartwigsen et al., PRB 58, 3641 (1998) Krack, TCA 114, 145 (2005) Gaussian exponent of the core charge distribution: 1.922338 Electronic configuration (s p d ...): 2 0 10 Parameters of the local part of the GTH pseudopotential: rloc C1 C2 C3 C4 0.510000 Parameters of the non-local part of the GTH pseudopotential: l r(l) h(i,j,l) 0 0.400316 11.530041 -8.791898 3.145086 -8.791898 16.465775 -8.120578 3.145086 -8.120578 6.445509 1 0.543182 2.597195 -0.594263 -0.594263 0.703141 2 0.250959 -14.466958 3. 
Atomic kind: Sn Number of atoms: 8 Orbital Basis Set DZVP-MOLOPT-SR-GTH Number of orbital shell sets: 1 Number of orbital shells: 5 Number of primitive Cartesian functions: 5 Number of Cartesian basis functions: 14 Number of spherical basis functions: 13 Norm type: 2 Normalised Cartesian orbitals: Set Shell Orbital Exponent Coefficient 1 1 2s 0.666670 0.520777 0.486831 -0.306181 0.172776 -0.164135 0.065536 -0.028602 0.063644 0.007393 1 2 3s 0.666670 0.493242 0.486831 -0.377610 0.172776 -0.069723 0.065536 -0.189696 0.063644 0.305206 1 3 3px 0.666670 0.519082 0.486831 -0.396920 0.172776 -0.090433 0.065536 -0.029966 0.063644 0.012991 1 3 3py 0.666670 0.519082 0.486831 -0.396920 0.172776 -0.090433 0.065536 -0.029966 0.063644 0.012991 1 3 3pz 0.666670 0.519082 0.486831 -0.396920 0.172776 -0.090433 0.065536 -0.029966 0.063644 0.012991 1 4 4px 0.666670 0.740628 0.486831 -0.723623 0.172776 0.069715 0.065536 -0.140798 0.063644 0.174598 1 4 4py 0.666670 0.740628 0.486831 -0.723623 0.172776 0.069715 0.065536 -0.140798 0.063644 0.174598 1 4 4pz 0.666670 0.740628 0.486831 -0.723623 0.172776 0.069715 0.065536 -0.140798 0.063644 0.174598 1 5 4dx2 0.666670 0.239194 0.486831 -0.202509 0.172776 -0.028531 0.065536 -0.014897 0.063644 0.006227 1 5 4dxy 0.666670 0.414297 0.486831 -0.350755 0.172776 -0.049417 0.065536 -0.025803 0.063644 0.010786 1 5 4dxz 0.666670 0.414297 0.486831 -0.350755 0.172776 -0.049417 0.065536 -0.025803 0.063644 0.010786 1 5 4dy2 0.666670 0.239194 0.486831 -0.202509 0.172776 -0.028531 0.065536 -0.014897 0.063644 0.006227 1 5 4dyz 0.666670 0.414297 0.486831 -0.350755 0.172776 -0.049417 0.065536 -0.025803 0.063644 0.010786 1 5 4dz2 0.666670 0.239194 0.486831 -0.202509 0.172776 -0.028531 0.065536 -0.014897 0.063644 0.006227 Atomic covalent radius [Angstrom]: 1.410 Atomic van der Waals radius [Angstrom]: 2.170 GTH Potential information for GTH-PBE-q4 Description: Goedecker-Teter-Hutter pseudopotential Goedecker et al., PRB 54, 1703 (1996) Hartwigsen et al., PRB 58, 
3641 (1998) Krack, TCA 114, 145 (2005) Gaussian exponent of the core charge distribution: 1.366027 Electronic configuration (s p d ...): 2 2 Parameters of the local part of the GTH pseudopotential: rloc C1 C2 C3 C4 0.605000 6.266782 Parameters of the non-local part of the GTH pseudopotential: l r(l) h(i,j,l) 0 0.566437 1.571181 1.470041 -1.178577 1.470041 -3.814770 3.043072 -1.178577 3.043072 -2.415363 1 0.641850 0.526898 0.403258 0.403258 -0.477141 2 0.990873 0.198766 4. Atomic kind: S Number of atoms: 40 Orbital Basis Set DZVP-MOLOPT-SR-GTH Number of orbital shell sets: 1 Number of orbital shells: 5 Number of primitive Cartesian functions: 4 Number of Cartesian basis functions: 14 Number of spherical basis functions: 13 Norm type: 2 Normalised Cartesian orbitals: Set Shell Orbital Exponent Coefficient 1 1 2s 2.215855 0.308496 1.131471 0.138505 0.410168 -0.373736 0.140587 -0.040370 1 2 3s 2.215855 -0.329528 1.131471 -0.517509 0.410168 0.787368 0.140587 -0.322786 1 3 3px 2.215855 0.420434 1.131471 -0.319283 0.410168 -0.335323 0.140587 -0.030976 1 3 3py 2.215855 0.420434 1.131471 -0.319283 0.410168 -0.335323 0.140587 -0.030976 1 3 3pz 2.215855 0.420434 1.131471 -0.319283 0.410168 -0.335323 0.140587 -0.030976 1 4 4px 2.215855 0.241296 1.131471 -0.170582 0.410168 -0.186977 0.140587 0.153509 1 4 4py 2.215855 0.241296 1.131471 -0.170582 0.410168 -0.186977 0.140587 0.153509 1 4 4pz 2.215855 0.241296 1.131471 -0.170582 0.410168 -0.186977 0.140587 0.153509 1 5 4dx2 2.215855 0.573559 1.131471 0.544926 0.410168 0.228284 0.140587 0.008811 1 5 4dxy 2.215855 0.993434 1.131471 0.943840 0.410168 0.395400 0.140587 0.015260 1 5 4dxz 2.215855 0.993434 1.131471 0.943840 0.410168 0.395400 0.140587 0.015260 1 5 4dy2 2.215855 0.573559 1.131471 0.544926 0.410168 0.228284 0.140587 0.008811 1 5 4dyz 2.215855 0.993434 1.131471 0.943840 0.410168 0.395400 0.140587 0.015260 1 5 4dz2 2.215855 0.573559 1.131471 0.544926 0.410168 0.228284 0.140587 0.008811 Atomic covalent radius [Angstrom]: 1.020 
Atomic van der Waals radius [Angstrom]: 1.800 GTH Potential information for GTH-PBE-q6 Description: Goedecker-Teter-Hutter pseudopotential Goedecker et al., PRB 54, 1703 (1996) Hartwigsen et al., PRB 58, 3641 (1998) Krack, TCA 114, 145 (2005) Gaussian exponent of the core charge distribution: 2.834467 Electronic configuration (s p d ...): 2 4 Parameters of the local part of the GTH pseudopotential: rloc C1 C2 C3 C4 0.420000 -5.986260 Parameters of the non-local part of the GTH pseudopotential: l r(l) h(i,j,l) 0 0.364820 13.143544 -4.241830 -4.241830 5.476180 1 0.409480 3.700891 5. Atomic kind: H Number of atoms: 16 Orbital Basis Set DZVP-MOLOPT-GTH Number of orbital shell sets: 1 Number of orbital shells: 3 Number of primitive Cartesian functions: 7 Number of Cartesian basis functions: 5 Number of spherical basis functions: 5 Norm type: 2 Normalised Cartesian orbitals: Set Shell Orbital Exponent Coefficient 1 1 2s 11.478000 0.129129 3.700759 0.177012 1.446884 0.141285 0.716815 0.245670 0.247919 0.094768 0.066918 0.004062 0.021708 -0.000053 1 2 3s 11.478000 -0.079256 3.700759 -0.152992 1.446884 0.015066 0.716815 -0.331234 0.247919 0.210690 0.066918 0.058630 0.021708 -0.003429 1 3 3px 11.478000 0.325290 3.700759 0.187466 1.446884 0.443300 0.716815 0.267738 0.247919 0.088285 0.066918 0.019092 0.021708 0.000629 1 3 3py 11.478000 0.325290 3.700759 0.187466 1.446884 0.443300 0.716815 0.267738 0.247919 0.088285 0.066918 0.019092 0.021708 0.000629 1 3 3pz 11.478000 0.325290 3.700759 0.187466 1.446884 0.443300 0.716815 0.267738 0.247919 0.088285 0.066918 0.019092 0.021708 0.000629 Atomic covalent radius [Angstrom]: 0.320 Atomic van der Waals radius [Angstrom]: 1.090 GTH Potential information for GTH-PBE Description: Goedecker-Teter-Hutter pseudopotential Goedecker et al., PRB 54, 1703 (1996) Hartwigsen et al., PRB 58, 3641 (1998) Krack, TCA 114, 145 (2005) Gaussian exponent of the core charge distribution: 12.500000 Electronic configuration (s p d ...): 1 Parameters of the 
local part of the GTH pseudopotential: rloc C1 C2 C3 C4 0.200000 -4.178900 0.724463 MOLECULE KIND INFORMATION All atoms are their own molecule, skipping detailed information TOTAL NUMBERS AND MAXIMUM NUMBERS Total number of - Atomic kinds: 5 - Atoms: 88 - Shell sets: 88 - Shells: 456 - Primitive Cartesian functions: 456 - Cartesian basis functions: 1472 - Spherical basis functions: 1304 Maximum angular momentum of- Orbital basis functions: 3 - Local part of the GTH pseudopotential: 2 - Non-local part of the GTH pseudopotential: 4 MODULE QUICKSTEP: ATOMIC COORDINATES IN ANGSTROM Atom Kind Element X Y Z Z(eff) Mass 1 1 Cu 29 3.589669 3.427118 4.343183 11.0000 63.5460 2 1 Cu 29 6.306773 6.144206 9.767977 11.0000 63.5460 3 1 Cu 29 3.589669 8.861310 4.343183 11.0000 63.5460 4 1 Cu 29 9.023878 3.427118 4.343183 11.0000 63.5460 5 1 Cu 29 6.306773 11.578415 9.767977 11.0000 63.5460 6 1 Cu 29 11.740983 6.144206 9.767977 11.0000 63.5460 7 1 Cu 29 9.023878 8.861310 4.343183 11.0000 63.5460 8 1 Cu 29 11.740983 11.578415 9.767977 11.0000 63.5460 9 1 Cu 29 3.589669 6.144206 7.055588 11.0000 63.5460 10 1 Cu 29 3.589669 11.578415 7.055588 11.0000 63.5460 11 1 Cu 29 6.306773 3.427118 12.480382 11.0000 63.5460 12 1 Cu 29 11.740983 3.427118 12.480382 11.0000 63.5460 13 1 Cu 29 6.306773 8.861310 12.480382 11.0000 63.5460 14 1 Cu 29 9.023878 6.144206 7.055588 11.0000 63.5460 15 1 Cu 29 9.023878 11.578415 7.055588 11.0000 63.5460 16 1 Cu 29 11.740983 8.861310 12.480382 11.0000 63.5460 17 2 Zn 30 3.589669 6.144206 12.480382 12.0000 65.3800 18 2 Zn 30 3.589669 11.578415 12.480382 12.0000 65.3800 19 2 Zn 30 6.306773 3.427118 7.055588 12.0000 65.3800 20 2 Zn 30 11.740983 3.427118 7.055588 12.0000 65.3800 21 2 Zn 30 6.306773 8.861310 7.055588 12.0000 65.3800 22 2 Zn 30 9.023878 6.144206 12.480382 12.0000 65.3800 23 2 Zn 30 9.023878 11.578415 12.480382 12.0000 65.3800 24 2 Zn 30 11.740983 8.861310 7.055588 12.0000 65.3800 25 3 Sn 50 3.589669 3.427118 9.767977 4.0000 118.7100 26 3 Sn 50 
6.306773 6.144206 4.343183 4.0000 118.7100 27 3 Sn 50 3.589669 8.861310 9.767977 4.0000 118.7100 28 3 Sn 50 9.023878 3.427118 9.767977 4.0000 118.7100 29 3 Sn 50 6.306773 11.578415 4.343183 4.0000 118.7100 30 3 Sn 50 11.740983 6.144206 4.343183 4.0000 118.7100 31 3 Sn 50 9.023878 8.861310 9.767977 4.0000 118.7100 32 3 Sn 50 11.740983 11.578415 4.343183 4.0000 118.7100 33 4 S 16 7.699018 4.749786 5.734224 6.0000 32.0650 34 4 S 16 10.348738 12.972835 5.734224 6.0000 32.0650 35 4 S 16 4.912354 10.186171 13.801746 6.0000 32.0650 36 4 S 16 13.135403 7.536450 13.801746 6.0000 32.0650 37 4 S 16 10.416107 7.466891 11.159018 6.0000 32.0650 38 4 S 16 13.065843 4.821537 11.159018 6.0000 32.0650 39 4 S 16 7.629459 12.903275 8.376952 6.0000 32.0650 40 4 S 16 4.984105 10.253555 8.376952 6.0000 32.0650 41 4 S 16 7.699018 10.183995 5.734224 6.0000 32.0650 42 4 S 16 10.348738 7.538625 5.734224 6.0000 32.0650 43 4 S 16 10.346547 10.186171 13.801746 6.0000 32.0650 44 4 S 16 7.701193 7.536450 13.801746 6.0000 32.0650 45 4 S 16 10.416107 12.901084 11.159018 6.0000 32.0650 46 4 S 16 13.065843 10.255730 11.159018 6.0000 32.0650 47 4 S 16 13.063651 12.903275 8.376952 6.0000 32.0650 48 4 S 16 10.418298 10.253555 8.376952 6.0000 32.0650 49 4 S 16 13.133211 4.749786 5.734224 6.0000 32.0650 50 4 S 16 4.914528 12.972835 5.734224 6.0000 32.0650 51 4 S 16 4.912354 4.751977 13.801746 6.0000 32.0650 52 4 S 16 13.135403 12.970643 13.801746 6.0000 32.0650 53 4 S 16 4.981914 7.466891 11.159018 6.0000 32.0650 54 4 S 16 7.631633 4.821537 11.159018 6.0000 32.0650 55 4 S 16 7.629459 7.469066 8.376952 6.0000 32.0650 56 4 S 16 4.984105 4.819346 8.376952 6.0000 32.0650 57 4 S 16 13.133211 10.183995 5.734224 6.0000 32.0650 58 4 S 16 4.914528 7.538625 5.734224 6.0000 32.0650 59 4 S 16 10.346547 4.751977 13.801746 6.0000 32.0650 60 4 S 16 7.701193 12.970643 13.801746 6.0000 32.0650 61 4 S 16 4.981914 12.901084 11.159018 6.0000 32.0650 62 4 S 16 7.631633 10.255730 11.159018 6.0000 32.0650 63 4 S 16 13.063651 
7.469066 8.376952 6.0000 32.0650 64 4 S 16 10.418298 4.819346 8.376952 6.0000 32.0650 65 4 S 16 7.629459 12.903275 2.952159 6.0000 32.0650 66 4 S 16 4.984105 10.253555 2.952159 6.0000 32.0650 67 4 S 16 13.063651 12.903275 2.952159 6.0000 32.0650 68 4 S 16 10.418298 10.253555 2.952159 6.0000 32.0650 69 4 S 16 7.629459 7.469066 2.952159 6.0000 32.0650 70 4 S 16 4.984105 4.819346 2.952159 6.0000 32.0650 71 4 S 16 13.063651 7.469066 2.952159 6.0000 32.0650 72 4 S 16 10.418298 4.819346 2.952159 6.0000 32.0650 73 5 H 1 4.264124 4.408126 14.910167 1.0000 1.0079 74 5 H 1 9.730629 4.504982 14.971263 1.0000 1.0079 75 5 H 1 7.162077 7.449815 15.014899 1.0000 1.0079 76 5 H 1 4.387321 10.149292 15.052564 1.0000 1.0079 77 5 H 1 12.543135 7.318607 14.954722 1.0000 1.0079 78 5 H 1 9.785572 10.058525 15.009346 1.0000 1.0079 79 5 H 1 7.160221 12.904931 15.050389 1.0000 1.0079 80 5 H 1 12.596938 12.884794 15.014447 1.0000 1.0079 81 5 H 1 12.788090 13.255103 1.697862 1.0000 1.0079 82 5 H 1 7.400041 13.297953 1.704552 1.0000 1.0079 83 5 H 1 4.761444 10.669190 1.675585 1.0000 1.0079 84 5 H 1 4.769789 5.227655 1.706793 1.0000 1.0079 85 5 H 1 7.339931 7.736583 1.672507 1.0000 1.0079 86 5 H 1 10.135777 5.087465 1.692109 1.0000 1.0079 87 5 H 1 10.046901 10.366985 1.686422 1.0000 1.0079 88 5 H 1 12.664960 7.525980 1.699568 1.0000 1.0079 SCF PARAMETERS Density guess: ATOMIC -------------------------------------------------------- max_scf: 100 max_scf_history: 0 max_diis: 4 -------------------------------------------------------- eps_scf: 1.00E-06 eps_scf_history: 0.00E+00 eps_diis: 1.00E-01 eps_eigval: 1.00E-05 -------------------------------------------------------- level_shift [a.u.]: 0.000000 added MOs 100 100 -------------------------------------------------------- Mixing method: BROYDEN_MIXING charge density mixing in g-space -------------------------------------------------------- No outer SCF PW_GRID| Information for grid number 1 PW_GRID| Grid distributed over 25 processors PW_GRID| 
Real space group dimensions 25 1 PW_GRID| the grid is blocked: NO PW_GRID| Cutoff [a.u.] 150.0 PW_GRID| spherical cutoff: NO PW_GRID| Bounds 1 -60 59 Points: 120 PW_GRID| Bounds 2 -60 59 Points: 120 PW_GRID| Bounds 3 -162 161 Points: 324 PW_GRID| Volume element (a.u.^3) 0.5271E-02 Volume (a.u.^3) 24591.0651 PW_GRID| Grid span FULLSPACE PW_GRID| Distribution Average Max Min PW_GRID| G-Vectors 186624.0 186720 186600 PW_GRID| G-Rays 1555.2 1556 1555 PW_GRID| Real Space Points 186624.0 194400 155520 PW_GRID| Information for grid number 2 PW_GRID| Number of the reference grid 1 PW_GRID| Grid distributed over 25 processors PW_GRID| Real space group dimensions 25 1 PW_GRID| the grid is blocked: NO PW_GRID| Cutoff [a.u.] 50.0 PW_GRID| spherical cutoff: NO PW_GRID| Bounds 1 -36 35 Points: 72 PW_GRID| Bounds 2 -36 35 Points: 72 PW_GRID| Bounds 3 -96 95 Points: 192 PW_GRID| Volume element (a.u.^3) 0.2471E-01 Volume (a.u.^3) 24591.0651 PW_GRID| Grid span FULLSPACE PW_GRID| Distribution Average Max Min PW_GRID| G-Vectors 39813.1 41904 38952 PW_GRID| G-Rays 553.0 582 541 PW_GRID| Real Space Points 39813.1 41472 27648 PW_GRID| Information for grid number 3 PW_GRID| Number of the reference grid 1 PW_GRID| Grid distributed over 25 processors PW_GRID| Real space group dimensions 25 1 PW_GRID| the grid is blocked: NO PW_GRID| Cutoff [a.u.] 16.7 PW_GRID| spherical cutoff: NO PW_GRID| Bounds 1 -20 19 Points: 40 PW_GRID| Bounds 2 -20 19 Points: 40 PW_GRID| Bounds 3 -54 53 Points: 108 PW_GRID| Volume element (a.u.^3) 0.1423 Volume (a.u.^3) 24591.0651 PW_GRID| Grid span FULLSPACE PW_GRID| Distribution Average Max Min PW_GRID| G-Vectors 6912.0 7120 6680 PW_GRID| G-Rays 172.8 178 167 PW_GRID| Real Space Points 6912.0 8640 4320 PW_GRID| Information for grid number 4 PW_GRID| Number of the reference grid 1 PW_GRID| Grid distributed over 25 processors PW_GRID| Real space group dimensions 5 5 PW_GRID| the grid is blocked: NO PW_GRID| Cutoff [a.u.] 
5.6
PW_GRID| spherical cutoff:                                                   NO
PW_GRID| Bounds 1    -12      11    Points:  24
PW_GRID| Bounds 2    -12      11    Points:  24
PW_GRID| Bounds 3    -32      31    Points:  64
PW_GRID| Volume element (a.u.^3)  0.6671     Volume (a.u.^3)  24591.0651
PW_GRID| Grid span                                                    FULLSPACE
PW_GRID|                    Distribution                 Average       Max  Min
PW_GRID| G-Vectors                                        1474.6      1632 1368
PW_GRID| G-Rays                                             61.4        68   57
PW_GRID| Real Space Points                                1474.6      1600 1024
PW_GRID| Information for grid number:                                         5
PW_GRID| Number of the reference grid                                         1
PW_GRID| Grid distributed over                                   25 processors
PW_GRID| Real space group dimensions                                       5  5
PW_GRID| the grid is blocked:                                                NO
PW_GRID| Cutoff [a.u.]                                                      1.9
PW_GRID| spherical cutoff:                                                   NO
PW_GRID| Bounds 1     -7       7    Points:  15
PW_GRID| Bounds 2     -7       7    Points:  15
PW_GRID| Bounds 3    -18      17    Points:  36
PW_GRID| Volume element (a.u.^3)  3.036      Volume (a.u.^3)  24591.0651
PW_GRID| Grid span                                                    FULLSPACE
PW_GRID|                    Distribution                 Average       Max  Min
PW_GRID| G-Vectors                                         324.0       375  285
PW_GRID| G-Rays                                             21.6        25   19
PW_GRID| Real Space Points                                 324.0       324  324
PW_GRID| Information for grid number:                                         6
PW_GRID| Number of the reference grid                                         1
PW_GRID| Grid distributed over                                   25 processors
PW_GRID| Real space group dimensions                                       5  5
PW_GRID| the grid is blocked:                                                NO
PW_GRID| Cutoff [a.u.]                                                      0.6
PW_GRID| spherical cutoff:                                                   NO
PW_GRID| Bounds 1     -4       3    Points:   8
PW_GRID| Bounds 2     -4       3    Points:   8
PW_GRID| Bounds 3    -12      11    Points:  24
PW_GRID| Volume element (a.u.^3)  16.01      Volume (a.u.^3)  24591.0651
PW_GRID| Grid span                                                    FULLSPACE
PW_GRID|                    Distribution                 Average       Max  Min
PW_GRID| G-Vectors                                          61.4       104   32
PW_GRID| G-Rays                                              7.7        13    4
PW_GRID| Real Space Points                                  61.4        96   24
PW_GRID| Information for grid number:                                         7
PW_GRID| Number of the reference grid                                         1
PW_GRID| Grid distributed over                                   25 processors
PW_GRID| Real space group dimensions                                       5  5
PW_GRID| the grid is blocked:                                                NO
PW_GRID| Cutoff [a.u.]                                                      0.2
PW_GRID| spherical cutoff:                                                   NO
PW_GRID| Bounds 1     -3       2    Points:   6
PW_GRID| Bounds 2     -3       2    Points:   6
PW_GRID| Bounds 3     -6       5    Points:  12
PW_GRID| Volume element (a.u.^3)  56.92      Volume (a.u.^3)  24591.0651
PW_GRID| Grid span                                                    FULLSPACE
PW_GRID|                    Distribution                 Average       Max  Min
PW_GRID| G-Vectors                                          17.3        24    6
PW_GRID| G-Rays                                              2.9         4    1
PW_GRID| Real Space Points                                  17.3        48   12
PW_GRID| Information for grid number:                                         8
PW_GRID| Number of the reference grid                                         1
PW_GRID| Grid distributed over                                   25 processors
PW_GRID| Real space group dimensions                                       5  5
PW_GRID| the grid is blocked:                                                NO
PW_GRID| Cutoff [a.u.]                                                      0.1
PW_GRID| spherical cutoff:                                                   NO
PW_GRID| Bounds 1     -2       1    Points:   4
PW_GRID| Bounds 2     -2       1    Points:   4
PW_GRID| Bounds 3     -4       3    Points:   8
PW_GRID| Volume element (a.u.^3)  192.1      Volume (a.u.^3)  24591.0651
PW_GRID| Grid span                                                    FULLSPACE
PW_GRID|                    Distribution                 Average       Max  Min
PW_GRID| G-Vectors                                           5.1         8    0
PW_GRID| G-Rays                                              1.3         2    0
PW_GRID| Real Space Points                                   5.1         8    0
PW_GRID| Information for grid number:                                         9
PW_GRID| Number of the reference grid                                         1
PW_GRID| Grid distributed over                                   25 processors
PW_GRID| Real space group dimensions                                       5  5
PW_GRID| the grid is blocked:                                                NO
PW_GRID| Cutoff [a.u.]                                                      0.0
PW_GRID| spherical cutoff:                                                   NO
PW_GRID| Bounds 1     -1       0    Points:   2
PW_GRID| Bounds 2     -1       0    Points:   2
PW_GRID| Bounds 3     -2       1    Points:   4
PW_GRID| Volume element (a.u.^3)  1537.      Volume (a.u.^3)  24591.0651
PW_GRID| Grid span                                                    FULLSPACE
PW_GRID|                    Distribution                 Average       Max  Min
PW_GRID| G-Vectors                                           0.6         2    0
PW_GRID| G-Rays                                              0.3         1    0
PW_GRID| Real Space Points                                   0.6         4    0
PW_GRID| Information for grid number:                                        10
PW_GRID| Number of the reference grid                                         1
PW_GRID| Grid distributed over                                   25 processors
PW_GRID| Real space group dimensions                                       5  5
PW_GRID| the grid is blocked:                                                NO
PW_GRID| Cutoff [a.u.]                                                      0.0
PW_GRID| spherical cutoff:                                                   NO
PW_GRID| Bounds 1     -1       0    Points:   2
PW_GRID| Bounds 2     -1       0    Points:   2
PW_GRID| Bounds 3     -2       1    Points:   4
PW_GRID| Volume element (a.u.^3)  1537.      Volume (a.u.^3)  24591.0651
PW_GRID| Grid span                                                    FULLSPACE
PW_GRID|                    Distribution                 Average       Max  Min
PW_GRID| G-Vectors                                           0.6         2    0
PW_GRID| G-Rays                                              0.3         1    0
PW_GRID| Real Space Points                                   0.6         4    0
PW_GRID| Information for grid number:                                        11
PW_GRID| Number of the reference grid                                         1
PW_GRID| Grid distributed over                                   25 processors
PW_GRID| Real space group dimensions                                       5  5
PW_GRID| the grid is blocked:                                                NO
PW_GRID| Cutoff [a.u.]                                                      0.0
PW_GRID| spherical cutoff:                                                   NO
PW_GRID| Bounds 1     -1       0    Points:   2
PW_GRID| Bounds 2     -1       0    Points:   2
PW_GRID| Bounds 3     -1       0    Points:   2
PW_GRID| Volume element (a.u.^3)  3074.      Volume (a.u.^3)  24591.0651
PW_GRID| Grid span                                                    FULLSPACE
PW_GRID|                    Distribution                 Average       Max  Min
PW_GRID| G-Vectors                                           0.3         2    0
PW_GRID| G-Rays                                              0.2         1    0
PW_GRID| Real Space Points                                   0.3         2    0
PW_GRID| Information for grid number:                                        12
PW_GRID| Number of the reference grid                                         1
PW_GRID| Grid distributed over                                   25 processors
PW_GRID| Real space group dimensions                                       5  5
PW_GRID| the grid is blocked:                                                NO
PW_GRID| Cutoff [a.u.]                                                      0.0
PW_GRID| spherical cutoff:                                                   NO
PW_GRID| Bounds 1     -1       0    Points:   2
PW_GRID| Bounds 2     -1       0    Points:   2
PW_GRID| Bounds 3     -1       0    Points:   2
PW_GRID| Volume element (a.u.^3)  3074.      Volume (a.u.^3)  24591.0651
PW_GRID| Grid span                                                    FULLSPACE
PW_GRID|                    Distribution                 Average       Max  Min
PW_GRID| G-Vectors                                           0.3         2    0
PW_GRID| G-Rays                                              0.2         1    0
PW_GRID| Real Space Points                                   0.3         2    0

POISSON| Solver                                                        PERIODIC
POISSON| Periodicity                                                        XYZ

RS_GRID| Information for grid number:                                         1
RS_GRID| Bounds 1    -60      59    Points: 120
RS_GRID| Bounds 2    -60      59    Points: 120
RS_GRID| Bounds 3   -162     161    Points: 324
RS_GRID| Real space distribution over                                 25 groups
RS_GRID| Real space distribution along direction                              3
RS_GRID| Border size                                                         31
RS_GRID|       Distribution                      Average         Max       Min
RS_GRID| Planes                                     75.0          75        74
RS_GRID| Information for grid number:                                         2
RS_GRID| Bounds 1    -36      35    Points:  72
RS_GRID| Bounds 2    -36      35    Points:  72
RS_GRID| Bounds 3    -96      95    Points: 192
RS_GRID| Real space distribution over                                 25 groups
RS_GRID| Real space distribution along direction                              3
RS_GRID| Border size                                                         31
RS_GRID|       Distribution                      Average         Max       Min
RS_GRID| Planes                                     69.7          70        69
RS_GRID| Information for grid number:                                         3
RS_GRID| Bounds 1    -20      19    Points:  40
RS_GRID| Bounds 2    -20      19    Points:  40
RS_GRID| Bounds 3    -54      53    Points: 108
RS_GRID| Real space fully replicated
RS_GRID| Group size                                                           1
RS_GRID| Information for grid number:                                         4
RS_GRID| Bounds 1    -12      11    Points:  24
RS_GRID| Bounds 2    -12      11    Points:  24
RS_GRID| Bounds 3    -32      31    Points:  64
RS_GRID| Real space fully replicated
RS_GRID| Group size                                                           1
RS_GRID| Information for grid number:                                         5
RS_GRID| Bounds 1     -7       7    Points:  15
RS_GRID| Bounds 2     -7       7    Points:  15
RS_GRID| Bounds 3    -18      17    Points:  36
RS_GRID| Real space fully replicated
RS_GRID| Group size                                                           1
RS_GRID| Information for grid number:                                         6
RS_GRID| Bounds 1     -4       3    Points:   8
RS_GRID| Bounds 2     -4       3    Points:   8
RS_GRID| Bounds 3    -12      11    Points:  24
RS_GRID| Real space fully replicated
RS_GRID| Group size                                                           1
RS_GRID| Information for grid number:                                         7
RS_GRID| Bounds 1     -3       2    Points:   6
RS_GRID| Bounds 2     -3       2    Points:   6
RS_GRID| Bounds 3     -6       5    Points:  12
RS_GRID| Real space fully replicated
RS_GRID| Group size                                                           1
RS_GRID| Information for grid number:                                         8
RS_GRID| Bounds 1     -2       1    Points:   4
RS_GRID| Bounds 2     -2       1    Points:   4
RS_GRID| Bounds 3     -4       3    Points:   8
RS_GRID| Real space fully replicated
RS_GRID| Group size                                                           1
RS_GRID| Information for grid number:                                         9
RS_GRID| Bounds 1     -1       0    Points:   2
RS_GRID| Bounds 2     -1       0    Points:   2
RS_GRID| Bounds 3     -2       1    Points:   4
RS_GRID| Real space fully replicated
RS_GRID| Group size                                                           1
RS_GRID| Information for grid number:                                        10
RS_GRID| Bounds 1     -1       0    Points:   2
RS_GRID| Bounds 2     -1       0    Points:   2
RS_GRID| Bounds 3     -2       1    Points:   4
RS_GRID| Real space fully replicated
RS_GRID| Group size                                                           1
RS_GRID| Information for grid number:                                        11
RS_GRID| Bounds 1     -1       0    Points:   2
RS_GRID| Bounds 2     -1       0    Points:   2
RS_GRID| Bounds 3     -1       0    Points:   2
RS_GRID| Real space fully replicated
RS_GRID| Group size                                                           1
RS_GRID| Information for grid number:                                        12
RS_GRID| Bounds 1     -1       0    Points:   2
RS_GRID| Bounds 2     -1       0    Points:   2
RS_GRID| Bounds 3     -1       0    Points:   2
RS_GRID| Real space fully replicated
RS_GRID| Group size                                                           1

MD_PAR| Molecular dynamics protocol (MD input parameters)
MD_PAR| Ensemble type                                                       NVT
MD_PAR| Number of time steps                                               5000
MD_PAR| Time step [fs]                                                 1.000000
MD_PAR| Temperature [K]                                              300.000000
MD_PAR| Temperature tolerance [K]                                      0.000000
MD_PAR| Print MD information every                                    1 step(s)
MD_PAR| File type     Print frequency [steps]                        File names
MD_PAR| Coordinates            1                              CZTS-MD-pos-1.xyz
MD_PAR| Velocities             1                              CZTS-MD-vel-1.xyz
MD_PAR| Energies               1                                 CZTS-MD-1.ener
MD_PAR| Dump                   1                              CZTS-MD-1.restart

ROT| Rotational analysis information
ROT| Principal axes and moments of inertia [a.u.]
ROT|                                1                 2                 3
ROT| Eigenvalues      4.69089263784E+08 5.01385086589E+08 5.32075095218E+08
ROT|      x              -0.315079972870    0.629047271897   -0.710650505112
ROT|      y              -0.306308411732    0.641316887315    0.703482627324
ROT|      z               0.898275997452    0.439331514650   -0.009383636697
ROT| Number of rotovibrational vectors                                        6

DOF| Calculation of degrees of freedom
DOF| Number of atoms                                                         88
DOF| Number of intramolecular constraints                                     0
DOF| Number of intermolecular constraints                                     0
DOF| Invariants (translations + rotations)                                    3
DOF| Degrees of freedom                                                     261

DOF| Restraints information
DOF| Number of intramolecular restraints                                      0
DOF| Number of intermolecular restraints                                      0

THERMOSTAT| Thermostat information for PARTICLES
THERMOSTAT| Type of thermostat              Canonical Sampling/Velocity Rescaling
THERMOSTAT| CSVR time constant [fs]                                   20.000000
THERMOSTAT| Initial kinetic energy                         0.000000000000E+00
THERMOSTAT| End of thermostat information for PARTICLES

MD_VEL| Velocities initialization
MD_VEL| Initial temperature [K]                                      300.000000
MD_VEL| COM velocity            0.0000000000    0.0000000000    0.0000000000

Spin 1
Number of electrons:                                                        280
Number of occupied orbitals:                                                280
Number of molecular orbitals:                                               380

Spin 2
Number of electrons:                                                        280
Number of occupied orbitals:                                                280
Number of molecular orbitals:                                               380

Number of orbital functions:                                               1304
Number of independent orbital functions:                                   1304

Extrapolation method: initial_guess

Atomic guess: The first density matrix is obtained in terms of atomic orbitals
and electronic configurations assigned to each atomic kind

 Guess for atomic kind: Cu

 Electronic structure
    Total number of core electrons                                        18.00
    Total number of valence electrons                                     11.00
    Total number of electrons                                             29.00
    Multiplicity                                                  not specified
    S   [  2.00  2.00  2.00]  1.00
    P   [  6.00  6.00]
    D    10.00

 *******************************************************************************
                  Iteration          Convergence                     Energy [au]
 *******************************************************************************
                          1              8.71505              -20.168431725172
                          2              1.34863              -47.029846705101
                          3             0.818172              -47.613326608586
                          4             0.860535              -47.607473473694
                          5             0.864171              -47.606157957192
                          6             0.865777              -47.605667663595
                          7             0.790042              -47.626123456192
                          8             0.293056              -47.683511067524
                          9         0.747961E-01              -47.690193205003
                         10         0.349749E-01              -47.690530826048
                         11         0.229606E-01              -47.690583915231
                         12         0.124650E-01              -47.690612134272
                         13         0.255913E-03              -47.690623870666
                         14         0.167134E-03              -47.690623873493
                         15         0.181177E-04              -47.690623875571
                         16         0.367695E-05              -47.690623875595
                         17         0.307454E-06              -47.690623875596

 Energy components [Hartree]           Total Energy ::          -47.690623875596
                                        Band Energy ::           -1.733904951157
                                     Kinetic Energy ::           70.782182583171
                                   Potential Energy ::         -118.472806458767
                                      Virial (-V/T) ::            1.673765941303
                                        Core Energy ::          -88.493778474700
                                          XC Energy ::           -7.514089188792
                                     Coulomb Energy ::           48.317243787896
                       Total Pseudopotential Energy ::         -159.367716978354
                       Local Pseudopotential Energy ::         -110.592891027464
                    Nonlocal Pseudopotential Energy ::          -48.774825950891
                                        Confinement ::            0.917559204831

 Orbital energies  State     L     Occupation   Energy[a.u.]          Energy[eV]
                       1     0        1.000      -0.138206           -3.760781
                       1     2       10.000      -0.159570           -4.342117

 Total Electron Density at R=0:                                        0.000061

 Guess for atomic kind: Zn

 Electronic structure
    Total number of core electrons                                        18.00
    Total number of valence electrons                                     12.00
    Total number of electrons                                             30.00
    Multiplicity                                                  not specified
    S   [  2.00  2.00  2.00]  2.00
    P   [  6.00  6.00]
    D    10.00

 *******************************************************************************
                  Iteration          Convergence                     Energy [au]
 *******************************************************************************
                          1              13.7275              -30.289279873707
                          2              1.62573              -59.415025792876
                          3             0.550947              -60.124876236920
                          4             0.306346              -60.145956083590
                          5             0.253786              -60.148352517188
                          6             0.227042              -60.149386567743
                          7         0.186905E-01              -60.153358327574
                          8         0.992511E-03              -60.153383650094
                          9         0.138567E-03              -60.153383717664
                         10         0.208056E-04              -60.153383718973
                         11         0.278120E-05              -60.153383719003
                         12         0.242005E-07              -60.153383719004

 Energy components [Hartree]           Total Energy ::          -60.153383719004
                                        Band Energy ::           -3.832694036072
                                     Kinetic Energy ::           84.396829757587
                                   Potential Energy ::         -144.550213476591
                                      Virial (-V/T) ::            1.712744588769
                                        Core Energy ::         -110.535288578130
                                          XC Energy ::           -8.666206296984
                                     Coulomb Energy ::           59.048111156110
                       Total Pseudopotential Energy ::         -195.030087766233
                       Local Pseudopotential Energy ::         -135.966032603671
                    Nonlocal Pseudopotential Energy ::          -59.064055162562
                                        Confinement ::            0.979694305158

 Orbital energies  State     L     Occupation   Energy[a.u.]          Energy[eV]
                       1     0        2.000      -0.191113           -5.200458
                       1     2       10.000      -0.345047           -9.389199

 Total Electron Density at R=0:                                        0.000033

 Guess for atomic kind: Sn

 Electronic structure
    Total number of core electrons                                        46.00
    Total number of valence electrons                                      4.00
    Total number of electrons                                             50.00
    Multiplicity                                                  not specified
    S   [  2.00  2.00  2.00  2.00]  2.00
    P   [  6.00  6.00  6.00]  2.00
    D   [ 10.00 10.00]

 *******************************************************************************
                  Iteration          Convergence                     Energy [au]
 *******************************************************************************
                          1             0.165374               -3.246804701694
                          2         0.329403E-01               -3.274469419853
                          3         0.860984E-04               -3.275610622456
                          4         0.306030E-05               -3.275610628895
                          5         0.147215E-05               -3.275610628902
                          6         0.966370E-06               -3.275610628903

 Energy components [Hartree]           Total Energy ::           -3.275610628903
                                        Band Energy ::           -0.922750326470
                                     Kinetic Energy ::            1.140919601724
                                   Potential Energy ::           -4.416530230627
                                      Virial (-V/T) ::            3.871026691059
                                        Core Energy ::           -5.014366446743
                                          XC Energy ::           -0.879744148339
                                     Coulomb Energy ::            2.618499966179
                       Total Pseudopotential Energy ::           -6.208397630549
                       Local Pseudopotential Energy ::           -6.270819541740
                    Nonlocal Pseudopotential Energy ::            0.062421911190
                                        Confinement ::            0.531115820820

 Orbital energies  State     L     Occupation   Energy[a.u.]          Energy[eV]
                       1     0        2.000      -0.356679           -9.705721
                       1     1        2.000      -0.104696           -2.848935

 Total Electron Density at R=0:                                        0.000620

 Guess for atomic kind: S

 Electronic structure
    Total number of core electrons                                        10.00
    Total number of valence electrons                                      6.00
    Total number of electrons                                             16.00
    Multiplicity                                                  not specified
    S   [  2.00  2.00]  2.00
    P   [  6.00]  4.00

 *******************************************************************************
                  Iteration          Convergence                     Energy [au]
 *******************************************************************************
                          1         0.506127E-01               -9.947327174035
                          2         0.268535E-01               -9.947806097329
                          3         0.416209E-04               -9.948003730091
                          4         0.374461E-07               -9.948003730535

 Energy components [Hartree]           Total Energy ::           -9.948003730535
                                        Band Energy ::           -2.105895662988
                                     Kinetic Energy ::            3.753358161068
                                   Potential Energy ::          -13.701361891603
                                      Virial (-V/T) ::            3.650427511480
                                        Core Energy ::          -16.361135329452
                                          XC Energy ::           -2.059734796104
                                     Coulomb Energy ::            8.472866395021
                       Total Pseudopotential Energy ::          -20.171145086058
                       Local Pseudopotential Energy ::          -22.366310416427
                    Nonlocal Pseudopotential Energy ::            2.195165330369
                                        Confinement ::            0.566515955377

 Orbital energies  State     L     Occupation   Energy[a.u.]          Energy[eV]
                       1     0        2.000      -0.599768          -16.320517
                       1     1        4.000      -0.226590           -6.165825

 Total Electron Density at R=0:                                        0.000005

 Guess for atomic kind: H

 Electronic structure
    Total number of core electrons                                         0.00
    Total number of valence electrons                                      1.00
    Total number of electrons                                              1.00
    Multiplicity                                                  not specified
    S    1.00

 *******************************************************************************
                  Iteration          Convergence                     Energy [au]
 *******************************************************************************
                          1         0.437545E-02               -0.424159432110
                          2         0.531356E-03               -0.424178438213
                          3         0.259176E-06               -0.424178722480

 Energy components [Hartree]           Total Energy ::           -0.424178722480
                                        Band Energy ::           -0.199015229016
                                     Kinetic Energy ::            0.464007633961
                                   Potential Energy ::           -0.888186356441
                                      Virial (-V/T) ::            1.914163240935
                                        Core Energy ::           -0.479160803564
                                          XC Energy ::           -0.244352904054
                                     Coulomb Energy ::            0.299334985138
                       Total Pseudopotential Energy ::           -0.962295316385
                       Local Pseudopotential Energy ::           -0.962295316385
                    Nonlocal Pseudopotential Energy ::            0.000000000000
                                        Confinement ::            0.191268788606

 Orbital energies  State     L     Occupation   Energy[a.u.]          Energy[eV]
                       1     0        1.000      -0.199015           -5.415480

 Total Electron Density at R=0:                                        0.242907

 Spin 1
 Re-scaling the density matrix to get the right number of electrons for spin 1
                  # Electrons       Trace(P)    Scaling factor
                          280        280.000             1.000

 Spin 2
 Re-scaling the density matrix to get the right number of electrons for spin 2
                  # Electrons       Trace(P)    Scaling factor
                          280        280.000             1.000

 SCF WAVEFUNCTION OPTIMIZATION

  Step     Update method      Time    Convergence         Total energy    Change
  ------------------------------------------------------------------------------
     1 NoMix/Diag. 0.10E+00    1.2     0.78549895     -1702.3095331670 -1.70E+03
     2 Broy./Diag. 0.10E+00    1.8     0.06882245     -1709.3065349924 -7.00E+00
     3 Broy./Diag. 0.10E+00    1.7     0.40432856     -1701.5591901427  7.75E+00
     4 Broy./Diag. 0.10E+00    1.8     0.37993102     -1700.8709585360  6.88E-01
     5 Broy./Diag. 0.10E+00    1.8     0.06890458     -1699.1408225940  1.73E+00
     6 Broy./Diag. 0.10E+00    1.8     0.02185138     -1700.0451372473 -9.04E-01
     7 Broy./Diag. 0.10E+00    1.8     0.06774965     -1702.2297052003 -2.18E+00
     8 Broy./Diag. 0.10E+00    1.8     0.02975156     -1702.8508067633 -6.21E-01
     9 Broy./Diag. 0.10E+00    1.8     0.01469848     -1702.7054659184  1.45E-01
    10 Broy./Diag. 0.10E+00    1.8     0.03175897     -1701.9124674330  7.93E-01
    11 Broy./Diag. 0.10E+00    1.8     0.04207349     -1701.6095732275  3.03E-01
    12 Broy./Diag. 0.10E+00    1.8     0.01277258     -1701.2413144985  3.68E-01
    13 Broy./Diag. 0.10E+00    1.8     0.00327871     -1701.2025132217  3.88E-02
    14 Broy./Diag. 0.10E+00    1.8     0.00187280     -1701.1603919233  4.21E-02
    15 Broy./Diag. 0.10E+00    1.8     0.00139846     -1701.1769411992 -1.65E-02
    16 Broy./Diag. 0.10E+00    1.8     0.00076752     -1701.2099084634 -3.30E-02
    17 Broy./Diag. 0.10E+00    1.8     0.00023230     -1701.2267097534 -1.68E-02
    18 Broy./Diag. 0.10E+00    1.8     0.00111222     -1701.2427340512 -1.60E-02
    19 Broy./Diag. 0.10E+00    1.8     0.00092920     -1701.2485308283 -5.80E-03
    20 Broy./Diag. 0.10E+00    1.8     0.00035639     -1701.2458426902  2.69E-03
    21 Broy./Diag. 0.10E+00    1.8     0.00007516     -1701.2493928069 -3.55E-03
    22 Broy./Diag. 0.10E+00    1.8     0.00012113     -1701.2507852485 -1.39E-03
    23 Broy./Diag. 0.10E+00    1.8     0.00004257     -1701.2497693214  1.02E-03
    24 Broy./Diag. 0.10E+00    1.8     0.00003008     -1701.2486644136  1.10E-03
    25 Broy./Diag. 0.10E+00    1.8     0.00002151     -1701.2478299111  8.35E-04
    26 Broy./Diag. 0.10E+00    1.8     0.00003361     -1701.2475369476  2.93E-04
    27 Broy./Diag. 0.10E+00    1.8     0.00005210     -1701.2473106013  2.26E-04
    28 Broy./Diag. 0.10E+00    1.8     0.00002430     -1701.2476124545 -3.02E-04
    29 Broy./Diag. 0.10E+00    1.8     0.00000405     -1701.2478691088 -2.57E-04
    30 Broy./Diag. 0.10E+00    1.8     0.00000292     -1701.2481869275 -3.18E-04
    31 Broy./Diag. 0.10E+00    1.8     0.00000365     -1701.2483626340 -1.76E-04
    32 Broy./Diag. 0.10E+00    1.8     0.00000653     -1701.2483770498 -1.44E-05
    33 Broy./Diag. 0.10E+00    1.8     0.00000729     -1701.2483903845 -1.33E-05
    34 Broy./Diag. 0.10E+00    1.8     0.00000569     -1701.2483063446  8.40E-05
    35 Broy./Diag. 0.10E+00    1.8     0.00000131     -1701.2482345495  7.18E-05
    36 Broy./Diag. 0.10E+00    1.8     0.00000107     -1701.2481835741  5.10E-05
    37 Broy./Diag. 0.10E+00    1.8     0.00000051     -1701.2481772955  6.28E-06

  *** SCF run converged in    37 steps ***

  Electronic density on regular grids:       -560.0000000625       -0.0000000625
  Core density on regular grids:              560.0000000001        0.0000000001
  Total charge density on r-space grids:       -0.0000000624
  Total charge density g-space grids:          -0.0000000624

  Overlap energy of the core charge distribution:       0.00000262183168
  Self energy of the core charge distribution:      -2717.07931624724824
  Core Hamiltonian energy:                            792.57312417049218
  Hartree energy:                                     525.23389557748078
  Exchange-correlation energy:                       -301.34992776655196
  Dispersion energy:                                   -0.62595565153091

  Total energy:                                     -1701.24817729552638

  Integrated absolute spin density  :                   0.0000000000
  Ideal and single determinant S**2 :                   0.000000      0.000000

 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]:        -1701.248221125330701

 MD_INI| MD initialization
 MD_INI| Potential energy [hartree]                       -0.170124822113E+04
 MD_INI| Kinetic energy [hartree]                          0.123980820690E+00
 MD_INI| Temperature [K]                                             300.000000
 MD_INI| Cell volume [bohr^3]                              2.459106507317E+04
 MD_INI| Cell volume [ang^3]                               3.644019834878E+03
 MD_INI| Cell lengths [bohr]      2.05382988E+01  2.05382988E+01  5.82972950E+01
 MD_INI| Cell lengths [ang]       1.08683996E+01  1.08683996E+01  3.08495998E+01
 MD_INI| Cell angles [deg]        9.00000000E+01  9.00000000E+01  9.00000000E+01

 Spin 1
 Number of electrons:                                                       280
 Number of occupied orbitals:                                               280
 Number of molecular orbitals:                                              380

 Spin 2
 Number of electrons:                                                       280
 Number of occupied orbitals:                                               280
 Number of molecular orbitals:                                              380

 Number of orbital functions:                                              1304
 Number of independent orbital functions:                                  1304

 Parameters for the always stable predictor-corrector (ASPC) method:
 ASPC order: 0
 B(1) =   1.000000

 Parameters for the always stable predictor-corrector (ASPC) method:
 ASPC order: 0
 B(1) =   1.000000

 Extrapolation method: ASPC

 SCF WAVEFUNCTION OPTIMIZATION

  Step     Update method      Time    Convergence         Total energy    Change
  ------------------------------------------------------------------------------
     1 Broy./Diag. 0.10E+00    1.1     0.02077288     -1701.2497644137 -1.70E+03
     2 Broy./Diag. 0.10E+00    1.7     0.03300132     -1701.2180626630  3.17E-02
     3 Broy./Diag. 0.10E+00    1.7     0.02193572     -1701.3333323309 -1.15E-01
     4 Broy./Diag. 0.10E+00    1.7     0.00431632     -1701.2471111733  8.62E-02
     5 Broy./Diag. 0.10E+00    1.7     0.00087272     -1701.2537665194 -6.66E-03
     6 Broy./Diag. 0.10E+00    1.8     0.00042188     -1701.2479098805  5.86E-03
     7 Broy./Diag. 0.10E+00    1.8     0.00041296     -1701.2501181559 -2.21E-03
     8 Broy./Diag. 0.10E+00    1.8     0.00056459     -1701.2529427161 -2.82E-03
     9 Broy./Diag. 0.10E+00    1.8     0.00037937     -1701.2504602870  2.48E-03
    10 Broy./Diag. 0.10E+00    1.8     0.00031064     -1701.2520174312 -1.56E-03
    11 Broy./Diag. 0.10E+00    1.8     0.00013918     -1701.2506191937  1.40E-03
    12 Broy./Diag. 0.10E+00    1.8     0.00007410     -1701.2510995791 -4.80E-04
    13 Broy./Diag. 0.10E+00    1.8     0.00001833     -1701.2510076935  9.19E-05
    14 Broy./Diag. 0.10E+00    1.8     0.00002681     -1701.2511621131 -1.54E-04
    15 Broy./Diag. 0.10E+00    1.8     0.00002895     -1701.2513207403 -1.59E-04
    16 Broy./Diag. 0.10E+00    1.8     0.00002570     -1701.2511737341  1.47E-04
    17 Broy./Diag. 0.10E+00    1.8     0.00001511     -1701.2512737693 -1.00E-04
    18 Broy./Diag. 0.10E+00    1.8     0.00000791     -1701.2512085157  6.53E-05
    19 Broy./Diag. 0.10E+00    1.8     0.00000376     -1701.2512168446 -8.33E-06
    20 Broy./Diag. 0.10E+00    1.8     0.00000186     -1701.2512084053  8.44E-06
    21 Broy./Diag. 0.10E+00    1.8     0.00000490     -1701.2512038896  4.52E-06
    22 Broy./Diag. 0.10E+00    1.8     0.00000354     -1701.2512267572 -2.29E-05
    23 Broy./Diag. 0.10E+00    1.8     0.00000124     -1701.2512117616  1.50E-05
    24 Broy./Diag. 0.10E+00    1.8     0.00000070     -1701.2512161163 -4.35E-06

  *** SCF run converged in    24 steps ***

  Electronic density on regular grids:       -560.0000000578       -0.0000000578
  Core density on regular grids:              560.0000000000       -0.0000000000
  Total charge density on r-space grids:       -0.0000000578
  Total charge density g-space grids:          -0.0000000578

  Overlap energy of the core charge distribution:       0.00000261619613
  Self energy of the core charge distribution:      -2717.07931624724824
  Core Hamiltonian energy:                            792.56523775837650
  Hartree energy:                                     525.23708213846521
  Exchange-correlation energy:                       -301.34821252662368
  Dispersion energy:                                   -0.62600985541660

  Total energy:                                     -1701.25121611625082

  Integrated absolute spin density  :                   0.0000000000
  Ideal and single determinant S**2 :                   0.000000     -0.000000

 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]:        -1701.251215564618633

 MD_VEL| Centre of mass motion (COM)
 MD_VEL| VCOM [a.u.]        0.0000009010    0.0000008531    0.0000001450

 MD| ***************************************************************************
 MD| Step number                                                              1
 MD| Time [fs]                                                         1.000000
 MD| Conserved quantity [hartree]                          -0.170112421912E+04
 MD| ---------------------------------------------------------------------------
 MD|                                          Instantaneous             Averages
 MD| CPU time per MD step [s]                    114.772841           114.772841
 MD| Energy drift per atom [K]          0.760343580001E-01   0.000000000000E+00
 MD| Potential energy [hartree]        -0.170125121556E+04  -0.170125121556E+04
 MD| Kinetic energy [hartree]           0.128940337458E+00   0.128940337458E+00
 MD| Temperature [K]                            312.000687           312.000687
 MD| ***************************************************************************
 MD| Estimated peak process memory after this step [MiB]                    564

 Spin 1
 Number of electrons:                                                       280
 Number of occupied orbitals:                                               280
 Number of molecular orbitals:                                              380

 Spin 2
 Number of electrons:                                                       280
 Number of occupied orbitals:                                               280
 Number of molecular orbitals:                                              380

 Number of orbital functions:                                              1304
 Number of independent orbital functions:                                  1304

 Parameters for the always stable predictor-corrector (ASPC) method:
 ASPC order: 0
 B(1) =   2.000000
 B(2) =  -1.000000

 Parameters for the always stable predictor-corrector (ASPC) method:
 ASPC order: 0
 B(1) =   2.000000
 B(2) =  -1.000000

 Extrapolation method: ASPC

 SCF WAVEFUNCTION OPTIMIZATION

  Step     Update method      Time    Convergence         Total energy    Change
  ------------------------------------------------------------------------------
     1 Broy./Diag. 0.10E+00    1.1     0.02861778     -1701.2553186555 -1.70E+03
     2 Broy./Diag. 0.10E+00    1.7     0.04104865     -1701.2145691912  4.07E-02
     3 Broy./Diag. 0.10E+00    1.7     0.02560117     -1701.3904249265 -1.76E-01
     4 Broy./Diag. 0.10E+00    1.7     0.00567314     -1701.2724050144  1.18E-01
     5 Broy./Diag. 0.10E+00    1.7     0.00137023     -1701.2791544024 -6.75E-03
     6 Broy./Diag. 0.10E+00    1.8     0.00079017     -1701.2665937935  1.26E-02
     7 Broy./Diag. 0.10E+00    1.8     0.00063993     -1701.2554547595  1.11E-02
     8 Broy./Diag. 0.10E+00    1.8     0.00020390     -1701.2537820006  1.67E-03
     9 Broy./Diag. 0.10E+00    1.7     0.00032003     -1701.2529074144  8.75E-04
    10 Broy./Diag. 0.10E+00    1.8     0.00017293     -1701.2555580635 -2.65E-03
    11 Broy./Diag. 0.10E+00    1.7     0.00015535     -1701.2551355345  4.23E-04
    12 Broy./Diag. 0.10E+00    1.8     0.00009157     -1701.2558285171 -6.93E-04
    13 Broy./Diag. 0.10E+00    1.7     0.00001512     -1701.2556529195  1.76E-04
    14 Broy./Diag. 0.10E+00    1.7     0.00002537     -1701.2556840767 -3.12E-05
    15 Broy./Diag. 0.10E+00    1.8     0.00001505     -1701.2558998488 -2.16E-04
    16 Broy./Diag. 0.10E+00    1.8     0.00001523     -1701.2559351639 -3.53E-05
    17 Broy./Diag. 0.10E+00    1.8     0.00000790     -1701.2560594864 -1.24E-04
    18 Broy./Diag. 0.10E+00    1.8     0.00000717     -1701.2560378475  2.16E-05
    19 Broy./Diag. 0.10E+00    1.8     0.00000290     -1701.2560487072 -1.09E-05
    20 Broy./Diag. 0.10E+00    1.8     0.00000257     -1701.2560192552  2.95E-05
    21 Broy./Diag. 0.10E+00    1.8     0.00000244     -1701.2559958163  2.34E-05
    22 Broy./Diag. 0.10E+00    1.8     0.00000206     -1701.2560049364 -9.12E-06
    23 Broy./Diag. 0.10E+00    1.8     0.00000125     -1701.2559960254  8.91E-06
    24 Broy./Diag. 0.10E+00    1.8     0.00000074     -1701.2560040981 -8.07E-06

  *** SCF run converged in    24 steps ***

  Electronic density on regular grids:       -560.0000000546       -0.0000000546
  Core density on regular grids:              560.0000000000       -0.0000000000
  Total charge density on r-space grids:       -0.0000000547
  Total charge density g-space grids:          -0.0000000547

  Overlap energy of the core charge distribution:       0.00000246468309
  Self energy of the core charge distribution:      -2717.07931624724824
  Core Hamiltonian energy:                            792.48840465865112
  Hartree energy:                                     525.28386918220508
  Exchange-correlation energy:                       -301.32294899112730
  Dispersion energy:                                   -0.62601516528250

  Total energy:                                     -1701.25600409811886

  Integrated absolute spin density  :                   0.0000000000
  Ideal and single determinant S**2 :                   0.000000     -0.000000

 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]:        -1701.256008265261698

 MD_VEL| Centre of mass motion (COM)
 MD_VEL| VCOM [a.u.]        0.0000017113    0.0000016933    0.0000002332

 MD| ***************************************************************************
 MD| Step number                                                              2
 MD| Time [fs]                                                         2.000000
 MD| Conserved quantity [hartree]                          -0.170112415719E+04
 MD| ---------------------------------------------------------------------------
 MD|                                          Instantaneous             Averages
 MD| CPU time per MD step [s]                     45.566507            80.169674
 MD| Energy drift per atom [K]          0.298241110360E+00   0.149120555180E+00
 MD| Potential energy [hartree]        -0.170125600827E+04  -0.170125361191E+04
 MD| Kinetic energy [hartree]           0.133079699737E+00   0.131010018597E+00
 MD| Temperature [K]                            322.016822           317.008755
 MD| ***************************************************************************
 MD| Estimated peak process memory after this step [MiB]                    565

 Spin 1
 Number of electrons:                                                       280
 Number of occupied orbitals:                                               280
 Number of molecular orbitals:                                              380

 Spin 2
 Number of electrons:                                                       280
 Number of occupied orbitals:                                               280
 Number of molecular orbitals:                                              380

 Number of orbital functions:                                              1304
 Number of independent orbital functions:                                  1304

 Parameters for the always stable predictor-corrector (ASPC) method:
 ASPC order: 1
 B(1) =   2.500000
 B(2) =  -2.000000
 B(3) =   0.500000

 Parameters for the always stable predictor-corrector (ASPC) method:
 ASPC order: 1
 B(1) =   2.500000
 B(2) =  -2.000000
 B(3) =   0.500000

 Extrapolation method: ASPC

 SCF WAVEFUNCTION OPTIMIZATION

  Step     Update method      Time    Convergence         Total energy    Change
  ------------------------------------------------------------------------------
     1 Broy./Diag. 0.10E+00    1.1     0.02846242     -1701.2625425861 -1.70E+03
     2 Broy./Diag. 0.10E+00    1.7     0.04036315     -1701.2197180913  4.28E-02
     3 Broy./Diag. 0.10E+00    1.7     0.02417861     -1701.4021000101 -1.82E-01
     4 Broy./Diag. 0.10E+00    1.7     0.00497126     -1701.2753544577  1.27E-01
     5 Broy./Diag. 0.10E+00    1.7     0.00140386     -1701.2773499418 -2.00E-03
     6 Broy./Diag. 0.10E+00    1.7     0.00027181     -1701.2613366234  1.60E-02
     7 Broy./Diag. 0.10E+00    1.7     0.00077578     -1701.2588440090  2.49E-03
     8 Broy./Diag. 0.10E+00    1.8     0.00017014     -1701.2606647516 -1.82E-03
     9 Broy./Diag. 0.10E+00    1.7     0.00018878     -1701.2612490775 -5.84E-04
    10 Broy./Diag. 0.10E+00    1.8     0.00031152     -1701.2632518502 -2.00E-03
    11 Broy./Diag. 0.10E+00    1.8     0.00021651     -1701.2617797945  1.47E-03
    12 Broy./Diag. 0.10E+00    1.7     0.00004022     -1701.2630852299 -1.31E-03
    13 Broy./Diag. 0.10E+00    1.7     0.00002016     -1701.2630727389  1.25E-05
    14 Broy./Diag. 0.10E+00    1.8     0.00003250     -1701.2631774136 -1.05E-04
    15 Broy./Diag. 0.10E+00    1.8     0.00000979     -1701.2633304904 -1.53E-04
    16 Broy./Diag. 0.10E+00    1.8     0.00000999     -1701.2632703552  6.01E-05
    17 Broy./Diag. 0.10E+00    1.7     0.00001132     -1701.2632910518 -2.07E-05
    18 Broy./Diag. 0.10E+00    1.7     0.00000752     -1701.2632234552  6.76E-05
    19 Broy./Diag. 0.10E+00    1.7     0.00000362     -1701.2632637131 -4.03E-05
    20 Broy./Diag. 0.10E+00    1.8     0.00000088     -1701.2632651900 -1.48E-06

  *** SCF run converged in    20 steps ***

  Electronic density on regular grids:       -560.0000000526       -0.0000000526
  Core density on regular grids:              560.0000000000       -0.0000000000
  Total charge density on r-space grids:       -0.0000000526
  Total charge density g-space grids:          -0.0000000526

  Overlap energy of the core charge distribution:       0.00000211718955
  Self energy of the core charge distribution:      -2717.07931624724824
  Core Hamiltonian energy:                            792.35804249511102
  Hartree energy:                                     525.36384444285864
  Exchange-correlation energy:                       -301.27984975325120
  Dispersion energy:                                   -0.62598824464765

  Total energy:                                     -1701.26326518998803

  Integrated absolute spin density  :                   0.0000000000
  Ideal and single determinant S**2 :                   0.000000     -0.000000

 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]:        -1701.263253547503837

 MD_VEL| Centre of mass motion (COM)
 MD_VEL| VCOM [a.u.]        0.0000024250    0.0000025439    0.0000002498

 MD| ***************************************************************************
 MD| Step number                                                              3
 MD| Time [fs]                                                         3.000000
 MD| Conserved quantity [hartree]                          -0.170112417077E+04
 MD| ---------------------------------------------------------------------------
 MD|                                          Instantaneous             Averages
 MD| CPU time per MD step [s]                     38.487620            66.275656
 MD| Energy drift per atom [K]          0.249518593021E+00   0.182586567793E+00
 MD| Potential energy [hartree]        -0.170126325355E+04  -0.170125682579E+04
 MD| Kinetic energy [hartree]           0.141437146495E+00   0.134485727896E+00
 MD| Temperature [K]                            342.239580           325.419030
 MD| ***************************************************************************
 MD| Estimated peak process memory after this step [MiB]                    566

 Spin 1
 Number of electrons:                                                       280
 Number of occupied orbitals:                                               280
 Number of molecular orbitals:                                              380

 Spin 2
 Number of electrons:                                                       280
 Number of occupied orbitals:                                               280
 Number of molecular orbitals:                                              380

 Number of orbital functions:                                              1304
 Number of independent orbital functions:                                  1304

 Parameters for the always stable predictor-corrector (ASPC) method:
 ASPC order: 2
 B(1) =   2.800000
 B(2) =  -2.800000
 B(3) =   1.200000
 B(4) =  -0.200000

 Parameters for the always stable predictor-corrector (ASPC) method:
 ASPC order: 2
 B(1) =   2.800000
 B(2) =  -2.800000
 B(3) =   1.200000
 B(4) =  -0.200000

 Extrapolation method: ASPC

 SCF WAVEFUNCTION OPTIMIZATION

  Step     Update method      Time    Convergence         Total energy    Change
  ------------------------------------------------------------------------------
     1 Broy./Diag. 0.10E+00    1.1     0.02815821     -1701.2721898006 -1.70E+03
     2 Broy./Diag. 0.10E+00    1.7     0.03854127     -1701.2223502071  4.98E-02
     3 Broy./Diag. 0.10E+00    1.7     0.02218247     -1701.4162917757 -1.94E-01
     4 Broy./Diag. 0.10E+00    1.7     0.00466294     -1701.2834259602  1.33E-01
     5 Broy./Diag. 0.10E+00    1.7     0.00146403     -1701.2838667146 -4.41E-04
     6 Broy./Diag. 0.10E+00    1.7     0.00037527     -1701.2650298446  1.88E-02
     7 Broy./Diag. 0.10E+00    1.7     0.00067033     -1701.2667311627 -1.70E-03
     8 Broy./Diag. 0.10E+00    1.7     0.00018162     -1701.2701755595 -3.44E-03
     9 Broy./Diag. 0.10E+00    1.7     0.00019672     -1701.2713474898 -1.17E-03
    10 Broy./Diag. 0.10E+00    1.7     0.00030035     -1701.2732639073 -1.92E-03
    11 Broy./Diag. 0.10E+00    1.7     0.00017738     -1701.2716334753  1.63E-03
    12 Broy./Diag. 0.10E+00    1.8     0.00004438     -1701.2729146783 -1.28E-03
    13 Broy./Diag. 0.10E+00    1.8     0.00002870     -1701.2728286193  8.61E-05
    14 Broy./Diag. 0.10E+00    1.8     0.00003089     -1701.2730092560 -1.81E-04
    15 Broy./Diag. 0.10E+00    1.7     0.00001116     -1701.2731545245 -1.45E-04
    16 Broy./Diag. 0.10E+00    1.8     0.00000696     -1701.2730601036  9.44E-05
    17 Broy./Diag. 0.10E+00    1.8     0.00000994     -1701.2730513741  8.73E-06
    18 Broy./Diag. 0.10E+00    1.7     0.00000641     -1701.2729728770  7.85E-05
    19 Broy./Diag. 0.10E+00    1.8     0.00000381     -1701.2730295690 -5.67E-05
    20 Broy./Diag. 0.10E+00    1.8     0.00000045     -1701.2730286496  9.19E-07

  *** SCF run converged in    20 steps ***

  Electronic density on regular grids:       -560.0000000510       -0.0000000510
  Core density on regular grids:              560.0000000000       -0.0000000000
  Total charge density on r-space grids:       -0.0000000510
  Total charge density g-space grids:          -0.0000000510

  Overlap energy of the core charge distribution:       0.00000167727593
  Self energy of the core charge distribution:      -2717.07931624724824
  Core Hamiltonian energy:                            792.19673406128095
  Hartree energy:                                     525.46249147567312
  Exchange-correlation energy:                       -301.22698425908322
  Dispersion energy:                                   -0.62595535752714

  Total energy:                                     -1701.27302864962826

  Integrated absolute spin density  :                   0.0000000000
  Ideal and single determinant S**2 :                   0.000000     -0.000000

 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]:        -1701.273016228355345

 MD_VEL| Centre of mass motion (COM)
 MD_VEL| VCOM [a.u.]        0.0000029960    0.0000033542    0.0000001786

 MD| ***************************************************************************
 MD| Step number                                                              4
 MD| Time [fs]                                                         4.000000
 MD| Conserved quantity [hartree]                          -0.170112421780E+04
 MD| ---------------------------------------------------------------------------
 MD|                                          Instantaneous             Averages
 MD| CPU time per MD step [s]                     38.508920            59.333972
 MD| Energy drift per atom [K]          0.807535449641E-01   0.157128312086E+00
 MD| Potential energy [hartree]        -0.170127301623E+04  -0.170126087340E+04
 MD| Kinetic energy [hartree]           0.151306176821E+00   0.138690840128E+00
 MD| Temperature [K]                            366.119959           335.594262
 MD| ***************************************************************************
 MD| Estimated peak process memory after this step [MiB]                    567

 Spin 1
 Number of electrons:                                                       280
 Number of occupied orbitals:                                               280
 Number of molecular orbitals:                                              380

 Spin 2
 Number of electrons:                                                       280
 Number of occupied orbitals:                                               280
 Number of molecular orbitals:                                              380

 Number of orbital functions:                                              1304
 Number of independent orbital functions:                                  1304

 Parameters for the always stable predictor-corrector (ASPC) method:
 ASPC order: 3
 B(1) =   3.000000
 B(2) =  -3.428571
 B(3) =   1.928571
 B(4) =  -0.571429
 B(5) =   0.071429

 Parameters for the always stable predictor-corrector (ASPC) method:
 ASPC order: 3
 B(1) =   3.000000
 B(2) =  -3.428571
 B(3) =   1.928571
 B(4) =  -0.571429
 B(5) =   0.071429

 Extrapolation method: ASPC

 SCF WAVEFUNCTION OPTIMIZATION

  Step     Update method      Time    Convergence         Total energy    Change
  ------------------------------------------------------------------------------
     1 Broy./Diag. 0.10E+00    1.1     0.02787392     -1701.2835240082 -1.70E+03
     2 Broy./Diag. 0.10E+00    1.7     0.03632614     -1701.2169123431  6.66E-02
     3 Broy./Diag. 0.10E+00    1.7     0.02199687     -1701.4226304016 -2.06E-01
     4 Broy./Diag. 0.10E+00    1.7     0.00470998     -1701.2868991767  1.36E-01
     5 Broy./Diag. 0.10E+00    1.7     0.00179856     -1701.2907429081 -3.84E-03
     6 Broy./Diag. 0.10E+00    1.7     0.00064870     -1701.2724600726  1.83E-02
     7 Broy./Diag. 0.10E+00    1.7     0.00052611     -1701.2780385279 -5.58E-03
     8 Broy./Diag. 0.10E+00    1.7     0.00021094     -1701.2821649068 -4.13E-03
     9 Broy./Diag. 0.10E+00    1.7     0.00016742     -1701.2832642515 -1.10E-03
    10 Broy./Diag. 0.10E+00    1.7     0.00026231     -1701.2846121852 -1.35E-03
    11 Broy./Diag. 0.10E+00    1.7     0.00015611     -1701.2830607683  1.55E-03
    12 Broy./Diag. 0.10E+00    1.7     0.00005432     -1701.2844562013 -1.40E-03
    13 Broy./Diag. 0.10E+00    1.7     0.00002773     -1701.2843111420  1.45E-04
    14 Broy./Diag. 0.10E+00    1.7     0.00002627     -1701.2845561529 -2.45E-04
    15 Broy./Diag. 0.10E+00    1.8     0.00001158     -1701.2846443092 -8.82E-05
    16 Broy./Diag. 0.10E+00    1.7     0.00000542     -1701.2845186935  1.26E-04
    17 Broy./Diag. 0.10E+00    1.7     0.00000876     -1701.2844991279  1.96E-05
    18 Broy./Diag. 0.10E+00    1.8     0.00000507     -1701.2844371536  6.20E-05
    19 Broy./Diag. 0.10E+00    1.8     0.00000281     -1701.2844927926 -5.56E-05
    20 Broy./Diag. 0.10E+00    1.8     0.00000057     -1701.2844869838  5.81E-06

  *** SCF run converged in    20 steps ***

  Electronic density on regular grids:       -560.0000000486       -0.0000000486
  Core density on regular grids:              559.9999999999       -0.0000000001
  Total charge density on r-space grids:       -0.0000000486
  Total charge density g-space grids:          -0.0000000486

  Overlap energy of the core charge distribution:       0.00000129550363
  Self energy of the core charge distribution:      -2717.07931624724824
  Core Hamiltonian energy:                            792.04042449244184
  Hartree energy:                                     525.55719811153631
  Exchange-correlation energy:                       -301.17684429113496
  Dispersion energy:                                   -0.62595034487410

  Total energy:                                     -1701.28448698377542

  Integrated absolute spin density  :                   0.0000000000
  Ideal and single determinant S**2 :                   0.000000     -0.000000

 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]:        -1701.284477593504334

 MD_VEL| Centre of mass motion (COM)
 MD_VEL| VCOM [a.u.]        0.0000034979    0.0000041735    0.0000000241

 MD| ***************************************************************************
 MD| Step number                                                              5
 MD| Time [fs]                                                         5.000000
 MD| Conserved quantity [hartree]                          -0.170112422597E+04
 MD| ---------------------------------------------------------------------------
 MD|                                          Instantaneous             Averages
 MD| CPU time per MD step [s]                     38.489549            55.165087
 MD| Energy drift per atom [K]          0.514530093623E-01   0.135993251541E+00
 MD| Potential energy [hartree]        -0.170128447759E+04  -0.170126559424E+04
 MD| Kinetic energy [hartree]           0.166330402227E+00   0.144218752547E+00
 MD| Temperature [K]                            402.474515           348.970313
 MD| ***************************************************************************
 MD| Estimated peak process memory after this step [MiB]                    568

 Spin 1
 Number of electrons:                                                       280
 Number of occupied orbitals:                                               280
 Number of molecular orbitals:                                              380

 Spin 2
 Number of electrons:                                                       280
 Number of occupied orbitals:                                               280
 Number of molecular orbitals:                                              380

 Number of orbital functions:                                              1304
 Number of independent orbital functions:                                  1304

 Parameters for the always stable predictor-corrector (ASPC) method:
 ASPC order: 3
 B(1)
= 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.02681421 -1701.2964714917 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.03412664 -1701.2137293650 8.27E-02 3 Broy./Diag. 0.10E+00 1.7 0.02165655 -1701.4249129328 -2.11E-01 4 Broy./Diag. 0.10E+00 1.7 0.00464823 -1701.2939089836 1.31E-01 5 Broy./Diag. 0.10E+00 1.7 0.00196230 -1701.3018903820 -7.98E-03 6 Broy./Diag. 0.10E+00 1.7 0.00069784 -1701.2853171484 1.66E-02 7 Broy./Diag. 0.10E+00 1.8 0.00051062 -1701.2915334194 -6.22E-03 8 Broy./Diag. 0.10E+00 1.7 0.00025732 -1701.2960056476 -4.47E-03 9 Broy./Diag. 0.10E+00 1.8 0.00011572 -1701.2965838295 -5.78E-04 10 Broy./Diag. 0.10E+00 1.8 0.00021303 -1701.2975324514 -9.49E-04 11 Broy./Diag. 0.10E+00 1.7 0.00012121 -1701.2964951353 1.04E-03 12 Broy./Diag. 0.10E+00 1.7 0.00003920 -1701.2976087425 -1.11E-03 13 Broy./Diag. 0.10E+00 1.8 0.00003054 -1701.2974813375 1.27E-04 14 Broy./Diag. 0.10E+00 1.8 0.00001425 -1701.2977143065 -2.33E-04 15 Broy./Diag. 0.10E+00 1.7 0.00001089 -1701.2976575634 5.67E-05 16 Broy./Diag. 0.10E+00 1.8 0.00000721 -1701.2975391780 1.18E-04 17 Broy./Diag. 0.10E+00 1.8 0.00000682 -1701.2975583489 -1.92E-05 18 Broy./Diag. 0.10E+00 1.8 0.00000397 -1701.2975234641 3.49E-05 19 Broy./Diag. 0.10E+00 1.8 0.00000131 -1701.2975558770 -3.24E-05 20 Broy./Diag. 
0.10E+00 1.8 0.00000090 -1701.2975457311 1.01E-05 *** SCF run converged in 20 steps *** Electronic density on regular grids: -560.0000000444 -0.0000000444 Core density on regular grids: 559.9999999998 -0.0000000002 Total charge density on r-space grids: -0.0000000446 Total charge density g-space grids: -0.0000000446 Overlap energy of the core charge distribution: 0.00000104494429 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 791.92006026611239 Hartree energy: 525.62806797062831 Exchange-correlation energy: -301.14035317612428 Dispersion energy: -0.62600558941201 Total energy: -1701.29754573109949 Integrated absolute spin density : 0.0000000000 Ideal and single determinant S**2 : 0.000000 -0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.297543774200449 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 0.0000038617 0.0000048933 -0.0000001941 MD| *************************************************************************** MD| Step number 6 MD| Time [fs] 6.000000 MD| Conserved quantity [hartree] -0.170112420830E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 38.519553 52.390832 MD| Energy drift per atom [K] 0.114851298185E+00 0.132469592649E+00 MD| Potential energy [hartree] -0.170129754377E+04 -0.170127091916E+04 MD| Kinetic energy [hartree] 0.178860220755E+00 0.149992330582E+00 MD| Temperature [K] 432.793281 362.940807 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) 
= 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.02603846 -1701.3115385283 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.03463311 -1701.2089779591 1.03E-01 3 Broy./Diag. 0.10E+00 1.7 0.02131220 -1701.4277190541 -2.19E-01 4 Broy./Diag. 0.10E+00 1.7 0.00475515 -1701.3030468791 1.25E-01 5 Broy./Diag. 0.10E+00 1.7 0.00208715 -1701.3172627074 -1.42E-02 6 Broy./Diag. 0.10E+00 1.7 0.00059113 -1701.3024758256 1.48E-02 7 Broy./Diag. 0.10E+00 1.7 0.00057321 -1701.3069080580 -4.43E-03 8 Broy./Diag. 0.10E+00 1.8 0.00028766 -1701.3114013377 -4.49E-03 9 Broy./Diag. 0.10E+00 1.8 0.00009725 -1701.3118954118 -4.94E-04 10 Broy./Diag. 0.10E+00 1.8 0.00019519 -1701.3127796276 -8.84E-04 11 Broy./Diag. 0.10E+00 1.8 0.00009218 -1701.3119965754 7.83E-04 12 Broy./Diag. 0.10E+00 1.8 0.00002108 -1701.3128125558 -8.16E-04 13 Broy./Diag. 0.10E+00 1.8 0.00003696 -1701.3128073971 5.16E-06 14 Broy./Diag. 0.10E+00 1.8 0.00001012 -1701.3130154364 -2.08E-04 15 Broy./Diag. 0.10E+00 1.8 0.00000533 -1701.3128728700 1.43E-04 16 Broy./Diag. 0.10E+00 1.8 0.00000451 -1701.3128068399 6.60E-05 17 Broy./Diag. 0.10E+00 1.8 0.00000343 -1701.3128271170 -2.03E-05 18 Broy./Diag. 0.10E+00 1.8 0.00000399 -1701.3128169723 1.01E-05 19 Broy./Diag. 0.10E+00 1.8 0.00000257 -1701.3128363743 -1.94E-05 20 Broy./Diag. 
0.10E+00 1.8 0.00000080 -1701.3128095898 2.68E-05 *** SCF run converged in 20 steps *** Electronic density on regular grids: -560.0000000463 -0.0000000463 Core density on regular grids: 559.9999999998 -0.0000000002 Total charge density on r-space grids: -0.0000000465 Total charge density g-space grids: -0.0000000465 Overlap energy of the core charge distribution: 0.00000093542881 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 791.85305623593763 Hartree energy: 525.66349275077960 Exchange-correlation energy: -301.12389488112933 Dispersion energy: -0.62614838354335 Total energy: -1701.31280958977459 Integrated absolute spin density : 0.0000000000 Ideal and single determinant S**2 : 0.000000 -0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.312810412330464 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 0.0000041732 0.0000055973 -0.0000004614 MD| *************************************************************************** MD| Step number 7 MD| Time [fs] 7.000000 MD| Conserved quantity [hartree] -0.170112420992E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 38.565557 50.415792 MD| Energy drift per atom [K] 0.109042715546E+00 0.129122895920E+00 MD| Potential energy [hartree] -0.170131281041E+04 -0.170127690363E+04 MD| Kinetic energy [hartree] 0.197932067641E+00 0.156840864448E+00 MD| Temperature [K] 478.941984 379.512404 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) 
= 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.02467115 -1701.3283705551 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.03432440 -1701.2064679580 1.22E-01 3 Broy./Diag. 0.10E+00 1.7 0.02061745 -1701.4281951785 -2.22E-01 4 Broy./Diag. 0.10E+00 1.7 0.00478808 -1701.3130147356 1.15E-01 5 Broy./Diag. 0.10E+00 1.7 0.00213124 -1701.3344344785 -2.14E-02 6 Broy./Diag. 0.10E+00 1.8 0.00042512 -1701.3223521571 1.21E-02 7 Broy./Diag. 0.10E+00 1.8 0.00062472 -1701.3244001077 -2.05E-03 8 Broy./Diag. 0.10E+00 1.8 0.00030545 -1701.3284777730 -4.08E-03 9 Broy./Diag. 0.10E+00 1.8 0.00009410 -1701.3288871705 -4.09E-04 10 Broy./Diag. 0.10E+00 1.8 0.00017804 -1701.3297449549 -8.58E-04 11 Broy./Diag. 0.10E+00 1.8 0.00007363 -1701.3291025552 6.42E-04 12 Broy./Diag. 0.10E+00 1.8 0.00001348 -1701.3297549741 -6.52E-04 13 Broy./Diag. 0.10E+00 1.8 0.00003716 -1701.3298060364 -5.11E-05 14 Broy./Diag. 0.10E+00 1.8 0.00001251 -1701.3299926370 -1.87E-04 15 Broy./Diag. 0.10E+00 1.8 0.00000170 -1701.3298343627 1.58E-04 16 Broy./Diag. 0.10E+00 1.8 0.00000393 -1701.3298073929 2.70E-05 17 Broy./Diag. 0.10E+00 1.8 0.00000603 -1701.3298091063 -1.71E-06 18 Broy./Diag. 0.10E+00 1.8 0.00000288 -1701.3298483349 -3.92E-05 19 Broy./Diag. 0.10E+00 1.8 0.00000216 -1701.3298192925 2.90E-05 20 Broy./Diag. 0.10E+00 1.8 0.00000260 -1701.3298181230 1.17E-06 21 Broy./Diag. 
0.10E+00 1.8 0.00000064 -1701.3297913106 2.68E-05 *** SCF run converged in 21 steps *** Electronic density on regular grids: -560.0000000448 -0.0000000448 Core density on regular grids: 559.9999999999 -0.0000000001 Total charge density on r-space grids: -0.0000000449 Total charge density g-space grids: -0.0000000449 Overlap energy of the core charge distribution: 0.00000097053079 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 791.84850189154076 Hartree energy: 525.65781537936573 Exchange-correlation energy: -301.13040530458682 Dispersion energy: -0.62638800018127 Total energy: -1701.32979131057914 Integrated absolute spin density : 0.0000000000 Ideal and single determinant S**2 : 0.000000 -0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.329805178888137 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 0.0000043253 0.0000061424 -0.0000007766 MD| *************************************************************************** MD| Step number 8 MD| Time [fs] 8.000000 MD| Conserved quantity [hartree] -0.170112423925E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 40.451413 49.170245 MD| Energy drift per atom [K] 0.378054438061E-02 0.113455101977E+00 MD| Potential energy [hartree] -0.170132980518E+04 -0.170128351632E+04 MD| Kinetic energy [hartree] 0.212898270520E+00 0.163848040207E+00 MD| Temperature [K] 515.156141 396.467871 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) 
= 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.02244622 -1701.3466072284 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.03291772 -1701.2181689900 1.28E-01 3 Broy./Diag. 0.10E+00 1.7 0.01935577 -1701.4341966437 -2.16E-01 4 Broy./Diag. 0.10E+00 1.7 0.00450923 -1701.3303980230 1.04E-01 5 Broy./Diag. 0.10E+00 1.7 0.00208131 -1701.3556616699 -2.53E-02 6 Broy./Diag. 0.10E+00 1.8 0.00058624 -1701.3453292860 1.03E-02 7 Broy./Diag. 0.10E+00 1.8 0.00068756 -1701.3436554943 1.67E-03 8 Broy./Diag. 0.10E+00 1.8 0.00032935 -1701.3470186255 -3.36E-03 9 Broy./Diag. 0.10E+00 1.8 0.00010021 -1701.3471405367 -1.22E-04 10 Broy./Diag. 0.10E+00 1.8 0.00014848 -1701.3480936494 -9.53E-04 11 Broy./Diag. 0.10E+00 1.8 0.00006749 -1701.3476000167 4.94E-04 12 Broy./Diag. 0.10E+00 1.8 0.00001514 -1701.3481020901 -5.02E-04 13 Broy./Diag. 0.10E+00 1.8 0.00003071 -1701.3482153341 -1.13E-04 14 Broy./Diag. 0.10E+00 1.8 0.00001415 -1701.3483623182 -1.47E-04 15 Broy./Diag. 0.10E+00 1.8 0.00000412 -1701.3482235411 1.39E-04 16 Broy./Diag. 0.10E+00 1.8 0.00000565 -1701.3482274714 -3.93E-06 17 Broy./Diag. 0.10E+00 1.8 0.00000698 -1701.3482120172 1.55E-05 18 Broy./Diag. 0.10E+00 1.8 0.00000197 -1701.3482579779 -4.60E-05 19 Broy./Diag. 
0.10E+00 1.8 0.00000081 -1701.3482399869 1.80E-05 *** SCF run converged in 19 steps *** Electronic density on regular grids: -560.0000000450 -0.0000000450 Core density on regular grids: 559.9999999998 -0.0000000002 Total charge density on r-space grids: -0.0000000452 Total charge density g-space grids: -0.0000000452 Overlap energy of the core charge distribution: 0.00000115547205 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 791.89793298564041 Hartree energy: 525.61661856721150 Exchange-correlation energy: -301.15675350857356 Dispersion energy: -0.62672293943113 Total energy: -1701.34823998692877 Integrated absolute spin density : 0.0000000000 Ideal and single determinant S**2 : 0.000000 -0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.348226724029246 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 0.0000043233 0.0000065079 -0.0000011285 MD| *************************************************************************** MD| Step number 9 MD| Time [fs] 9.000000 MD| Conserved quantity [hartree] -0.170112424974E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 36.930702 47.810296 MD| Energy drift per atom [K] -0.338677717386E-01 0.970858937867E-01 MD| Potential energy [hartree] -0.170134822672E+04 -0.170129070637E+04 MD| Kinetic energy [hartree] 0.222455825652E+00 0.170360016367E+00 MD| Temperature [K] 538.282835 412.225090 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 
B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.02033786 -1701.3660685663 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.03207358 -1701.2358642063 1.30E-01 3 Broy./Diag. 0.10E+00 1.7 0.01865517 -1701.4472369221 -2.11E-01 4 Broy./Diag. 0.10E+00 1.7 0.00431703 -1701.3504038947 9.68E-02 5 Broy./Diag. 0.10E+00 1.7 0.00206267 -1701.3786423555 -2.82E-02 6 Broy./Diag. 0.10E+00 1.8 0.00073884 -1701.3693021136 9.34E-03 7 Broy./Diag. 0.10E+00 1.8 0.00077593 -1701.3638879921 5.41E-03 8 Broy./Diag. 0.10E+00 1.8 0.00035613 -1701.3666807431 -2.79E-03 9 Broy./Diag. 0.10E+00 1.8 0.00015120 -1701.3662698319 4.11E-04 10 Broy./Diag. 0.10E+00 1.8 0.00012498 -1701.3676248966 -1.36E-03 11 Broy./Diag. 0.10E+00 1.8 0.00008339 -1701.3672731202 3.52E-04 12 Broy./Diag. 0.10E+00 1.8 0.00002144 -1701.3676887357 -4.16E-04 13 Broy./Diag. 0.10E+00 1.8 0.00002752 -1701.3678813453 -1.93E-04 14 Broy./Diag. 0.10E+00 1.8 0.00001535 -1701.3680226782 -1.41E-04 15 Broy./Diag. 0.10E+00 1.8 0.00000925 -1701.3679005548 1.22E-04 16 Broy./Diag. 0.10E+00 1.8 0.00000905 -1701.3679332454 -3.27E-05 17 Broy./Diag. 0.10E+00 1.8 0.00000607 -1701.3678839600 4.93E-05 18 Broy./Diag. 
0.10E+00 1.8 0.00000063 -1701.3679210690 -3.71E-05 *** SCF run converged in 18 steps *** Electronic density on regular grids: -560.0000000493 -0.0000000493 Core density on regular grids: 560.0000000002 0.0000000002 Total charge density on r-space grids: -0.0000000491 Total charge density g-space grids: -0.0000000491 Overlap energy of the core charge distribution: 0.00000144149879 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 791.97938100537453 Hartree energy: 525.55417046369712 Exchange-correlation energy: -301.19501800936007 Dispersion energy: -0.62713972297556 Total energy: -1701.36792106901362 Integrated absolute spin density : 0.0000000000 Ideal and single determinant S**2 : 0.000000 -0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.367915393985186 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 0.0000042947 0.0000068705 -0.0000015453 MD| *************************************************************************** MD| Step number 10 MD| Time [fs] 10.000000 MD| Conserved quantity [hartree] -0.170112423177E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 35.220283 46.551295 MD| Energy drift per atom [K] 0.306248277169E-01 0.904397871797E-01 MD| Potential energy [hartree] -0.170136791539E+04 -0.170129842727E+04 MD| Kinetic energy [hartree] 0.238060370021E+00 0.177130051733E+00 MD| Temperature [K] 576.041606 428.606741 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 
B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.01785311 -1701.3867391774 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.03096980 -1701.2678678200 1.19E-01 3 Broy./Diag. 0.10E+00 1.7 0.01796781 -1701.4682192709 -2.00E-01 4 Broy./Diag. 0.10E+00 1.7 0.00422420 -1701.3751388164 9.31E-02 5 Broy./Diag. 0.10E+00 1.7 0.00201834 -1701.4020277147 -2.69E-02 6 Broy./Diag. 0.10E+00 1.8 0.00084628 -1701.3929749045 9.05E-03 7 Broy./Diag. 0.10E+00 1.8 0.00089124 -1701.3850007304 7.97E-03 8 Broy./Diag. 0.10E+00 1.8 0.00034118 -1701.3876786010 -2.68E-03 9 Broy./Diag. 0.10E+00 1.8 0.00022961 -1701.3863212342 1.36E-03 10 Broy./Diag. 0.10E+00 1.8 0.00009260 -1701.3881666808 -1.85E-03 11 Broy./Diag. 0.10E+00 1.8 0.00010828 -1701.3880983935 6.83E-05 12 Broy./Diag. 0.10E+00 1.8 0.00003291 -1701.3885434312 -4.45E-04 13 Broy./Diag. 0.10E+00 1.8 0.00001778 -1701.3887946455 -2.51E-04 14 Broy./Diag. 0.10E+00 1.8 0.00000698 -1701.3888803216 -8.57E-05 15 Broy./Diag. 0.10E+00 1.8 0.00000436 -1701.3888583650 2.20E-05 16 Broy./Diag. 0.10E+00 1.8 0.00000356 -1701.3888273394 3.10E-05 17 Broy./Diag. 0.10E+00 1.8 0.00000122 -1701.3888101554 1.72E-05 18 Broy./Diag. 0.10E+00 1.8 0.00000216 -1701.3888038240 6.33E-06 19 Broy./Diag. 0.10E+00 1.8 0.00000148 -1701.3888114961 -7.67E-06 20 Broy./Diag. 0.10E+00 1.8 0.00000162 -1701.3888009945 1.05E-05 21 Broy./Diag. 
0.10E+00 1.8 0.00000089 -1701.3888101170 -9.12E-06 *** SCF run converged in 21 steps *** Electronic density on regular grids: -560.0000000469 -0.0000000469 Core density on regular grids: 560.0000000000 0.0000000000 Total charge density on r-space grids: -0.0000000469 Total charge density g-space grids: -0.0000000469 Overlap energy of the core charge distribution: 0.00000166514934 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 792.06288938717887 Hartree energy: 525.48973699126725 Exchange-correlation energy: -301.23450495451448 Dispersion energy: -0.62761695881416 Total energy: -1701.38881011698118 Integrated absolute spin density : 0.0000000000 Ideal and single determinant S**2 : 0.000000 -0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.388812499945288 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 0.0000042256 0.0000072196 -0.0000020395 MD| *************************************************************************** MD| Step number 11 MD| Time [fs] 11.000000 MD| Conserved quantity [hartree] -0.170112429834E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 40.533167 46.004192 MD| Energy drift per atom [K] -0.208258472855E+00 0.632853999039E-01 MD| Potential energy [hartree] -0.170138881250E+04 -0.170130664411E+04 MD| Kinetic energy [hartree] 0.258184742014E+00 0.184498659940E+00 MD| Temperature [K] 624.737134 446.436777 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 
B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.01525446 -1701.4076228360 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.03000195 -1701.3100320160 9.76E-02 3 Broy./Diag. 0.10E+00 1.7 0.01762408 -1701.4958357762 -1.86E-01 4 Broy./Diag. 0.10E+00 1.7 0.00461654 -1701.4019692125 9.39E-02 5 Broy./Diag. 0.10E+00 1.7 0.00195764 -1701.4233046541 -2.13E-02 6 Broy./Diag. 0.10E+00 1.8 0.00116412 -1701.4144115662 8.89E-03 7 Broy./Diag. 0.10E+00 1.8 0.00102324 -1701.4058627531 8.55E-03 8 Broy./Diag. 0.10E+00 1.8 0.00029518 -1701.4094629390 -3.60E-03 9 Broy./Diag. 0.10E+00 1.8 0.00018440 -1701.4076890735 1.77E-03 10 Broy./Diag. 0.10E+00 1.8 0.00009446 -1701.4087604442 -1.07E-03 11 Broy./Diag. 0.10E+00 1.8 0.00011024 -1701.4089023482 -1.42E-04 12 Broy./Diag. 0.10E+00 1.8 0.00004008 -1701.4097438759 -8.42E-04 13 Broy./Diag. 0.10E+00 1.8 0.00000989 -1701.4100095302 -2.66E-04 14 Broy./Diag. 0.10E+00 1.8 0.00002098 -1701.4099669274 4.26E-05 15 Broy./Diag. 0.10E+00 1.8 0.00001283 -1701.4100249721 -5.80E-05 16 Broy./Diag. 0.10E+00 1.8 0.00000505 -1701.4099541538 7.08E-05 17 Broy./Diag. 0.10E+00 1.8 0.00000223 -1701.4099864883 -3.23E-05 18 Broy./Diag. 0.10E+00 1.8 0.00000244 -1701.4099654235 2.11E-05 19 Broy./Diag. 0.10E+00 1.8 0.00000236 -1701.4099584919 6.93E-06 20 Broy./Diag. 
0.10E+00 1.8 0.00000023 -1701.4099363366 2.22E-05 *** SCF run converged in 20 steps *** Electronic density on regular grids: -560.0000000475 -0.0000000475 Core density on regular grids: 560.0000000000 0.0000000000 Total charge density on r-space grids: -0.0000000475 Total charge density g-space grids: -0.0000000475 Overlap energy of the core charge distribution: 0.00000175013824 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 792.12118364947833 Hartree energy: 525.44154833523498 Exchange-correlation energy: -301.26521477693052 Dispersion energy: -0.62813904730905 Total energy: -1701.40993633663629 Integrated absolute spin density : 0.0000000000 Ideal and single determinant S**2 : 0.000000 -0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.409944586298252 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 0.0000039986 0.0000073416 -0.0000025418 MD| *************************************************************************** MD| Step number 12 MD| Time [fs] 12.000000 MD| Conserved quantity [hartree] -0.170112445521E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 38.747409 45.399460 MD| Energy drift per atom [K] -0.771152508159E+00 -0.625109243474E-02 MD| Potential energy [hartree] -0.170140994459E+04 -0.170131525248E+04 MD| Kinetic energy [hartree] 0.265678661955E+00 0.191263660108E+00 MD| Temperature [K] 642.870390 462.806245 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 
B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.01296740 -1701.4262457217 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.02889490 -1701.3611916297 6.51E-02 3 Broy./Diag. 0.10E+00 1.7 0.01754276 -1701.5281241902 -1.67E-01 4 Broy./Diag. 0.10E+00 1.7 0.00495834 -1701.4294131296 9.87E-02 5 Broy./Diag. 0.10E+00 1.7 0.00201812 -1701.4404762335 -1.11E-02 6 Broy./Diag. 0.10E+00 1.8 0.00153824 -1701.4313792669 9.10E-03 7 Broy./Diag. 0.10E+00 1.8 0.00107378 -1701.4238003425 7.58E-03 8 Broy./Diag. 0.10E+00 1.8 0.00024370 -1701.4297174197 -5.92E-03 9 Broy./Diag. 0.10E+00 1.8 0.00008808 -1701.4280642078 1.65E-03 10 Broy./Diag. 0.10E+00 1.8 0.00013179 -1701.4270209150 1.04E-03 11 Broy./Diag. 0.10E+00 1.8 0.00011769 -1701.4269223608 9.86E-05 12 Broy./Diag. 0.10E+00 1.8 0.00003866 -1701.4286607512 -1.74E-03 13 Broy./Diag. 0.10E+00 1.8 0.00005635 -1701.4286459522 1.48E-05 14 Broy./Diag. 0.10E+00 1.8 0.00002499 -1701.4288832003 -2.37E-04 15 Broy./Diag. 0.10E+00 1.8 0.00000564 -1701.4286845021 1.99E-04 16 Broy./Diag. 0.10E+00 1.8 0.00000583 -1701.4287200519 -3.55E-05 17 Broy./Diag. 0.10E+00 1.8 0.00000512 -1701.4287760703 -5.60E-05 18 Broy./Diag. 0.10E+00 1.8 0.00000239 -1701.4287332033 4.29E-05 19 Broy./Diag. 0.10E+00 1.8 0.00000273 -1701.4287323469 8.56E-07 20 Broy./Diag. 0.10E+00 1.8 0.00000166 -1701.4287018816 3.05E-05 21 Broy./Diag. 
0.10E+00 1.8 0.00000030 -1701.4287115579 -9.68E-06 *** SCF run converged in 21 steps *** Electronic density on regular grids: -560.0000000465 -0.0000000465 Core density on regular grids: 560.0000000000 -0.0000000000 Total charge density on r-space grids: -0.0000000466 Total charge density g-space grids: -0.0000000466 Overlap energy of the core charge distribution: 0.00000178859851 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 792.13371357340861 Hartree energy: 525.42456699955926 Exchange-correlation energy: -301.27899715147566 Dispersion energy: -0.62868052069806 Total energy: -1701.42871155785565 Integrated absolute spin density : 0.0000000000 Ideal and single determinant S**2 : 0.000000 -0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.428717600235586 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 0.0000038023 0.0000075748 -0.0000031663 MD| *************************************************************************** MD| Step number 13 MD| Time [fs] 13.000000 MD| Conserved quantity [hartree] -0.170112455161E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 40.524594 45.024470 MD| Energy drift per atom [K] -0.111707199548E+01 -0.916988542077E-01 MD| Potential energy [hartree] -0.170142871760E+04 -0.170132398057E+04 MD| Kinetic energy [hartree] 0.281896714833E+00 0.198235433548E+00 MD| Temperature [K] 682.113685 479.676048 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 
B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.01243673 -1701.4415759851 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.02858493 -1701.4193559352 2.22E-02 3 Broy./Diag. 0.10E+00 1.7 0.01884253 -1701.5648596051 -1.46E-01 4 Broy./Diag. 0.10E+00 1.7 0.00544348 -1701.4566095738 1.08E-01 5 Broy./Diag. 0.10E+00 1.7 0.00188155 -1701.4520050898 4.60E-03 6 Broy./Diag. 0.10E+00 1.8 0.00207908 -1701.4425862129 9.42E-03 7 Broy./Diag. 0.10E+00 1.8 0.00106650 -1701.4366108881 5.98E-03 8 Broy./Diag. 0.10E+00 1.8 0.00035604 -1701.4471521157 -1.05E-02 9 Broy./Diag. 0.10E+00 1.8 0.00022145 -1701.4460009239 1.15E-03 10 Broy./Diag. 0.10E+00 1.8 0.00022737 -1701.4428458578 3.16E-03 11 Broy./Diag. 0.10E+00 1.8 0.00023971 -1701.4411500627 1.70E-03 12 Broy./Diag. 0.10E+00 1.8 0.00014356 -1701.4444661622 -3.32E-03 13 Broy./Diag. 0.10E+00 1.8 0.00007262 -1701.4439299799 5.36E-04 14 Broy./Diag. 0.10E+00 1.8 0.00002557 -1701.4445374092 -6.07E-04 15 Broy./Diag. 0.10E+00 1.8 0.00000875 -1701.4442766455 2.61E-04 16 Broy./Diag. 0.10E+00 1.8 0.00000862 -1701.4442869493 -1.03E-05 17 Broy./Diag. 0.10E+00 1.8 0.00000909 -1701.4443473878 -6.04E-05 18 Broy./Diag. 0.10E+00 1.8 0.00000575 -1701.4442768441 7.05E-05 19 Broy./Diag. 0.10E+00 1.8 0.00000626 -1701.4442979957 -2.12E-05 20 Broy./Diag. 0.10E+00 1.8 0.00000251 -1701.4442631330 3.49E-05 21 Broy./Diag. 0.10E+00 1.8 0.00000179 -1701.4442759607 -1.28E-05 22 Broy./Diag. 
0.10E+00 1.8 0.00000082 -1701.4442833282 -7.37E-06 *** SCF run converged in 22 steps *** Electronic density on regular grids: -560.0000000518 -0.0000000518 Core density on regular grids: 560.0000000002 0.0000000002 Total charge density on r-space grids: -0.0000000516 Total charge density g-space grids: -0.0000000516 Overlap energy of the core charge distribution: 0.00000179899890 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 792.08955530326375 Hartree energy: 525.44657171498704 Exchange-correlation energy: -301.27185020564536 Dispersion energy: -0.62924569252826 Total energy: -1701.44428332817256 Integrated absolute spin density : 0.0000000000 Ideal and single determinant S**2 : 0.000000 -0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.444278695036928 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 0.0000035543 0.0000077672 -0.0000038519 MD| *************************************************************************** MD| Step number 14 MD| Time [fs] 14.000000 MD| Conserved quantity [hartree] -0.170112453622E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 42.222907 44.824359 MD| Energy drift per atom [K] -0.106186509951E+01 -0.160996443158E+00 MD| Potential energy [hartree] -0.170144427870E+04 -0.170133257329E+04 MD| Kinetic energy [hartree] 0.293432984287E+00 0.205035258601E+00 MD| Temperature [K] 710.028332 496.129782 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 
B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.01328043 -1701.4538621153 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.02924906 -1701.4736147495 -1.98E-02 3 Broy./Diag. 0.10E+00 1.7 0.02090407 -1701.5986545805 -1.25E-01 4 Broy./Diag. 0.10E+00 1.7 0.00629704 -1701.4787611187 1.20E-01 5 Broy./Diag. 0.10E+00 1.7 0.00205329 -1701.4599165122 1.88E-02 6 Broy./Diag. 0.10E+00 1.8 0.00285234 -1701.4535823843 6.33E-03 7 Broy./Diag. 0.10E+00 1.8 0.00089792 -1701.4465858219 7.00E-03 8 Broy./Diag. 0.10E+00 1.8 0.00077739 -1701.4593373278 -1.28E-02 9 Broy./Diag. 0.10E+00 1.8 0.00025606 -1701.4598133062 -4.76E-04 10 Broy./Diag. 0.10E+00 1.8 0.00030346 -1701.4562420445 3.57E-03 11 Broy./Diag. 0.10E+00 1.8 0.00027015 -1701.4525903872 3.65E-03 12 Broy./Diag. 0.10E+00 1.8 0.00016667 -1701.4566141319 -4.02E-03 13 Broy./Diag. 0.10E+00 1.8 0.00008890 -1701.4560551947 5.59E-04 14 Broy./Diag. 0.10E+00 1.8 0.00003211 -1701.4569361827 -8.81E-04 15 Broy./Diag. 0.10E+00 1.8 0.00001143 -1701.4566545768 2.82E-04 16 Broy./Diag. 0.10E+00 1.8 0.00001122 -1701.4566626185 -8.04E-06 17 Broy./Diag. 0.10E+00 1.8 0.00000960 -1701.4567133426 -5.07E-05 18 Broy./Diag. 0.10E+00 1.8 0.00000884 -1701.4566597567 5.36E-05 19 Broy./Diag. 0.10E+00 1.8 0.00001017 -1701.4567022944 -4.25E-05 20 Broy./Diag. 0.10E+00 1.8 0.00000452 -1701.4566475022 5.48E-05 21 Broy./Diag. 0.10E+00 1.8 0.00000254 -1701.4566596132 -1.21E-05 22 Broy./Diag. 0.10E+00 1.8 0.00000172 -1701.4566693696 -9.76E-06 23 Broy./Diag. 0.10E+00 1.8 0.00000214 -1701.4566617472 7.62E-06 24 Broy./Diag. 
0.10E+00 1.8 0.00000121 -1701.4566795745 -1.78E-05 25 Broy./Diag. 0.10E+00 1.8 0.00000156 -1701.4566782250 1.35E-06 26 Broy./Diag. 0.10E+00 1.8 0.00000054 -1701.4566873721 -9.15E-06 *** SCF run converged in 26 steps *** Electronic density on regular grids: -560.0000000416 -0.0000000416 Core density on regular grids: 559.9999999997 -0.0000000003 Total charge density on r-space grids: -0.0000000419 Total charge density g-space grids: -0.0000000419 Overlap energy of the core charge distribution: 0.00000169406821 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 791.98749659904047 Hartree energy: 525.50828314828129 Exchange-correlation energy: -301.24333342342129 Dispersion energy: -0.62981914279598 Total energy: -1701.45668737207575 Integrated absolute spin density : 0.0000000000 Ideal and single determinant S**2 : 0.000000 -0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.456673886475755 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 
0.0000031970 0.0000077907 -0.0000045180 MD| *************************************************************************** MD| Step number 15 MD| Time [fs] 15.000000 MD| Conserved quantity [hartree] -0.170112452076E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 49.281171 45.121479 MD| Energy drift per atom [K] -0.100635898838E+01 -0.217353946173E+00 MD| Potential energy [hartree] -0.170145667389E+04 -0.170134084666E+04 MD| Kinetic energy [hartree] 0.289898668176E+00 0.210692819239E+00 MD| Temperature [K] 701.476244 509.819546 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.01376105 -1701.4639213475 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.03178060 -1701.5240196517 -6.01E-02 3 Broy./Diag. 0.10E+00 1.7 0.02413259 -1701.6307556256 -1.07E-01 4 Broy./Diag. 0.10E+00 1.8 0.00650850 -1701.5001863890 1.31E-01 5 Broy./Diag. 0.10E+00 1.7 0.00341742 -1701.4638031203 3.64E-02 6 Broy./Diag. 0.10E+00 1.8 0.00395474 -1701.4639689753 -1.66E-04 7 Broy./Diag. 
0.10E+00 1.8 0.00064937 -1701.4538421389 1.01E-02 8 Broy./Diag. 0.10E+00 1.8 0.00140775 -1701.4668322045 -1.30E-02 9 Broy./Diag. 0.10E+00 1.8 0.00024280 -1701.4704942394 -3.66E-03 10 Broy./Diag. 0.10E+00 1.8 0.00037740 -1701.4679161839 2.58E-03 11 Broy./Diag. 0.10E+00 1.8 0.00020281 -1701.4625948485 5.32E-03 12 Broy./Diag. 0.10E+00 1.8 0.00018256 -1701.4660899408 -3.50E-03 13 Broy./Diag. 0.10E+00 1.8 0.00010597 -1701.4658438833 2.46E-04 14 Broy./Diag. 0.10E+00 1.8 0.00001994 -1701.4669524032 -1.11E-03 15 Broy./Diag. 0.10E+00 1.8 0.00002093 -1701.4668111152 1.41E-04 16 Broy./Diag. 0.10E+00 1.8 0.00001362 -1701.4667914544 1.97E-05 17 Broy./Diag. 0.10E+00 1.8 0.00001081 -1701.4668095532 -1.81E-05 18 Broy./Diag. 0.10E+00 1.8 0.00001346 -1701.4667678265 4.17E-05 19 Broy./Diag. 0.10E+00 1.8 0.00001363 -1701.4668431542 -7.53E-05 20 Broy./Diag. 0.10E+00 1.8 0.00000524 -1701.4667713912 7.18E-05 21 Broy./Diag. 0.10E+00 1.8 0.00000325 -1701.4667777212 -6.33E-06 22 Broy./Diag. 0.10E+00 1.8 0.00000314 -1701.4667784747 -7.53E-07 23 Broy./Diag. 0.10E+00 1.8 0.00000300 -1701.4667624841 1.60E-05 24 Broy./Diag. 0.10E+00 1.8 0.00000214 -1701.4667834538 -2.10E-05 25 Broy./Diag. 0.10E+00 1.8 0.00000272 -1701.4667834961 -4.23E-08 26 Broy./Diag. 
0.10E+00 1.8 0.00000098 -1701.4668042103 -2.07E-05 *** SCF run converged in 26 steps *** Electronic density on regular grids: -560.0000000501 -0.0000000501 Core density on regular grids: 560.0000000000 0.0000000000 Total charge density on r-space grids: -0.0000000500 Total charge density g-space grids: -0.0000000500 Overlap energy of the core charge distribution: 0.00000144975656 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 791.83829016963705 Hartree energy: 525.60227621700790 Exchange-correlation energy: -301.19764869198195 Dispersion energy: -0.63040710745500 Total energy: -1701.46680421028395 Integrated absolute spin density : 0.0000000000 Ideal and single determinant S**2 : 0.000000 -0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.466791822419054 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 0.0000028948 0.0000080218 -0.0000053593 MD| *************************************************************************** MD| Step number 16 MD| Time [fs] 16.000000 MD| Conserved quantity [hartree] -0.170112456052E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 49.355578 45.386111 MD| Energy drift per atom [K] -0.114905316452E+01 -0.275585147320E+00 MD| Potential energy [hartree] -0.170146679182E+04 -0.170134871824E+04 MD| Kinetic energy [hartree] 0.300126816657E+00 0.216282444078E+00 MD| Temperature [K] 726.225593 523.344924 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 
B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.01371508 -1701.4717485647 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.04126412 -1701.5613226539 -8.96E-02 3 Broy./Diag. 0.10E+00 1.7 0.03665057 -1701.6799477466 -1.19E-01 4 Broy./Diag. 0.10E+00 1.7 0.00715850 -1701.5196665114 1.60E-01 5 Broy./Diag. 0.10E+00 1.8 0.00597003 -1701.4622539506 5.74E-02 6 Broy./Diag. 0.10E+00 1.8 0.00248871 -1701.4749526777 -1.27E-02 7 Broy./Diag. 0.10E+00 1.8 0.00196377 -1701.4647854301 1.02E-02 8 Broy./Diag. 0.10E+00 1.8 0.00221224 -1701.4704967878 -5.71E-03 9 Broy./Diag. 0.10E+00 1.8 0.00060895 -1701.4765658446 -6.07E-03 10 Broy./Diag. 0.10E+00 1.8 0.00024484 -1701.4766228143 -5.70E-05 11 Broy./Diag. 0.10E+00 1.8 0.00013745 -1701.4727625490 3.86E-03 12 Broy./Diag. 0.10E+00 1.8 0.00005659 -1701.4730037033 -2.41E-04 13 Broy./Diag. 0.10E+00 1.8 0.00012565 -1701.4732417468 -2.38E-04 14 Broy./Diag. 0.10E+00 1.8 0.00012316 -1701.4746192679 -1.38E-03 15 Broy./Diag. 0.10E+00 1.8 0.00010226 -1701.4741428706 4.76E-04 16 Broy./Diag. 0.10E+00 1.8 0.00002251 -1701.4749569768 -8.14E-04 17 Broy./Diag. 0.10E+00 1.8 0.00001627 -1701.4748776852 7.93E-05 18 Broy./Diag. 0.10E+00 1.8 0.00000716 -1701.4747062581 1.71E-04 19 Broy./Diag. 0.10E+00 1.8 0.00000474 -1701.4746829235 2.33E-05 20 Broy./Diag. 0.10E+00 1.8 0.00000868 -1701.4746743318 8.59E-06 21 Broy./Diag. 0.10E+00 1.8 0.00000617 -1701.4747230884 -4.88E-05 22 Broy./Diag. 0.10E+00 1.8 0.00000478 -1701.4746709433 5.21E-05 23 Broy./Diag. 0.10E+00 1.8 0.00000235 -1701.4746676284 3.31E-06 24 Broy./Diag. 
0.10E+00 1.8 0.00000171 -1701.4746403493 2.73E-05 25 Broy./Diag. 0.10E+00 1.8 0.00000090 -1701.4746291757 1.12E-05 *** SCF run converged in 25 steps *** Electronic density on regular grids: -560.0000000496 -0.0000000496 Core density on regular grids: 560.0000000000 0.0000000000 Total charge density on r-space grids: -0.0000000496 Total charge density g-space grids: -0.0000000496 Overlap energy of the core charge distribution: 0.00000115485170 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 791.66654669105492 Hartree energy: 525.71265201710958 Exchange-correlation energy: -301.14350204493189 Dispersion energy: -0.63101074649662 Total energy: -1701.47462917566054 Integrated absolute spin density : 0.0000000000 Ideal and single determinant S**2 : 0.000000 -0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.474636447016792 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 0.0000024934 0.0000080721 -0.0000061334 MD| *************************************************************************** MD| Step number 17 MD| Time [fs] 17.000000 MD| Conserved quantity [hartree] -0.170112462717E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 47.596491 45.516133 MD| Energy drift per atom [K] -0.138822231750E+01 -0.341034392624E+00 MD| Potential energy [hartree] -0.170147463645E+04 -0.170135612519E+04 MD| Kinetic energy [hartree] 0.295217360742E+00 0.220925674470E+00 MD| Temperature [K] 714.346039 534.580284 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the 
always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.01339979 -1701.4770216899 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.20084015 -1701.5925028201 -1.15E-01 3 Broy./Diag. 0.10E+00 1.7 0.19824926 -1702.3386199609 -7.46E-01 4 Broy./Diag. 0.10E+00 1.7 0.00578746 -1701.5163692881 8.22E-01 5 Broy./Diag. 0.10E+00 1.8 0.00879701 -1701.5069882178 9.38E-03 6 Broy./Diag. 0.10E+00 1.8 0.01852427 -1701.4999729313 7.02E-03 7 Broy./Diag. 0.10E+00 1.8 0.01713434 -1701.4991242222 8.49E-04 8 Broy./Diag. 0.10E+00 1.8 0.00236800 -1701.4634050412 3.57E-02 9 Broy./Diag. 0.10E+00 1.8 0.00207323 -1701.4730186925 -9.61E-03 10 Broy./Diag. 0.10E+00 1.8 0.00140065 -1701.4853670691 -1.23E-02 11 Broy./Diag. 0.10E+00 1.8 0.00047455 -1701.4820367854 3.33E-03 12 Broy./Diag. 0.10E+00 1.8 0.00029360 -1701.4775378833 4.50E-03 13 Broy./Diag. 0.10E+00 1.8 0.00043396 -1701.4787165462 -1.18E-03 14 Broy./Diag. 0.10E+00 1.8 0.00011808 -1701.4776608607 1.06E-03 15 Broy./Diag. 0.10E+00 1.8 0.00003992 -1701.4794026934 -1.74E-03 16 Broy./Diag. 0.10E+00 1.8 0.00007483 -1701.4800841580 -6.81E-04 17 Broy./Diag. 0.10E+00 1.8 0.00006141 -1701.4800850897 -9.32E-07 18 Broy./Diag. 0.10E+00 1.8 0.00003621 -1701.4802058453 -1.21E-04 19 Broy./Diag. 0.10E+00 1.8 0.00005508 -1701.4799678954 2.38E-04 20 Broy./Diag. 0.10E+00 1.8 0.00003182 -1701.4801966789 -2.29E-04 21 Broy./Diag. 0.10E+00 1.8 0.00000441 -1701.4800314921 1.65E-04 22 Broy./Diag. 0.10E+00 1.8 0.00000377 -1701.4800349887 -3.50E-06 23 Broy./Diag. 
0.10E+00 1.8 0.00000631 -1701.4800424474 -7.46E-06 24 Broy./Diag. 0.10E+00 1.8 0.00000370 -1701.4800035717 3.89E-05 25 Broy./Diag. 0.10E+00 1.8 0.00000402 -1701.4800021476 1.42E-06 26 Broy./Diag. 0.10E+00 1.8 0.00000356 -1701.4799805271 2.16E-05 27 Broy./Diag. 0.10E+00 1.8 0.00000196 -1701.4799935454 -1.30E-05 28 Broy./Diag. 0.10E+00 1.8 0.00000025 -1701.4799902798 3.27E-06 *** SCF run converged in 28 steps *** Electronic density on regular grids: -560.0000000507 -0.0000000507 Core density on regular grids: 560.0000000000 0.0000000000 Total charge density on r-space grids: -0.0000000507 Total charge density g-space grids: -0.0000000507 Overlap energy of the core charge distribution: 0.00000090551436 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 791.49899205057113 Hartree energy: 525.82229986579864 Exchange-correlation energy: -301.09032398503069 Dispersion energy: -0.63164286942892 Total energy: -1701.47999027982382 Integrated absolute spin density : 0.0000000001 Ideal and single determinant S**2 : 0.000000 -0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.479988561728760 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 
0.0000021329 0.0000083431 -0.0000071135 MD| *************************************************************************** MD| Step number 18 MD| Time [fs] 18.000000 MD| Conserved quantity [hartree] -0.170112465049E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 52.946788 45.928947 MD| Energy drift per atom [K] -0.147190523810E+01 -0.403860550706E+00 MD| Potential energy [hartree] -0.170147998856E+04 -0.170136300649E+04 MD| Kinetic energy [hartree] 0.304767220957E+00 0.225583538164E+00 MD| Temperature [K] 737.454114 545.851052 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.01329867 -1701.4799769196 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.21528090 -1701.6148087546 -1.35E-01 3 Broy./Diag. 0.10E+00 1.7 0.20804129 -1702.4628429879 -8.48E-01 4 Broy./Diag. 0.10E+00 1.7 0.02741281 -1701.5394905562 9.23E-01 5 Broy./Diag. 0.10E+00 1.8 0.04229866 -1701.5415574145 -2.07E-03 6 Broy./Diag. 0.10E+00 1.8 0.17106554 -1701.5577206023 -1.62E-02 7 Broy./Diag. 
0.10E+00 1.8 0.17635190 -1702.1818758703 -6.24E-01 8 Broy./Diag. 0.10E+00 1.8 0.00564576 -1701.4503342256 7.32E-01 9 Broy./Diag. 0.10E+00 1.8 0.00440030 -1701.4575956518 -7.26E-03 10 Broy./Diag. 0.10E+00 1.8 0.01600491 -1701.4681164416 -1.05E-02 11 Broy./Diag. 0.10E+00 1.8 0.01154507 -1701.5004266280 -3.23E-02 12 Broy./Diag. 0.10E+00 1.8 0.00173712 -1701.4744526617 2.60E-02 13 Broy./Diag. 0.10E+00 1.8 0.00343329 -1701.4837039335 -9.25E-03 14 Broy./Diag. 0.10E+00 1.8 0.00613942 -1701.4849072105 -1.20E-03 15 Broy./Diag. 0.10E+00 1.8 0.00319602 -1701.4818028147 3.10E-03 16 Broy./Diag. 0.10E+00 1.8 0.00204311 -1701.4825417552 -7.39E-04 17 Broy./Diag. 0.10E+00 1.8 0.00242580 -1701.4818452197 6.97E-04 18 Broy./Diag. 0.10E+00 1.8 0.00116249 -1701.4838862037 -2.04E-03 19 Broy./Diag. 0.10E+00 1.8 0.00037594 -1701.4822710368 1.62E-03 20 Broy./Diag. 0.10E+00 1.8 0.00010894 -1701.4833292902 -1.06E-03 21 Broy./Diag. 0.10E+00 1.8 0.00007898 -1701.4831227691 2.07E-04 22 Broy./Diag. 0.10E+00 1.8 0.00010298 -1701.4830217121 1.01E-04 23 Broy./Diag. 0.10E+00 1.8 0.00005499 -1701.4830197262 1.99E-06 24 Broy./Diag. 0.10E+00 1.8 0.00009229 -1701.4829626826 5.70E-05 25 Broy./Diag. 0.10E+00 1.8 0.00007858 -1701.4830891735 -1.26E-04 26 Broy./Diag. 0.10E+00 1.8 0.00001286 -1701.4829633986 1.26E-04 27 Broy./Diag. 0.10E+00 1.8 0.00001438 -1701.4830552349 -9.18E-05 28 Broy./Diag. 0.10E+00 1.8 0.00001174 -1701.4830041391 5.11E-05 29 Broy./Diag. 0.10E+00 1.8 0.00000181 -1701.4829914301 1.27E-05 30 Broy./Diag. 0.10E+00 1.8 0.00000816 -1701.4829812479 1.02E-05 31 Broy./Diag. 0.10E+00 1.8 0.00001060 -1701.4829719802 9.27E-06 32 Broy./Diag. 0.10E+00 1.8 0.00000509 -1701.4829764152 -4.43E-06 33 Broy./Diag. 0.10E+00 1.8 0.00000478 -1701.4829650738 1.13E-05 34 Broy./Diag. 0.10E+00 1.8 0.00000403 -1701.4829721175 -7.04E-06 35 Broy./Diag. 0.10E+00 1.8 0.00000161 -1701.4829620391 1.01E-05 36 Broy./Diag. 
0.10E+00 1.8 0.00000087 -1701.4829615658 4.73E-07 *** SCF run converged in 36 steps *** Electronic density on regular grids: -560.0000000501 -0.0000000501 Core density on regular grids: 559.9999999999 -0.0000000001 Total charge density on r-space grids: -0.0000000502 Total charge density g-space grids: -0.0000000502 Overlap energy of the core charge distribution: 0.00000074928385 Self energy of the core charge distribution: -2717.07931624724824 Core Hamiltonian energy: 791.36594934727827 Hartree energy: 525.91135326239169 Exchange-correlation energy: -301.04865573847161 Dispersion energy: -0.63229293908170 Total energy: -1701.48296156584752 Integrated absolute spin density : 0.0000355967 Ideal and single determinant S**2 : 0.000000 0.000000 ENERGY| Total FORCE_EVAL ( QS ) energy [a.u.]: -1701.482959132447604 MD_VEL| Centre of mass motion (COM) MD_VEL| VCOM [a.u.] 0.0000016854 0.0000084241 -0.0000079801 MD| *************************************************************************** MD| Step number 19 MD| Time [fs] 19.000000 MD| Conserved quantity [hartree] -0.170112462387E+04 MD| --------------------------------------------------------------------------- MD| Instantaneous Averages MD| CPU time per MD step [s] 67.139536 47.045294 MD| Energy drift per atom [K] -0.137637529394E+01 -0.455045537192E+00 MD| Potential energy [hartree] -0.170148295913E+04 -0.170136931978E+04 MD| Kinetic energy [hartree] 0.298951829670E+00 0.229445027190E+00 MD| Temperature [K] 723.382443 555.194810 MD| *************************************************************************** MD| Estimated peak process memory after this step [MiB] 568 Spin 1 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Spin 2 Number of electrons: 280 Number of occupied orbitals: 280 Number of molecular orbitals: 380 Number of orbital functions: 1304 Number of independent orbital functions: 1304 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 
B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Parameters for the always stable predictor-corrector (ASPC) method: ASPC order: 3 B(1) = 3.000000 B(2) = -3.428571 B(3) = 1.928571 B(4) = -0.571429 B(5) = 0.071429 Extrapolation method: ASPC SCF WAVEFUNCTION OPTIMIZATION Step Update method Time Convergence Total energy Change ------------------------------------------------------------------------------ 1 Broy./Diag. 0.10E+00 1.1 0.02421614 -1701.4815887252 -1.70E+03 2 Broy./Diag. 0.10E+00 1.7 0.17133231 -1701.6438177695 -1.62E-01 3 Broy./Diag. 0.10E+00 1.7 0.17489976 -1702.2289788101 -5.85E-01 4 Broy./Diag. 0.10E+00 1.8 0.06433482 -1701.4679205625 7.61E-01 5 Broy./Diag. 0.10E+00 1.8 0.17015764 -1701.7238721338 -2.56E-01 6 Broy./Diag. 0.10E+00 1.8 0.19699261 -1702.3802945722 -6.56E-01 7 Broy./Diag. 0.10E+00 1.8 0.19708469 -1701.4803223227 9.00E-01 8 Broy./Diag. 0.10E+00 1.8 0.19944875 -1702.3527593707 -8.72E-01 9 Broy./Diag. 0.10E+00 1.8 0.00651582 -1701.4475135971 9.05E-01 10 Broy./Diag. 0.10E+00 1.8 0.02013646 -1701.4705969645 -2.31E-02 11 Broy./Diag. 0.10E+00 1.8 0.20322083 -1701.4726136574 -2.02E-03 12 Broy./Diag. 0.10E+00 1.8 0.19881924 -1702.3421834813 -8.70E-01 13 Broy./Diag. 0.10E+00 1.8 0.00122322 -1701.4568944499 8.85E-01 14 Broy./Diag. 0.10E+00 1.8 0.01120260 -1701.4605290668 -3.63E-03 15 Broy./Diag. 0.10E+00 1.8 0.01044890 -1701.4567043251 3.82E-03 16 Broy./Diag. 0.10E+00 1.8 0.00206758 -1701.4585048666 -1.80E-03 17 Broy./Diag. 0.10E+00 1.8 0.19930029 -1701.4545797780 3.93E-03 18 Broy./Diag. 0.10E+00 1.8 0.20244698 -1702.3443713096 -8.90E-01 19 Broy./Diag. 0.10E+00 1.8 0.00156276 -1701.4411441577 9.03E-01 20 Broy./Diag. 0.10E+00 1.8 0.00173997 -1701.4514553663 -1.03E-02 21 Broy./Diag. 0.10E+00 1.8 0.00050693 -1701.4555264087 -4.07E-03 22 Broy./Diag. 0.10E+00 1.8 0.00064397 -1701.4558733822 -3.47E-04 23 Broy./Diag. 0.10E+00 1.8 0.19932758 -1701.4600492971 -4.18E-03 24 Broy./Diag. 
0.10E+00 1.8 0.20211831 -1702.3469666680 -8.87E-01 25 Broy./Diag. 0.10E+00 1.8 0.00177051 -1701.4398849353 9.07E-01 26 Broy./Diag. 0.10E+00 1.8 0.00400725 -1701.4518305750 -1.19E-02 27 Broy./Diag. 0.10E+00 1.8 0.00291511 -1701.4583836788 -6.55E-03 28 Broy./Diag. 0.10E+00 1.8 0.00077673 -1701.4560909956 2.29E-03 29 Broy./Diag. 0.10E+00 1.8 0.19942507 -1701.4598231623 -3.73E-03 30 Broy./Diag. 0.10E+00 1.8 0.20220267 -1702.3506911647 -8.91E-01 31 Broy./Diag. 0.10E+00 1.8 0.00197827 -1701.4393000817 9.11E-01 32 Broy./Diag. 0.10E+00 1.8 0.00189782 -1701.4511269844 -1.18E-02 33 Broy./Diag. 0.10E+00 1.8 0.00114123 -1701.4560155267 -4.89E-03 34 Broy./Diag. 0.10E+00 1.8 0.00068182 -1701.4550847735 9.31E-04 35 Broy./Diag. 0.10E+00 1.8 0.19961242 -1701.4589946613 -3.91E-03 36 Broy./Diag. 0.10E+00 1.8 0.20259576 -1702.3491685568 -8.90E-01 37 Broy./Diag. 0.10E+00 1.8 0.00193754 -1701.4376281259 9.12E-01 38 Broy./Diag. 0.10E+00 1.8 0.00291136 -1701.4504938443 -1.29E-02 39 Broy./Diag. 0.10E+00 1.8 0.00177208 -1701.4564103783 -5.92E-03 40 Broy./Diag. 0.10E+00 1.8 0.00085558 -1701.4549236487 1.49E-03 41 Broy./Diag. 0.10E+00 1.8 0.19954706 -1701.4581244766 -3.20E-03 42 Broy./Diag. 0.10E+00 1.8 0.20276784 -1702.3500423733 -8.92E-01 43 Broy./Diag. 0.10E+00 1.8 0.00223652 -1701.4360927025 9.14E-01 44 Broy./Diag. 0.10E+00 1.8 0.00257854 -1701.4488230253 -1.27E-02 45 Broy./Diag. 0.10E+00 1.8 0.00179295 -1701.4553141053 -6.49E-03 46 Broy./Diag. 0.10E+00 1.8 0.00059066 -1701.4532148289 2.10E-03 47 Broy./Diag. 0.10E+00 1.8 0.19994031 -1701.4567604832 -3.55E-03 48 Broy./Diag. 0.10E+00 1.8 0.20285794 -1702.3493964964 -8.93E-01 49 Broy./Diag. 0.10E+00 1.8 0.00190697 -1701.4364088360 9.13E-01 50 Broy./Diag. 0.10E+00 1.8 0.00344414 -1701.4487746237 -1.24E-02 51 Broy./Diag. 0.10E+00 1.8 0.00243195 -1701.4557265328 -6.95E-03 52 Broy./Diag. 0.10E+00 1.8 0.00077426 -1701.4530588825 2.67E-03 53 Broy./Diag. 0.10E+00 1.8 0.19976727 -1701.4558681926 -2.81E-03 54 Broy./Diag. 
0.10E+00 1.8 0.20302823 -1702.3499398300 -8.94E-01 55 Broy./Diag. 0.10E+00 1.8 0.00256504 -1701.4349314809 9.15E-01 56 Broy./Diag. 0.10E+00 1.8 0.00212364 -1701.4482311805 -1.33E-02 57 Broy./Diag. 0.10E+00 1.8 0.00152585 -1701.4536316959 -5.40E-03 58 Broy./Diag. 0.10E+00 1.8 0.00059842 -1701.4519537280 1.68E-03 59 Broy./Diag. 0.10E+00 1.8 0.20021915 -1701.4553005506 -3.35E-03 60 Broy./Diag. 0.10E+00 1.8 0.20294330 -1702.3492045608 -8.94E-01 61 Broy./Diag. 0.10E+00 1.8 0.00201005 -1701.4347034957 9.15E-01 62 Broy./Diag. 0.10E+00 1.8 0.00268630 -1701.4474722432 -1.28E-02 63 Broy./Diag. 0.10E+00 1.8 0.00161338 -1701.4534427968 -5.97E-03 64 Broy./Diag. 0.10E+00 1.8 0.00091330 -1701.4517367913 1.71E-03 65 Broy./Diag. 0.10E+00 1.8 0.19984400 -1701.4545832546 -2.85E-03 66 Broy./Diag. 0.10E+00 1.8 0.20353286 -1702.3498187089 -8.95E-01 67 Broy./Diag. 0.10E+00 1.8 0.00281677 -1701.4322948241 9.18E-01 68 Broy./Diag. 0.10E+00 1.8 0.00243152 -1701.4464501993 -1.42E-02 69 Broy./Diag. 0.10E+00 1.8 0.00175859 -1701.4529122739 -6.46E-03 70 Broy./Diag. 0.10E+00 1.8 0.00063427 -1701.4505034809 2.41E-03 71 Broy./Diag. 0.10E+00 1.8 0.20037110 -1701.4537401748 -3.24E-03 72 Broy./Diag. 0.10E+00 1.8 0.20386138 -1702.3490868919 -8.95E-01 73 Broy./Diag. 0.10E+00 1.8 0.00196138 -1701.4334470645 9.16E-01 74 Broy./Diag. 0.10E+00 1.8 0.00272342 -1701.4459367240 -1.25E-02 75 Broy./Diag. 0.10E+00 1.8 0.00179369 -1701.4522728859 -6.34E-03 76 Broy./Diag. 0.10E+00 1.8 0.00083716 -1701.4501303757 2.14E-03 77 Broy./Diag. 0.10E+00 1.8 0.20033606 -1701.4528604086 -2.73E-03 78 Broy./Diag. 0.10E+00 1.8 0.20404725 -1702.3501750221 -8.97E-01 79 Broy./Diag. 0.10E+00 1.8 0.00270920 -1701.4308380066 9.19E-01 80 Broy./Diag. 0.10E+00 1.8 0.00211593 -1701.4454906624 -1.47E-02 81 Broy./Diag. 0.10E+00 1.8 0.00165232 -1701.4515405945 -6.05E-03 82 Broy./Diag. 0.10E+00 1.8 0.00051928 -1701.4492959398 2.24E-03 83 Broy./Diag. 0.10E+00 1.8 0.20044611 -1701.4523062859 -3.01E-03 84 Broy./Diag. 
0.10E+00 1.8 0.20313288 -1702.3499511561 -8.98E-01 85 Broy./Diag. 0.10E+00 1.8 0.00178217 -1701.4323462923 9.18E-01 86 Broy./Diag. 0.10E+00 1.8 0.00227463 -1701.4448326204 -1.25E-02 87 Broy./Diag. 0.10E+00 1.8 0.00156466 -1701.4508968084 -6.06E-03 88 Broy./Diag. 0.10E+00 1.8 0.00071745 -1701.4491155796 1.78E-03 89 Broy./Diag. 0.10E+00 1.8 0.20028233 -1701.4518434620 -2.73E-03 90 Broy./Diag. 0.10E+00 1.8 0.20337502 -1702.3509846550 -8.99E-01 91 Broy./Diag. 0.10E+00 1.8 0.00246350 -1701.4293071345 9.22E-01 92 Broy./Diag. 0.10E+00 1.8 0.00213419 -1701.4445291231 -1.52E-02 93 Broy./Diag. 0.10E+00 1.8 0.00175252 -1701.4509417110 -6.41E-03 94 Broy./Diag. 0.10E+00 1.8 0.00056669 -1701.4484690370 2.47E-03 95 Broy./Diag. 0.10E+00 1.8 0.20087978 -1701.4512666941 -2.80E-03 96 Broy./Diag. 0.10E+00 1.8 0.20359711 -1702.3501516019 -8.99E-01 97 Broy./Diag. 0.10E+00 1.8 0.00175976 -1701.4315394007 9.19E-01 98 Broy./Diag. 0.10E+00 1.8 0.00220492 -1701.4438639458 -1.23E-02 99 Broy./Diag. 0.10E+00 1.8 0.00155627 -1701.4500765470 -6.21E-03 100 Broy./Diag. 0.10E+00 1.8 0.00076250 -1701.4481879377 1.89E-03 Leaving inner SCF loop after reaching 100 steps. 
Electronic density on regular grids:     -560.0000000518   -0.0000000518
 Core density on regular grids:            559.9999999998   -0.0000000002
 Total charge density on r-space grids:     -0.0000000520
 Total charge density g-space grids:        -0.0000000520

 Overlap energy of the core charge distribution:        0.00000068378651
 Self energy of the core charge distribution:       -2717.07931624724824
 Core Hamiltonian energy:                             791.32588968561299
 Hartree energy:                                      525.96434214629767
 Exchange-correlation energy:                        -301.02615078687461
 Dispersion energy:                                    -0.63295341925638

 Total energy:                                      -1701.44818793768218

From bamaz.97 at gmail.com  Fri Oct 18 15:09:40 2024
From: bamaz.97 at gmail.com (bartosz mazur)
Date: Fri, 18 Oct 2024 08:09:40 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20785] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types
In-Reply-To: <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com>
References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com>
 <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com>
 <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com>
 <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com>
 <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com>
Message-ID: <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com>

I'm using the do_regtests.py script, not make regtesting, but I assume that
makes no difference. As I mentioned in my previous message, with
`--ompthreads 1` all tests passed, both for ssmp and psmp. For ssmp with
`--ompthreads 2` I observe errors similar to those for psmp with the same
setting; I attach an example output.

Thanks
Bartosz

On Friday, 18 October 2024 at 16:24:16 UTC+2, Frederick Stein wrote:

> Dear Bartosz,
> What happens if you set the number of OpenMP threads to 1 (add
> '--ompthreads 1' to TESTOPTS)? What errors do you observe in the case of
> the ssmp?
> Best,
> Frederick
>
> On Friday, 18 October 2024 at 15:37:43 UTC+2, bartosz mazur wrote:
>
>> Hi Frederick,
>>
>> thanks again for the help.
>> So I have tested different simulation variants and I know that the problem occurs when using OMP. For MPI calculations without OMP all tests pass. I have also tested the effect of the `OMP_PROC_BIND` and `OMP_PLACES` parameters and, apart from the effect on simulation time, they have no significant effect on the presence of errors. Below are the results for ssmp:
>>
>> ```
>> OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time
>> spread, threads, 3850, 4144, 4, 290, 186min
>> spread, cores, 3831, 4144, 3, 310, 183min
>> spread, sockets, 3864, 4144, 3, 277, 104min
>> close, threads, 3879, 4144, 3, 262, 171min
>> close, cores, 3854, 4144, 0, 290, 168min
>> close, sockets, 3865, 4144, 3, 276, 104min
>> master, threads, 4121, 4144, 0, 23, 1002min
>> master, cores, 4121, 4144, 0, 23, 986min
>> master, sockets, 3942, 4144, 3, 199, 219min
>> false, threads, 3918, 4144, 0, 226, 178min
>> false, cores, 3919, 4144, 3, 222, 176min
>> false, sockets, 3856, 4144, 4, 284, 104min
>> ```
>>
>> and psmp:
>>
>> ```
>> OMP_PROC_BIND, OMP_PLACES, results
>> spread, threads, Summary: correct: 4097 / 4227; failed: 130; 495min
>> spread, cores, 26 / 362
>> spread, cores, 26 / 362
>> close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min
>> close, cores, 60 / 362
>> close, sockets, 13 / 362
>> master, threads, 13 / 362
>> master, cores, 79 / 362
>> master, sockets, Summary: correct: 4153 / 4227; failed: 74; 563min
>> false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min
>> false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min
>> false, sockets, 96 / 362
>> not specified, not specified, Summary: correct: 4129 / 4227; failed: 98; 263min
>> ```
>>
>> Any ideas what I could do next to get more information about the source of the problem, or maybe you see a potential solution at this stage? I would appreciate any further help.
>>
>> Best
>> Bartosz
>>
>> On Friday, 11 October 2024 at 14:30:25 UTC+2, Frederick Stein wrote:
>>
>>> Dear Bartosz,
>>> If I am not mistaken, you used 8 OpenMP threads. The tests do not run that efficiently with such a large number of threads. 2 should be sufficient.
>>> The test result suggests that most of the functionality may work, but due to a missing backtrace (or similar information), it is hard to tell why the tests fail. You could also try to run some of the single-node tests to assess the stability of CP2K.
>>> Best,
>>> Frederick
>>>
>>> On Friday, 11 October 2024 at 13:48:42 UTC+2, bartosz mazur wrote:
>>>
>>>> Sorry, forgot attachments.

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/888943c2-f7b4-4de1-9ab1-338e4d786528n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: regtests_ssmp.out Type: application/octet-stream Size: 2947532 bytes Desc: not available URL: From f.stein at hzdr.de Fri Oct 18 15:18:38 2024 From: f.stein at hzdr.de (Frederick Stein) Date: Fri, 18 Oct 2024 08:18:38 -0700 (PDT) Subject: [CP2K-user] [CP2K:20786] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> Message-ID: <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> Please pick one of the failing tests. Then, add the TRACE keyword to the &GLOBAL section and then run the test manually. This increases the size of the output file dramatically (to some million lines). Can you send me the last ~20 lines of the output? bartosz mazur schrieb am Freitag, 18. Oktober 2024 um 17:09:40 UTC+2: > I'm using do_regtests.py script, not make regtesting, but I assume it > makes no difference. As I mentioned in previous message for `--ompthreads > 1` all tests were passed both for ssmp and psmp. For ssmp with > `--ompthreads 2` I observe similar errors as for psmp with the same > setting, I provide example output as attachment. > > Thanks > Bartosz > > pi?tek, 18 pa?dziernika 2024 o 16:24:16 UTC+2 Frederick Stein napisa?(a): > >> Dear Bartosz, >> What happens if you set the number of OpenMP threads to 1 (add >> '--ompthreads 1' to TESTOPTS)? What errors do you observe in case of the >> ssmp? >> Best, >> Frederick >> >> bartosz mazur schrieb am Freitag, 18. Oktober 2024 um 15:37:43 UTC+2: >> >>> Hi Frederick, >>> >>> thanks again for help. 
>>> [...]
>>>
>>> Best
>>> Bartosz
>>>
>>> On Friday, 11 October 2024 at 14:30:25 UTC+2, Frederick Stein wrote:
>>>
>>>> Dear Bartosz,
>>>> If I am not mistaken, you used 8 OpenMP threads. The tests do not run that efficiently with such a large number of threads. 2 should be sufficient.
>>>> The test result suggests that most of the functionality may work, but due to a missing backtrace (or similar information), it is hard to tell why the tests fail. You could also try to run some of the single-node tests to assess the stability of CP2K.
>>>> Best,
>>>> Frederick
>>>>
>>>> On Friday, 11 October 2024 at 13:48:42 UTC+2, bartosz mazur wrote:
>>>>
>>>>> Sorry, forgot attachments.

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From till.hanke.94 at gmail.com  Sat Oct 19 14:57:48 2024
From: till.hanke.94 at gmail.com (Till Hanke)
Date: Sat, 19 Oct 2024 07:57:48 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20787] Error when trying to compile CP2K with spack at DBCSR stage
Message-ID: 

I tried to compile CP2K on my AMD-based Debian machine and ran into an error with the DBCSR compilation. So I downloaded the Ubuntu 24 Docker image and tried it there again, but the same error occurs...
The spec I used is the following (basically just `spack spec cp2k`):

cp2k@2024.1%gcc@13.2.0~cosma~cuda~dlaf~elpa~enable_regtests~ipo+libint~libvori+libxc+mpi~mpi_f08+openmp~pexsi~plumed~pytorch~quip~rocm~sirius~spglib~spla build_system=cmake build_type=Release generator=make lmax=5 patches=10f79df smm=libxsmm arch=linux-ubuntu24.04-zen

These are the errors I get:

-- The C compiler identification is GNU 13.2.0
-- The CXX compiler identification is GNU 13.2.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /opt/spack/lib/spack/env/gcc/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /opt/spack/lib/spack/env/gcc/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The Fortran compiler identification is GNU 13.2.0
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Check for working Fortran compiler: /opt/spack/lib/spack/env/gcc/gfortran - skipped
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP_Fortran: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- Using libxsmm for Small Matrix Multiplication
-- Found PkgConfig: /opt/spack/opt/spack/linux-ubuntu24.04-zen/gcc-13.2.0/pkgconf-2.2.0-dmpw46qgijp2o6byqdcdhswqlk7ju5ql/bin/pkg-config (found version "2.2.0")
-- Checking for module 'libxsmmext-static'
--   Package 'libxsmmext-static' not found
-- Checking for module 'libxsmmext'
--   Found libxsmmext, version 1.17.0
-- Checking for module 'libxsmmf-static'
--   Package 'libxsmmf-static' not found
-- Checking for module 'libxsmmf'
--   Found libxsmmf, version 1.17.0
-- Found BLAS: /opt/spack/opt/spack/linux-ubuntu24.04-zen/aocc-4.2.0/amdblis-4.2-6qw5nomuo7dgchd5tufstqu7w3o7dqy4/lib/libblis.so
-- A library with LAPACK API found.
-- Found Python: /opt/spack/opt/spack/linux-ubuntu24.04-zen/aocc-4.2.0/python-venv-1.0-vfc33uf7gasd3tc2vz5zj7plhy57qdmb/bin/python3.11 (found version "3.11.7") found components: Interpreter
-- Found MPI_C: /opt/spack/opt/spack/linux-ubuntu24.04-zen/gcc-13.2.0/intel-oneapi-mpi-2021.12.1-vykyxbxai6wop6c7fn3nxxt6yh3eklyd/mpi/2021.12/lib/libmpi.so (found version "3.1")
-- Found MPI_CXX: /opt/spack/opt/spack/linux-ubuntu24.04-zen/gcc-13.2.0/intel-oneapi-mpi-2021.12.1-vykyxbxai6wop6c7fn3nxxt6yh3eklyd/mpi/2021.12/lib/libmpicxx.so (found version "3.1")
-- Found MPI_Fortran: /opt/spack/opt/spack/linux-ubuntu24.04-zen/gcc-13.2.0/intel-oneapi-mpi-2021.12.1-vykyxbxai6wop6c7fn3nxxt6yh3eklyd/mpi/2021.12/lib/libmpifort.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1") found components: C CXX Fortran
CMake Error at CMakeLists.txt:194 (message):
  The listed MPI implementation does not provide the required mpi.mod
  interface.  When using the GNU compiler in combination with Intel MPI,
  please use the Intel MPI compiler wrappers.  Check the INSTALL.md for
  more information.
-- Configuring incomplete, errors occurred!

Anybody got an idea what went wrong there? I can't seem to get it working.

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/cc59f59d-6046-413a-9774-6a9cc81920dcn%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
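[Editorial note, not part of the thread: the CMake check that fails here rejects a GCC build against Intel MPI's `mpi.mod`. With spack, one common workaround is to make the MPI provider explicit in the spec, so that either a GCC-built OpenMPI satisfies the dependency or the GCC-compatible Intel MPI wrappers are used. A hedged sketch — the `^` dependency syntax is standard spack, but the exact variants depend on your site configuration:]

```
# Option 1: avoid Intel MPI entirely; let a GCC-built OpenMPI provide MPI
spack install cp2k@2024.1 %gcc@13.2.0 +openmp ^openmpi

# Option 2: keep Intel MPI, but verify the wrappers actually target gfortran
spack load intel-oneapi-mpi
mpif90 -show    # should report gfortran for the GCC-compatible wrappers
```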
URL: 

From judithsoltis972 at gmail.com  Sun Oct 20 12:30:57 2024
From: judithsoltis972 at gmail.com (Judith Soltis)
Date: Sun, 20 Oct 2024 05:30:57 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20787] Regarding plotting minimum energy path
Message-ID: <381bae7d-eca9-496b-b283-662cd7023ba8n@googlegroups.com>

I am using this command:

graph.psmp -ndim 2 -ndw 1 2 -file 1a-1.restart -find-path -point-a 5.471915 2.427548 -point-b 1.908504 5.594854 -cp2k

and in the output file mep.data I am getting:

1 5.5948539999999998 1.9085040000000000 -5.9392963086707329E-002
2 4.7411611158759808 1.9057636230091421 -5.5036109794596957E-002
3 3.8968492494178588 1.9153738548617325 -5.5144538726598288E-002
4 3.0444018318726567 2.1485266417287221 -4.3596831313924507E-002
5 2.6419646575578524 2.9355953851599383 -2.9634281260401407E-002
6 2.4919962548744730 3.7698386040636693 -2.4690964073578289E-002
7 2.4951555815573037 4.6211480722870553 -2.9169339472993380E-002
8 2.4275479999999998 5.4719150000000001 -3.0136963798356273E-002

If we look at the 1st and 8th lines, their x and y coordinates are exchanged relative to point-a and point-b. I want to ask: does the mep.data file give the x and y coordinates in reverse order?

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/381bae7d-eca9-496b-b283-662cd7023ba8n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
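[Editorial note, not part of the thread: whichever way graph.psmp orders the path, it is easy to check the endpoints and reorient the data before plotting. A minimal Python sketch — the column layout (index, two collective variables, value) is taken from the rows quoted above; anything beyond that is an assumption:]

```python
def load_mep(lines):
    """Parse mep.data-style rows: index, cv1, cv2, value."""
    path = []
    for line in lines:
        fields = line.split()
        if len(fields) == 4:
            idx, x, y, f = int(fields[0]), *map(float, fields[1:])
            path.append((idx, x, y, f))
    return path

def oriented(path, point_a, tol=1e-3):
    """Return the path ordered so that it starts at point_a.

    In the output quoted above, the first row matches point-b rather than
    point-a, so reversing restores the a -> b direction for plotting.
    """
    first = path[0]
    if abs(first[1] - point_a[0]) < tol and abs(first[2] - point_a[1]) < tol:
        return path
    return list(reversed(path))
```

Used as `oriented(load_mep(open("mep.data")), (5.471915, 2.427548))`, this leaves a correctly ordered path untouched and reverses a swapped one.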
URL: 

From bamaz.97 at gmail.com  Sun Oct 20 14:47:14 2024
From: bamaz.97 at gmail.com (bartosz mazur)
Date: Sun, 20 Oct 2024 07:47:14 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20789] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types
In-Reply-To: <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com>
References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com>
Message-ID: <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com>

The error is:

```
LIBXSMM_VERSION: develop-1.17-3834 (25693946)
CLX/DP    TRY  JIT  STA  COL
 0..13      2    2    0    0
14..23      0    0    0    0
24..64      0    0    0    0
Registry and code: 13 MB + 16 KB (gemm=2)
Command (PID=2607388): /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i H2O-9.inp -o H2O-9.out
Uptime: 5.288243 s

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 0 PID 2607388 RUNNING AT r21c01b10
=   KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 1 PID 2607389 RUNNING AT r21c01b10
=   KILLED BY SIGNAL: 9 (Killed)
===================================================================================
```

and the last 20 lines:

```
000000:000002<< 13  76 pw_copy            0.001  Hostmem: 693 MB GPUmem: 0 MB
000000:000002>> 13  19 pw_derive          start  Hostmem: 693 MB GPUmem: 0 MB
000000:000002<< 13  19 pw_derive          0.002  Hostmem: 693 MB GPUmem: 0 MB
000000:000002>> 13 168 pw_pool_create_pw  start  Hostmem: 693 MB GPUmem: 0 MB
000000:000002>> 14  97 pw_create_c1d      start  Hostmem: 693 MB GPUmem: 0 MB
000000:000002<< 14  97 pw_create_c1d      0.000  Hostmem: 693 MB GPUmem: 0 MB
000000:000002<< 13 168 pw_pool_create_pw  0.000  Hostmem: 693 MB GPUmem: 0 MB
000000:000002>> 13  77 pw_copy            start  Hostmem: 693 MB GPUmem: 0 MB
000000:000002<< 13  77 pw_copy            0.001  Hostmem: 693 MB GPUmem: 0 MB
000000:000002>> 13  20 pw_derive          start  Hostmem: 693 MB GPUmem: 0 MB
```

Thanks!

On Friday, 18 October 2024 at 17:18:39 UTC+2, Frederick Stein wrote:

> Please pick one of the failing tests. Then, add the TRACE keyword to the &GLOBAL section and then run the test manually. This increases the size of the output file dramatically (to some million lines). Can you send me the last ~20 lines of the output?
>
> On Friday, 18 October 2024 at 17:09:40 UTC+2, bartosz mazur wrote:
>
>> I'm using the do_regtests.py script, not make regtesting, but I assume it makes no difference. As I mentioned in my previous message, all tests passed with `--ompthreads 1`, both for ssmp and psmp. For ssmp with `--ompthreads 2` I observe errors similar to those for psmp with the same setting; I provide an example output as an attachment.
>>
>> Thanks
>> Bartosz
>>
>> On Friday, 18 October 2024 at 16:24:16 UTC+2, Frederick Stein wrote:
>>
>>> Dear Bartosz,
>>> What happens if you set the number of OpenMP threads to 1 (add '--ompthreads 1' to TESTOPTS)? What errors do you observe in case of the ssmp?
>>> Best,
>>> Frederick
>>>
>>> On Friday, 18 October 2024 at 15:37:43 UTC+2, bartosz mazur wrote:
>>>
>>>> Hi Frederick,
>>>>
>>>> thanks again for help. So I have tested different simulation variants and I know that the problem occurs when using OMP. For MPI calculations without OMP all tests pass.
>>>> [...]
>>>> >>>> Best >>>> Bartosz >>>> >>>> >>>> pi?tek, 11 pa?dziernika 2024 o 14:30:25 UTC+2 Frederick Stein >>>> napisa?(a): >>>> >>>>> Dear Bartosz, >>>>> If I am not mistaken, you used 8 OpenMP threads. The test do not run >>>>> that efficiently with such a large number of threads. 2 should be >>>>> sufficient. >>>>> The test result suggests that most of the functionality may work but >>>>> due to a missing backtrace (or similar information), it is hard to tell why >>>>> they fail. You could also try to run some of the single-node tests to >>>>> assess the stability of CP2K. >>>>> Best, >>>>> Frederick >>>>> >>>>> bartosz mazur schrieb am Freitag, 11. Oktober 2024 um 13:48:42 UTC+2: >>>>> >>>>>> Sorry, forgot attachments. >>>>>> >>>>>> -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/463cb4b0-c840-4e7d-9bca-09f007a69925n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shivam-gupta at pharmafoods.co.jp Mon Oct 21 00:58:48 2024 From: shivam-gupta at pharmafoods.co.jp (Shivam Gupta) Date: Sun, 20 Oct 2024 17:58:48 -0700 (PDT) Subject: [CP2K-user] [CP2K:20789] Re: start with CP2k In-Reply-To: References: Message-ID: Hello everyone, No reply in this question so far :( :( I have experience running molecular dynamics (MD) simulations using GROMACS, and now I?m looking to integrate these results with CP2K for more advanced analyses. However, I?m new to CP2K and would appreciate any guidance on how to get started. Any tutorial/material link for beginners. Thank you for your help! On Monday, October 7, 2024 at 5:31:32?PM UTC+9 Shivam Gupta wrote: > Hello everyone, > > I'm new to CP2K and would like to use it to study antigen-antibody > interactions. 
As I'm unsure how to begin, could anyone recommend a tutorial > or beginner-friendly protocol to help me get started? > > Thank you! > -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/edc43e7d-5dfa-4416-9dc4-bd8a2563a532n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From f.stein at hzdr.de Mon Oct 21 06:58:33 2024 From: f.stein at hzdr.de (Frederick Stein) Date: Sun, 20 Oct 2024 23:58:33 -0700 (PDT) Subject: [CP2K-user] [CP2K:20791] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> Message-ID: <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> Dear Bartosz, I have no idea about the issue with LibXSMM. Regarding the trace, I do not know either as there is not much that could break in pw_derive (it just performs multiplications) and the sequence of operations is to unspecific. It may be that the code actually breaks somewhere else. Can you do the same with the ssmp and post the last 100 lines? This way, we remove the asynchronicity issues for backtraces with the psmp version. Best, Frederick bartosz mazur schrieb am Sonntag, 20. 
Oktober 2024 um 16:47:15 UTC+2: > The error is: > > ``` > LIBXSMM_VERSION: develop-1.17-3834 (25693946) > CLX/DP TRY JIT STA COL > 0..13 2 2 0 0 > 14..23 0 0 0 0 > > 24..64 0 0 0 0 > Registry and code: 13 MB + 16 KB (gemm=2) > Command (PID=2607388): > /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i > H2O-9.inp -o H2O-9.out > Uptime: 5.288243 s > > > > =================================================================================== > = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES > = RANK 0 PID 2607388 RUNNING AT r21c01b10 > > = KILLED BY SIGNAL: 11 (Segmentation fault) > > =================================================================================== > > > =================================================================================== > = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES > = RANK 1 PID 2607389 RUNNING AT r21c01b10 > = KILLED BY SIGNAL: 9 (Killed) > > =================================================================================== > ``` > > and the last 20 lines: > > ``` > 000000:000002<< 13 76 pw_copy > 0.001 > Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 13 19 pw_derive > star > t Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 13 19 pw_derive > 0.00 > 2 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 13 168 > pw_pool_create_pw > start Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 14 97 > pw_create_c1d > start Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 14 97 > pw_create_c1d > 0.000 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 13 168 > pw_pool_create_pw > 0.000 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 13 77 pw_copy > start > Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 13 77 pw_copy > 0.001 > Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 13 20 pw_derive > star > t Hostmem: 693 MB GPUmem: 0 MB > ``` > > Thanks! > pi?tek, 18 pa?dziernika 2024 o 17:18:39 UTC+2 Frederick Stein napisa?(a): > >> Please pick one of the failing tests. 
Then, add the TRACE keyword to the >> &GLOBAL section and then run the test manually. This increases the size of >> the output file dramatically (to some million lines). Can you send me the >> last ~20 lines of the output? >> bartosz mazur schrieb am Freitag, 18. Oktober 2024 um 17:09:40 UTC+2: >> >>> I'm using do_regtests.py script, not make regtesting, but I assume it >>> makes no difference. As I mentioned in previous message for `--ompthreads >>> 1` all tests were passed both for ssmp and psmp. For ssmp with >>> `--ompthreads 2` I observe similar errors as for psmp with the same >>> setting, I provide example output as attachment. >>> >>> Thanks >>> Bartosz >>> >>> pi?tek, 18 pa?dziernika 2024 o 16:24:16 UTC+2 Frederick Stein napisa?(a): >>> >>>> Dear Bartosz, >>>> What happens if you set the number of OpenMP threads to 1 (add >>>> '--ompthreads 1' to TESTOPTS)? What errors do you observe in case of the >>>> ssmp? >>>> Best, >>>> Frederick >>>> >>>> bartosz mazur schrieb am Freitag, 18. Oktober 2024 um 15:37:43 UTC+2: >>>> >>>>> Hi Frederick, >>>>> >>>>> thanks again for help. So I have tested different simulation variants >>>>> and I know that the problem occurs when using OMP. For MPI calculations >>>>> without OMP all tests pass. I have also tested the effect of the `OMP_PROC_BIND` >>>>> and `OMP_PLACES` parameters and apart from the effect on simulation >>>>> time, they have no significant effect on the presence of errors. 
Below are >>>>> the results for ssmp: >>>>> >>>>> ``` >>>>> OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time >>>>> spread, threads, 3850, 4144, 4, 290, 186min >>>>> spread, cores, 3831, 4144, 3, 310, 183min >>>>> spread, sockets, 3864, 4144, 3, 277, 104min >>>>> close, threads, 3879, 4144, 3, 262, 171min >>>>> close, cores, 3854, 4144, 0, 290, 168min >>>>> close, sockets, 3865, 4144, 3, 276, 104min >>>>> master, threads, 4121, 4144, 0, 23, 1002min >>>>> master, cores, 4121, 4144, 0, 23, 986min >>>>> master, sockets, 3942, 4144, 3, 199, 219min >>>>> false, threads, 3918, 4144, 0, 226, 178min >>>>> false, cores, 3919, 4144, 3, 222, 176min >>>>> false, sockets, 3856, 4144, 4, 284, 104min >>>>> ``` >>>>> >>>>> and psmp: >>>>> >>>>> ``` >>>>> OMP_PROC_BIND, OMP_PLACES, results >>>>> spread, threads, Summary: correct: 4097 / 4227; failed: 130; 495min >>>>> spread, cores, 26 / 362 >>>>> spread, cores, 26 / 362 >>>>> close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min >>>>> close, cores, 60 / 362 >>>>> close, sockets, 13 / 362 >>>>> master, threads, 13 / 362 >>>>> master, cores, 79 / 362 >>>>> master, sockets, Summary: correct: 4153 / 4227; failed: 74; 563min >>>>> false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min >>>>> false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min >>>>> false, sockets, 96 / 362 >>>>> not specified, not specified, Summary: correct: 4129 / 4227; failed: >>>>> 98; 263min >>>>> ``` >>>>> >>>>> Any ideas what I could do next to have more information about the >>>>> source of the problem or maybe you see a potential solution at this stage? >>>>> I would appreciate any further help. >>>>> >>>>> Best >>>>> Bartosz >>>>> >>>>> >>>>> pi?tek, 11 pa?dziernika 2024 o 14:30:25 UTC+2 Frederick Stein >>>>> napisa?(a): >>>>> >>>>>> Dear Bartosz, >>>>>> If I am not mistaken, you used 8 OpenMP threads. The test do not run >>>>>> that efficiently with such a large number of threads. 2 should be >>>>>> sufficient. 
>>>>>> The test result suggests that most of the functionality may work, but due to a missing backtrace (or similar information), it is hard to tell why the tests fail. You could also try to run some of the single-node tests to assess the stability of CP2K.
>>>>>> Best,
>>>>>> Frederick
>>>>>>
>>>>>> On Friday, 11 October 2024 at 13:48:42 UTC+2, bartosz mazur wrote:
>>>>>>
>>>>>>> Sorry, forgot attachments.

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hutter at chem.uzh.ch  Mon Oct 21 07:45:20 2024
From: hutter at chem.uzh.ch (Jürg Hutter)
Date: Mon, 21 Oct 2024 07:45:20 +0000
Subject: [CP2K-user] [CP2K:20791] OVERLAP_DELTAT
In-Reply-To: 
References: 
Message-ID: 

Hi

this keyword/method is intended to be used with the NewtonX software/interface.

regards
JH

________________________________________
From: cp2k at googlegroups.com on behalf of T deJ
Sent: Friday, October 18, 2024 2:52 PM
To: cp2k
Subject: [CP2K:20780] OVERLAP_DELTAT

Dear all,

I would like to inquire about the function of OVERLAP_DELTAT. It appears to disable most calculation steps. Am I correct in assuming that OVERLAP_DELTAT is meant to be used by a file-IO-based external code that feeds CP2K the two consecutive geometries in one coordinate file?

Best regards,
Tjeerd

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/a3cac84f-6bfb-413e-9e93-d60419ee28dbn%40googlegroups.com.

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/ZR0P278MB07594838DB77352B19D076329F432%40ZR0P278MB0759.CHEP278.PROD.OUTLOOK.COM.

From bramvdlinden98 at gmail.com  Mon Oct 21 08:53:27 2024
From: bramvdlinden98 at gmail.com (Bram Van der Linden)
Date: Mon, 21 Oct 2024 01:53:27 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20793] Floating images in NEB
Message-ID: 

Hi all,

In my NEB simulations I observe significantly increasing distances between images as a function of NEB iterations. I have seen this behavior in different systems. My endpoints were optimized first, and I have already tried increasing and decreasing the spring constant. I have attached a typical .ener file and a file with the settings used.

What could be the root of this problem? Is there a parameter that needs to be changed? Any help would be much appreciated!

With kind regards,
Bram

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/d5fce2a7-6f88-4da1-a05a-d29e18844fa3n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: task_NEB-1.ener
Type: application/octet-stream
Size: 27904 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: task_NEB.cpki Type: application/octet-stream Size: 3211 bytes Desc: not available URL: From pototschnig.johann at gmail.com Mon Oct 21 09:45:45 2024 From: pototschnig.johann at gmail.com (Johann Pototschnig) Date: Mon, 21 Oct 2024 02:45:45 -0700 (PDT) Subject: [CP2K-user] [CP2K:20794] Re: Error when trying to compile CP2K with spack at DBCSR stage In-Reply-To: References: Message-ID: It seems the GNU compilers are mixed with Intel MPI compilers and they are not compatible. Do you know how spack selects the MPI? On Saturday, October 19, 2024 at 4:57:48 PM UTC+2 Till Hanke wrote: > I tried to compile cp2k on my AMD-based Debian machine and ran into an > error with the dbcsr compilation. So I downloaded the Ubuntu 24 Docker image > and tried it there again, but the same error occurs... The spec I used is > the following: (basically just spack spec cp2k) > > cp2k at 2024.1%g... at 13.2.0~cosma~cuda~dlaf~elpa~enable_regtests~ipo+libint~libvori+libxc+mpi~mpi_f08+openmp~pexsi~plumed~pytorch~quip~rocm~sirius~spglib~spla > build_system=cmake build_type=Release generator=make lmax=5 patches=10f79df > smm=libxsmm arch=linux-ubuntu24.04-zen > > These are the errors I get: > -- The C compiler identification is GNU 13.2.0 > -- The CXX compiler identification is GNU 13.2.0 > -- Detecting C compiler ABI info > -- Detecting C compiler ABI info - done > -- Check for working C compiler: /opt/spack/lib/spack/env/gcc/gcc - skipped > -- Detecting C compile features > -- Detecting C compile features - done > -- Detecting CXX compiler ABI info > -- Detecting CXX compiler ABI info - done > -- Check for working CXX compiler: /opt/spack/lib/spack/env/gcc/g++ - > skipped > -- Detecting CXX compile features > -- Detecting CXX compile features - done > -- The Fortran compiler identification is GNU 13.2.0 > -- Detecting Fortran compiler ABI info > -- Detecting Fortran compiler ABI info - done > -- Check for working Fortran compiler: > /opt/spack/lib/spack/env/gcc/gfortran - skipped > -- 
Found OpenMP_C: -fopenmp (found version "4.5") > -- Found OpenMP_CXX: -fopenmp (found version "4.5") > -- Found OpenMP_Fortran: -fopenmp (found version "4.5") > -- Found OpenMP: TRUE (found version "4.5") > -- Using libxsmm for Small Matrix Multiplication > -- Found PkgConfig: > /opt/spack/opt/spack/linux-ubuntu24.04-zen/gcc-13.2.0/pkgconf-2.2.0-dmpw46qgijp2o6byqdcdhswqlk7ju5ql/bin/pkg-config > (found version "2.2.0") > -- Checking for module 'libxsmmext-static' > -- Package 'libxsmmext-static' not found > -- Checking for module 'libxsmmext' > -- Found libxsmmext, version 1.17.0 > -- Checking for module 'libxsmmf-static' > -- Package 'libxsmmf-static' not found > -- Checking for module 'libxsmmf' > -- Found libxsmmf, version 1.17.0 > -- Found BLAS: > /opt/spack/opt/spack/linux-ubuntu24.04-zen/aocc-4.2.0/amdblis-4.2-6qw5nomuo7dgchd5tufstqu7w3o7dqy4/lib/libblis.so > > -- A library with LAPACK API found. > -- Found Python: > /opt/spack/opt/spack/linux-ubuntu24.04-zen/aocc-4.2.0/python-venv-1.0-vfc33uf7gasd3tc2vz5zj7plhy57qdmb/bin/python3.11 > (found version "3.11.7") found components: Interpreter > -- Found MPI_C: > /opt/spack/opt/spack/linux-ubuntu24.04-zen/gcc-13.2.0/intel-oneapi-mpi-2021.12.1-vykyxbxai6wop6c7fn3nxxt6yh3eklyd/mpi/2021.12/lib/libmpi.so > (found version "3.1") > -- Found MPI_CXX: > /opt/spack/opt/spack/linux-ubuntu24.04-zen/gcc-13.2.0/intel-oneapi-mpi-2021.12.1-vykyxbxai6wop6c7fn3nxxt6yh3eklyd/mpi/2021.12/lib/libmpicxx.so > (found version "3.1") > -- Found MPI_Fortran: > /opt/spack/opt/spack/linux-ubuntu24.04-zen/gcc-13.2.0/intel-oneapi-mpi-2021.12.1-vykyxbxai6wop6c7fn3nxxt6yh3eklyd/mpi/2021.12/lib/libmpifort.so > (found version "3.1") > -- Found MPI: TRUE (found version "3.1") found components: C CXX Fortran > CMake Error at CMakeLists.txt:194 (message): > The listed MPI implementation does not provide the required mpi.mod > interface. When using the GNU compiler in combination with Intel MPI, > please use the Intel MPI compiler wrappers. 
Check the INSTALL.md for > more > information. > > > -- Configuring incomplete, errors occurred! > > Anybody got an idea what went wrong there? I can't seem to get it working. > -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/b8d19621-e11a-463e-b993-a2da70461b8bn%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: 
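[Editor's note on the thread above: spack normally lets the concretizer pick whichever virtual `mpi` provider it prefers, which is how GCC ended up paired with intel-oneapi-mpi here. A minimal sketch of how to pin the provider instead — `openmpi` is used only as an example of an MPI whose `mpi.mod` is built with the same GNU compilers; `mpich` would work the same way:]

```shell
# Sketch, assuming a standard spack setup: constrain the spec so the
# concretizer cannot pair %gcc with intel-oneapi-mpi.
spack spec cp2k %gcc ^openmpi

# Or record the preference once in ~/.spack/packages.yaml:
#   packages:
#     all:
#       providers:
#         mpi: [openmpi, mpich]
# after which a plain install picks a compatible MPI:
spack install cp2k %gcc
```

Re-running `spack spec` first shows which MPI the concretizer now selects before committing to a full rebuild.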
????? ????? ????? ?????? ??? ????? ????? ????? ????? ????? ????? ??? ????? ????? ????? ?????? ????? ????? ????? ????? ???? ????? ????? ???? ????? ????? ???? ???? ????? ??????? ???? ??????? ????? ??????? ???? ??????? ??????? ????? ??????? ???? ??????? ?? ????? ????? ????? ??????? ???? ??????? ??????? ?? ????? ????? ????? ??????? ???? ??????? ??????? ?? ????? ?????? ????? ??????? ???? ??????? ?????? ????? ????? ??????? ???? ??????? #?????_???????_????_??????? ????? ??????? ???? #???????_??????? #?????_???????_????_???????_??_?????_????? ????? ??????? ???? ??????? ??????? ?? ????? ????? ????? ??????? ???? ??????? ??????? ?? ????? ?????? ????? ??????? ???? ??????? ?????? ????? ????? ??????? ???? ??????? #??????? #??????? #????_???????_??????? ???? #??????? #??????? ???????? ???? ??????? ??????? ???????? ???? ??????? ?????? ???????? #?????_???????_????_??????? ????? ??????? ???? #???????_??????? #?????_???????_????_???????_??_?????_????? ????? ??????? ???? ??????? ??????? ?? ????? ????? ????? ??????? ???? ??????? ??????? ?? ????? ?????? ????? ??????? ???? ??????? ?????? ????? ????? ??????? ???? ??????? #??????? #??????? #????_???????_??????? ???? #??????? #??????? ???????? ???? ??????? ??????? ???????? ???? ??????? ?????? ???????? ???? ??????? ??????? #????? ???? ??????? ??????? #??? ????? ??? #????_???????_???????_???????? ??? ???? ??????? ??????? ??????? #???????? #??? #???????? #???????_ #???? _ #?????_ #?????_ #?????_ #???????_ #????_ #??????? #????? #??? #?????? #????? #??????? #??????? _ #??????_????? #???????? #????????_ #???? #?????? #??? #?? #??? #???????? #?? #???? #????????? #?????? #???????_ #???_ #?????_ #??_ #?????_ #???_ #?????? #??? #?????_ #??????_ #????_ #??????_ #?????? #?????? #?????????? #??????? #??????? #???????? #??????? #?????? #????? #?????? #???? #???? #???? #???? #??? #???? #??? #???? #??? #??????? #????? #??????? #?? #??????? #?? #???? #???? #?????? #????????? #?????????? #?????? #??????? #????? #???? #????? #??????? #???????? #????? 
#??????? #cytotec #???? #???? #???? #??? #?????? #??????? #???? #??? #???? #????? #????? #???? #???? #???? #?????? #??? #??????? #?????? #??? #?????? #????? #?????? #?? #???? #???? #????? #????? #???? #????? #??? #????? #????????? ?????????? #???? #???? #???? #???? #???? #??????? #?????? #????? #?? #????? #???? #????? #???? #???? #?????? #????? #?????? #????? #????? #????? #???? #??????? #????? #???? #?????? #?????? #??????? #????? #?????? #??????? #????? #??? #????? #???? #?????? #???? #?????? #????? #??? #?????? #??????? #?????? #????? #?????? #???? #???? #???? #?????? #??? #???? #????? #?????? #?????? #?? #????? #???? #???? #?????? #????? #??? #???????? #???? Misoprostol mifepristone ??????????? ??????????? ?????? ?????? Cytotec ??????? ???? ???? ???????? ??????? ????? ????? ????? ????? ????? ????? ?????? ???? ???? ????? ??????? ???? ??? ???? ????? ?????? ???? ?????? ?????? ??????? ??? ???? ?????? ???? ???? ?????? ????? ?????? ??????? ??? ???? ??? ???? ??????? ??????? ??????? ????? ????? ??? ??????? ???? ????? ??? ??????? ??????? ???? ????? ???? ??? ?????? ??????? ?? ???????? ???????????????? %30 - ??? ??? ??????? ? ????? ?? ???????? ????????? ? Cytotec in KSA-UAE - ???? ?????? ???? ?????? ????? ???? ???? ??????? ?? ???? ?????? ??? ???? ??????? ?? ???? ??????? ???? ??????? ??? ????? ?????? ??? ????? ???? ???????? ??? ???? ??? ????? ????? ????? ???? ???? ??? ??? ???????? ?? ???? ????? ?????? ?? ????? ??????? ?? ?? ???? ??????? ?????? ?????? ??? ??? ????? ??? ???????? ??? ???? ???? ???????? ??? ???? ??????? ??? ????? ???? ???????? ?? ???? ??????? ???? ?? ?????????? ?? ???? ???? ??????? ????? ???? ??????? ?? ????????? ????? ???? ??????? ?? ????? ????? ??????? ?? ???? ??????? ???? ???? ??????? ???? ????? ???? ??????? ??? ?????? ????? ?????? ???? ???? ??????? ?? ???????? ??? ???? ???? ????? ?? ?????????? ?????? ???? ?????? ?????? ????? ??? ??????? ?? ???? ???? ???? ??????? ?? ????????? ????? ????? ????? ??? ????? ???? ??????? ?? ????? ????? ?? ??? ?????? ???????? 
???? ??????? ?? ????????? ?? ?? ?????? ???? ???? ??????? ?? ???? ??????? ???? ??????? ??? ????? ?????? ??? ????? ???? ???????? ??? ???? ??? ????? ????? ????? ???? ???? ??? ??? ???????? ?? ???? ????? ?????? ?? ????? ??????? ?? ?? ???? ??????? ?????? ?????? ??? ??? ????? ??? ???????? ??? ???? ???? ???????? ??? ???? ??????? ??? ????? ???? ???????? ?? ???? ??????? ???? ?? ?????????? ?? ???? ???? ??????? ????? ???? ??????? ?? ????????? ????? ???? ??????? ?? ????? ????? ??????? ?? ???? ??????? ???? ???? ??????? ???? ????? ???? ??????? ??? ?????? ????? ?????? ???? ???? ??????? ?? ???????? ??? ???? ???? ????? ?? ?????????? ?????? ???? ?????? ?????? ????? ??? ??????? ?? ???? ???? ???? ??????? ?? ????????? ????? ????? ????? ??? ????? ???? ??????? ?? ????? ????? Misoprostol mifepristone ??????????? ??????????? ?????? ?????? Cytotec ??????? ???? ???? ???????? ??????? ????? ????? ????? ????? ????? ????? ?????? ???? ???? ????? ??????? ???? ??? ???? ????? ?????? ???? ?????? ?????? ??????? ??? ???? ?????? ???? ???? ?????? ????? ?????? ??????? ??? ???? ??? ???? ??????? ??????? ??????? ????? ????? ??? ??????? ???? ????? ??? ??????? ??????? ???? ????? ???? ?????? ???? ?????? ????? ??? ??????? ?????? ?????? ??? ?????? ??? ?? ??? ?????? ??? ??? ????? ???. ??? ?? ???? ??? ??????? ?????? ???????? ?? ???????? ??? ??????? ??????? ???????? ???? ????????. ??? ?????? ?? ?????? ????? ????? ?????? ????? ????? ??? ??????. ???????? ??? ?????? ????????? ????? ???????? ????? ??? ????????? ????? ??????. ????? ???????? ????? ?????? ??????? ???????? ?? ?????? ???? ????? ????? ???????. ??? ??? ?????? ?? ??????? ???????? ??? ?????? ?????? ???? ??????? ?? ?????? ????? ??????? ?? ?? ??????? ?????? ??? ??????? ???????? ?????????? ???????. ??? ?? ??????? ??????? ??? ???????? ?????? ??????? ?? ???????? ??????? ???????. ??????? ?? ????? ?? ???? ?????? ?? ????? ??????? ?????. ??? ????? ??? ???? ????? ???? ?????????????? ???? ???? ??? ????? ????? ????? ????? ?????? ???? ????? ?????? ??????? ?? ?????. ?????? 
??????? ?????? ????? ???????? ?? ???? ???????? ????? ?? ??? ??????? ?? ??? ???? ?????? ???????? ?????? ??????? ?????. ???? ??????? ??? ???? ??? ????? ???????? ?? ?????? ?????? ?????? ??? ????????? ??????? ???????? ???????. ??? ??? ???? ?? ??????? ?? ????????? ??? ???? ??????? ???????? ???? "???????" ????? ????? ?? ????? ??????? ?????? ?????. ??? ???? ??? ?? ???? ??? ??? ??? ??????? ??????? ?????? ??? ?????? ?? ???????? ??????? ???????. ?????? ??? ???? ???????? ??? ???????? ??????? ?? ????? ??????? ?????? ???????? ????????? ?? ???????? ??????? ????. ????? ??????? ??????? ?????? ????? ?????? ??????? ?????? ?????? ??????? ?? ????? ??????. ??? ?? ???? ?? ????? ??? ????? ?? ?????? ?? ?????? ????? ?????? ?????? ????? ???????? ?????? ???? ??? ????? ?????. ???? ????? ????? ??????? ???? ?????? ??? ??????? ???????? ??? "??? ???? ?????? ??? ???? ??????? ??????? ?? ???????? ??????" ?? ???????? ??????? ???????? ????? ??????? ???? ??????? ???? ????? ??????? ??????. ???? ????? ?? ?? ???? "???????" ????? ??????? ?? ??? ?????? ??? ???? ??? ??????? ?????? ?? ???????? ??????? ???????. ??????? ???? "???????" ??????? ??? ??? ????? ??????? ???? ?????? ????? ?? ???????? ??? ????? ????? ???? ???????? ??????? ???????. ?????? ??????? ?????????? ?????? ?? ??? ???? ?????? ???? ??????? ?????? ?????? ?????. ??? ???? ??? ?? ???? ??? ?? ????? ?? ???? ??????? ????? ?? ???????? ??? ?????? ????? ??????? ???????? ?????????. ?? ??? ????? ???? ??? ????? ??? ????? ???? ???????? ????????? ?????? ???????? ?????????? ?? ???????? ?????? ??? ??????? ??????. ?? ???? ??????? ????? ????? ???? ??????? ?? ????????? ???? ???? ????? ???? ?????? ?????? ???? ??????: ???? ??????? ??????: ????? ????? ?????? ??????? ?? ????????: ???? ?????? ????? ??? ?????? ?? ????? ?????? ?????????? ?????? ??? ??? ??????. ????? ??????? ?????? ????????: ???? ????????? ?????? ??????? ?????? ???????? ?????? ??? ??????? ?????? ????????. ???????? ?????????: ????? ??????? ?????????: ???? ????? ????? ??????? ?? ???????? ???? ????? ?????? 
??????? ?????? ?????? ???????? ??? ?? ??? ????? ????? ????????. ?? ??????? ????? ?? ???????? ???????? ????? ?????? ????? ???? ?????? ???????? ?????????. ???? ???? ???? ???????? ??????? ?????? ?????????? ??????? ?? ???????? ??????? ??????? ?????? ??? ??? ?????? ???? ????? ???????? ?????? ?????????? ???????? ????????. ???? ?? ???? ????? ????? ??????? ??????????? ?? ???? ?????? ?????? ????? ??????. ??? ?????? ????? ????? ????? ????? ?????? ????? ???????? ??? ???? ?????? ????? ?? ?????? ?? ???????? ??? ?? ??? ???????? ??????? ???????. ??????? ?? ??? ????? ????? ?? ???? ?????????????? ??????? ??? ?????? ???? ?????? ????? ?????? ????? ???????. ???? ???? ?????? ??? ?????? ??? ????? ???? ????? ??????? ??????? ?????? ???????? ????? ???? ????? ????????? ??????. ?? ??????? ?? ??????? ?? ????? ??? ????? ?? ???? ?????? ?????. ???? ?????? ????? ????????? ??????? ?????? ????? ?????? ?????? ???????? ?? ??????? ???? ????????? ?????? ??????. ???? ??? ?? ??? ????? ?? ???? ??????? ??? ?????? ??? ?????? ?? ???? ?????? ??? ???? ?????? ?? ???????? ??????? ??? ????? ?? ???? ?????? ?????. ???? ??? ?????? ??????? ?? ???????? ???????????????? %30 - ??? ??? ??????? ? ????? ?? ???????? ????????? ? Cytotec in KSA-UAE - ????? ?? ?????? ???? ????? ??????? ??????? ???? ?????? ??? ?? ???? ?????? ??????? ??? ???? ????? ???? ?????? ?????? ??????? ????? ???????. ???????? ????? ???????? ?? ?????? ???? ???? ?? ????? ?????? ???? ?????? ??????? ?? ???? ?????????? ???????. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/5f42a14e-15e7-4c6f-9592-efa1bc5ce66fn%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From nicholaslaws8 at gmail.com  Mon Oct 21 14:14:11 2024
From: nicholaslaws8 at gmail.com (Nicholas Laws)
Date: Mon, 21 Oct 2024 07:14:11 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20794] All-electron Geometry Optimization of EMIBF4
Message-ID: <9ae46c98-afd4-4625-b769-6cf39a3fb5d2n@googlegroups.com>

Hi all,

I am trying to do a geometry optimization for energy minimization of EMIBF4 using the aug-cc-pvtz basis set (attached below) and the WB97X-D XC functional. It seems that my optimization requires hundreds of SCF steps before convergence (as seen in the attached .out file) and I was wondering if there are any recommendations for doing all-electron geometry optimizations, especially for the one I discuss in this post (current implementation can be viewed in the attached .inp file)? Please let me know if there is any additional information that I can clarify.

Thank you, and I look forward to hearing from you.

All my best,
Nick

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/9ae46c98-afd4-4625-b769-6cf39a3fb5d2n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: EMIBF4_Geometry_Optimization.inp
Type: chemical/x-gamess-input
Size: 3478 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: EMIBF4_Geometry_Optimization.out
Type: application/octet-stream
Size: 1604793 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: aug-cc-pvtz.1.cp2k Type: application/octet-stream Size: 9392 bytes Desc: not available URL: From bamaz.97 at gmail.com Mon Oct 21 14:25:38 2024 From: bamaz.97 at gmail.com (bartosz mazur) Date: Mon, 21 Oct 2024 07:25:38 -0700 (PDT) Subject: [CP2K-user] [CP2K:20794] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> Message-ID: <3c0b331a-fe25-4176-8692-ff5d7c466c44n@googlegroups.com> The error for ssmp is: ``` LIBXSMM_VERSION: develop-1.17-3834 (25693946) CLX/DP TRY JIT STA COL 0..13 4 4 0 0 14..23 0 0 0 0 24..64 0 0 0 0 Registry and code: 13 MB + 32 KB (gemm=4) Command (PID=54845): /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i H2O-9.inp -o H2O-9.out Uptime: 2.861583 s /var/spool/slurmd/r30c01b15/job3120330/slurm_script: line 36: 54845 Segmentation fault (core dumped) /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i H2O-9.inp -o H2O-9.out ``` and the last 100 lines of output: ``` 000000:000001>> 12 20 mp_sum_d start Ho stmem: 380 MB GPUmem: 0 MB 000000:000001<< 12 20 mp_sum_d 0.000 Ho stmem: 380 MB GPUmem: 0 MB 000000:000001<< 11 13 dbcsr_dot_sd 0.000 H ostmem: 380 MB GPUmem: 0 MB 000000:000001<< 10 12 calculate_ptrace_kp 0.0 00 Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 9 6 evaluate_core_matrix_traces 0.000 Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 9 6 rebuild_ks_matrix start Ho 
stmem: 380 MB GPUmem: 0 MB 000000:000001>> 10 6 qs_ks_build_kohn_sham_matrix start Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 11 140 pw_pool_create_pw st art Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 12 79 pw_create_c1d sta rt Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 12 79 pw_create_c1d 0.0 00 Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 11 140 pw_pool_create_pw 0. 000 Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 11 141 pw_pool_create_pw st art Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 12 80 pw_create_c1d sta rt Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 12 80 pw_create_c1d 0.0 00 Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 11 141 pw_pool_create_pw 0. 000 Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 11 61 pw_copy start Hostme m: 380 MB GPUmem: 0 MB 000000:000001<< 11 61 pw_copy 0.004 Hostme m: 380 MB GPUmem: 0 MB 000000:000001>> 11 35 pw_axpy start Hostme m: 380 MB GPUmem: 0 MB 000000:000001<< 11 35 pw_axpy 0.002 Hostme m: 380 MB GPUmem: 0 MB 000000:000001>> 11 6 pw_poisson_solve sta rt Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 12 6 pw_poisson_rebuild start Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 12 6 pw_poisson_rebuild 0.000 Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 12 142 pw_pool_create_pw start Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 13 81 pw_create_c1d start Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 13 81 pw_create_c1d 0.000 Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 12 142 pw_pool_create_pw 0.000 Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 12 62 pw_copy start Hos tmem: 380 MB GPUmem: 0 MB 000000:000001<< 12 62 pw_copy 0.003 Hos tmem: 380 MB GPUmem: 0 MB 000000:000001>> 12 6 pw_multiply_with start Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 12 6 pw_multiply_with 0.002 Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 12 63 pw_copy start Hos tmem: 380 MB GPUmem: 0 MB 000000:000001<< 12 63 pw_copy 0.003 Hos tmem: 380 MB GPUmem: 0 MB 000000:000001>> 12 6 pw_integral_ab st art Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 12 6 
pw_integral_ab 0. 005 Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 12 7 pw_poisson_set st art Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 13 143 pw_pool_create_pw start Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 14 82 pw_create_c1d start Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 14 82 pw_create_c1d 0.000 Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 13 143 pw_pool_create_pw 0.000 Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 13 64 pw_copy start Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 13 64 pw_copy 0.003 Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 13 16 pw_derive star t Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 13 16 pw_derive 0.00 6 Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 13 144 pw_pool_create_pw start Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 14 83 pw_create_c1d start Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 14 83 pw_create_c1d 0.000 Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 13 144 pw_pool_create_pw 0.000 Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 13 65 pw_copy start Hostmem: 380 MB GPUmem: 0 MB 000000:000001<< 13 65 pw_copy 0.004 Hostmem: 380 MB GPUmem: 0 MB 000000:000001>> 13 17 pw_derive star t Hostmem: 380 MB GPUmem: 0 MB ``` for psmp the last 100 lines is: ``` 000000:000002<< 9 7 evaluate_core_matrix_traces 0.000 Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 9 7 rebuild_ks_matrix start Ho stmem: 693 MB GPUmem: 0 MB 000000:000002>> 10 7 qs_ks_build_kohn_sham_matrix start Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 11 164 pw_pool_create_pw st art Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 12 93 pw_create_c1d sta rt Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 12 93 pw_create_c1d 0.0 00 Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 11 164 pw_pool_create_pw 0. 
000 Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 11 165 pw_pool_create_pw st art Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 12 94 pw_create_c1d sta rt Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 12 94 pw_create_c1d 0.0 00 Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 11 165 pw_pool_create_pw 0. 000 Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 11 73 pw_copy start Hostme m: 693 MB GPUmem: 0 MB 000000:000002<< 11 73 pw_copy 0.001 Hostme m: 693 MB GPUmem: 0 MB 000000:000002>> 11 41 pw_axpy start Hostme m: 693 MB GPUmem: 0 MB 000000:000002<< 11 41 pw_axpy 0.001 Hostme m: 693 MB GPUmem: 0 MB 000000:000002>> 11 52 mp_sum_d start Hostm em: 693 MB GPUmem: 0 MB 000000:000002<< 11 52 mp_sum_d 0.000 Hostm em: 693 MB GPUmem: 0 MB 000000:000002>> 11 7 pw_poisson_solve sta rt Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 12 7 pw_poisson_rebuild start Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 12 7 pw_poisson_rebuild 0.000 Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 12 166 pw_pool_create_pw start Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 13 95 pw_create_c1d start Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 13 95 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 12 166 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 12 74 pw_copy start Hos tmem: 693 MB GPUmem: 0 MB 000000:000002<< 12 74 pw_copy 0.001 Hos tmem: 693 MB GPUmem: 0 MB 000000:000002>> 12 7 pw_multiply_with start Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 12 7 pw_multiply_with 0.001 Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 12 75 pw_copy start Hos tmem: 693 MB GPUmem: 0 MB 000000:000002<< 12 75 pw_copy 0.001 Hos tmem: 693 MB GPUmem: 0 MB 000000:000002>> 12 7 pw_integral_ab st art Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 13 53 mp_sum_d start Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 13 53 mp_sum_d 0.000 Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 12 7 pw_integral_ab 0. 
003 Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 12 8 pw_poisson_set st art Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 13 167 pw_pool_create_pw start Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 14 96 pw_create_c1d start Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 14 96 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 13 167 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 13 76 pw_copy start Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 13 76 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 13 19 pw_derive star t Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 13 19 pw_derive 0.00 2 Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 13 168 pw_pool_create_pw start Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 14 97 pw_create_c1d start Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 14 97 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 13 168 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 13 77 pw_copy start Hostmem: 693 MB GPUmem: 0 MB 000000:000002<< 13 77 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB 000000:000002>> 13 20 pw_derive star t Hostmem: 693 MB GPUmem: 0 MB ``` Thanks Bartosz poniedzia?ek, 21 pa?dziernika 2024 o 08:58:34 UTC+2 Frederick Stein napisa?(a): > Dear Bartosz, > I have no idea about the issue with LibXSMM. > Regarding the trace, I do not know either as there is not much that could > break in pw_derive (it just performs multiplications) and the sequence of > operations is to unspecific. It may be that the code actually breaks > somewhere else. Can you do the same with the ssmp and post the last 100 > lines? This way, we remove the asynchronicity issues for backtraces with > the psmp version. > Best, > Frederick > > bartosz mazur schrieb am Sonntag, 20. 
Oktober 2024 um 16:47:15 UTC+2: > >> The error is: >> >> ``` >> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >> CLX/DP TRY JIT STA COL >> 0..13 2 2 0 0 >> 14..23 0 0 0 0 >> >> 24..64 0 0 0 0 >> Registry and code: 13 MB + 16 KB (gemm=2) >> Command (PID=2607388): >> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >> H2O-9.inp -o H2O-9.out >> Uptime: 5.288243 s >> >> >> >> =================================================================================== >> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >> = RANK 0 PID 2607388 RUNNING AT r21c01b10 >> >> = KILLED BY SIGNAL: 11 (Segmentation fault) >> >> =================================================================================== >> >> >> =================================================================================== >> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >> = RANK 1 PID 2607389 RUNNING AT r21c01b10 >> = KILLED BY SIGNAL: 9 (Killed) >> >> =================================================================================== >> ``` >> >> and the last 20 lines: >> >> ``` >> 000000:000002<< 13 76 pw_copy >> 0.001 >> Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 13 19 pw_derive >> star >> t Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 13 19 pw_derive >> 0.00 >> 2 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 13 168 >> pw_pool_create_pw >> start Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 14 97 >> pw_create_c1d >> start Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 14 97 >> pw_create_c1d >> 0.000 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 13 168 >> pw_pool_create_pw >> 0.000 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 13 77 pw_copy >> start >> Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 13 77 pw_copy >> 0.001 >> Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 13 20 pw_derive >> star >> t Hostmem: 693 MB GPUmem: 0 MB >> ``` >> >> Thanks! 
>> Friday, 18 October 2024 at 17:18:39 UTC+2, Frederick Stein wrote:
>>
>>> Please pick one of the failing tests. Then, add the TRACE keyword to the
>>> &GLOBAL section and then run the test manually. This increases the size of
>>> the output file dramatically (to some million lines). Can you send me the
>>> last ~20 lines of the output?
>>>
>>> bartosz mazur wrote on Friday, 18 October 2024 at 17:09:40 UTC+2:
>>>
>>>> I'm using the do_regtests.py script, not make regtesting, but I assume it
>>>> makes no difference. As I mentioned in a previous message, for `--ompthreads
>>>> 1` all tests passed both for ssmp and psmp. For ssmp with
>>>> `--ompthreads 2` I observe similar errors as for psmp with the same
>>>> setting; I provide an example output as an attachment.
>>>>
>>>> Thanks
>>>> Bartosz
>>>>
>>>> Friday, 18 October 2024 at 16:24:16 UTC+2, Frederick Stein wrote:
>>>>
>>>>> Dear Bartosz,
>>>>> What happens if you set the number of OpenMP threads to 1 (add
>>>>> '--ompthreads 1' to TESTOPTS)? What errors do you observe in case of the
>>>>> ssmp?
>>>>> Best,
>>>>> Frederick
>>>>>
>>>>> bartosz mazur wrote on Friday, 18 October 2024 at 15:37:43 UTC+2:
>>>>>
>>>>>> Hi Frederick,
>>>>>>
>>>>>> thanks again for the help. So I have tested different simulation variants
>>>>>> and I know that the problem occurs when using OMP. For MPI calculations
>>>>>> without OMP all tests pass. I have also tested the effect of the `OMP_PROC_BIND`
>>>>>> and `OMP_PLACES` parameters and, apart from the effect on simulation
>>>>>> time, they have no significant effect on the presence of errors.
>>>>>> Below are the results for ssmp:
>>>>>>
>>>>>> ```
>>>>>> OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time
>>>>>> spread, threads, 3850, 4144, 4, 290, 186min
>>>>>> spread, cores, 3831, 4144, 3, 310, 183min
>>>>>> spread, sockets, 3864, 4144, 3, 277, 104min
>>>>>> close, threads, 3879, 4144, 3, 262, 171min
>>>>>> close, cores, 3854, 4144, 0, 290, 168min
>>>>>> close, sockets, 3865, 4144, 3, 276, 104min
>>>>>> master, threads, 4121, 4144, 0, 23, 1002min
>>>>>> master, cores, 4121, 4144, 0, 23, 986min
>>>>>> master, sockets, 3942, 4144, 3, 199, 219min
>>>>>> false, threads, 3918, 4144, 0, 226, 178min
>>>>>> false, cores, 3919, 4144, 3, 222, 176min
>>>>>> false, sockets, 3856, 4144, 4, 284, 104min
>>>>>> ```
>>>>>>
>>>>>> and psmp:
>>>>>>
>>>>>> ```
>>>>>> OMP_PROC_BIND, OMP_PLACES, results
>>>>>> spread, threads, Summary: correct: 4097 / 4227; failed: 130; 495min
>>>>>> spread, cores, 26 / 362
>>>>>> spread, cores, 26 / 362
>>>>>> close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min
>>>>>> close, cores, 60 / 362
>>>>>> close, sockets, 13 / 362
>>>>>> master, threads, 13 / 362
>>>>>> master, cores, 79 / 362
>>>>>> master, sockets, Summary: correct: 4153 / 4227; failed: 74; 563min
>>>>>> false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min
>>>>>> false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min
>>>>>> false, sockets, 96 / 362
>>>>>> not specified, not specified, Summary: correct: 4129 / 4227; failed: 98; 263min
>>>>>> ```
>>>>>>
>>>>>> Any ideas what I could do next to have more information about the
>>>>>> source of the problem or maybe you see a potential solution at this stage?
>>>>>> I would appreciate any further help.
>>>>>>
>>>>>> Best
>>>>>> Bartosz
>>>>>>
>>>>>> Friday, 11 October 2024 at 14:30:25 UTC+2, Frederick Stein wrote:
>>>>>>
>>>>>>> Dear Bartosz,
>>>>>>> If I am not mistaken, you used 8 OpenMP threads.
>>>>>>> The tests do not run
>>>>>>> that efficiently with such a large number of threads. 2 should be
>>>>>>> sufficient.
>>>>>>> The test result suggests that most of the functionality may work but
>>>>>>> due to a missing backtrace (or similar information), it is hard to tell why
>>>>>>> they fail. You could also try to run some of the single-node tests to
>>>>>>> assess the stability of CP2K.
>>>>>>> Best,
>>>>>>> Frederick
>>>>>>>
>>>>>>> bartosz mazur wrote on Friday, 11 October 2024 at 13:48:42 UTC+2:
>>>>>>>
>>>>>>>> Sorry, forgot attachments.
>>>>>>>>

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/3c0b331a-fe25-4176-8692-ff5d7c466c44n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ykhuangnku at gmail.com  Tue Oct 22 04:11:07 2024
From: ykhuangnku at gmail.com (Yike Huang)
Date: Mon, 21 Oct 2024 21:11:07 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20796] Converting GTH pseudopotentials with NLCC to UPF format
Message-ID: <30d1b275-2052-4f12-982b-6fb2b6cd9646n@googlegroups.com>

Hi, CP2K users and developers,

I want to convert some pseudopotentials with NLCC (nonlinear core correction) to UPF format with the ATOM module, but I find CP2K exits with errors like "l value too high". Is there any parameter that I gave a wrong value?
&GLOBAL
  PROGRAM_NAME ATOM
&END GLOBAL
&ATOM
  ELEMENT C
  ELECTRON_CONFIGURATION [He] 2s2 2p2
  CORE [He]

  &METHOD
    METHOD_TYPE KOHN-SHAM
    RELATIVISTIC DKH(3)
    &XC
      &XC_FUNCTIONAL PBE
      &END XC_FUNCTIONAL
    &END XC
  &END METHOD

  &PP_BASIS
    BASIS_TYPE GEOMETRICAL_GTO
  &END PP_BASIS

  &POTENTIAL
    PSEUDO_TYPE GTH
    &GTH_POTENTIAL
      2 2 0 0
      0.32387795219724 2 -8.73819331462556 1.36795900163569
      NLCC 1
      0.34810477564070 1 5.99009979578183
      1
      0.30104244546747 1 9.77172008414211
    &END
  &END POTENTIAL

  &PRINT
    &ANALYZE_BASIS
      OVERLAP_CONDITION_NUMBER T
      COMPLETENESS T
    &END ANALYZE_BASIS
    &UPF_FILE
      FILENAME C_GTH_NLCC.UPF
    &END
  &END
&END ATOM

Very best wishes,
Yike HUANG

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/30d1b275-2052-4f12-982b-6fb2b6cd9646n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hutter at chem.uzh.ch Tue Oct 22 07:46:28 2024
From: hutter at chem.uzh.ch (Jürg Hutter)
Date: Tue, 22 Oct 2024 07:46:28 +0000
Subject: [CP2K-user] [CP2K:20796] Converting GTH pseudopotentials with NLCC to UPF format
In-Reply-To: <30d1b275-2052-4f12-982b-6fb2b6cd9646n@googlegroups.com>
References: <30d1b275-2052-4f12-982b-6fb2b6cd9646n@googlegroups.com>
Message-ID:

Hi

you have to specify

ELEMENT C
ELECTRON_CONFIGURATION CORE 2s2 2p2
CORE [He]

regards
JH

________________________________________
From: cp2k at googlegroups.com on behalf of Yike Huang
Sent: Tuesday, October 22, 2024 6:11 AM
To: cp2k
Subject: [CP2K:20796] Converting GTH pseudopotentials with NLCC to UPF format

Hi, CP2K users and developers,

I want to convert some pseudopotentials with NLCC (nonlinear core correction) to UPF format with the ATOM module, but I find that CP2K exits with errors like "l value too high".
Is there any parameter that I gave wrong value? &GLOBAL PROGRAM_NAME ATOM &END GLOBAL &ATOM ELEMENT C ELECTRON_CONFIGURATION [He] 2s2 2p2 CORE [He] &METHOD METHOD_TYPE KOHN-SHAM RELATIVISTIC DKH(3) &XC &XC_FUNCTIONAL PBE &END XC_FUNCTIONAL &END XC &END METHOD &PP_BASIS BASIS_TYPE GEOMETRICAL_GTO &END PP_BASIS &POTENTIAL PSEUDO_TYPE GTH >H_POTENTIAL 2 2 0 0 0.32387795219724 2 -8.73819331462556 1.36795900163569 NLCC 1 0.34810477564070 1 5.99009979578183 1 0.30104244546747 1 9.77172008414211 &END &END POTENTIAL &PRINT &ANALYZE_BASIS OVERLAP_CONDITION_NUMBER T COMPLETENESS T &END ANALYZE_BASIS &UPF_FILE FILENAME C_GTH_NLCC.UPF &END &END &END ATOM Very best wishes, Yike HUANG -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/30d1b275-2052-4f12-982b-6fb2b6cd9646n%40googlegroups.com. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/ZR0P278MB0759F2ED33EABD13B846D52C9F4C2%40ZR0P278MB0759.CHEP278.PROD.OUTLOOK.COM. From hutter at chem.uzh.ch Tue Oct 22 07:48:12 2024 From: hutter at chem.uzh.ch (=?iso-8859-1?Q?J=FCrg_Hutter?=) Date: Tue, 22 Oct 2024 07:48:12 +0000 Subject: [CP2K-user] [CP2K:20797] All-electron Geometry Optimization of EMIBF4 In-Reply-To: <9ae46c98-afd4-4625-b769-6cf39a3fb5d2n@googlegroups.com> References: <9ae46c98-afd4-4625-b769-6cf39a3fb5d2n@googlegroups.com> Message-ID: Hi you are missing the &HF section in your specification of the hybrid functional. Libxc only covers the density functional part, see the many examples in the tests/QS sections. 
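For readers following this thread, a minimal sketch of the missing piece: a hybrid-functional &XC section combines the Libxc (or built-in) density-functional part with an explicit &HF section for the exact exchange. The section and keyword names below follow the CP2K input reference, but the functional name and all numerical values are placeholders rather than the actual wB97X-D parameters — consult the vetted examples in tests/QS.

```
&XC
  &XC_FUNCTIONAL
    ! Libxc provides only the (semi-)local density-functional part
    &LIBXC
      FUNCTIONAL HYB_GGA_XC_WB97X_D3
    &END LIBXC
  &END XC_FUNCTIONAL
  ! the exact-exchange part must be requested explicitly
  &HF
    FRACTION 1.0
    &INTERACTION_POTENTIAL
      ! range-separated operator: DFT at short range, HF at long range
      POTENTIAL_TYPE MIX_CL
      OMEGA 0.3           ! placeholder range-separation parameter
      SCALE_COULOMB 0.2   ! placeholder short-range HF fraction
      SCALE_LONGRANGE 0.8 ! placeholder long-range HF fraction
    &END INTERACTION_POTENTIAL
  &END HF
&END XC
```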
regards
JH

________________________________________
From: cp2k at googlegroups.com on behalf of Nicholas Laws
Sent: Monday, October 21, 2024 4:14 PM
To: cp2k
Subject: [CP2K:20794] All-electron Geometry Optimization of EMIBF4

Hi all,

I am trying to do a geometry optimization for energy minimization of EMIBF4 using the aug-cc-pVTZ basis set (attached below) and the wB97X-D XC functional. It seems that my optimization requires hundreds of SCF steps before convergence (as seen in the attached .out file), and I was wondering if there are any recommendations for doing all-electron geometry optimizations, especially for the one I discuss in this post (the current implementation can be viewed in the attached .inp file)? Please let me know if there is any additional information that I can clarify. Thank you, and I look forward to hearing from you.

All my best,
Nick

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/9ae46c98-afd4-4625-b769-6cf39a3fb5d2n%40googlegroups.com.
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/ZR0P278MB0759B20CE1B7D5F17302888B9F4C2%40ZR0P278MB0759.CHEP278.PROD.OUTLOOK.COM.

From ykhuangnku at gmail.com Tue Oct 22 08:34:14 2024
From: ykhuangnku at gmail.com (Yike Huang)
Date: Tue, 22 Oct 2024 01:34:14 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20799] Converting GTH pseudopotentials with NLCC to UPF format
In-Reply-To:
References: <30d1b275-2052-4f12-982b-6fb2b6cd9646n@googlegroups.com>
Message-ID:

Dear Prof. Hutter,

Thanks very much! Now I have managed to convert the pseudopotentials!
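Condensing the resolution of this thread for the archive: per Jürg Hutter's reply, with this NLCC GTH potential the ATOM module wants the valence configuration given relative to the core. A sketch of the corrected lines (the rest of the original input is unchanged):

```
&ATOM
  ELEMENT C
  ! valence shells only, prefixed with CORE; the original form
  ! "ELECTRON_CONFIGURATION [He] 2s2 2p2" produced the
  ! "l value too high" abort reported above
  ELECTRON_CONFIGURATION CORE 2s2 2p2
  CORE [He]
  ! remaining &METHOD/&PP_BASIS/&POTENTIAL/&PRINT sections as before
&END ATOM
```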
Very best wishes, Yike HUANG ?2024?10?22???? UTC+8 15:46:36 ??? > Hi > > you have to specify > > ELEMENT C > ELECTRON_CONFIGURATION CORE 2s2 2p2 > CORE [He] > > regards > JH > > ________________________________________ > From: cp... at googlegroups.com on behalf of Yike > Huang > Sent: Tuesday, October 22, 2024 6:11 AM > To: cp2k > Subject: [CP2K:20796] Converting GTH pseudopotentials with NLCC to UPF > format > > Hi, CP2K users and developers, > > I want to convert some pseudopotentials with NLCC (nonlinear core > correction) to UPF format with ATOM module, but I find CP2K exits with > errors like "l value too high". Is there any parameter that I gave wrong > value? > > &GLOBAL > PROGRAM_NAME ATOM > &END GLOBAL > &ATOM > ELEMENT C > ELECTRON_CONFIGURATION [He] 2s2 2p2 > CORE [He] > > &METHOD > METHOD_TYPE KOHN-SHAM > RELATIVISTIC DKH(3) > &XC > &XC_FUNCTIONAL PBE > &END XC_FUNCTIONAL > &END XC > &END METHOD > > &PP_BASIS > BASIS_TYPE GEOMETRICAL_GTO > &END PP_BASIS > > &POTENTIAL > PSEUDO_TYPE GTH > >H_POTENTIAL > 2 2 0 0 > 0.32387795219724 2 -8.73819331462556 1.36795900163569 > NLCC 1 > 0.34810477564070 1 5.99009979578183 > 1 > 0.30104244546747 1 9.77172008414211 > &END > &END POTENTIAL > > &PRINT > &ANALYZE_BASIS > OVERLAP_CONDITION_NUMBER T > COMPLETENESS T > &END ANALYZE_BASIS > &UPF_FILE > FILENAME C_GTH_NLCC.UPF > &END > &END > &END ATOM > > Very best wishes, > Yike HUANG > > -- > You received this message because you are subscribed to the Google Groups > "cp2k" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to cp2k+uns... at googlegroups.com >. > To view this discussion on the web visit > https://groups.google.com/d/msgid/cp2k/30d1b275-2052-4f12-982b-6fb2b6cd9646n%40googlegroups.com > < > https://groups.google.com/d/msgid/cp2k/30d1b275-2052-4f12-982b-6fb2b6cd9646n%40googlegroups.com?utm_medium=email&utm_source=footer > >. > -- You received this message because you are subscribed to the Google Groups "cp2k" group. 
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/aeb6f9f3-71fb-4736-a808-9135b989e92fn%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From f.stein at hzdr.de Tue Oct 22 09:12:57 2024
From: f.stein at hzdr.de (Frederick Stein)
Date: Tue, 22 Oct 2024 02:12:57 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20800] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types
In-Reply-To: <3c0b331a-fe25-4176-8692-ff5d7c466c44n@googlegroups.com>
References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> <3c0b331a-fe25-4176-8692-ff5d7c466c44n@googlegroups.com>
Message-ID: <9c503856-0751-48b3-8bfb-e37cf5cb91a6n@googlegroups.com>

Dear Bartosz,

I am currently running some tests with the latest Intel compiler myself. What bothers me about your setup is the module GCC13/13.3.0. Why is it loaded? Can you unload it? This would at least reduce potential interference between the Intel and the GCC compilers.

Best,
Frederick

bartosz mazur schrieb am Montag, 21.
Oktober 2024 um 16:33:45 UTC+2: > The error for ssmp is: > > ``` > LIBXSMM_VERSION: develop-1.17-3834 (25693946) > CLX/DP TRY JIT STA COL > 0..13 4 4 0 0 > 14..23 0 0 0 0 > 24..64 0 0 0 0 > Registry and code: 13 MB + 32 KB (gemm=4) > Command (PID=54845): > /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i > H2O-9.inp -o H2O-9.out > Uptime: 2.861583 s > /var/spool/slurmd/r30c01b15/job3120330/slurm_script: line 36: 54845 > Segmentation fault (core dumped) > /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i > H2O-9.inp -o H2O-9.out > ``` > > and the last 100 lines of output: > > ``` > 000000:000001>> 12 20 mp_sum_d > start Ho > stmem: 380 MB GPUmem: 0 MB > 000000:000001<< 12 20 mp_sum_d > 0.000 Ho > stmem: 380 MB GPUmem: 0 MB > 000000:000001<< 11 13 dbcsr_dot_sd > 0.000 H > ostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 10 12 calculate_ptrace_kp > 0.0 > 00 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 9 6 > evaluate_core_matrix_traces > 0.000 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 9 6 rebuild_ks_matrix > start Ho > stmem: 380 MB GPUmem: 0 MB > 000000:000001>> 10 6 > qs_ks_build_kohn_sham_matrix > start Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 11 140 pw_pool_create_pw > st > art Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 12 79 pw_create_c1d > sta > rt Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 12 79 pw_create_c1d > 0.0 > 00 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 11 140 pw_pool_create_pw > 0. > 000 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 11 141 pw_pool_create_pw > st > art Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 12 80 pw_create_c1d > sta > rt Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 12 80 pw_create_c1d > 0.0 > 00 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 11 141 pw_pool_create_pw > 0. 
> 000 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 11 61 pw_copy start > Hostme > m: 380 MB GPUmem: 0 MB > 000000:000001<< 11 61 pw_copy 0.004 > Hostme > m: 380 MB GPUmem: 0 MB > 000000:000001>> 11 35 pw_axpy start > Hostme > m: 380 MB GPUmem: 0 MB > 000000:000001<< 11 35 pw_axpy 0.002 > Hostme > m: 380 MB GPUmem: 0 MB > 000000:000001>> 11 6 pw_poisson_solve > sta > rt Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 12 6 > pw_poisson_rebuild > start Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 12 6 > pw_poisson_rebuild > 0.000 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 12 142 pw_pool_create_pw > > start Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 13 81 pw_create_c1d > > start Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 13 81 pw_create_c1d > > 0.000 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 12 142 pw_pool_create_pw > > 0.000 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 12 62 pw_copy > start Hos > tmem: 380 MB GPUmem: 0 MB > 000000:000001<< 12 62 pw_copy > 0.003 Hos > tmem: 380 MB GPUmem: 0 MB > 000000:000001>> 12 6 pw_multiply_with > > start Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 12 6 pw_multiply_with > > 0.002 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 12 63 pw_copy > start Hos > tmem: 380 MB GPUmem: 0 MB > 000000:000001<< 12 63 pw_copy > 0.003 Hos > tmem: 380 MB GPUmem: 0 MB > 000000:000001>> 12 6 pw_integral_ab > st > art Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 12 6 pw_integral_ab > 0. 
> 005 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 12 7 pw_poisson_set > st > art Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 13 143 > pw_pool_create_pw > start Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 14 82 > pw_create_c1d > start Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 14 82 > pw_create_c1d > 0.000 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 13 143 > pw_pool_create_pw > 0.000 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 13 64 pw_copy > start > Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 13 64 pw_copy > 0.003 > Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 13 16 pw_derive > star > t Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 13 16 pw_derive > 0.00 > 6 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 13 144 > pw_pool_create_pw > start Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 14 83 > pw_create_c1d > start Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 14 83 > pw_create_c1d > 0.000 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 13 144 > pw_pool_create_pw > 0.000 Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 13 65 pw_copy > start > Hostmem: 380 MB GPUmem: 0 MB > 000000:000001<< 13 65 pw_copy > 0.004 > Hostmem: 380 MB GPUmem: 0 MB > 000000:000001>> 13 17 pw_derive > star > t Hostmem: 380 MB GPUmem: 0 MB > ``` > > for psmp the last 100 lines is: > > ``` > 000000:000002<< 9 7 > evaluate_core_matrix_traces > 0.000 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 9 7 rebuild_ks_matrix > start Ho > > stmem: 693 MB GPUmem: 0 MB > 000000:000002>> 10 7 > qs_ks_build_kohn_sham_matrix > start Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 11 164 pw_pool_create_pw > st > art Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 12 93 pw_create_c1d > sta > rt Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 12 93 pw_create_c1d > 0.0 > 00 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 11 164 pw_pool_create_pw > 0. 
> 000 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 11 165 pw_pool_create_pw > st > art Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 12 94 pw_create_c1d > sta > rt Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 12 94 pw_create_c1d > 0.0 > 00 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 11 165 pw_pool_create_pw > 0. > 000 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 11 73 pw_copy start > Hostme > > m: 693 MB GPUmem: 0 MB > 000000:000002<< 11 73 pw_copy 0.001 > Hostme > > m: 693 MB GPUmem: 0 MB > 000000:000002>> 11 41 pw_axpy start > Hostme > > m: 693 MB GPUmem: 0 MB > 000000:000002<< 11 41 pw_axpy 0.001 > Hostme > > m: 693 MB GPUmem: 0 MB > 000000:000002>> 11 52 mp_sum_d start > Hostm > > em: 693 MB GPUmem: 0 MB > 000000:000002<< 11 52 mp_sum_d 0.000 > Hostm > > em: 693 MB GPUmem: 0 MB > 000000:000002>> 11 7 pw_poisson_solve > sta > rt Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 12 7 > pw_poisson_rebuild > start Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 12 7 > pw_poisson_rebuild > 0.000 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 12 166 pw_pool_create_pw > > > start Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 13 95 pw_create_c1d > > start Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 13 95 pw_create_c1d > > 0.000 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 12 166 pw_pool_create_pw > > > 0.000 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 12 74 pw_copy > start Hos > > tmem: 693 MB GPUmem: 0 MB > 000000:000002<< 12 74 pw_copy > 0.001 Hos > > tmem: 693 MB GPUmem: 0 MB > 000000:000002>> 12 7 pw_multiply_with > > start Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 12 7 pw_multiply_with > > 0.001 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 12 75 pw_copy > start Hos > > tmem: 693 MB GPUmem: 0 MB > 000000:000002<< 12 75 pw_copy > 0.001 Hos > > tmem: 693 MB GPUmem: 0 MB > 000000:000002>> 12 7 pw_integral_ab > st > art Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 13 53 mp_sum_d > start > > Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 13 53 
mp_sum_d > 0.000 > > Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 12 7 pw_integral_ab > 0. > 003 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 12 8 pw_poisson_set > st > art Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 13 167 > pw_pool_create_pw > start Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 14 96 > pw_create_c1d > > start Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 14 96 > pw_create_c1d > > 0.000 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 13 167 > pw_pool_create_pw > 0.000 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 13 76 pw_copy > start > Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 13 76 pw_copy > 0.001 > Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 13 19 pw_derive > star > t Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 13 19 pw_derive > 0.00 > 2 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 13 168 > pw_pool_create_pw > start Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 14 97 > pw_create_c1d > start Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 14 97 > pw_create_c1d > 0.000 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 13 168 > pw_pool_create_pw > 0.000 Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 13 77 pw_copy > start > Hostmem: 693 MB GPUmem: 0 MB > 000000:000002<< 13 77 pw_copy > 0.001 > Hostmem: 693 MB GPUmem: 0 MB > 000000:000002>> 13 20 pw_derive > star > t Hostmem: 693 MB GPUmem: 0 MB > ``` > > Thanks > Bartosz > > poniedzia?ek, 21 pa?dziernika 2024 o 08:58:34 UTC+2 Frederick Stein > napisa?(a): > >> Dear Bartosz, >> I have no idea about the issue with LibXSMM. >> Regarding the trace, I do not know either as there is not much that could >> break in pw_derive (it just performs multiplications) and the sequence of >> operations is to unspecific. It may be that the code actually breaks >> somewhere else. Can you do the same with the ssmp and post the last 100 >> lines? This way, we remove the asynchronicity issues for backtraces with >> the psmp version. >> Best, >> Frederick >> >> bartosz mazur schrieb am Sonntag, 20. 
Oktober 2024 um 16:47:15 UTC+2: >> >>> The error is: >>> >>> ``` >>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>> CLX/DP TRY JIT STA COL >>> 0..13 2 2 0 0 >>> 14..23 0 0 0 0 >>> >>> 24..64 0 0 0 0 >>> Registry and code: 13 MB + 16 KB (gemm=2) >>> Command (PID=2607388): >>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>> H2O-9.inp -o H2O-9.out >>> Uptime: 5.288243 s >>> >>> >>> >>> =================================================================================== >>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>> = RANK 0 PID 2607388 RUNNING AT r21c01b10 >>> >>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>> >>> =================================================================================== >>> >>> >>> =================================================================================== >>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>> = RANK 1 PID 2607389 RUNNING AT r21c01b10 >>> = KILLED BY SIGNAL: 9 (Killed) >>> >>> =================================================================================== >>> ``` >>> >>> and the last 20 lines: >>> >>> ``` >>> 000000:000002<< 13 76 pw_copy >>> 0.001 >>> Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 13 19 pw_derive >>> star >>> t Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 13 19 pw_derive >>> 0.00 >>> 2 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 13 168 >>> pw_pool_create_pw >>> start Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 14 97 >>> pw_create_c1d >>> start Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 14 97 >>> pw_create_c1d >>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 13 168 >>> pw_pool_create_pw >>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 13 77 pw_copy >>> start >>> Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 13 77 pw_copy >>> 0.001 >>> Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 13 20 pw_derive >>> star >>> t Hostmem: 693 MB GPUmem: 0 MB >>> ``` >>> >>> Thanks! 
>>> pi?tek, 18 pa?dziernika 2024 o 17:18:39 UTC+2 Frederick Stein napisa?(a): >>> >>>> Please pick one of the failing tests. Then, add the TRACE keyword to >>>> the &GLOBAL section and then run the test manually. This increases the size >>>> of the output file dramatically (to some million lines). Can you send me >>>> the last ~20 lines of the output? >>>> bartosz mazur schrieb am Freitag, 18. Oktober 2024 um 17:09:40 UTC+2: >>>> >>>>> I'm using do_regtests.py script, not make regtesting, but I assume it >>>>> makes no difference. As I mentioned in previous message for `--ompthreads >>>>> 1` all tests were passed both for ssmp and psmp. For ssmp with >>>>> `--ompthreads 2` I observe similar errors as for psmp with the same >>>>> setting, I provide example output as attachment. >>>>> >>>>> Thanks >>>>> Bartosz >>>>> >>>>> pi?tek, 18 pa?dziernika 2024 o 16:24:16 UTC+2 Frederick Stein >>>>> napisa?(a): >>>>> >>>>>> Dear Bartosz, >>>>>> What happens if you set the number of OpenMP threads to 1 (add >>>>>> '--ompthreads 1' to TESTOPTS)? What errors do you observe in case of the >>>>>> ssmp? >>>>>> Best, >>>>>> Frederick >>>>>> >>>>>> bartosz mazur schrieb am Freitag, 18. Oktober 2024 um 15:37:43 UTC+2: >>>>>> >>>>>>> Hi Frederick, >>>>>>> >>>>>>> thanks again for help. So I have tested different simulation >>>>>>> variants and I know that the problem occurs when using OMP. For MPI >>>>>>> calculations without OMP all tests pass. I have also tested the effect of >>>>>>> the `OMP_PROC_BIND` and `OMP_PLACES` parameters and apart from the >>>>>>> effect on simulation time, they have no significant effect on the presence >>>>>>> of errors. 
Below are the results for ssmp: >>>>>>> >>>>>>> ``` >>>>>>> OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time >>>>>>> spread, threads, 3850, 4144, 4, 290, 186min >>>>>>> spread, cores, 3831, 4144, 3, 310, 183min >>>>>>> spread, sockets, 3864, 4144, 3, 277, 104min >>>>>>> close, threads, 3879, 4144, 3, 262, 171min >>>>>>> close, cores, 3854, 4144, 0, 290, 168min >>>>>>> close, sockets, 3865, 4144, 3, 276, 104min >>>>>>> master, threads, 4121, 4144, 0, 23, 1002min >>>>>>> master, cores, 4121, 4144, 0, 23, 986min >>>>>>> master, sockets, 3942, 4144, 3, 199, 219min >>>>>>> false, threads, 3918, 4144, 0, 226, 178min >>>>>>> false, cores, 3919, 4144, 3, 222, 176min >>>>>>> false, sockets, 3856, 4144, 4, 284, 104min >>>>>>> ``` >>>>>>> >>>>>>> and psmp: >>>>>>> >>>>>>> ``` >>>>>>> OMP_PROC_BIND, OMP_PLACES, results >>>>>>> spread, threads, Summary: correct: 4097 / 4227; failed: 130; 495min >>>>>>> spread, cores, 26 / 362 >>>>>>> spread, cores, 26 / 362 >>>>>>> close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min >>>>>>> close, cores, 60 / 362 >>>>>>> close, sockets, 13 / 362 >>>>>>> master, threads, 13 / 362 >>>>>>> master, cores, 79 / 362 >>>>>>> master, sockets, Summary: correct: 4153 / 4227; failed: 74; 563min >>>>>>> false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min >>>>>>> false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min >>>>>>> false, sockets, 96 / 362 >>>>>>> not specified, not specified, Summary: correct: 4129 / 4227; failed: >>>>>>> 98; 263min >>>>>>> ``` >>>>>>> >>>>>>> Any ideas what I could do next to have more information about the >>>>>>> source of the problem or maybe you see a potential solution at this stage? >>>>>>> I would appreciate any further help. >>>>>>> >>>>>>> Best >>>>>>> Bartosz >>>>>>> >>>>>>> >>>>>>> pi?tek, 11 pa?dziernika 2024 o 14:30:25 UTC+2 Frederick Stein >>>>>>> napisa?(a): >>>>>>> >>>>>>>> Dear Bartosz, >>>>>>>> If I am not mistaken, you used 8 OpenMP threads. 
The test do not >>>>>>>> run that efficiently with such a large number of threads. 2 should be >>>>>>>> sufficient. >>>>>>>> The test result suggests that most of the functionality may work >>>>>>>> but due to a missing backtrace (or similar information), it is hard to tell >>>>>>>> why they fail. You could also try to run some of the single-node tests to >>>>>>>> assess the stability of CP2K. >>>>>>>> Best, >>>>>>>> Frederick >>>>>>>> >>>>>>>> bartosz mazur schrieb am Freitag, 11. Oktober 2024 um 13:48:42 >>>>>>>> UTC+2: >>>>>>>> >>>>>>>>> Sorry, forgot attachments. >>>>>>>>> >>>>>>>>> -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/9c503856-0751-48b3-8bfb-e37cf5cb91a6n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bamaz.97 at gmail.com Tue Oct 22 09:56:40 2024 From: bamaz.97 at gmail.com (bartosz mazur) Date: Tue, 22 Oct 2024 02:56:40 -0700 (PDT) Subject: [CP2K-user] [CP2K:20800] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: <9c503856-0751-48b3-8bfb-e37cf5cb91a6n@googlegroups.com> References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> <3c0b331a-fe25-4176-8692-ff5d7c466c44n@googlegroups.com> <9c503856-0751-48b3-8bfb-e37cf5cb91a6n@googlegroups.com> Message-ID: I was loading it as it was needed for compilation. 
I have unloaded the module, but the error still occurs: ``` LIBXSMM_VERSION: develop-1.17-3834 (25693946) CLX/DP TRY JIT STA COL 0..13 2 2 0 0 14..23 0 0 0 0 24..64 0 0 0 0 Registry and code: 13 MB + 16 KB (gemm=2) Command (PID=15485): /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i H2O-9.inp -o H2O-9.out Uptime: 1.757102 s =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = RANK 0 PID 15485 RUNNING AT r30c01b01 = KILLED BY SIGNAL: 11 (Segmentation fault) =================================================================================== =================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = RANK 1 PID 15486 RUNNING AT r30c01b01 = KILLED BY SIGNAL: 9 (Killed) =================================================================================== ``` and the last 100 lines: ``` 000000:000002>> 11 37 pw_create_c1d start Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 11 37 pw_create_c1d 0.000 Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 10 64 pw_pool_create_pw 0.000 Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 10 25 pw_copy start Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 10 25 pw_copy 0.001 Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 10 17 pw_axpy start Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 10 17 pw_axpy 0.001 Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 10 19 mp_sum_d start Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 10 19 mp_sum_d 0.000 Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 10 3 pw_poisson_solve start Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 11 3 pw_poisson_rebuild s tart Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 11 3 pw_poisson_rebuild 0 .000 Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 11 65 pw_pool_create_pw st art Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 12 38 pw_create_c1d sta rt Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 12 38 
pw_create_c1d 0.0 00 Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 11 65 pw_pool_create_pw 0. 000 Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 11 26 pw_copy start Hostme m: 697 MB GPUmem: 0 MB 000000:000002<< 11 26 pw_copy 0.001 Hostme m: 697 MB GPUmem: 0 MB 000000:000002>> 11 3 pw_multiply_with sta rt Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 11 3 pw_multiply_with 0.0 01 Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 11 27 pw_copy start Hostme m: 697 MB GPUmem: 0 MB 000000:000002<< 11 27 pw_copy 0.001 Hostme m: 697 MB GPUmem: 0 MB 000000:000002>> 11 3 pw_integral_ab start Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 12 20 mp_sum_d start Ho stmem: 697 MB GPUmem: 0 MB 000000:000002<< 12 20 mp_sum_d 0.001 Ho stmem: 697 MB GPUmem: 0 MB 000000:000002<< 11 3 pw_integral_ab 0.004 Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 11 4 pw_poisson_set start Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 12 66 pw_pool_create_pw start Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 13 39 pw_create_c1d start Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 13 39 pw_create_c1d 0.000 Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 12 66 pw_pool_create_pw 0.000 Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 12 28 pw_copy start Hos tmem: 697 MB GPUmem: 0 MB 000000:000002<< 12 28 pw_copy 0.001 Hos tmem: 697 MB GPUmem: 0 MB 000000:000002>> 12 7 pw_derive start H ostmem: 697 MB GPUmem: 0 MB 000000:000002<< 12 7 pw_derive 0.002 H ostmem: 697 MB GPUmem: 0 MB 000000:000002>> 12 67 pw_pool_create_pw start Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 13 40 pw_create_c1d start Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 13 40 pw_create_c1d 0.000 Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 12 67 pw_pool_create_pw 0.000 Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 12 29 pw_copy start Hos tmem: 697 MB GPUmem: 0 MB 000000:000002<< 12 29 pw_copy 0.001 Hos tmem: 697 MB GPUmem: 0 MB 000000:000002>> 12 8 pw_derive start H ostmem: 697 MB GPUmem: 0 MB 000000:000002<< 12 8 pw_derive 0.002 H ostmem: 697 MB GPUmem: 0 
MB 000000:000002>> 12 68 pw_pool_create_pw start Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 13 41 pw_create_c1d start Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 13 41 pw_create_c1d 0.000 Hostmem: 697 MB GPUmem: 0 MB 000000:000002<< 12 68 pw_pool_create_pw 0.000 Hostmem: 697 MB GPUmem: 0 MB 000000:000002>> 12 30 pw_copy start Hos tmem: 697 MB GPUmem: 0 MB 000000:000002<< 12 30 pw_copy 0.001 Hos tmem: 697 MB GPUmem: 0 MB 000000:000002>> 12 9 pw_derive start H ostmem: 697 MB GPUmem: 0 MB ``` This is the list of currently loaded modules (all come with intel): ``` Currently Loaded Modulefiles: 1) GCCcore/13.3.0 7) impi/2021.13.0-intel-compilers-2024.2.0 2) zlib/1.3.1-GCCcore-13.3.0 8) imkl/2024.2.0 3) binutils/2.42-GCCcore-13.3.0 9) iimpi/2024a 4) intel-compilers/2024.2.0 10) imkl-FFTW/2024.2.0-iimpi-2024a 5) numactl/2.0.18-GCCcore-13.3.0 11) intel/2024a 6) UCX/1.16.0-GCCcore-13.3.0 ``` wtorek, 22 pa?dziernika 2024 o 11:12:57 UTC+2 Frederick Stein napisa?(a): > Dear Bartosz, > I am currently running some tests with the latest Intel compiler myself. > What bothers me about your setup is the module GCC13/13.3.0 . Why is it > loaded? Can you unload it? This would at least reduce potential > interferences with between the Intel and the GCC compilers. > Best, > Frederick > > bartosz mazur schrieb am Montag, 21. 
Oktober 2024 um 16:33:45 UTC+2: > >> The error for ssmp is: >> >> ``` >> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >> CLX/DP TRY JIT STA COL >> 0..13 4 4 0 0 >> 14..23 0 0 0 0 >> 24..64 0 0 0 0 >> Registry and code: 13 MB + 32 KB (gemm=4) >> Command (PID=54845): >> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >> H2O-9.inp -o H2O-9.out >> Uptime: 2.861583 s >> /var/spool/slurmd/r30c01b15/job3120330/slurm_script: line 36: 54845 >> Segmentation fault (core dumped) >> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >> H2O-9.inp -o H2O-9.out >> ``` >> >> and the last 100 lines of output: >> >> ``` >> 000000:000001>> 12 20 mp_sum_d >> start Ho >> stmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 12 20 mp_sum_d >> 0.000 Ho >> stmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 11 13 dbcsr_dot_sd >> 0.000 H >> ostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 10 12 calculate_ptrace_kp >> 0.0 >> 00 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 9 6 >> evaluate_core_matrix_traces >> 0.000 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 9 6 rebuild_ks_matrix >> start Ho >> stmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 10 6 >> qs_ks_build_kohn_sham_matrix >> start Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 11 140 pw_pool_create_pw >> st >> art Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 12 79 pw_create_c1d >> sta >> rt Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 12 79 pw_create_c1d >> 0.0 >> 00 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 11 140 pw_pool_create_pw >> 0. >> 000 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 11 141 pw_pool_create_pw >> st >> art Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 12 80 pw_create_c1d >> sta >> rt Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 12 80 pw_create_c1d >> 0.0 >> 00 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 11 141 pw_pool_create_pw >> 0. 
>> 000 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 11 61 pw_copy start >> Hostme >> m: 380 MB GPUmem: 0 MB >> 000000:000001<< 11 61 pw_copy 0.004 >> Hostme >> m: 380 MB GPUmem: 0 MB >> 000000:000001>> 11 35 pw_axpy start >> Hostme >> m: 380 MB GPUmem: 0 MB >> 000000:000001<< 11 35 pw_axpy 0.002 >> Hostme >> m: 380 MB GPUmem: 0 MB >> 000000:000001>> 11 6 pw_poisson_solve >> sta >> rt Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 12 6 >> pw_poisson_rebuild >> start Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 12 6 >> pw_poisson_rebuild >> 0.000 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 12 142 >> pw_pool_create_pw >> start Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 13 81 pw_create_c1d >> >> start Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 13 81 pw_create_c1d >> >> 0.000 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 12 142 >> pw_pool_create_pw >> 0.000 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 12 62 pw_copy >> start Hos >> tmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 12 62 pw_copy >> 0.003 Hos >> tmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 12 6 pw_multiply_with >> >> start Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 12 6 pw_multiply_with >> >> 0.002 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 12 63 pw_copy >> start Hos >> tmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 12 63 pw_copy >> 0.003 Hos >> tmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 12 6 pw_integral_ab >> st >> art Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 12 6 pw_integral_ab >> 0. 
>> 005 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 12 7 pw_poisson_set >> st >> art Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 13 143 >> pw_pool_create_pw >> start Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 14 82 >> pw_create_c1d >> start Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 14 82 >> pw_create_c1d >> 0.000 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 13 143 >> pw_pool_create_pw >> 0.000 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 13 64 pw_copy >> start >> Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 13 64 pw_copy >> 0.003 >> Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 13 16 pw_derive >> star >> t Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 13 16 pw_derive >> 0.00 >> 6 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 13 144 >> pw_pool_create_pw >> start Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 14 83 >> pw_create_c1d >> start Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 14 83 >> pw_create_c1d >> 0.000 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 13 144 >> pw_pool_create_pw >> 0.000 Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 13 65 pw_copy >> start >> Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001<< 13 65 pw_copy >> 0.004 >> Hostmem: 380 MB GPUmem: 0 MB >> 000000:000001>> 13 17 pw_derive >> star >> t Hostmem: 380 MB GPUmem: 0 MB >> ``` >> >> for psmp the last 100 lines is: >> >> ``` >> 000000:000002<< 9 7 >> evaluate_core_matrix_traces >> 0.000 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 9 7 rebuild_ks_matrix >> start Ho >> >> stmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 10 7 >> qs_ks_build_kohn_sham_matrix >> start Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 11 164 pw_pool_create_pw >> st >> art Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 12 93 pw_create_c1d >> sta >> rt Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 12 93 pw_create_c1d >> 0.0 >> 00 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 11 164 pw_pool_create_pw >> 0. 
>> 000 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 11 165 pw_pool_create_pw >> st >> art Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 12 94 pw_create_c1d >> sta >> rt Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 12 94 pw_create_c1d >> 0.0 >> 00 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 11 165 pw_pool_create_pw >> 0. >> 000 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 11 73 pw_copy start >> Hostme >> >> m: 693 MB GPUmem: 0 MB >> 000000:000002<< 11 73 pw_copy 0.001 >> Hostme >> >> m: 693 MB GPUmem: 0 MB >> 000000:000002>> 11 41 pw_axpy start >> Hostme >> >> m: 693 MB GPUmem: 0 MB >> 000000:000002<< 11 41 pw_axpy 0.001 >> Hostme >> >> m: 693 MB GPUmem: 0 MB >> 000000:000002>> 11 52 mp_sum_d >> start Hostm >> >> em: 693 MB GPUmem: 0 MB >> 000000:000002<< 11 52 mp_sum_d >> 0.000 Hostm >> >> em: 693 MB GPUmem: 0 MB >> 000000:000002>> 11 7 pw_poisson_solve >> sta >> rt Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 12 7 >> pw_poisson_rebuild >> start Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 12 7 >> pw_poisson_rebuild >> 0.000 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 12 166 >> pw_pool_create_pw >> >> start Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 13 95 pw_create_c1d >> >> start Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 13 95 pw_create_c1d >> >> 0.000 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 12 166 >> pw_pool_create_pw >> >> 0.000 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 12 74 pw_copy >> start Hos >> >> tmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 12 74 pw_copy >> 0.001 Hos >> >> tmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 12 7 pw_multiply_with >> >> start Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 12 7 pw_multiply_with >> >> 0.001 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 12 75 pw_copy >> start Hos >> >> tmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 12 75 pw_copy >> 0.001 Hos >> >> tmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 12 7 pw_integral_ab >> st >> art Hostmem: 693 MB GPUmem: 0 MB >> 
000000:000002>> 13 53 mp_sum_d >> start >> >> Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 13 53 mp_sum_d >> 0.000 >> >> Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 12 7 pw_integral_ab >> 0. >> 003 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 12 8 pw_poisson_set >> st >> art Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 13 167 >> pw_pool_create_pw >> start Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 14 96 >> pw_create_c1d >> >> start Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 14 96 >> pw_create_c1d >> >> 0.000 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 13 167 >> pw_pool_create_pw >> 0.000 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 13 76 pw_copy >> start >> Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 13 76 pw_copy >> 0.001 >> Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 13 19 pw_derive >> star >> t Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 13 19 pw_derive >> 0.00 >> 2 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 13 168 >> pw_pool_create_pw >> start Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 14 97 >> pw_create_c1d >> start Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 14 97 >> pw_create_c1d >> 0.000 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 13 168 >> pw_pool_create_pw >> 0.000 Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 13 77 pw_copy >> start >> Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002<< 13 77 pw_copy >> 0.001 >> Hostmem: 693 MB GPUmem: 0 MB >> 000000:000002>> 13 20 pw_derive >> star >> t Hostmem: 693 MB GPUmem: 0 MB >> ``` >> >> Thanks >> Bartosz >> >> poniedziałek, 21 października 2024 o 08:58:34 UTC+2 Frederick Stein >> napisał(a): >> >>> Dear Bartosz, >>> I have no idea about the issue with LibXSMM. >>> Regarding the trace, I do not know either as there is not much that >>> could break in pw_derive (it just performs multiplications) and the >>> sequence of operations is too unspecific. It may be that the code actually >>> breaks somewhere else.
Can you do the same with the ssmp and post the last >>> 100 lines? This way, we remove the asynchronicity issues for backtraces >>> with the psmp version. >>> Best, >>> Frederick >>> >>> bartosz mazur schrieb am Sonntag, 20. Oktober 2024 um 16:47:15 UTC+2: >>> >>>> The error is: >>>> >>>> ``` >>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>> CLX/DP TRY JIT STA COL >>>> 0..13 2 2 0 0 >>>> 14..23 0 0 0 0 >>>> >>>> 24..64 0 0 0 0 >>>> Registry and code: 13 MB + 16 KB (gemm=2) >>>> Command (PID=2607388): >>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>>> H2O-9.inp -o H2O-9.out >>>> Uptime: 5.288243 s >>>> >>>> >>>> >>>> =================================================================================== >>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>> = RANK 0 PID 2607388 RUNNING AT r21c01b10 >>>> >>>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>>> >>>> =================================================================================== >>>> >>>> >>>> =================================================================================== >>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>> = RANK 1 PID 2607389 RUNNING AT r21c01b10 >>>> = KILLED BY SIGNAL: 9 (Killed) >>>> >>>> =================================================================================== >>>> ``` >>>> >>>> and the last 20 lines: >>>> >>>> ``` >>>> 000000:000002<< 13 76 pw_copy >>>> 0.001 >>>> Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 13 19 pw_derive >>>> star >>>> t Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 13 19 pw_derive >>>> 0.00 >>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 13 168 >>>> pw_pool_create_pw >>>> start Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 14 97 >>>> pw_create_c1d >>>> start Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 14 97 >>>> pw_create_c1d >>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 13 168 >>>> pw_pool_create_pw >>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>> 
000000:000002>> 13 77 pw_copy >>>> start >>>> Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 13 77 pw_copy >>>> 0.001 >>>> Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 13 20 pw_derive >>>> star >>>> t Hostmem: 693 MB GPUmem: 0 MB >>>> ``` >>>> >>>> Thanks! >>>> piątek, 18 października 2024 o 17:18:39 UTC+2 Frederick Stein >>>> napisał(a): >>>> >>>>> Please pick one of the failing tests. Then, add the TRACE keyword to >>>>> the &GLOBAL section and then run the test manually. This increases the size >>>>> of the output file dramatically (to some million lines). Can you send me >>>>> the last ~20 lines of the output? >>>>> bartosz mazur schrieb am Freitag, 18. Oktober 2024 um 17:09:40 UTC+2: >>>>> >>>>>> I'm using the do_regtests.py script, not make regtesting, but I assume it >>>>>> makes no difference. As I mentioned in a previous message, for `--ompthreads >>>>>> 1` all tests passed both for ssmp and psmp. For ssmp with >>>>>> `--ompthreads 2` I observe similar errors as for psmp with the same >>>>>> setting; I provide an example output as an attachment. >>>>>> >>>>>> Thanks >>>>>> Bartosz >>>>>> >>>>>> piątek, 18 października 2024 o 16:24:16 UTC+2 Frederick Stein >>>>>> napisał(a): >>>>>> >>>>>>> Dear Bartosz, >>>>>>> What happens if you set the number of OpenMP threads to 1 (add >>>>>>> '--ompthreads 1' to TESTOPTS)? What errors do you observe in case of the >>>>>>> ssmp? >>>>>>> Best, >>>>>>> Frederick >>>>>>> >>>>>>> bartosz mazur schrieb am Freitag, 18. Oktober 2024 um 15:37:43 UTC+2: >>>>>>> >>>>>>>> Hi Frederick, >>>>>>>> >>>>>>>> thanks again for the help. So I have tested different simulation >>>>>>>> variants and I know that the problem occurs when using OMP. For MPI >>>>>>>> calculations without OMP all tests pass. I have also tested the effect of >>>>>>>> the `OMP_PROC_BIND` and `OMP_PLACES` parameters and apart from the >>>>>>>> effect on simulation time, they have no significant effect on the presence >>>>>>>> of errors.
Below are the results for ssmp: >>>>>>>> >>>>>>>> ``` >>>>>>>> OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time >>>>>>>> spread, threads, 3850, 4144, 4, 290, 186min >>>>>>>> spread, cores, 3831, 4144, 3, 310, 183min >>>>>>>> spread, sockets, 3864, 4144, 3, 277, 104min >>>>>>>> close, threads, 3879, 4144, 3, 262, 171min >>>>>>>> close, cores, 3854, 4144, 0, 290, 168min >>>>>>>> close, sockets, 3865, 4144, 3, 276, 104min >>>>>>>> master, threads, 4121, 4144, 0, 23, 1002min >>>>>>>> master, cores, 4121, 4144, 0, 23, 986min >>>>>>>> master, sockets, 3942, 4144, 3, 199, 219min >>>>>>>> false, threads, 3918, 4144, 0, 226, 178min >>>>>>>> false, cores, 3919, 4144, 3, 222, 176min >>>>>>>> false, sockets, 3856, 4144, 4, 284, 104min >>>>>>>> ``` >>>>>>>> >>>>>>>> and psmp: >>>>>>>> >>>>>>>> ``` >>>>>>>> OMP_PROC_BIND, OMP_PLACES, results >>>>>>>> spread, threads, Summary: correct: 4097 / 4227; failed: 130; 495min >>>>>>>> spread, cores, 26 / 362 >>>>>>>> spread, cores, 26 / 362 >>>>>>>> close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min >>>>>>>> close, cores, 60 / 362 >>>>>>>> close, sockets, 13 / 362 >>>>>>>> master, threads, 13 / 362 >>>>>>>> master, cores, 79 / 362 >>>>>>>> master, sockets, Summary: correct: 4153 / 4227; failed: 74; 563min >>>>>>>> false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min >>>>>>>> false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min >>>>>>>> false, sockets, 96 / 362 >>>>>>>> not specified, not specified, Summary: correct: 4129 / 4227; >>>>>>>> failed: 98; 263min >>>>>>>> ``` >>>>>>>> >>>>>>>> Any ideas what I could do next to have more information about the >>>>>>>> source of the problem or maybe you see a potential solution at this stage? >>>>>>>> I would appreciate any further help. 
>>>>>>>> >>>>>>>> Best >>>>>>>> Bartosz >>>>>>>> >>>>>>>> >>>>>>>> piątek, 11 października 2024 o 14:30:25 UTC+2 Frederick Stein >>>>>>>> napisał(a): >>>>>>>> >>>>>>>>> Dear Bartosz, >>>>>>>>> If I am not mistaken, you used 8 OpenMP threads. The tests do not >>>>>>>>> run that efficiently with such a large number of threads. 2 should be >>>>>>>>> sufficient. >>>>>>>>> The test results suggest that most of the functionality may work >>>>>>>>> but due to a missing backtrace (or similar information), it is hard to tell >>>>>>>>> why they fail. You could also try to run some of the single-node tests to >>>>>>>>> assess the stability of CP2K. >>>>>>>>> Best, >>>>>>>>> Frederick >>>>>>>>> >>>>>>>>> bartosz mazur schrieb am Freitag, 11. Oktober 2024 um 13:48:42 >>>>>>>>> UTC+2: >>>>>>>>> >>>>>>>>>> Sorry, forgot attachments. >>>>>>>>>> >>>>>>>>>> -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/d3cc4f35-c1e5-4685-831f-53e4eb90eb90n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed...
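For reference, the TRACE option discussed in this thread is a keyword of the &GLOBAL section of a CP2K input file. A minimal sketch of how a failing regtest input can be instrumented; the project name and run type below are illustrative placeholders, not values taken from the thread:

```
&GLOBAL
  PROJECT H2O-9      ! illustrative name; keep the failing test's own settings
  RUN_TYPE ENERGY    ! keep whatever the test originally specifies
  TRACE              ! log entry/exit of instrumented routines per process
  TRACE_MAX 100000   ! optional cap on the number of trace lines per process
&END GLOBAL
```

The test can then be run directly, e.g. `cp2k.ssmp -i H2O-9.inp -o H2O-9.out`; since the trace easily grows to millions of lines, usually only the tail (`tail -n 100 H2O-9.out`) is inspected, as done in this thread.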
URL: From f.stein at hzdr.de Tue Oct 22 11:12:49 2024 From: f.stein at hzdr.de (Frederick Stein) Date: Tue, 22 Oct 2024 04:12:49 -0700 (PDT) Subject: [CP2K-user] [CP2K:20802] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> <3c0b331a-fe25-4176-8692-ff5d7c466c44n@googlegroups.com> <9c503856-0751-48b3-8bfb-e37cf5cb91a6n@googlegroups.com> Message-ID: I can reproduce the error locally. I am investigating it now. bartosz mazur schrieb am Dienstag, 22. Oktober 2024 um 11:58:57 UTC+2: > I was loading it as it was needed for compilation. 
I have unloaded the > module, but the error still occurs: > > ``` > LIBXSMM_VERSION: develop-1.17-3834 (25693946) > CLX/DP TRY JIT STA COL > 0..13 2 2 0 0 > 14..23 0 0 0 0 > 24..64 0 0 0 0 > Registry and code: 13 MB + 16 KB (gemm=2) > Command (PID=15485): > /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i > H2O-9.inp -o H2O-9.out > Uptime: 1.757102 s > > > > =================================================================================== > = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES > = RANK 0 PID 15485 RUNNING AT r30c01b01 > > = KILLED BY SIGNAL: 11 (Segmentation fault) > > =================================================================================== > > > =================================================================================== > = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES > = RANK 1 PID 15486 RUNNING AT r30c01b01 > > = KILLED BY SIGNAL: 9 (Killed) > > =================================================================================== > ``` > > > and the last 100 lines: > > ``` > 000000:000002>> 11 37 pw_create_c1d > start > Hostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 11 37 pw_create_c1d > 0.000 > Hostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 10 64 pw_pool_create_pw > 0.000 > Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 10 25 pw_copy start > Hostmem: > 697 MB GPUmem: 0 MB > 000000:000002<< 10 25 pw_copy 0.001 > Hostmem: > 697 MB GPUmem: 0 MB > 000000:000002>> 10 17 pw_axpy start > Hostmem: > 697 MB GPUmem: 0 MB > 000000:000002<< 10 17 pw_axpy 0.001 > Hostmem: > 697 MB GPUmem: 0 MB > 000000:000002>> 10 19 mp_sum_d start > Hostmem: > 697 MB GPUmem: 0 MB > 000000:000002<< 10 19 mp_sum_d 0.000 > Hostmem: > 697 MB GPUmem: 0 MB > 000000:000002>> 10 3 pw_poisson_solve > start > Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 11 3 pw_poisson_rebuild > s > tart Hostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 11 3 pw_poisson_rebuild > 0 > .000 Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 11 65 
pw_pool_create_pw > st > art Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 12 38 pw_create_c1d > sta > rt Hostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 12 38 pw_create_c1d > 0.0 > 00 Hostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 11 65 pw_pool_create_pw > 0. > 000 Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 11 26 pw_copy start > Hostme > m: 697 MB GPUmem: 0 MB > 000000:000002<< 11 26 pw_copy 0.001 > Hostme > m: 697 MB GPUmem: 0 MB > 000000:000002>> 11 3 pw_multiply_with > sta > rt Hostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 11 3 pw_multiply_with > 0.0 > 01 Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 11 27 pw_copy start > Hostme > m: 697 MB GPUmem: 0 MB > 000000:000002<< 11 27 pw_copy 0.001 > Hostme > m: 697 MB GPUmem: 0 MB > 000000:000002>> 11 3 pw_integral_ab > start > Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 12 20 mp_sum_d > start Ho > stmem: 697 MB GPUmem: 0 MB > 000000:000002<< 12 20 mp_sum_d > 0.001 Ho > stmem: 697 MB GPUmem: 0 MB > 000000:000002<< 11 3 pw_integral_ab > 0.004 > Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 11 4 pw_poisson_set > start > Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 12 66 pw_pool_create_pw > > start Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 13 39 pw_create_c1d > > start Hostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 13 39 pw_create_c1d > > 0.000 Hostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 12 66 pw_pool_create_pw > > 0.000 Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 12 28 pw_copy > start Hos > tmem: 697 MB GPUmem: 0 MB > 000000:000002<< 12 28 pw_copy > 0.001 Hos > tmem: 697 MB GPUmem: 0 MB > 000000:000002>> 12 7 pw_derive > start H > ostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 12 7 pw_derive > 0.002 H > ostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 12 67 pw_pool_create_pw > > start Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 13 40 pw_create_c1d > > start Hostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 13 40 pw_create_c1d > > 0.000 Hostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 12 67 
pw_pool_create_pw > > 0.000 Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 12 29 pw_copy > start Hos > tmem: 697 MB GPUmem: 0 MB > 000000:000002<< 12 29 pw_copy > 0.001 Hos > tmem: 697 MB GPUmem: 0 MB > 000000:000002>> 12 8 pw_derive > start H > ostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 12 8 pw_derive > 0.002 H > ostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 12 68 pw_pool_create_pw > > start Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 13 41 pw_create_c1d > > start Hostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 13 41 pw_create_c1d > > 0.000 Hostmem: 697 MB GPUmem: 0 MB > 000000:000002<< 12 68 pw_pool_create_pw > > 0.000 Hostmem: 697 MB GPUmem: 0 MB > 000000:000002>> 12 30 pw_copy > start Hos > tmem: 697 MB GPUmem: 0 MB > 000000:000002<< 12 30 pw_copy > 0.001 Hos > tmem: 697 MB GPUmem: 0 MB > 000000:000002>> 12 9 pw_derive > start H > ostmem: 697 MB GPUmem: 0 MB > ``` > > This is the list of currently loaded modules (all come with intel): > > ``` > Currently Loaded Modulefiles: > 1) GCCcore/13.3.0 7) > impi/2021.13.0-intel-compilers-2024.2.0 > 2) zlib/1.3.1-GCCcore-13.3.0 8) imkl/2024.2.0 > > 3) binutils/2.42-GCCcore-13.3.0 9) iimpi/2024a > > 4) intel-compilers/2024.2.0 10) imkl-FFTW/2024.2.0-iimpi-2024a > > 5) numactl/2.0.18-GCCcore-13.3.0 11) intel/2024a > > 6) UCX/1.16.0-GCCcore-13.3.0 > ``` > wtorek, 22 pa?dziernika 2024 o 11:12:57 UTC+2 Frederick Stein napisa?(a): > >> Dear Bartosz, >> I am currently running some tests with the latest Intel compiler myself. >> What bothers me about your setup is the module GCC13/13.3.0 . Why is it >> loaded? Can you unload it? This would at least reduce potential >> interferences with between the Intel and the GCC compilers. >> Best, >> Frederick >> >> bartosz mazur schrieb am Montag, 21. 
Oktober 2024 um 16:33:45 UTC+2: >> >>> The error for ssmp is: >>> >>> ``` >>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>> CLX/DP TRY JIT STA COL >>> 0..13 4 4 0 0 >>> 14..23 0 0 0 0 >>> 24..64 0 0 0 0 >>> Registry and code: 13 MB + 32 KB (gemm=4) >>> Command (PID=54845): >>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>> H2O-9.inp -o H2O-9.out >>> Uptime: 2.861583 s >>> /var/spool/slurmd/r30c01b15/job3120330/slurm_script: line 36: 54845 >>> Segmentation fault (core dumped) >>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>> H2O-9.inp -o H2O-9.out >>> ``` >>> >>> and the last 100 lines of output: >>> >>> ``` >>> 000000:000001>> 12 20 mp_sum_d >>> start Ho >>> stmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 12 20 mp_sum_d >>> 0.000 Ho >>> stmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 11 13 dbcsr_dot_sd >>> 0.000 H >>> ostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 10 12 calculate_ptrace_kp >>> 0.0 >>> 00 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 9 6 >>> evaluate_core_matrix_traces >>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 9 6 rebuild_ks_matrix >>> start Ho >>> stmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 10 6 >>> qs_ks_build_kohn_sham_matrix >>> start Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 11 140 pw_pool_create_pw >>> st >>> art Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 12 79 pw_create_c1d >>> sta >>> rt Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 12 79 pw_create_c1d >>> 0.0 >>> 00 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 11 140 pw_pool_create_pw >>> 0. >>> 000 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 11 141 pw_pool_create_pw >>> st >>> art Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 12 80 pw_create_c1d >>> sta >>> rt Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 12 80 pw_create_c1d >>> 0.0 >>> 00 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 11 141 pw_pool_create_pw >>> 0. 
>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 11 61 pw_copy >>> start Hostme >>> m: 380 MB GPUmem: 0 MB >>> 000000:000001<< 11 61 pw_copy >>> 0.004 Hostme >>> m: 380 MB GPUmem: 0 MB >>> 000000:000001>> 11 35 pw_axpy >>> start Hostme >>> m: 380 MB GPUmem: 0 MB >>> 000000:000001<< 11 35 pw_axpy >>> 0.002 Hostme >>> m: 380 MB GPUmem: 0 MB >>> 000000:000001>> 11 6 pw_poisson_solve >>> sta >>> rt Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 12 6 >>> pw_poisson_rebuild >>> start Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 12 6 >>> pw_poisson_rebuild >>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 12 142 >>> pw_pool_create_pw >>> start Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 13 81 >>> pw_create_c1d >>> start Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 13 81 >>> pw_create_c1d >>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 12 142 >>> pw_pool_create_pw >>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 12 62 pw_copy >>> start Hos >>> tmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 12 62 pw_copy >>> 0.003 Hos >>> tmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 12 6 >>> pw_multiply_with >>> start Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 12 6 >>> pw_multiply_with >>> 0.002 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 12 63 pw_copy >>> start Hos >>> tmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 12 63 pw_copy >>> 0.003 Hos >>> tmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 12 6 pw_integral_ab >>> st >>> art Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 12 6 pw_integral_ab >>> 0. 
>>> 005 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 12 7 pw_poisson_set >>> st >>> art Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 13 143 >>> pw_pool_create_pw >>> start Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 14 82 >>> pw_create_c1d >>> start Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 14 82 >>> pw_create_c1d >>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 13 143 >>> pw_pool_create_pw >>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 13 64 pw_copy >>> start >>> Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 13 64 pw_copy >>> 0.003 >>> Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 13 16 pw_derive >>> star >>> t Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 13 16 pw_derive >>> 0.00 >>> 6 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 13 144 >>> pw_pool_create_pw >>> start Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 14 83 >>> pw_create_c1d >>> start Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 14 83 >>> pw_create_c1d >>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 13 144 >>> pw_pool_create_pw >>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 13 65 pw_copy >>> start >>> Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001<< 13 65 pw_copy >>> 0.004 >>> Hostmem: 380 MB GPUmem: 0 MB >>> 000000:000001>> 13 17 pw_derive >>> star >>> t Hostmem: 380 MB GPUmem: 0 MB >>> ``` >>> >>> for psmp the last 100 lines is: >>> >>> ``` >>> 000000:000002<< 9 7 >>> evaluate_core_matrix_traces >>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 9 7 rebuild_ks_matrix >>> start Ho >>> >>> stmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 10 7 >>> qs_ks_build_kohn_sham_matrix >>> start Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 11 164 pw_pool_create_pw >>> st >>> art Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 12 93 pw_create_c1d >>> sta >>> rt Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 12 93 pw_create_c1d >>> 0.0 >>> 00 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 11 164 pw_pool_create_pw 
>>> 0. >>> 000 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 11 165 pw_pool_create_pw >>> st >>> art Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 12 94 pw_create_c1d >>> sta >>> rt Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 12 94 pw_create_c1d >>> 0.0 >>> 00 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 11 165 pw_pool_create_pw >>> 0. >>> 000 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 11 73 pw_copy >>> start Hostme >>> >>> m: 693 MB GPUmem: 0 MB >>> 000000:000002<< 11 73 pw_copy >>> 0.001 Hostme >>> >>> m: 693 MB GPUmem: 0 MB >>> 000000:000002>> 11 41 pw_axpy >>> start Hostme >>> >>> m: 693 MB GPUmem: 0 MB >>> 000000:000002<< 11 41 pw_axpy >>> 0.001 Hostme >>> >>> m: 693 MB GPUmem: 0 MB >>> 000000:000002>> 11 52 mp_sum_d >>> start Hostm >>> >>> em: 693 MB GPUmem: 0 MB >>> 000000:000002<< 11 52 mp_sum_d >>> 0.000 Hostm >>> >>> em: 693 MB GPUmem: 0 MB >>> 000000:000002>> 11 7 pw_poisson_solve >>> sta >>> rt Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 12 7 >>> pw_poisson_rebuild >>> start Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 12 7 >>> pw_poisson_rebuild >>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 12 166 >>> pw_pool_create_pw >>> >>> start Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 13 95 >>> pw_create_c1d >>> start Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 13 95 >>> pw_create_c1d >>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 12 166 >>> pw_pool_create_pw >>> >>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 12 74 pw_copy >>> start Hos >>> >>> tmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 12 74 pw_copy >>> 0.001 Hos >>> >>> tmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 12 7 >>> pw_multiply_with >>> start Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 12 7 >>> pw_multiply_with >>> 0.001 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 12 75 pw_copy >>> start Hos >>> >>> tmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 12 75 pw_copy >>> 0.001 Hos >>> >>> tmem: 693 MB GPUmem: 0 MB >>> 
000000:000002>> 12 7 pw_integral_ab >>> st >>> art Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 13 53 mp_sum_d >>> start >>> >>> Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 13 53 mp_sum_d >>> 0.000 >>> >>> Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 12 7 pw_integral_ab >>> 0. >>> 003 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 12 8 pw_poisson_set >>> st >>> art Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 13 167 >>> pw_pool_create_pw >>> start Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 14 96 >>> pw_create_c1d >>> >>> start Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 14 96 >>> pw_create_c1d >>> >>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 13 167 >>> pw_pool_create_pw >>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 13 76 pw_copy >>> start >>> Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 13 76 pw_copy >>> 0.001 >>> Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 13 19 pw_derive >>> star >>> t Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 13 19 pw_derive >>> 0.00 >>> 2 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 13 168 >>> pw_pool_create_pw >>> start Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 14 97 >>> pw_create_c1d >>> start Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 14 97 >>> pw_create_c1d >>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 13 168 >>> pw_pool_create_pw >>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 13 77 pw_copy >>> start >>> Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002<< 13 77 pw_copy >>> 0.001 >>> Hostmem: 693 MB GPUmem: 0 MB >>> 000000:000002>> 13 20 pw_derive >>> star >>> t Hostmem: 693 MB GPUmem: 0 MB >>> ``` >>> >>> Thanks >>> Bartosz >>> >>> poniedzia?ek, 21 pa?dziernika 2024 o 08:58:34 UTC+2 Frederick Stein >>> napisa?(a): >>> >>>> Dear Bartosz, >>>> I have no idea about the issue with LibXSMM. 
>>>> Regarding the trace, I do not know either as there is not much that >>>> could break in pw_derive (it just performs multiplications) and the >>>> sequence of operations is to unspecific. It may be that the code actually >>>> breaks somewhere else. Can you do the same with the ssmp and post the last >>>> 100 lines? This way, we remove the asynchronicity issues for backtraces >>>> with the psmp version. >>>> Best, >>>> Frederick >>>> >>>> bartosz mazur schrieb am Sonntag, 20. Oktober 2024 um 16:47:15 UTC+2: >>>> >>>>> The error is: >>>>> >>>>> ``` >>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>> CLX/DP TRY JIT STA COL >>>>> 0..13 2 2 0 0 >>>>> 14..23 0 0 0 0 >>>>> >>>>> 24..64 0 0 0 0 >>>>> Registry and code: 13 MB + 16 KB (gemm=2) >>>>> Command (PID=2607388): >>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>>>> H2O-9.inp -o H2O-9.out >>>>> Uptime: 5.288243 s >>>>> >>>>> >>>>> >>>>> =================================================================================== >>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>> = RANK 0 PID 2607388 RUNNING AT r21c01b10 >>>>> >>>>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>>>> >>>>> =================================================================================== >>>>> >>>>> >>>>> =================================================================================== >>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>> = RANK 1 PID 2607389 RUNNING AT r21c01b10 >>>>> = KILLED BY SIGNAL: 9 (Killed) >>>>> >>>>> =================================================================================== >>>>> ``` >>>>> >>>>> and the last 20 lines: >>>>> >>>>> ``` >>>>> 000000:000002<< 13 76 pw_copy >>>>> 0.001 >>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 13 19 pw_derive >>>>> star >>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 13 19 pw_derive >>>>> 0.00 >>>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 13 168 >>>>> pw_pool_create_pw 
>>>>> start Hostmem: 693 MB GPUmem: 0 MB
>>>>> 000000:000002>> 14 97 pw_create_c1d start Hostmem: 693 MB GPUmem: 0 MB
>>>>> 000000:000002<< 14 97 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>> 000000:000002<< 13 168 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>> 000000:000002>> 13 77 pw_copy start Hostmem: 693 MB GPUmem: 0 MB
>>>>> 000000:000002<< 13 77 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>> 000000:000002>> 13 20 pw_derive start Hostmem: 693 MB GPUmem: 0 MB
>>>>> ```
>>>>>
>>>>> Thanks!
>>>>> On Friday, 18 October 2024 at 17:18:39 UTC+2, Frederick Stein wrote:
>>>>>
>>>>>> Please pick one of the failing tests. Then, add the TRACE keyword to
>>>>>> the &GLOBAL section and run the test manually. This increases the size
>>>>>> of the output file dramatically (to some million lines). Can you send me
>>>>>> the last ~20 lines of the output?
>>>>>> bartosz mazur wrote on Friday, 18 October 2024 at 17:09:40 UTC+2:
>>>>>>
>>>>>>> I'm using the do_regtests.py script, not make regtesting, but I assume
>>>>>>> that makes no difference. As I mentioned in my previous message, with
>>>>>>> `--ompthreads 1` all tests passed, both for ssmp and psmp. For ssmp
>>>>>>> with `--ompthreads 2` I observe errors similar to those for psmp with
>>>>>>> the same setting; I provide an example output as an attachment.
>>>>>>>
>>>>>>> Thanks
>>>>>>> Bartosz
>>>>>>>
>>>>>>> On Friday, 18 October 2024 at 16:24:16 UTC+2, Frederick Stein wrote:
>>>>>>>
>>>>>>>> Dear Bartosz,
>>>>>>>> What happens if you set the number of OpenMP threads to 1 (add
>>>>>>>> '--ompthreads 1' to TESTOPTS)? What errors do you observe in case of the
>>>>>>>> ssmp?
>>>>>>>> Best,
>>>>>>>> Frederick
>>>>>>>>
>>>>>>>> bartosz mazur wrote on Friday, 18 October 2024 at 15:37:43 UTC+2:
>>>>>>>>
>>>>>>>>> Hi Frederick,
>>>>>>>>>
>>>>>>>>> thanks again for help.
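[The TRACE keyword that Frederick refers to is a plain keyword placed directly in the &GLOBAL section of the CP2K input file; a minimal sketch, where the PROJECT and RUN_TYPE values are placeholders rather than the actual test input:]

```
&GLOBAL
  PROJECT H2O-9
  RUN_TYPE ENERGY
  TRACE
&END GLOBAL
```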
So I have tested different simulation >>>>>>>>> variants and I know that the problem occurs when using OMP. For MPI >>>>>>>>> calculations without OMP all tests pass. I have also tested the effect of >>>>>>>>> the `OMP_PROC_BIND` and `OMP_PLACES` parameters and apart from >>>>>>>>> the effect on simulation time, they have no significant effect on the >>>>>>>>> presence of errors. Below are the results for ssmp: >>>>>>>>> >>>>>>>>> ``` >>>>>>>>> OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time >>>>>>>>> spread, threads, 3850, 4144, 4, 290, 186min >>>>>>>>> spread, cores, 3831, 4144, 3, 310, 183min >>>>>>>>> spread, sockets, 3864, 4144, 3, 277, 104min >>>>>>>>> close, threads, 3879, 4144, 3, 262, 171min >>>>>>>>> close, cores, 3854, 4144, 0, 290, 168min >>>>>>>>> close, sockets, 3865, 4144, 3, 276, 104min >>>>>>>>> master, threads, 4121, 4144, 0, 23, 1002min >>>>>>>>> master, cores, 4121, 4144, 0, 23, 986min >>>>>>>>> master, sockets, 3942, 4144, 3, 199, 219min >>>>>>>>> false, threads, 3918, 4144, 0, 226, 178min >>>>>>>>> false, cores, 3919, 4144, 3, 222, 176min >>>>>>>>> false, sockets, 3856, 4144, 4, 284, 104min >>>>>>>>> ``` >>>>>>>>> >>>>>>>>> and psmp: >>>>>>>>> >>>>>>>>> ``` >>>>>>>>> OMP_PROC_BIND, OMP_PLACES, results >>>>>>>>> spread, threads, Summary: correct: 4097 / 4227; failed: 130; 495min >>>>>>>>> spread, cores, 26 / 362 >>>>>>>>> spread, cores, 26 / 362 >>>>>>>>> close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min >>>>>>>>> close, cores, 60 / 362 >>>>>>>>> close, sockets, 13 / 362 >>>>>>>>> master, threads, 13 / 362 >>>>>>>>> master, cores, 79 / 362 >>>>>>>>> master, sockets, Summary: correct: 4153 / 4227; failed: 74; 563min >>>>>>>>> false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min >>>>>>>>> false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min >>>>>>>>> false, sockets, 96 / 362 >>>>>>>>> not specified, not specified, Summary: correct: 4129 / 4227; >>>>>>>>> failed: 98; 263min >>>>>>>>> ``` 
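[The binding sweep tabulated above can be scripted with a small wrapper; a sketch only — the do_regtests.py arguments and working directory are assumptions, not the exact commands used in this thread, so the actual invocation is left commented out:]

```shell
# Enumerate every OMP_PROC_BIND / OMP_PLACES combination from the tables above
# (4 bind policies x 3 place lists = 12 runs per binary).
for bind in spread close master false; do
  for places in threads cores sockets; do
    echo "OMP_PROC_BIND=$bind OMP_PLACES=$places"
    # OMP_PROC_BIND=$bind OMP_PLACES=$places \
    #   ./tests/do_regtests.py --ompthreads 2 local psmp
  done
done
```

[This produces the 12 combinations each table covers; `master` and `false` are valid OMP_PROC_BIND values per the OpenMP specification.]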
>>>>>>>>>
>>>>>>>>> Any ideas about what I could do next to get more information about
>>>>>>>>> the source of the problem? Or do you perhaps see a potential solution
>>>>>>>>> at this stage? I would appreciate any further help.
>>>>>>>>>
>>>>>>>>> Best
>>>>>>>>> Bartosz
>>>>>>>>>
>>>>>>>>> On Friday, 11 October 2024 at 14:30:25 UTC+2, Frederick Stein wrote:
>>>>>>>>>
>>>>>>>>>> Dear Bartosz,
>>>>>>>>>> If I am not mistaken, you used 8 OpenMP threads. The tests do not
>>>>>>>>>> run that efficiently with such a large number of threads; 2 should be
>>>>>>>>>> sufficient.
>>>>>>>>>> The test results suggest that most of the functionality may work,
>>>>>>>>>> but due to a missing backtrace (or similar information), it is hard to tell
>>>>>>>>>> why the tests fail. You could also try to run some of the single-node tests to
>>>>>>>>>> assess the stability of CP2K.
>>>>>>>>>> Best,
>>>>>>>>>> Frederick
>>>>>>>>>>
>>>>>>>>>> bartosz mazur wrote on Friday, 11 October 2024 at 13:48:42 UTC+2:
>>>>>>>>>>
>>>>>>>>>>> Sorry, forgot attachments.
>>>>>>>>>>>
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/e642808f-43c6-47de-800c-94b4e334515en%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From f.stein at hzdr.de Tue Oct 22 13:24:03 2024 From: f.stein at hzdr.de (Frederick Stein) Date: Tue, 22 Oct 2024 06:24:03 -0700 (PDT) Subject: [CP2K-user] [CP2K:20803] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> <3c0b331a-fe25-4176-8692-ff5d7c466c44n@googlegroups.com> <9c503856-0751-48b3-8bfb-e37cf5cb91a6n@googlegroups.com> Message-ID: <024be246-9696-4577-a330-f5a234dc51edn@googlegroups.com> I have a fix for it. In contrast to my first thought, it is a case of invalid type conversion from real to complex numbers (yes, Fortran is rather strict about it) in pw_derive. This may also be present in a few other spots. I am currently running more tests and I will open a pull request within the next few days. Best, Frederick Frederick Stein schrieb am Dienstag, 22. Oktober 2024 um 13:12:49 UTC+2: > I can reproduce the error locally. I am investigating it now. > > bartosz mazur schrieb am Dienstag, 22. Oktober 2024 um 11:58:57 UTC+2: > >> I was loading it as it was needed for compilation. 
I have unloaded the >> module, but the error still occurs: >> >> ``` >> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >> CLX/DP TRY JIT STA COL >> 0..13 2 2 0 0 >> 14..23 0 0 0 0 >> 24..64 0 0 0 0 >> Registry and code: 13 MB + 16 KB (gemm=2) >> Command (PID=15485): >> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >> H2O-9.inp -o H2O-9.out >> Uptime: 1.757102 s >> >> >> >> =================================================================================== >> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >> = RANK 0 PID 15485 RUNNING AT r30c01b01 >> >> = KILLED BY SIGNAL: 11 (Segmentation fault) >> >> =================================================================================== >> >> >> =================================================================================== >> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >> = RANK 1 PID 15486 RUNNING AT r30c01b01 >> >> = KILLED BY SIGNAL: 9 (Killed) >> >> =================================================================================== >> ``` >> >> >> and the last 100 lines: >> >> ``` >> 000000:000002>> 11 37 pw_create_c1d >> start >> Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002<< 11 37 pw_create_c1d >> 0.000 >> Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002<< 10 64 pw_pool_create_pw >> 0.000 >> Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 10 25 pw_copy start >> Hostmem: >> 697 MB GPUmem: 0 MB >> 000000:000002<< 10 25 pw_copy 0.001 >> Hostmem: >> 697 MB GPUmem: 0 MB >> 000000:000002>> 10 17 pw_axpy start >> Hostmem: >> 697 MB GPUmem: 0 MB >> 000000:000002<< 10 17 pw_axpy 0.001 >> Hostmem: >> 697 MB GPUmem: 0 MB >> 000000:000002>> 10 19 mp_sum_d start >> Hostmem: >> 697 MB GPUmem: 0 MB >> 000000:000002<< 10 19 mp_sum_d 0.000 >> Hostmem: >> 697 MB GPUmem: 0 MB >> 000000:000002>> 10 3 pw_poisson_solve >> start >> Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 11 3 pw_poisson_rebuild >> s >> tart Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002<< 11 3 pw_poisson_rebuild >> 
0 >> .000 Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 11 65 pw_pool_create_pw >> st >> art Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 12 38 pw_create_c1d >> sta >> rt Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002<< 12 38 pw_create_c1d >> 0.0 >> 00 Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002<< 11 65 pw_pool_create_pw >> 0. >> 000 Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 11 26 pw_copy start >> Hostme >> m: 697 MB GPUmem: 0 MB >> 000000:000002<< 11 26 pw_copy 0.001 >> Hostme >> m: 697 MB GPUmem: 0 MB >> 000000:000002>> 11 3 pw_multiply_with >> sta >> rt Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002<< 11 3 pw_multiply_with >> 0.0 >> 01 Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 11 27 pw_copy start >> Hostme >> m: 697 MB GPUmem: 0 MB >> 000000:000002<< 11 27 pw_copy 0.001 >> Hostme >> m: 697 MB GPUmem: 0 MB >> 000000:000002>> 11 3 pw_integral_ab >> start >> Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 12 20 mp_sum_d >> start Ho >> stmem: 697 MB GPUmem: 0 MB >> 000000:000002<< 12 20 mp_sum_d >> 0.001 Ho >> stmem: 697 MB GPUmem: 0 MB >> 000000:000002<< 11 3 pw_integral_ab >> 0.004 >> Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 11 4 pw_poisson_set >> start >> Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 12 66 >> pw_pool_create_pw >> start Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 13 39 pw_create_c1d >> >> start Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002<< 13 39 pw_create_c1d >> >> 0.000 Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002<< 12 66 >> pw_pool_create_pw >> 0.000 Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 12 28 pw_copy >> start Hos >> tmem: 697 MB GPUmem: 0 MB >> 000000:000002<< 12 28 pw_copy >> 0.001 Hos >> tmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 12 7 pw_derive >> start H >> ostmem: 697 MB GPUmem: 0 MB >> 000000:000002<< 12 7 pw_derive >> 0.002 H >> ostmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 12 67 >> pw_pool_create_pw >> start Hostmem: 697 MB GPUmem: 0 MB >> 000000:000002>> 13 40 pw_create_c1d >> >> start 
Hostmem: 697 MB GPUmem: 0 MB
>> 000000:000002<< 13 40 pw_create_c1d 0.000 Hostmem: 697 MB GPUmem: 0 MB
>> 000000:000002<< 12 67 pw_pool_create_pw 0.000 Hostmem: 697 MB GPUmem: 0 MB
>> 000000:000002>> 12 29 pw_copy start Hostmem: 697 MB GPUmem: 0 MB
>> 000000:000002<< 12 29 pw_copy 0.001 Hostmem: 697 MB GPUmem: 0 MB
>> 000000:000002>> 12 8 pw_derive start Hostmem: 697 MB GPUmem: 0 MB
>> 000000:000002<< 12 8 pw_derive 0.002 Hostmem: 697 MB GPUmem: 0 MB
>> 000000:000002>> 12 68 pw_pool_create_pw start Hostmem: 697 MB GPUmem: 0 MB
>> 000000:000002>> 13 41 pw_create_c1d start Hostmem: 697 MB GPUmem: 0 MB
>> 000000:000002<< 13 41 pw_create_c1d 0.000 Hostmem: 697 MB GPUmem: 0 MB
>> 000000:000002<< 12 68 pw_pool_create_pw 0.000 Hostmem: 697 MB GPUmem: 0 MB
>> 000000:000002>> 12 30 pw_copy start Hostmem: 697 MB GPUmem: 0 MB
>> 000000:000002<< 12 30 pw_copy 0.001 Hostmem: 697 MB GPUmem: 0 MB
>> 000000:000002>> 12 9 pw_derive start Hostmem: 697 MB GPUmem: 0 MB
>> ```
>>
>> This is the list of currently loaded modules (all come with intel):
>>
>> ```
>> Currently Loaded Modulefiles:
>>  1) GCCcore/13.3.0                  7) impi/2021.13.0-intel-compilers-2024.2.0
>>  2) zlib/1.3.1-GCCcore-13.3.0       8) imkl/2024.2.0
>>  3) binutils/2.42-GCCcore-13.3.0    9) iimpi/2024a
>>  4) intel-compilers/2024.2.0       10) imkl-FFTW/2024.2.0-iimpi-2024a
>>  5) numactl/2.0.18-GCCcore-13.3.0  11) intel/2024a
>>  6) UCX/1.16.0-GCCcore-13.3.0
>> ```
>> On Tuesday, 22 October 2024 at 11:12:57 UTC+2, Frederick Stein wrote:
>>
>>> Dear Bartosz,
>>> I am currently running some tests with the latest Intel compiler myself.
>>> What bothers me about your setup is the module GCC13/13.3.0. Why is it
>>> loaded? Can you unload it? This would at least reduce potential
>>> interference between the Intel and the GCC compilers.
>>> Best,
>>> Frederick
>>>
>>> bartosz mazur schrieb am Montag, 21.
Oktober 2024 um 16:33:45 UTC+2: >>> >>>> The error for ssmp is: >>>> >>>> ``` >>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>> CLX/DP TRY JIT STA COL >>>> 0..13 4 4 0 0 >>>> 14..23 0 0 0 0 >>>> 24..64 0 0 0 0 >>>> Registry and code: 13 MB + 32 KB (gemm=4) >>>> Command (PID=54845): >>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>> H2O-9.inp -o H2O-9.out >>>> Uptime: 2.861583 s >>>> /var/spool/slurmd/r30c01b15/job3120330/slurm_script: line 36: 54845 >>>> Segmentation fault (core dumped) >>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>> H2O-9.inp -o H2O-9.out >>>> ``` >>>> >>>> and the last 100 lines of output: >>>> >>>> ``` >>>> 000000:000001>> 12 20 mp_sum_d >>>> start Ho >>>> stmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 12 20 mp_sum_d >>>> 0.000 Ho >>>> stmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 11 13 dbcsr_dot_sd >>>> 0.000 H >>>> ostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 10 12 calculate_ptrace_kp >>>> 0.0 >>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 9 6 >>>> evaluate_core_matrix_traces >>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 9 6 rebuild_ks_matrix >>>> start Ho >>>> stmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 10 6 >>>> qs_ks_build_kohn_sham_matrix >>>> start Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 11 140 pw_pool_create_pw >>>> st >>>> art Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 12 79 pw_create_c1d >>>> sta >>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 12 79 pw_create_c1d >>>> 0.0 >>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 11 140 pw_pool_create_pw >>>> 0. 
>>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 11 141 pw_pool_create_pw >>>> st >>>> art Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 12 80 pw_create_c1d >>>> sta >>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 12 80 pw_create_c1d >>>> 0.0 >>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 11 141 pw_pool_create_pw >>>> 0. >>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 11 61 pw_copy >>>> start Hostme >>>> m: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 11 61 pw_copy >>>> 0.004 Hostme >>>> m: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 11 35 pw_axpy >>>> start Hostme >>>> m: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 11 35 pw_axpy >>>> 0.002 Hostme >>>> m: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 11 6 pw_poisson_solve >>>> sta >>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 12 6 >>>> pw_poisson_rebuild >>>> start Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 12 6 >>>> pw_poisson_rebuild >>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 12 142 >>>> pw_pool_create_pw >>>> start Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 13 81 >>>> pw_create_c1d >>>> start Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 13 81 >>>> pw_create_c1d >>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 12 142 >>>> pw_pool_create_pw >>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 12 62 pw_copy >>>> start Hos >>>> tmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 12 62 pw_copy >>>> 0.003 Hos >>>> tmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 12 6 >>>> pw_multiply_with >>>> start Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 12 6 >>>> pw_multiply_with >>>> 0.002 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 12 63 pw_copy >>>> start Hos >>>> tmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 12 63 pw_copy >>>> 0.003 Hos >>>> tmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 12 6 pw_integral_ab >>>> st >>>> art Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 12 6 pw_integral_ab >>>> 0. 
>>>> 005 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 12 7 pw_poisson_set >>>> st >>>> art Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 13 143 >>>> pw_pool_create_pw >>>> start Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 14 82 >>>> pw_create_c1d >>>> start Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 14 82 >>>> pw_create_c1d >>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 13 143 >>>> pw_pool_create_pw >>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 13 64 pw_copy >>>> start >>>> Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 13 64 pw_copy >>>> 0.003 >>>> Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 13 16 pw_derive >>>> star >>>> t Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 13 16 pw_derive >>>> 0.00 >>>> 6 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 13 144 >>>> pw_pool_create_pw >>>> start Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 14 83 >>>> pw_create_c1d >>>> start Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 14 83 >>>> pw_create_c1d >>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 13 144 >>>> pw_pool_create_pw >>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 13 65 pw_copy >>>> start >>>> Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001<< 13 65 pw_copy >>>> 0.004 >>>> Hostmem: 380 MB GPUmem: 0 MB >>>> 000000:000001>> 13 17 pw_derive >>>> star >>>> t Hostmem: 380 MB GPUmem: 0 MB >>>> ``` >>>> >>>> for psmp the last 100 lines is: >>>> >>>> ``` >>>> 000000:000002<< 9 7 >>>> evaluate_core_matrix_traces >>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 9 7 rebuild_ks_matrix >>>> start Ho >>>> >>>> stmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 10 7 >>>> qs_ks_build_kohn_sham_matrix >>>> start Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 11 164 pw_pool_create_pw >>>> st >>>> art Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 12 93 pw_create_c1d >>>> sta >>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 12 93 pw_create_c1d >>>> 0.0 >>>> 00 
Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 11 164 pw_pool_create_pw >>>> 0. >>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 11 165 pw_pool_create_pw >>>> st >>>> art Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 12 94 pw_create_c1d >>>> sta >>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 12 94 pw_create_c1d >>>> 0.0 >>>> 00 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 11 165 pw_pool_create_pw >>>> 0. >>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 11 73 pw_copy >>>> start Hostme >>>> >>>> m: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 11 73 pw_copy >>>> 0.001 Hostme >>>> >>>> m: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 11 41 pw_axpy >>>> start Hostme >>>> >>>> m: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 11 41 pw_axpy >>>> 0.001 Hostme >>>> >>>> m: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 11 52 mp_sum_d >>>> start Hostm >>>> >>>> em: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 11 52 mp_sum_d >>>> 0.000 Hostm >>>> >>>> em: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 11 7 pw_poisson_solve >>>> sta >>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 12 7 >>>> pw_poisson_rebuild >>>> start Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 12 7 >>>> pw_poisson_rebuild >>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 12 166 >>>> pw_pool_create_pw >>>> >>>> start Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 13 95 >>>> pw_create_c1d >>>> start Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 13 95 >>>> pw_create_c1d >>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 12 166 >>>> pw_pool_create_pw >>>> >>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 12 74 pw_copy >>>> start Hos >>>> >>>> tmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 12 74 pw_copy >>>> 0.001 Hos >>>> >>>> tmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 12 7 >>>> pw_multiply_with >>>> start Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 12 7 >>>> pw_multiply_with >>>> 0.001 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 
12 75 pw_copy >>>> start Hos >>>> >>>> tmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 12 75 pw_copy >>>> 0.001 Hos >>>> >>>> tmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 12 7 pw_integral_ab >>>> st >>>> art Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 13 53 mp_sum_d >>>> start >>>> >>>> Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 13 53 mp_sum_d >>>> 0.000 >>>> >>>> Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 12 7 pw_integral_ab >>>> 0. >>>> 003 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 12 8 pw_poisson_set >>>> st >>>> art Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 13 167 >>>> pw_pool_create_pw >>>> start Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 14 96 >>>> pw_create_c1d >>>> >>>> start Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 14 96 >>>> pw_create_c1d >>>> >>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 13 167 >>>> pw_pool_create_pw >>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 13 76 pw_copy >>>> start >>>> Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 13 76 pw_copy >>>> 0.001 >>>> Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 13 19 pw_derive >>>> star >>>> t Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 13 19 pw_derive >>>> 0.00 >>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 13 168 >>>> pw_pool_create_pw >>>> start Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 14 97 >>>> pw_create_c1d >>>> start Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 14 97 >>>> pw_create_c1d >>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 13 168 >>>> pw_pool_create_pw >>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 13 77 pw_copy >>>> start >>>> Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002<< 13 77 pw_copy >>>> 0.001 >>>> Hostmem: 693 MB GPUmem: 0 MB >>>> 000000:000002>> 13 20 pw_derive >>>> star >>>> t Hostmem: 693 MB GPUmem: 0 MB >>>> ``` >>>> >>>> Thanks >>>> Bartosz >>>> >>>> poniedzia?ek, 21 pa?dziernika 2024 o 08:58:34 UTC+2 Frederick Stein >>>> 
napisa?(a):
>>>> [...]
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/024be246-9696-4577-a330-f5a234dc51edn%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From bamaz.97 at gmail.com Tue Oct 22 15:39:35 2024 From: bamaz.97 at gmail.com (bartosz mazur) Date: Tue, 22 Oct 2024 08:39:35 -0700 (PDT) Subject: [CP2K-user] [CP2K:20803] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: <024be246-9696-4577-a330-f5a234dc51edn@googlegroups.com> References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> <3c0b331a-fe25-4176-8692-ff5d7c466c44n@googlegroups.com> <9c503856-0751-48b3-8bfb-e37cf5cb91a6n@googlegroups.com> <024be246-9696-4577-a330-f5a234dc51edn@googlegroups.com> Message-ID: Great! Thank you for your help. Best Bartosz wtorek, 22 pa?dziernika 2024 o 15:24:04 UTC+2 Frederick Stein napisa?(a): > I have a fix for it. In contrast to my first thought, it is a case of > invalid type conversion from real to complex numbers (yes, Fortran is > rather strict about it) in pw_derive. This may also be present in a few > other spots. I am currently running more tests and I will open a pull > request within the next few days. > Best, > Frederick > > Frederick Stein schrieb am Dienstag, 22. Oktober 2024 um 13:12:49 UTC+2: > >> I can reproduce the error locally. I am investigating it now. >> >> bartosz mazur schrieb am Dienstag, 22. Oktober 2024 um 11:58:57 UTC+2: >> >>> I was loading it as it was needed for compilation. 
I have unloaded the >>> module, but the error still occurs: >>> >>> ``` >>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>> CLX/DP TRY JIT STA COL >>> 0..13 2 2 0 0 >>> 14..23 0 0 0 0 >>> 24..64 0 0 0 0 >>> Registry and code: 13 MB + 16 KB (gemm=2) >>> Command (PID=15485): >>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>> H2O-9.inp -o H2O-9.out >>> Uptime: 1.757102 s >>> >>> >>> >>> =================================================================================== >>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>> = RANK 0 PID 15485 RUNNING AT r30c01b01 >>> >>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>> >>> =================================================================================== >>> >>> >>> =================================================================================== >>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>> = RANK 1 PID 15486 RUNNING AT r30c01b01 >>> >>> = KILLED BY SIGNAL: 9 (Killed) >>> >>> =================================================================================== >>> ``` >>> >>> >>> and the last 100 lines: >>> >>> ``` >>> 000000:000002>> 11 37 pw_create_c1d >>> start >>> Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 11 37 pw_create_c1d >>> 0.000 >>> Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 10 64 pw_pool_create_pw >>> 0.000 >>> Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 10 25 pw_copy start >>> Hostmem: >>> 697 MB GPUmem: 0 MB >>> 000000:000002<< 10 25 pw_copy 0.001 >>> Hostmem: >>> 697 MB GPUmem: 0 MB >>> 000000:000002>> 10 17 pw_axpy start >>> Hostmem: >>> 697 MB GPUmem: 0 MB >>> 000000:000002<< 10 17 pw_axpy 0.001 >>> Hostmem: >>> 697 MB GPUmem: 0 MB >>> 000000:000002>> 10 19 mp_sum_d start >>> Hostmem: >>> 697 MB GPUmem: 0 MB >>> 000000:000002<< 10 19 mp_sum_d 0.000 >>> Hostmem: >>> 697 MB GPUmem: 0 MB >>> 000000:000002>> 10 3 pw_poisson_solve >>> start >>> Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 11 3 pw_poisson_rebuild >>> s >>> tart 
Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 11 3 pw_poisson_rebuild >>> 0 >>> .000 Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 11 65 pw_pool_create_pw >>> st >>> art Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 12 38 pw_create_c1d >>> sta >>> rt Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 12 38 pw_create_c1d >>> 0.0 >>> 00 Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 11 65 pw_pool_create_pw >>> 0. >>> 000 Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 11 26 pw_copy >>> start Hostme >>> m: 697 MB GPUmem: 0 MB >>> 000000:000002<< 11 26 pw_copy >>> 0.001 Hostme >>> m: 697 MB GPUmem: 0 MB >>> 000000:000002>> 11 3 pw_multiply_with >>> sta >>> rt Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 11 3 pw_multiply_with >>> 0.0 >>> 01 Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 11 27 pw_copy >>> start Hostme >>> m: 697 MB GPUmem: 0 MB >>> 000000:000002<< 11 27 pw_copy >>> 0.001 Hostme >>> m: 697 MB GPUmem: 0 MB >>> 000000:000002>> 11 3 pw_integral_ab >>> start >>> Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 12 20 mp_sum_d >>> start Ho >>> stmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 12 20 mp_sum_d >>> 0.001 Ho >>> stmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 11 3 pw_integral_ab >>> 0.004 >>> Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 11 4 pw_poisson_set >>> start >>> Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 12 66 >>> pw_pool_create_pw >>> start Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 13 39 >>> pw_create_c1d >>> start Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 13 39 >>> pw_create_c1d >>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 12 66 >>> pw_pool_create_pw >>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 12 28 pw_copy >>> start Hos >>> tmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 12 28 pw_copy >>> 0.001 Hos >>> tmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 12 7 pw_derive >>> start H >>> ostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 12 7 pw_derive >>> 0.002 H >>> ostmem: 697 MB 
GPUmem: 0 MB >>> 000000:000002>> 12 67 >>> pw_pool_create_pw >>> start Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 13 40 >>> pw_create_c1d >>> start Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 13 40 >>> pw_create_c1d >>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 12 67 >>> pw_pool_create_pw >>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 12 29 pw_copy >>> start Hos >>> tmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 12 29 pw_copy >>> 0.001 Hos >>> tmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 12 8 pw_derive >>> start H >>> ostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 12 8 pw_derive >>> 0.002 H >>> ostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 12 68 >>> pw_pool_create_pw >>> start Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 13 41 >>> pw_create_c1d >>> start Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 13 41 >>> pw_create_c1d >>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 12 68 >>> pw_pool_create_pw >>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 12 30 pw_copy >>> start Hos >>> tmem: 697 MB GPUmem: 0 MB >>> 000000:000002<< 12 30 pw_copy >>> 0.001 Hos >>> tmem: 697 MB GPUmem: 0 MB >>> 000000:000002>> 12 9 pw_derive >>> start H >>> ostmem: 697 MB GPUmem: 0 MB >>> ``` >>> >>> This is the list of currently loaded modules (all come with intel): >>> >>> ``` >>> Currently Loaded Modulefiles: >>> 1) GCCcore/13.3.0 7) >>> impi/2021.13.0-intel-compilers-2024.2.0 >>> 2) zlib/1.3.1-GCCcore-13.3.0 8) imkl/2024.2.0 >>> >>> 3) binutils/2.42-GCCcore-13.3.0 9) iimpi/2024a >>> >>> 4) intel-compilers/2024.2.0 10) imkl-FFTW/2024.2.0-iimpi-2024a >>> >>> 5) numactl/2.0.18-GCCcore-13.3.0 11) intel/2024a >>> >>> 6) UCX/1.16.0-GCCcore-13.3.0 >>> ``` >>> wtorek, 22 pa?dziernika 2024 o 11:12:57 UTC+2 Frederick Stein napisa?(a): >>> >>>> Dear Bartosz, >>>> I am currently running some tests with the latest Intel compiler >>>> myself. What bothers me about your setup is the module GCC13/13.3.0 . Why >>>> is it loaded? 
Can you unload it? This would at least reduce potential >>>> interferences with between the Intel and the GCC compilers. >>>> Best, >>>> Frederick >>>> >>>> bartosz mazur schrieb am Montag, 21. Oktober 2024 um 16:33:45 UTC+2: >>>> >>>>> The error for ssmp is: >>>>> >>>>> ``` >>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>> CLX/DP TRY JIT STA COL >>>>> 0..13 4 4 0 0 >>>>> 14..23 0 0 0 0 >>>>> 24..64 0 0 0 0 >>>>> Registry and code: 13 MB + 32 KB (gemm=4) >>>>> Command (PID=54845): >>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>> H2O-9.inp -o H2O-9.out >>>>> Uptime: 2.861583 s >>>>> /var/spool/slurmd/r30c01b15/job3120330/slurm_script: line 36: 54845 >>>>> Segmentation fault (core dumped) >>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>> H2O-9.inp -o H2O-9.out >>>>> ``` >>>>> >>>>> and the last 100 lines of output: >>>>> >>>>> ``` >>>>> 000000:000001>> 12 20 mp_sum_d >>>>> start Ho >>>>> stmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 12 20 mp_sum_d >>>>> 0.000 Ho >>>>> stmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 11 13 dbcsr_dot_sd >>>>> 0.000 H >>>>> ostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 10 12 calculate_ptrace_kp >>>>> 0.0 >>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 9 6 >>>>> evaluate_core_matrix_traces >>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 9 6 rebuild_ks_matrix >>>>> start Ho >>>>> stmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 10 6 >>>>> qs_ks_build_kohn_sham_matrix >>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 11 140 >>>>> pw_pool_create_pw st >>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 12 79 pw_create_c1d >>>>> sta >>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 12 79 pw_create_c1d >>>>> 0.0 >>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 11 140 >>>>> pw_pool_create_pw 0. 
>>>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 11 141 >>>>> pw_pool_create_pw st >>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 12 80 pw_create_c1d >>>>> sta >>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 12 80 pw_create_c1d >>>>> 0.0 >>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 11 141 >>>>> pw_pool_create_pw 0. >>>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 11 61 pw_copy >>>>> start Hostme >>>>> m: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 11 61 pw_copy >>>>> 0.004 Hostme >>>>> m: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 11 35 pw_axpy >>>>> start Hostme >>>>> m: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 11 35 pw_axpy >>>>> 0.002 Hostme >>>>> m: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 11 6 pw_poisson_solve >>>>> sta >>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 12 6 >>>>> pw_poisson_rebuild >>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 12 6 >>>>> pw_poisson_rebuild >>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 12 142 >>>>> pw_pool_create_pw >>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 13 81 >>>>> pw_create_c1d >>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 13 81 >>>>> pw_create_c1d >>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 12 142 >>>>> pw_pool_create_pw >>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 12 62 pw_copy >>>>> start Hos >>>>> tmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 12 62 pw_copy >>>>> 0.003 Hos >>>>> tmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 12 6 >>>>> pw_multiply_with >>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 12 6 >>>>> pw_multiply_with >>>>> 0.002 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 12 63 pw_copy >>>>> start Hos >>>>> tmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 12 63 pw_copy >>>>> 0.003 Hos >>>>> tmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 12 6 >>>>> pw_integral_ab st >>>>> art Hostmem: 380 MB GPUmem: 0 MB 
>>>>> 000000:000001<< 12 6 >>>>> pw_integral_ab 0. >>>>> 005 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 12 7 >>>>> pw_poisson_set st >>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 13 143 >>>>> pw_pool_create_pw >>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 14 82 >>>>> pw_create_c1d >>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 14 82 >>>>> pw_create_c1d >>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 13 143 >>>>> pw_pool_create_pw >>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 13 64 pw_copy >>>>> start >>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 13 64 pw_copy >>>>> 0.003 >>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 13 16 pw_derive >>>>> star >>>>> t Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 13 16 pw_derive >>>>> 0.00 >>>>> 6 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 13 144 >>>>> pw_pool_create_pw >>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 14 83 >>>>> pw_create_c1d >>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 14 83 >>>>> pw_create_c1d >>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 13 144 >>>>> pw_pool_create_pw >>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 13 65 pw_copy >>>>> start >>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001<< 13 65 pw_copy >>>>> 0.004 >>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>> 000000:000001>> 13 17 pw_derive >>>>> star >>>>> t Hostmem: 380 MB GPUmem: 0 MB >>>>> ``` >>>>> >>>>> for psmp the last 100 lines is: >>>>> >>>>> ``` >>>>> 000000:000002<< 9 7 >>>>> evaluate_core_matrix_traces >>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 9 7 rebuild_ks_matrix >>>>> start Ho >>>>> >>>>> stmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 10 7 >>>>> qs_ks_build_kohn_sham_matrix >>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 11 164 >>>>> pw_pool_create_pw st >>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 93 
pw_create_c1d >>>>> sta >>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 93 pw_create_c1d >>>>> 0.0 >>>>> 00 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 11 164 >>>>> pw_pool_create_pw 0. >>>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 11 165 >>>>> pw_pool_create_pw st >>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 94 pw_create_c1d >>>>> sta >>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 94 pw_create_c1d >>>>> 0.0 >>>>> 00 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 11 165 >>>>> pw_pool_create_pw 0. >>>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 11 73 pw_copy >>>>> start Hostme >>>>> >>>>> m: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 11 73 pw_copy >>>>> 0.001 Hostme >>>>> >>>>> m: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 11 41 pw_axpy >>>>> start Hostme >>>>> >>>>> m: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 11 41 pw_axpy >>>>> 0.001 Hostme >>>>> >>>>> m: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 11 52 mp_sum_d >>>>> start Hostm >>>>> >>>>> em: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 11 52 mp_sum_d >>>>> 0.000 Hostm >>>>> >>>>> em: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 11 7 pw_poisson_solve >>>>> sta >>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 7 >>>>> pw_poisson_rebuild >>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 7 >>>>> pw_poisson_rebuild >>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 166 >>>>> pw_pool_create_pw >>>>> >>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 13 95 >>>>> pw_create_c1d >>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 13 95 >>>>> pw_create_c1d >>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 166 >>>>> pw_pool_create_pw >>>>> >>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 74 pw_copy >>>>> start Hos >>>>> >>>>> tmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 74 pw_copy >>>>> 0.001 Hos >>>>> >>>>> tmem: 693 MB GPUmem: 0 MB 
>>>>> 000000:000002>> 12 7 >>>>> pw_multiply_with >>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 7 >>>>> pw_multiply_with >>>>> 0.001 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 75 pw_copy >>>>> start Hos >>>>> >>>>> tmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 75 pw_copy >>>>> 0.001 Hos >>>>> >>>>> tmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 7 >>>>> pw_integral_ab st >>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 13 53 mp_sum_d >>>>> start >>>>> >>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 13 53 mp_sum_d >>>>> 0.000 >>>>> >>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 7 >>>>> pw_integral_ab 0. >>>>> 003 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 8 >>>>> pw_poisson_set st >>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 13 167 >>>>> pw_pool_create_pw >>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 14 96 >>>>> pw_create_c1d >>>>> >>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 14 96 >>>>> pw_create_c1d >>>>> >>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 13 167 >>>>> pw_pool_create_pw >>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 13 76 pw_copy >>>>> start >>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 13 76 pw_copy >>>>> 0.001 >>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 13 19 pw_derive >>>>> star >>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 13 19 pw_derive >>>>> 0.00 >>>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 13 168 >>>>> pw_pool_create_pw >>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 14 97 >>>>> pw_create_c1d >>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 14 97 >>>>> pw_create_c1d >>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 13 168 >>>>> pw_pool_create_pw >>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 13 77 pw_copy >>>>> start >>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002<< 
13 77 pw_copy >>>>> 0.001 >>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>> 000000:000002>> 13 20 pw_derive >>>>> star >>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>> ``` >>>>> >>>>> Thanks >>>>> Bartosz >>>>> >>>>> On Monday, 21 October 2024 at 08:58:34 UTC+2, Frederick Stein >>>>> wrote: >>>>> >>>>>> Dear Bartosz, >>>>>> I have no idea about the issue with LibXSMM. >>>>>> Regarding the trace, I do not know either, as there is not much that >>>>>> could break in pw_derive (it just performs multiplications) and the >>>>>> sequence of operations is too unspecific. It may be that the code actually >>>>>> breaks somewhere else. Can you do the same with the ssmp and post the last >>>>>> 100 lines? This way, we remove the asynchronicity issues for backtraces >>>>>> with the psmp version. >>>>>> Best, >>>>>> Frederick >>>>>> >>>>>> bartosz mazur wrote on Sunday, 20 October 2024 at 16:47:15 UTC+2: >>>>>> >>>>>>> The error is: >>>>>>> >>>>>>> ``` >>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>>>> CLX/DP TRY JIT STA COL >>>>>>> 0..13 2 2 0 0 >>>>>>> 14..23 0 0 0 0 >>>>>>> >>>>>>> 24..64 0 0 0 0 >>>>>>> Registry and code: 13 MB + 16 KB (gemm=2) >>>>>>> Command (PID=2607388): >>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>>>>>> H2O-9.inp -o H2O-9.out >>>>>>> Uptime: 5.288243 s >>>>>>> >>>>>>> >>>>>>> >>>>>>> =================================================================================== >>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>> = RANK 0 PID 2607388 RUNNING AT r21c01b10 >>>>>>> >>>>>>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>>>>>> >>>>>>> =================================================================================== >>>>>>> >>>>>>> >>>>>>> =================================================================================== >>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>> = RANK 1 PID 2607389 RUNNING AT r21c01b10 >>>>>>> = KILLED BY SIGNAL: 9 (Killed) >>>>>>>
=================================================================================== >>>>>>> ``` >>>>>>> >>>>>>> and the last 20 lines: >>>>>>> >>>>>>> ``` >>>>>>> 000000:000002<< 13 76 pw_copy >>>>>>> 0.001 >>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 13 19 >>>>>>> pw_derive star >>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 13 19 >>>>>>> pw_derive 0.00 >>>>>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 13 168 >>>>>>> pw_pool_create_pw >>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 14 97 >>>>>>> pw_create_c1d >>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 14 97 >>>>>>> pw_create_c1d >>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 13 168 >>>>>>> pw_pool_create_pw >>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 13 77 pw_copy >>>>>>> start >>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 13 77 pw_copy >>>>>>> 0.001 >>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 13 20 >>>>>>> pw_derive star >>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>> ``` >>>>>>> >>>>>>> Thanks! >>>>>>> On Friday, 18 October 2024 at 17:18:39 UTC+2, Frederick Stein >>>>>>> wrote: >>>>>>> >>>>>>>> Please pick one of the failing tests. Then, add the TRACE keyword >>>>>>>> to the &GLOBAL section and then run the test manually. This increases the >>>>>>>> size of the output file dramatically (to some million lines). Can you send >>>>>>>> me the last ~20 lines of the output? >>>>>>>> bartosz mazur wrote on Friday, 18 October 2024 at 17:09:40 >>>>>>>> UTC+2: >>>>>>>> >>>>>>>>> I'm using the do_regtests.py script, not make regtesting, but I assume >>>>>>>>> it makes no difference. As I mentioned in my previous message, for >>>>>>>>> `--ompthreads 1` all tests passed, both for ssmp and psmp. For ssmp >>>>>>>>> with `--ompthreads 2` I observe similar errors as for psmp with the same >>>>>>>>> setting; I provide example output as an attachment.
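[Archive note] Enabling the trace Frederick asks for only requires the TRACE keyword in the input's &GLOBAL section. A rough sketch follows; the project name and run type are placeholders, not taken from the actual test input:

```
&GLOBAL
  PROJECT H2O-9      ! placeholder project name
  RUN_TYPE ENERGY    ! placeholder; keep whatever the test input specifies
  TRACE              ! print an entry/exit line for every instrumented routine
&END GLOBAL
```

As noted in the thread, this inflates the output file to millions of lines, so usually only the tail of the output is of interest.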
>>>>>>>>> >>>>>>>>> Thanks >>>>>>>>> Bartosz >>>>>>>>> >>>>>>>>> On Friday, 18 October 2024 at 16:24:16 UTC+2, Frederick Stein >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Dear Bartosz, >>>>>>>>>> What happens if you set the number of OpenMP threads to 1 (add >>>>>>>>>> '--ompthreads 1' to TESTOPTS)? What errors do you observe in case of the >>>>>>>>>> ssmp? >>>>>>>>>> Best, >>>>>>>>>> Frederick >>>>>>>>>> >>>>>>>>>> bartosz mazur wrote on Friday, 18 October 2024 at 15:37:43 >>>>>>>>>> UTC+2: >>>>>>>>>>> Hi Frederick, >>>>>>>>>>> >>>>>>>>>>> thanks again for your help. So I have tested different simulation >>>>>>>>>>> variants and I know that the problem occurs when using OMP. For MPI >>>>>>>>>>> calculations without OMP all tests pass. I have also tested the effect of >>>>>>>>>>> the `OMP_PROC_BIND` and `OMP_PLACES` parameters and apart from >>>>>>>>>>> the effect on simulation time, they have no significant effect on the >>>>>>>>>>> presence of errors. Below are the results for ssmp: >>>>>>>>>>> >>>>>>>>>>> ``` >>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time >>>>>>>>>>> spread, threads, 3850, 4144, 4, 290, 186min >>>>>>>>>>> spread, cores, 3831, 4144, 3, 310, 183min >>>>>>>>>>> spread, sockets, 3864, 4144, 3, 277, 104min >>>>>>>>>>> close, threads, 3879, 4144, 3, 262, 171min >>>>>>>>>>> close, cores, 3854, 4144, 0, 290, 168min >>>>>>>>>>> close, sockets, 3865, 4144, 3, 276, 104min >>>>>>>>>>> master, threads, 4121, 4144, 0, 23, 1002min >>>>>>>>>>> master, cores, 4121, 4144, 0, 23, 986min >>>>>>>>>>> master, sockets, 3942, 4144, 3, 199, 219min >>>>>>>>>>> false, threads, 3918, 4144, 0, 226, 178min >>>>>>>>>>> false, cores, 3919, 4144, 3, 222, 176min >>>>>>>>>>> false, sockets, 3856, 4144, 4, 284, 104min >>>>>>>>>>> ``` >>>>>>>>>>> >>>>>>>>>>> and psmp: >>>>>>>>>>> >>>>>>>>>>> ``` >>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, results >>>>>>>>>>> spread, threads, Summary: correct: 4097 / 4227; failed: 130; >>>>>>>>>>> 495min >>>>>>>>>>>
spread, cores, 26 / 362 >>>>>>>>>> spread, cores, 26 / 362 >>>>>>>>>> close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min >>>>>>>>>> close, cores, 60 / 362 >>>>>>>>>> close, sockets, 13 / 362 >>>>>>>>>> master, threads, 13 / 362 >>>>>>>>>> master, cores, 79 / 362 >>>>>>>>>> master, sockets, Summary: correct: 4153 / 4227; failed: 74; >>>>>>>>>> 563min >>>>>>>>>> false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min >>>>>>>>>> false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min >>>>>>>>>> false, sockets, 96 / 362 >>>>>>>>>> not specified, not specified, Summary: correct: 4129 / 4227; >>>>>>>>>> failed: 98; 263min >>>>>>>>>> ``` >>>>>>>>>> >>>>>>>>>> Any ideas what I could do next to have more information about >>>>>>>>>> the source of the problem or maybe you see a potential solution at this >>>>>>>>>> stage? I would appreciate any further help. >>>>>>>>>> >>>>>>>>>> Best >>>>>>>>>> Bartosz >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Friday, 11 October 2024 at 14:30:25 UTC+2, Frederick Stein >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> Dear Bartosz, >>>>>>>>>>> If I am not mistaken, you used 8 OpenMP threads. The tests do >>>>>>>>>>> not run that efficiently with such a large number of threads. 2 should be >>>>>>>>>>> sufficient. >>>>>>>>>>> The test result suggests that most of the functionality may >>>>>>>>>>> work, but due to a missing backtrace (or similar information), it is hard to >>>>>>>>>>> tell why they fail. You could also try to run some of the single-node tests >>>>>>>>>>> to assess the stability of CP2K. >>>>>>>>>>> Best, >>>>>>>>>>> Frederick >>>>>>>>>>> >>>>>>>>>>> bartosz mazur wrote on Friday, 11 October 2024 at 13:48:42 >>>>>>>>>>> UTC+2: >>>>>>>>>>> >>>>>>>>>>>> Sorry, forgot attachments. >>>>>>>>>>>> >>>>>>>>>>>> -- You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/f4d5e2d2-12d0-49b0-b7cc-12de19399b7dn%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tamaldas89 at gmail.com Tue Oct 22 21:42:53 2024 From: tamaldas89 at gmail.com (Tamal Das) Date: Tue, 22 Oct 2024 14:42:53 -0700 (PDT) Subject: [CP2K-user] [CP2K:20805] Query regarding QM/MM calculation Message-ID: <8a49adff-78ce-47c6-9f5c-85ab8c12f21bn@googlegroups.com> Hello CP2K users, I am trying to run QM/MM dynamics of a substrate-bound protein system, for which I initially equilibrated the system with classical MD. After that, I tried to set up a QM/MM simulation. However, my job crashed immediately with *CPASSERT failed* (the routine calling stack points to *qmmm_env_create*). I cannot find a mistake in the input file; my only guess is that the issue lies in the *LINK section*, where I partitioned the system into QM and MM parts. I have attached my input and output files for your reference. Please give me any suggestions to solve this issue. Best Regards, Tamal -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/8a49adff-78ce-47c6-9f5c-85ab8c12f21bn%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed...
Name: cp2k.inp Type: chemical/x-gamess-input Size: 6186 bytes Desc: not available URL: From f.stein at hzdr.de Wed Oct 23 07:15:33 2024 From: f.stein at hzdr.de (Frederick Stein) Date: Wed, 23 Oct 2024 00:15:33 -0700 (PDT) Subject: [CP2K-user] [CP2K:20806] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> <3c0b331a-fe25-4176-8692-ff5d7c466c44n@googlegroups.com> <9c503856-0751-48b3-8bfb-e37cf5cb91a6n@googlegroups.com> <024be246-9696-4577-a330-f5a234dc51edn@googlegroups.com> Message-ID: <9027c53b-4155-418c-9d08-ea77e5ea5bcfn@googlegroups.com> Dear Bartosz, My fix is merged. Can you switch to the CP2K master and try it again? We are still working on a few issues with the Intel compilers, so that we can eventually migrate from ifort to ifx. Best, Frederick bartosz mazur wrote on Tuesday, 22 October 2024 at 17:45:21 UTC+2: > Great! Thank you for your help. > > Best > Bartosz > > On Tuesday, 22 October 2024 at 15:24:04 UTC+2, Frederick Stein wrote: > >> I have a fix for it. In contrast to my first thought, it is a case of >> invalid type conversion from real to complex numbers (yes, Fortran is >> rather strict about it) in pw_derive. This may also be present in a few >> other spots. I am currently running more tests and I will open a pull >> request within the next few days. >> Best, >> Frederick >> >> Frederick Stein schrieb am Dienstag, 22.
Oktober 2024 um 13:12:49 UTC+2: >> >>> I can reproduce the error locally. I am investigating it now. >>> >>> bartosz mazur schrieb am Dienstag, 22. Oktober 2024 um 11:58:57 UTC+2: >>> >>>> I was loading it as it was needed for compilation. I have unloaded the >>>> module, but the error still occurs: >>>> >>>> ``` >>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>> CLX/DP TRY JIT STA COL >>>> 0..13 2 2 0 0 >>>> 14..23 0 0 0 0 >>>> 24..64 0 0 0 0 >>>> Registry and code: 13 MB + 16 KB (gemm=2) >>>> Command (PID=15485): >>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>>> H2O-9.inp -o H2O-9.out >>>> Uptime: 1.757102 s >>>> >>>> >>>> >>>> =================================================================================== >>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>> = RANK 0 PID 15485 RUNNING AT r30c01b01 >>>> >>>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>>> >>>> =================================================================================== >>>> >>>> >>>> =================================================================================== >>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>> = RANK 1 PID 15486 RUNNING AT r30c01b01 >>>> >>>> = KILLED BY SIGNAL: 9 (Killed) >>>> >>>> =================================================================================== >>>> ``` >>>> >>>> >>>> and the last 100 lines: >>>> >>>> ``` >>>> 000000:000002>> 11 37 pw_create_c1d >>>> start >>>> Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 11 37 pw_create_c1d >>>> 0.000 >>>> Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 10 64 pw_pool_create_pw >>>> 0.000 >>>> Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 10 25 pw_copy start >>>> Hostmem: >>>> 697 MB GPUmem: 0 MB >>>> 000000:000002<< 10 25 pw_copy 0.001 >>>> Hostmem: >>>> 697 MB GPUmem: 0 MB >>>> 000000:000002>> 10 17 pw_axpy start >>>> Hostmem: >>>> 697 MB GPUmem: 0 MB >>>> 000000:000002<< 10 17 pw_axpy 0.001 >>>> Hostmem: >>>> 697 MB GPUmem: 0 MB >>>> 
000000:000002>> 10 19 mp_sum_d start >>>> Hostmem: >>>> 697 MB GPUmem: 0 MB >>>> 000000:000002<< 10 19 mp_sum_d 0.000 >>>> Hostmem: >>>> 697 MB GPUmem: 0 MB >>>> 000000:000002>> 10 3 pw_poisson_solve >>>> start >>>> Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 11 3 >>>> pw_poisson_rebuild s >>>> tart Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 11 3 >>>> pw_poisson_rebuild 0 >>>> .000 Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 11 65 pw_pool_create_pw >>>> st >>>> art Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 12 38 pw_create_c1d >>>> sta >>>> rt Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 12 38 pw_create_c1d >>>> 0.0 >>>> 00 Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 11 65 pw_pool_create_pw >>>> 0. >>>> 000 Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 11 26 pw_copy >>>> start Hostme >>>> m: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 11 26 pw_copy >>>> 0.001 Hostme >>>> m: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 11 3 pw_multiply_with >>>> sta >>>> rt Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 11 3 pw_multiply_with >>>> 0.0 >>>> 01 Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 11 27 pw_copy >>>> start Hostme >>>> m: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 11 27 pw_copy >>>> 0.001 Hostme >>>> m: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 11 3 pw_integral_ab >>>> start >>>> Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 12 20 mp_sum_d >>>> start Ho >>>> stmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 12 20 mp_sum_d >>>> 0.001 Ho >>>> stmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 11 3 pw_integral_ab >>>> 0.004 >>>> Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 11 4 pw_poisson_set >>>> start >>>> Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 12 66 >>>> pw_pool_create_pw >>>> start Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 13 39 >>>> pw_create_c1d >>>> start Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 13 39 >>>> pw_create_c1d >>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 12 66 
>>>> pw_pool_create_pw >>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 12 28 pw_copy >>>> start Hos >>>> tmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 12 28 pw_copy >>>> 0.001 Hos >>>> tmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 12 7 pw_derive >>>> start H >>>> ostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 12 7 pw_derive >>>> 0.002 H >>>> ostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 12 67 >>>> pw_pool_create_pw >>>> start Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 13 40 >>>> pw_create_c1d >>>> start Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 13 40 >>>> pw_create_c1d >>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 12 67 >>>> pw_pool_create_pw >>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 12 29 pw_copy >>>> start Hos >>>> tmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 12 29 pw_copy >>>> 0.001 Hos >>>> tmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 12 8 pw_derive >>>> start H >>>> ostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 12 8 pw_derive >>>> 0.002 H >>>> ostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 12 68 >>>> pw_pool_create_pw >>>> start Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 13 41 >>>> pw_create_c1d >>>> start Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 13 41 >>>> pw_create_c1d >>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 12 68 >>>> pw_pool_create_pw >>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 12 30 pw_copy >>>> start Hos >>>> tmem: 697 MB GPUmem: 0 MB >>>> 000000:000002<< 12 30 pw_copy >>>> 0.001 Hos >>>> tmem: 697 MB GPUmem: 0 MB >>>> 000000:000002>> 12 9 pw_derive >>>> start H >>>> ostmem: 697 MB GPUmem: 0 MB >>>> ``` >>>> >>>> This is the list of currently loaded modules (all come with intel): >>>> >>>> ``` >>>> Currently Loaded Modulefiles: >>>> 1) GCCcore/13.3.0 7) >>>> impi/2021.13.0-intel-compilers-2024.2.0 >>>> 2) zlib/1.3.1-GCCcore-13.3.0 8) imkl/2024.2.0 >>>> >>>> 3) binutils/2.42-GCCcore-13.3.0 9) iimpi/2024a >>>> >>>> 4) 
intel-compilers/2024.2.0 10) imkl-FFTW/2024.2.0-iimpi-2024a >>>> >>>> 5) numactl/2.0.18-GCCcore-13.3.0 11) intel/2024a >>>> >>>> 6) UCX/1.16.0-GCCcore-13.3.0 >>>> ``` >>>> wtorek, 22 pa?dziernika 2024 o 11:12:57 UTC+2 Frederick Stein >>>> napisa?(a): >>>> >>>>> Dear Bartosz, >>>>> I am currently running some tests with the latest Intel compiler >>>>> myself. What bothers me about your setup is the module GCC13/13.3.0 . Why >>>>> is it loaded? Can you unload it? This would at least reduce potential >>>>> interferences with between the Intel and the GCC compilers. >>>>> Best, >>>>> Frederick >>>>> >>>>> bartosz mazur schrieb am Montag, 21. Oktober 2024 um 16:33:45 UTC+2: >>>>> >>>>>> The error for ssmp is: >>>>>> >>>>>> ``` >>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>>> CLX/DP TRY JIT STA COL >>>>>> 0..13 4 4 0 0 >>>>>> 14..23 0 0 0 0 >>>>>> 24..64 0 0 0 0 >>>>>> Registry and code: 13 MB + 32 KB (gemm=4) >>>>>> Command (PID=54845): >>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>>> H2O-9.inp -o H2O-9.out >>>>>> Uptime: 2.861583 s >>>>>> /var/spool/slurmd/r30c01b15/job3120330/slurm_script: line 36: 54845 >>>>>> Segmentation fault (core dumped) >>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>>> H2O-9.inp -o H2O-9.out >>>>>> ``` >>>>>> >>>>>> and the last 100 lines of output: >>>>>> >>>>>> ``` >>>>>> 000000:000001>> 12 20 mp_sum_d >>>>>> start Ho >>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 12 20 mp_sum_d >>>>>> 0.000 Ho >>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 11 13 dbcsr_dot_sd >>>>>> 0.000 H >>>>>> ostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 10 12 >>>>>> calculate_ptrace_kp 0.0 >>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 9 6 >>>>>> evaluate_core_matrix_traces >>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 9 6 rebuild_ks_matrix >>>>>> start Ho >>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 10 6 >>>>>> 
qs_ks_build_kohn_sham_matrix >>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 11 140 >>>>>> pw_pool_create_pw st >>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 12 79 >>>>>> pw_create_c1d sta >>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 12 79 >>>>>> pw_create_c1d 0.0 >>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 11 140 >>>>>> pw_pool_create_pw 0. >>>>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 11 141 >>>>>> pw_pool_create_pw st >>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 12 80 >>>>>> pw_create_c1d sta >>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 12 80 >>>>>> pw_create_c1d 0.0 >>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 11 141 >>>>>> pw_pool_create_pw 0. >>>>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 11 61 pw_copy >>>>>> start Hostme >>>>>> m: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 11 61 pw_copy >>>>>> 0.004 Hostme >>>>>> m: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 11 35 pw_axpy >>>>>> start Hostme >>>>>> m: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 11 35 pw_axpy >>>>>> 0.002 Hostme >>>>>> m: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 11 6 >>>>>> pw_poisson_solve sta >>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 12 6 >>>>>> pw_poisson_rebuild >>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 12 6 >>>>>> pw_poisson_rebuild >>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 12 142 >>>>>> pw_pool_create_pw >>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 13 81 >>>>>> pw_create_c1d >>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 13 81 >>>>>> pw_create_c1d >>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 12 142 >>>>>> pw_pool_create_pw >>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 12 62 pw_copy >>>>>> start Hos >>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 12 62 pw_copy >>>>>> 0.003 Hos >>>>>> 
tmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 12 6 >>>>>> pw_multiply_with >>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 12 6 >>>>>> pw_multiply_with >>>>>> 0.002 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 12 63 pw_copy >>>>>> start Hos >>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 12 63 pw_copy >>>>>> 0.003 Hos >>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 12 6 >>>>>> pw_integral_ab st >>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 12 6 >>>>>> pw_integral_ab 0. >>>>>> 005 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 12 7 >>>>>> pw_poisson_set st >>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 13 143 >>>>>> pw_pool_create_pw >>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 14 82 >>>>>> pw_create_c1d >>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 14 82 >>>>>> pw_create_c1d >>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 13 143 >>>>>> pw_pool_create_pw >>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 13 64 pw_copy >>>>>> start >>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 13 64 pw_copy >>>>>> 0.003 >>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 13 16 pw_derive >>>>>> star >>>>>> t Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 13 16 pw_derive >>>>>> 0.00 >>>>>> 6 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 13 144 >>>>>> pw_pool_create_pw >>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 14 83 >>>>>> pw_create_c1d >>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 14 83 >>>>>> pw_create_c1d >>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 13 144 >>>>>> pw_pool_create_pw >>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 13 65 pw_copy >>>>>> start >>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001<< 13 65 pw_copy >>>>>> 0.004 >>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>> 000000:000001>> 13 17 pw_derive >>>>>> star 
>>>>>> t Hostmem: 380 MB GPUmem: 0 MB >>>>>> ``` >>>>>> >>>>>> for psmp the last 100 lines is: >>>>>> >>>>>> ``` >>>>>> 000000:000002<< 9 7 >>>>>> evaluate_core_matrix_traces >>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 9 7 rebuild_ks_matrix >>>>>> start Ho >>>>>> >>>>>> stmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 10 7 >>>>>> qs_ks_build_kohn_sham_matrix >>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 11 164 >>>>>> pw_pool_create_pw st >>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 93 >>>>>> pw_create_c1d sta >>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 12 93 >>>>>> pw_create_c1d 0.0 >>>>>> 00 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 11 164 >>>>>> pw_pool_create_pw 0. >>>>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 11 165 >>>>>> pw_pool_create_pw st >>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 94 >>>>>> pw_create_c1d sta >>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 12 94 >>>>>> pw_create_c1d 0.0 >>>>>> 00 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 11 165 >>>>>> pw_pool_create_pw 0. 
>>>>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 11 73 pw_copy >>>>>> start Hostme >>>>>> >>>>>> m: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 11 73 pw_copy >>>>>> 0.001 Hostme >>>>>> >>>>>> m: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 11 41 pw_axpy >>>>>> start Hostme >>>>>> >>>>>> m: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 11 41 pw_axpy >>>>>> 0.001 Hostme >>>>>> >>>>>> m: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 11 52 mp_sum_d >>>>>> start Hostm >>>>>> >>>>>> em: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 11 52 mp_sum_d >>>>>> 0.000 Hostm >>>>>> >>>>>> em: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 11 7 >>>>>> pw_poisson_solve sta >>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 7 >>>>>> pw_poisson_rebuild >>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 12 7 >>>>>> pw_poisson_rebuild >>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 166 >>>>>> pw_pool_create_pw >>>>>> >>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 13 95 >>>>>> pw_create_c1d >>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 13 95 >>>>>> pw_create_c1d >>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 12 166 >>>>>> pw_pool_create_pw >>>>>> >>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 74 pw_copy >>>>>> start Hos >>>>>> >>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 12 74 pw_copy >>>>>> 0.001 Hos >>>>>> >>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 7 >>>>>> pw_multiply_with >>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 12 7 >>>>>> pw_multiply_with >>>>>> 0.001 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 75 pw_copy >>>>>> start Hos >>>>>> >>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 12 75 pw_copy >>>>>> 0.001 Hos >>>>>> >>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 7 >>>>>> pw_integral_ab st >>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 13 53 mp_sum_d >>>>>> start >>>>>> 
>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 13 53 mp_sum_d >>>>>> 0.000 >>>>>> >>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 12 7 >>>>>> pw_integral_ab 0. >>>>>> 003 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 8 >>>>>> pw_poisson_set st >>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 13 167 >>>>>> pw_pool_create_pw >>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 14 96 >>>>>> pw_create_c1d >>>>>> >>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 14 96 >>>>>> pw_create_c1d >>>>>> >>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 13 167 >>>>>> pw_pool_create_pw >>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 13 76 pw_copy >>>>>> start >>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 13 76 pw_copy >>>>>> 0.001 >>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 13 19 pw_derive >>>>>> star >>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 13 19 pw_derive >>>>>> 0.00 >>>>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 13 168 >>>>>> pw_pool_create_pw >>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 14 97 >>>>>> pw_create_c1d >>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 14 97 >>>>>> pw_create_c1d >>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 13 168 >>>>>> pw_pool_create_pw >>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 13 77 pw_copy >>>>>> start >>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002<< 13 77 pw_copy >>>>>> 0.001 >>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>> 000000:000002>> 13 20 pw_derive >>>>>> star >>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>> ``` >>>>>> >>>>>> Thanks >>>>>> Bartosz >>>>>> >>>>>> poniedzia?ek, 21 pa?dziernika 2024 o 08:58:34 UTC+2 Frederick Stein >>>>>> napisa?(a): >>>>>> >>>>>>> Dear Bartosz, >>>>>>> I have no idea about the issue with LibXSMM. 
>>>>>>> Regarding the trace, I do not know either, as there is not much that
>>>>>>> could break in pw_derive (it just performs multiplications) and the
>>>>>>> sequence of operations is too unspecific. It may be that the code actually
>>>>>>> breaks somewhere else. Can you do the same with the ssmp and post the last
>>>>>>> 100 lines? This way, we remove the asynchronicity issues for backtraces
>>>>>>> with the psmp version.
>>>>>>> Best,
>>>>>>> Frederick
>>>>>>>
>>>>>>> On Sunday, 20 October 2024 at 16:47:15 UTC+2, bartosz mazur wrote:
>>>>>>>
>>>>>>>> The error is:
>>>>>>>>
>>>>>>>> ```
>>>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946)
>>>>>>>> CLX/DP TRY JIT STA COL
>>>>>>>> 0..13 2 2 0 0
>>>>>>>> 14..23 0 0 0 0
>>>>>>>> 24..64 0 0 0 0
>>>>>>>> Registry and code: 13 MB + 16 KB (gemm=2)
>>>>>>>> Command (PID=2607388): /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i H2O-9.inp -o H2O-9.out
>>>>>>>> Uptime: 5.288243 s
>>>>>>>>
>>>>>>>> ===================================================================================
>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>>>>>>> = RANK 0 PID 2607388 RUNNING AT r21c01b10
>>>>>>>> = KILLED BY SIGNAL: 11 (Segmentation fault)
>>>>>>>> ===================================================================================
>>>>>>>>
>>>>>>>> ===================================================================================
>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>>>>>>> = RANK 1 PID 2607389 RUNNING AT r21c01b10
>>>>>>>> = KILLED BY SIGNAL: 9 (Killed)
>>>>>>>> ===================================================================================
>>>>>>>> ```
>>>>>>>>
>>>>>>>> and the last 20 lines:
>>>>>>>>
>>>>>>>> ```
>>>>>>>> 000000:000002<< 13 76 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>> 000000:000002>> 13 19 pw_derive start Hostmem:
693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 13 19 >>>>>>>> pw_derive 0.00 >>>>>>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 13 168 >>>>>>>> pw_pool_create_pw >>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 14 97 >>>>>>>> pw_create_c1d >>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 14 97 >>>>>>>> pw_create_c1d >>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 13 168 >>>>>>>> pw_pool_create_pw >>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 13 77 pw_copy >>>>>>>> start >>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 13 77 pw_copy >>>>>>>> 0.001 >>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 13 20 >>>>>>>> pw_derive star >>>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> ``` >>>>>>>> >>>>>>>> Thanks! >>>>>>>> pi?tek, 18 pa?dziernika 2024 o 17:18:39 UTC+2 Frederick Stein >>>>>>>> napisa?(a): >>>>>>>> >>>>>>>>> Please pick one of the failing tests. Then, add the TRACE keyword >>>>>>>>> to the &GLOBAL section and then run the test manually. This increases the >>>>>>>>> size of the output file dramatically (to some million lines). Can you send >>>>>>>>> me the last ~20 lines of the output? >>>>>>>>> bartosz mazur schrieb am Freitag, 18. Oktober 2024 um 17:09:40 >>>>>>>>> UTC+2: >>>>>>>>> >>>>>>>>>> I'm using do_regtests.py script, not make regtesting, but I >>>>>>>>>> assume it makes no difference. As I mentioned in previous message for >>>>>>>>>> `--ompthreads 1` all tests were passed both for ssmp and psmp. For ssmp >>>>>>>>>> with `--ompthreads 2` I observe similar errors as for psmp with the same >>>>>>>>>> setting, I provide example output as attachment. 
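Frederick's TRACE suggestion above amounts to an input fragment like the following; the PROJECT name and RUN_TYPE are placeholders taken from the regtest being rerun, and only the TRACE keyword is the point:

```
&GLOBAL
  PROJECT H2O-9      ! placeholder: the failing regtest input
  RUN_TYPE ENERGY    ! placeholder
  PRINT_LEVEL LOW
  TRACE              ! per-routine call trace; output grows to millions of lines
&END GLOBAL
```

The test can then be rerun manually, e.g. `cp2k.ssmp -i H2O-9.inp -o H2O-9.out`, and the tail of the output inspected with `tail -n 20 H2O-9.out`.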
>>>>>>>>>> Thanks
>>>>>>>>>> Bartosz
>>>>>>>>>>
>>>>>>>>>> On Friday, 18 October 2024 at 16:24:16 UTC+2, Frederick Stein wrote:
>>>>>>>>>>
>>>>>>>>>>> Dear Bartosz,
>>>>>>>>>>> What happens if you set the number of OpenMP threads to 1 (add
>>>>>>>>>>> '--ompthreads 1' to TESTOPTS)? What errors do you observe in the case of
>>>>>>>>>>> the ssmp?
>>>>>>>>>>> Best,
>>>>>>>>>>> Frederick
>>>>>>>>>>>
>>>>>>>>>>> On Friday, 18 October 2024 at 15:37:43 UTC+2, bartosz mazur wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi Frederick,
>>>>>>>>>>>>
>>>>>>>>>>>> thanks again for the help. I have tested different simulation
>>>>>>>>>>>> variants and I know that the problem occurs when using OMP. For MPI
>>>>>>>>>>>> calculations without OMP, all tests pass. I have also tested the effect of
>>>>>>>>>>>> the `OMP_PROC_BIND` and `OMP_PLACES` parameters; apart from
>>>>>>>>>>>> the effect on simulation time, they have no significant effect on the
>>>>>>>>>>>> presence of errors.
Below are the results for ssmp:
>>>>>>>>>>>>
>>>>>>>>>>>> ```
>>>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time
>>>>>>>>>>>> spread, threads, 3850, 4144, 4, 290, 186min
>>>>>>>>>>>> spread, cores, 3831, 4144, 3, 310, 183min
>>>>>>>>>>>> spread, sockets, 3864, 4144, 3, 277, 104min
>>>>>>>>>>>> close, threads, 3879, 4144, 3, 262, 171min
>>>>>>>>>>>> close, cores, 3854, 4144, 0, 290, 168min
>>>>>>>>>>>> close, sockets, 3865, 4144, 3, 276, 104min
>>>>>>>>>>>> master, threads, 4121, 4144, 0, 23, 1002min
>>>>>>>>>>>> master, cores, 4121, 4144, 0, 23, 986min
>>>>>>>>>>>> master, sockets, 3942, 4144, 3, 199, 219min
>>>>>>>>>>>> false, threads, 3918, 4144, 0, 226, 178min
>>>>>>>>>>>> false, cores, 3919, 4144, 3, 222, 176min
>>>>>>>>>>>> false, sockets, 3856, 4144, 4, 284, 104min
>>>>>>>>>>>> ```
>>>>>>>>>>>>
>>>>>>>>>>>> and psmp:
>>>>>>>>>>>>
>>>>>>>>>>>> ```
>>>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, results
>>>>>>>>>>>> spread, threads, Summary: correct: 4097 / 4227; failed: 130; 495min
>>>>>>>>>>>> spread, cores, 26 / 362
>>>>>>>>>>>> spread, cores, 26 / 362
>>>>>>>>>>>> close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min
>>>>>>>>>>>> close, cores, 60 / 362
>>>>>>>>>>>> close, sockets, 13 / 362
>>>>>>>>>>>> master, threads, 13 / 362
>>>>>>>>>>>> master, cores, 79 / 362
>>>>>>>>>>>> master, sockets, Summary: correct: 4153 / 4227; failed: 74; 563min
>>>>>>>>>>>> false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min
>>>>>>>>>>>> false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min
>>>>>>>>>>>> false, sockets, 96 / 362
>>>>>>>>>>>> not specified, not specified, Summary: correct: 4129 / 4227; failed: 98; 263min
>>>>>>>>>>>> ```
>>>>>>>>>>>>
>>>>>>>>>>>> Any ideas what I could do next to get more information about the
>>>>>>>>>>>> source of the problem? Or maybe you already see a potential solution at this
>>>>>>>>>>>> stage?
I would appreciate any further help.
>>>>>>>>>>>>
>>>>>>>>>>>> Best
>>>>>>>>>>>> Bartosz
>>>>>>>>>>>>
>>>>>>>>>>>> On Friday, 11 October 2024 at 14:30:25 UTC+2, Frederick Stein wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Dear Bartosz,
>>>>>>>>>>>>> If I am not mistaken, you used 8 OpenMP threads. The tests do
>>>>>>>>>>>>> not run efficiently with such a large number of threads; 2 should be
>>>>>>>>>>>>> sufficient.
>>>>>>>>>>>>> The test results suggest that most of the functionality may
>>>>>>>>>>>>> work, but due to a missing backtrace (or similar information) it is hard to
>>>>>>>>>>>>> tell why the tests fail. You could also try to run some of the single-node
>>>>>>>>>>>>> tests to assess the stability of CP2K.
>>>>>>>>>>>>> Best,
>>>>>>>>>>>>> Frederick
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Friday, 11 October 2024 at 13:48:42 UTC+2, bartosz mazur wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Sorry, forgot attachments.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/9027c53b-4155-418c-9d08-ea77e5ea5bcfn%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nicholaslaws8 at gmail.com Wed Oct 23 15:30:24 2024
From: nicholaslaws8 at gmail.com (Nicholas Laws)
Date: Wed, 23 Oct 2024 08:30:24 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20806] All-electron Geometry Optimization of EMIBF4
In-Reply-To: References: <9ae46c98-afd4-4625-b769-6cf39a3fb5d2n@googlegroups.com>
Message-ID:

Thank you for the advice, Professor Hutter. I have since referenced the all-electron examples in the tests/QS sections and adjusted my input script accordingly (as seen in the attached .inp file).
I have decided to separate the geometry optimizations into two separate simulations (one for EMIBF4 and one for EMI+). The content of this post refers to the geometry optimization of EMIBF4. However, after testing several SCF and GEO_OPT configurations, it seems that there are energy instabilities that set in after 5 energy evaluations, as shown here:

SCF WAVEFUNCTION OPTIMIZATION

  Step     Update method      Time    Convergence         Total energy    Change
  ------------------------------------------------------------------------------
 HFX_MEM_INFO| Est. max. program size before HFX [MiB]: 91626
 HFX_MEM_INFO| Number of cart. primitive ERI's calculated: 150682718957
 HFX_MEM_INFO| Number of sph. ERI's calculated: 54286634411
 HFX_MEM_INFO| Number of sph. ERI's stored in-core: 29731009090
 HFX_MEM_INFO| Number of sph. ERI's stored on disk: 0
 HFX_MEM_INFO| Number of sph. ERI's calculated on the fly: 24555625321
 HFX_MEM_INFO| Total memory consumption ERI's RAM [MiB]: 64051
 HFX_MEM_INFO| Whereof max-vals [MiB]: 193
 HFX_MEM_INFO| Total compression factor ERI's RAM: 3.54
 HFX_MEM_INFO| Total memory consumption ERI's disk [MiB]: 0
 HFX_MEM_INFO| Total compression factor ERI's disk: 0.00
 HFX_MEM_INFO| Size of density/Fock matrix [MiB]: 4
 HFX_MEM_INFO| Size of buffers [MiB]: 5
 HFX_MEM_INFO| Est. max. program size after HFX [MiB]: 91793
     1 Pulay/Diag.  0.20E+00  105.6     93.15134943      -786.8938839625 -7.87E+02
     2 Pulay/Diag.  0.20E+00   59.5   1424.78843686      -787.9112236525 -1.02E+00
     3 Pulay/Diag.  0.20E+00   59.6     4.0852E+05       -804.4927698695 -1.66E+01
     4 Pulay/Diag.  0.20E+00   58.8     2.6603E+05      -1012.0913997227 -2.08E+02
     5 Pulay/Diag.  0.20E+00   60.1     1.2150E+05      -3042.9246578949 -2.03E+03
     6 Pulay/Diag.  0.20E+00   58.7     1.3437E+05      -4156.9505747502 -1.11E+03
     7 Pulay/Diag.  0.20E+00   59.4  58430.75278087     -4430.8543850289 -2.74E+02
     8 Pulay/Diag.  0.20E+00   58.9  47799.10199568     -4482.5049850590 -5.17E+01
     9 Pulay/Diag.  0.20E+00   59.9  42678.37923542     -4460.2090853173  2.23E+01
    10 Pulay/Diag.  0.20E+00   58.6  29018.77397837     -4448.4858203716  1.17E+01
    11 Pulay/Diag.  0.20E+00   60.1  30422.75295948     -4458.1250435109 -9.64E+00
    12 Pulay/Diag.  0.20E+00   58.8  20157.27665011     -4462.7197837500 -4.59E+00
    13 Pulay/Diag.  0.20E+00   60.2  20697.03326037     -4465.4866843212 -2.77E+00
    14 Pulay/Diag.  0.20E+00   58.6  14035.03955462     -4464.6656749474  8.21E-01
    15 Pulay/Diag.  0.20E+00   59.8  13859.56913168     -4455.4582585163  9.21E+00
    16 Pulay/Diag.  0.20E+00   58.6   9844.63604592     -4439.6169244281  1.58E+01
    17 Pulay/Diag.  0.20E+00   59.7   9086.46419396     -4429.7590093298  9.86E+00
    18 Pulay/Diag.  0.20E+00   59.0   6252.68491596     -4451.0446941090 -2.13E+01
    19 Pulay/Diag.  0.20E+00   59.8   5836.44669985     -4446.6123587806  4.43E+00
    20 Pulay/Diag.  0.20E+00   58.7   4369.58978109     -4434.7404123014  1.19E+01
    21 Pulay/Diag.  0.20E+00   59.8   3709.18259540     -4422.7286506587  1.20E+01
    22 Pulay/Diag.  0.20E+00   58.8   2919.47259531     -4423.1759843663 -4.47E-01
    23 Pulay/Diag.  0.20E+00   59.8   2418.12012615     -4420.5319920017  2.64E+00
    24 Pulay/Diag.  0.20E+00   58.8   1984.95398922     -4419.3227006816  1.21E+00
    25 Pulay/Diag.  0.20E+00   60.0   1707.00425122     -4420.3727826353 -1.05E+00
    26 Pulay/Diag.  0.20E+00   59.0   1274.71842881     -4420.2666697595  1.06E-01
    27 Pulay/Diag.  0.20E+00   60.3   1417.13141141     -4425.4603136343 -5.19E+00
    28 Pulay/Diag.  0.20E+00   58.8   1104.03431671     -4418.0031624370  7.46E+00

I was wondering if there is any guidance or suggestions on how to handle these energy fluctuations when using the aug-cc-pvtz basis set and the WB97X-D XC functional. Additionally, I noticed that runtimes are approximately 1 minute per SCF iteration; is there any advice on how to improve this runtime? I've attached a sample SLURM submission script that I used to generate the results located in the attached .out file. Please let me know if there is any additional information that I can provide, and I greatly appreciate the support that you and the community provide.
All my best,
Nick

On Tuesday, October 22, 2024 at 3:48:22 AM UTC-4, Jürg Hutter wrote:

> Hi
>
> you are missing the &HF section in your specification of the hybrid
> functional. Libxc only covers the density functional part; see the many
> examples in the tests/QS sections.
>
> regards
> JH
>
> ________________________________________
> From: cp... at googlegroups.com on behalf of Nicholas Laws
> Sent: Monday, October 21, 2024 4:14 PM
> To: cp2k
> Subject: [CP2K:20794] All-electron Geometry Optimization of EMIBF4
>
> Hi all,
>
> I am trying to do a geometry optimization for energy minimization of
> EMIBF4 using the aug-cc-pvtz basis set (attached below) and the WB97X-D
> XC functional. It seems that my optimization requires hundreds of SCF steps
> before convergence (as seen in the attached .out file), and I was wondering
> if there are any recommendations for doing all-electron geometry
> optimizations, especially for the one I discuss in this post (the current
> implementation can be viewed in the attached .inp file). Please let me know
> if there is any additional information that I can clarify.
>
> Thank you, and I look forward to hearing from you.
>
> All my best,
> Nick
>
> --
> You received this message because you are subscribed to the Google Groups "cp2k" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+uns... at googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/9ae46c98-afd4-4625-b769-6cf39a3fb5d2n%40googlegroups.com.
-- 
You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/cp2k/dbffa255-5afa-43c7-8bd5-875420594c7cn%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: EMIBF4_Geometry_Optimization.out
Type: application/octet-stream
Size: 164932 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: EMIBF4_Geometry_Optimization.inp
Type: chemical/x-gamess-input
Size: 3967 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: script.slurm
Type: application/x-shellscript
Size: 723 bytes
Desc: not available
URL:

From tianning733 at gmail.com Wed Oct 23 15:54:12 2024
From: tianning733 at gmail.com (Tian-Ning Chen)
Date: Wed, 23 Oct 2024 08:54:12 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20806] SCF not converge
Message-ID: <03023304-6e1b-470b-a5b2-887d6d28b724n@googlegroups.com>

Dear CP2K experts,

I am trying to perform a geometry optimization while fixing some atoms of the adsorbate. The optimized structure will be used as the initial guess for a dimer calculation. After 5 steps of geometry optimization, the SCF no longer converges; the energy keeps jumping between SCF cycles. I recently switched from Gamma-point sampling to KPOINTS with "MONKHORST-PACK 2 2 1" to improve the accuracy of the force evaluation, and this issue has appeared since then.

[image: Screenshot 2024-10-23 113727.png]

Input and output files are attached. I would much appreciate any thoughts on this issue.

Thank you,
Tian-Ning Chen

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/03023304-6e1b-470b-a5b2-887d6d28b724n%40googlegroups.com.
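For k-point calculations where the SCF energy oscillates, as described above, a commonly tried first remedy is Fermi-Dirac smearing combined with Broyden density mixing under diagonalization. The fragment below is an illustrative sketch only; it is not taken from the attached input, and ADDED_MOS, the electronic temperature, and the mixing parameters are placeholder values that need tuning to the actual system:

```
&SCF
  MAX_SCF 200
  EPS_SCF 1.0E-6
  SCF_GUESS RESTART
  ADDED_MOS 100                  ! placeholder: extra empty states required by smearing
  &SMEAR ON
    METHOD FERMI_DIRAC
    ELECTRONIC_TEMPERATURE [K] 300
  &END SMEAR
  &MIXING
    METHOD BROYDEN_MIXING
    ALPHA 0.2                    ! placeholder: conservative mixing factor
    NBUFFER 8
  &END MIXING
  &DIAGONALIZATION
    ALGORITHM STANDARD
  &END DIAGONALIZATION
&END SCF
```

Smearing tends to damp the charge sloshing that shows up as jumping SCF energies once k-points sample states near the Fermi level; for a genuinely large-gap system, tightening EPS_DEFAULT and lowering ALPHA alone may suffice.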
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screenshot 2024-10-23 113727.png
Type: image/png
Size: 120520 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Sn-Beta_geoopt (1).inp
Type: chemical/x-gamess-input
Size: 3469 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: slurm-48651.out
Type: application/octet-stream
Size: 661027 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: mechanism3_ether_geoopt_config1.xyz
Type: chemical/x-xyz
Size: 16581 bytes
Desc: not available
URL:

From tianning733 at gmail.com Thu Oct 24 02:13:23 2024
From: tianning733 at gmail.com (Tian-Ning Chen)
Date: Wed, 23 Oct 2024 19:13:23 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20807] Structure falls apart during optimization
Message-ID: <440ec58c-2f11-4981-a4e8-01befa6b23fan@googlegroups.com>

Dear CP2K experts,

I am trying to perform a geometry optimization while fixing some atoms of the adsorbate and the zeolite. The optimized structure will be used as the initial guess for a dimer calculation. At step 7 of the geometry optimization, the structure falls apart.

[image: Screenshot 2024-10-23 220658.png][image: Screenshot 2024-10-23 220730.png]

Part of my input is:

...
&CG
  MAX_STEEP_STEPS 0
  &LINE_SEARCH
    TYPE 2PNT
    &2PNT
      MAX_ALLOWED_STEP 0.2
    &END
  &END LINE_SEARCH
&END CG
...
&KPOINTS
  SCHEME MONKHORST-PACK 2 2 1
  SYMMETRY F
  FULL_GRID T
  PARALLEL_GROUP_SIZE 0
&END KPOINTS
&SCF
  MAX_SCF 1000
  EPS_SCF 9.9999999999999995E-07
  SCF_GUESS RESTART
  &OT F
    MINIMIZER CG
    PRECONDITIONER FULL_ALL
    ENERGY_GAP 1.0000000000000000E-03
  &END OT
  &DIAGONALIZATION T
    ALGORITHM STANDARD
  &END DIAGONALIZATION
  &OUTER_SCF T
    EPS_SCF 9.9999999999999995E-07
    MAX_SCF 50
  &END OUTER_SCF
&END SCF
&QS
  EPS_DEFAULT 1.0000000000000000E-10
  EXTRAPOLATION USE_GUESS
  METHOD GPW
&END QS

I recently switched from Gamma-point sampling to KPOINTS with "MONKHORST-PACK 2 2 1" to improve the accuracy of the force evaluation, and this issue has appeared since then. Input and output files are attached. I am at entry level with CP2K, so I would much appreciate any thoughts on this issue.

Thank you,
Tian-Ning Chen

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/440ec58c-2f11-4981-a4e8-01befa6b23fan%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: out
Type: application/octet-stream
Size: 661027 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screenshot 2024-10-23 220730.png
Type: image/png
Size: 19045 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pos-1.xyz
Type: chemical/x-xyz
Size: 108552 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Screenshot 2024-10-23 220658.png
Type: image/png
Size: 19863 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: initial_structure.xyz
Type: chemical/x-xyz
Size: 16581 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: input
Type: application/octet-stream
Size: 3469 bytes
Desc: not available
URL:

From hutter at chem.uzh.ch Thu Oct 24 07:53:52 2024
From: hutter at chem.uzh.ch (=?utf-8?B?SsO8cmcgSHV0dGVy?=)
Date: Thu, 24 Oct 2024 07:53:52 +0000
Subject: [CP2K-user] [CP2K:20808] All-electron Geometry Optimization of EMIBF4
In-Reply-To: References: <9ae46c98-afd4-4625-b769-6cf39a3fb5d2n@googlegroups.com>
Message-ID:

Hi

you should do the following steps to get this running:

1) Use the OT optimizer:
&SCF
  MAX_SCF 50
  SCF_GUESS RESTART
  EPS_SCF 1E-6
  &OT
    PRECONDITIONER FULL_ALL
    MINIMIZER DIIS
  &END OT
&END SCF
2) Use EPS_DEFAULT 1.E-10
3) Optimize the structure (at least the SCF, maybe also the geometry) first with a GGA functional (e.g. PBE)
4) Restart from those orbitals and use SCREEN_ON_INITIAL_P TRUE
5) Increase MAX_MEMORY (if possible) so that all integrals are kept in-core
6) Use ADMM:
&AUXILIARY_DENSITY_MATRIX_METHOD
  ADMM_TYPE ADMMQ
  EXCH_CORRECTION_FUNC PBEX
&END AUXILIARY_DENSITY_MATRIX_METHOD
together with (add it) BASIS_SET_FILE_NAME BASIS_ADMM_ae and BASIS_SET AUX_FIT admm-2
7) Use BFGS as the geometry optimizer; maybe even restart the Hessian from a previous GGA optimization
8) If the geometry optimization is not converging, you might have to tighten the thresholds EPS_DEFAULT, EPS_SCF, EPS_SCHWARZ

With these changes, you need about 40 GB of memory, and on 36 cores a geometry step takes 1-2 minutes.

regards
JH
________________________________________
From: cp2k at googlegroups.com on behalf of Nicholas Laws
Sent: Wednesday, October 23, 2024 5:30 PM
To: cp2k
Subject: Re: [CP2K:20806] All-electron Geometry Optimization of EMIBF4

Thank you for the advice, Professor Hutter. I have since referenced the all-electron examples in the tests/QS sections and adjusted my input script accordingly (as seen in the attached .inp file).
I have decided to separate the geometry optimizations into two separate simulations (one for EMIBF4 and one for EMI+). The content of this post refers to the geometry optimization of EMIBF4. However, after testing several SCF and GEO_OPT configurations, it seems that there are energy instabilities that occur after 5 energy evaluations, as depicted here:

  SCF WAVEFUNCTION OPTIMIZATION

  Step     Update method      Time    Convergence         Total energy    Change
  ------------------------------------------------------------------------------
 HFX_MEM_INFO| Est. max. program size before HFX [MiB]:                    91626
 HFX_MEM_INFO| Number of cart. primitive ERI's calculated:          150682718957
 HFX_MEM_INFO| Number of sph. ERI's calculated:                      54286634411
 HFX_MEM_INFO| Number of sph. ERI's stored in-core:                  29731009090
 HFX_MEM_INFO| Number of sph. ERI's stored on disk:                            0
 HFX_MEM_INFO| Number of sph. ERI's calculated on the fly:           24555625321
 HFX_MEM_INFO| Total memory consumption ERI's RAM [MiB]:                   64051
 HFX_MEM_INFO| Whereof max-vals [MiB]:                                       193
 HFX_MEM_INFO| Total compression factor ERI's RAM:                          3.54
 HFX_MEM_INFO| Total memory consumption ERI's disk [MiB]:                      0
 HFX_MEM_INFO| Total compression factor ERI's disk:                         0.00
 HFX_MEM_INFO| Size of density/Fock matrix [MiB]:                              4
 HFX_MEM_INFO| Size of buffers [MiB]:                                          5
 HFX_MEM_INFO| Est. max. program size after HFX [MiB]:                     91793

     1 Pulay/Diag.  0.20E+00  105.6     93.15134943      -786.8938839625 -7.87E+02
     2 Pulay/Diag.  0.20E+00   59.5   1424.78843686      -787.9112236525 -1.02E+00
     3 Pulay/Diag.  0.20E+00   59.6     4.0852E+05       -804.4927698695 -1.66E+01
     4 Pulay/Diag.  0.20E+00   58.8     2.6603E+05      -1012.0913997227 -2.08E+02
     5 Pulay/Diag.  0.20E+00   60.1     1.2150E+05      -3042.9246578949 -2.03E+03
     6 Pulay/Diag.  0.20E+00   58.7     1.3437E+05      -4156.9505747502 -1.11E+03
     7 Pulay/Diag.  0.20E+00   59.4  58430.75278087     -4430.8543850289 -2.74E+02
     8 Pulay/Diag.  0.20E+00   58.9  47799.10199568     -4482.5049850590 -5.17E+01
     9 Pulay/Diag.  0.20E+00   59.9  42678.37923542     -4460.2090853173  2.23E+01
    10 Pulay/Diag.  0.20E+00   58.6  29018.77397837     -4448.4858203716  1.17E+01
    11 Pulay/Diag.  0.20E+00   60.1  30422.75295948     -4458.1250435109 -9.64E+00
    12 Pulay/Diag.  0.20E+00   58.8  20157.27665011     -4462.7197837500 -4.59E+00
    13 Pulay/Diag.  0.20E+00   60.2  20697.03326037     -4465.4866843212 -2.77E+00
    14 Pulay/Diag.  0.20E+00   58.6  14035.03955462     -4464.6656749474  8.21E-01
    15 Pulay/Diag.  0.20E+00   59.8  13859.56913168     -4455.4582585163  9.21E+00
    16 Pulay/Diag.  0.20E+00   58.6   9844.63604592     -4439.6169244281  1.58E+01
    17 Pulay/Diag.  0.20E+00   59.7   9086.46419396     -4429.7590093298  9.86E+00
    18 Pulay/Diag.  0.20E+00   59.0   6252.68491596     -4451.0446941090 -2.13E+01
    19 Pulay/Diag.  0.20E+00   59.8   5836.44669985     -4446.6123587806  4.43E+00
    20 Pulay/Diag.  0.20E+00   58.7   4369.58978109     -4434.7404123014  1.19E+01
    21 Pulay/Diag.  0.20E+00   59.8   3709.18259540     -4422.7286506587  1.20E+01
    22 Pulay/Diag.  0.20E+00   58.8   2919.47259531     -4423.1759843663 -4.47E-01
    23 Pulay/Diag.  0.20E+00   59.8   2418.12012615     -4420.5319920017  2.64E+00
    24 Pulay/Diag.  0.20E+00   58.8   1984.95398922     -4419.3227006816  1.21E+00
    25 Pulay/Diag.  0.20E+00   60.0   1707.00425122     -4420.3727826353 -1.05E+00
    26 Pulay/Diag.  0.20E+00   59.0   1274.71842881     -4420.2666697595  1.06E-01
    27 Pulay/Diag.  0.20E+00   60.3   1417.13141141     -4425.4603136343 -5.19E+00
    28 Pulay/Diag.  0.20E+00   58.8   1104.03431671     -4418.0031624370  7.46E+00

I was wondering whether there is any guidance or any suggestions on how to handle these energy fluctuations when using the aug-cc-pVTZ basis set and the WB97X-D XC functional. Additionally, I noticed that runtimes are approximately 1 minute per SCF iteration; is there any advice on how to improve this? I've attached a sample SLURM submission script that I used to generate the results in the attached .out file. Please let me know if there is any additional information that I can provide; I greatly appreciate the support that you and the community provide.
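As a side note on diagnosing such runs: the SCF table is regular enough to parse mechanically. The sketch below is my own illustration, not part of CP2K; it assumes the seven-column layout shown above, and the 100x blow-up heuristic is an arbitrary choice:

```python
def parse_scf_steps(text):
    """Extract (step, convergence, total_energy) from a CP2K SCF table.

    Assumes the layout shown above: step, update method, step size,
    time, convergence, total energy, change (7 whitespace-separated fields).
    """
    steps = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 7 and parts[0].isdigit():
            try:
                steps.append((int(parts[0]), float(parts[4]), float(parts[5])))
            except ValueError:
                continue  # header or other non-data line with 7 tokens
    return steps

def looks_divergent(steps, factor=100.0):
    """Flag a run whose convergence measure grows by `factor` over its start."""
    if len(steps) < 2:
        return False
    return max(conv for _, conv, _ in steps) > factor * steps[0][1]
```

Watching the convergence column rather than the energy makes the blow-up around step 3 obvious immediately.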
All my best,
Nick

On Tuesday, October 22, 2024 at 3:48:22 AM UTC-4 Jürg Hutter wrote:

Hi

you are missing the &HF section in your specification of the hybrid
functional. Libxc only covers the density functional part; see the many
examples in the tests/QS sections.

regards
JH

________________________________________
From: cp... at googlegroups.com on behalf of Nicholas Laws
Sent: Monday, October 21, 2024 4:14 PM
To: cp2k
Subject: [CP2K:20794] All-electron Geometry Optimization of EMIBF4

Hi all,

I am trying to do a geometry optimization for energy minimization of EMIBF4 using the aug-cc-pVTZ basis set (attached below) and the WB97X-D XC functional. It seems that my optimization requires hundreds of SCF steps before convergence (as seen in the attached .out file), and I was wondering whether there are any recommendations for doing all-electron geometry optimizations, especially for the one I discuss in this post (the current implementation can be viewed in the attached .inp file). Please let me know if there is any additional information that I can clarify. Thank you, and I look forward to hearing from you.

All my best,
Nick

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+uns... at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cp2k/9ae46c98-afd4-4625-b769-6cf39a3fb5d2n%40googlegroups.com.
-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/cp2k/dbffa255-5afa-43c7-8bd5-875420594c7cn%40googlegroups.com.
-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/cp2k/ZR0P278MB075908B2A935F858BE93C1499F4E2%40ZR0P278MB0759.CHEP278.PROD.OUTLOOK.COM.

From bnzmichela at gmail.com  Thu Oct 24 13:46:39 2024
From: bnzmichela at gmail.com (Michela Benazzi)
Date: Thu, 24 Oct 2024 06:46:39 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20809] Re: Gallium + CO2 lack of convergence
In-Reply-To: <8cb7ec66-c7da-4798-b626-d9f93489be9en@googlegroups.com>
References: <01922cb9-0de8-4887-bacb-da4a2ed78a28n@googlegroups.com>
 <6d229766-9b30-422f-8ce5-154ace7da6f8n@googlegroups.com>
 <1342769e-8153-4387-b5a8-b18c6b084635n@googlegroups.com>
 <30f8cec8-8b86-4dc1-a499-4f2af91e947an@googlegroups.com>
 <335efca4-9014-4d60-8c0e-42d4ab964000n@googlegroups.com>
 <8cb7ec66-c7da-4798-b626-d9f93489be9en@googlegroups.com>
Message-ID: <996b3dfe-cc2b-4b47-8de1-b3f8921b8a77n@googlegroups.com>

Hi Marcella and everyone,

I wanted to follow up on your advice. I used an equilibrated structure of 64 Ga atoms, eliminated ~8 central Ga atoms, and added the CO2 molecule. The MD steps converged up to step #14; then the SCF loop stopped converging. To update anyone who may be looking at this thread for guidance: I will be sizing up my cell dimensions.

Thank you,

Michela

On Thursday, October 3, 2024 at 1:22:48 PM UTC-4 Marcella Iannuzzi wrote:

> Hi
> If you have a good equilibrated Ga liquid box (right density and low
> stress tensor) I wouldn't run GEO_OPT.
> Obviously introducing CO2 is going to change the conditions, but I would
> anyway start with an NVT run for a first equilibration and then run NPT to
> re-equilibrate the volume. I would add the CO2 molecule and remove all the
> Ga atoms within a certain radius from the center of mass of the molecule,
> and then run the equilibrations as described above.
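For anyone following along: the carving step described here (take the center of mass of the solute, then drop all metal atoms inside a radius) is only a few lines of Python. This is an illustrative sketch, not from the thread; the 3 Å default cutoff and the neglect of periodic images are assumptions, and for a real box you would use minimum-image distances:

```python
from math import dist

def carve_cavity(metal_atoms, solute_atoms, radius=3.0):
    """Remove metal atoms within `radius` (Angstrom) of the solute's
    center of mass. Atoms are (symbol, x, y, z) tuples; no PBC handling."""
    masses = {"C": 12.011, "O": 15.999}  # only what CO2 needs
    total = sum(masses[s] for s, *_ in solute_atoms)
    com = tuple(sum(masses[s] * xyz[i] for s, *xyz in solute_atoms) / total
                for i in range(3))
    return [(s, *xyz) for s, *xyz in metal_atoms if dist(xyz, com) > radius]
```

For example, with CO2 at the origin, a Ga atom 2 Å away is removed while one 5 Å away is kept.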
> The only way to lower the concentration is to increase the amount of Ga,
> i.e., increase the box.
> Regards
> Marcella
>
> On Thursday, October 3, 2024 at 5:02:46 PM UTC+2 bnzmi... at gmail.com wrote:
>
>> Hi Marcella,
>>
>> Thank you again!
>>
>> 1) Should I run GEO_OPT on the pure Ga liquid, then add CO2?
>> 2) How should I form a cavity artificially without disrupting the newly
>> equilibrated Ga structure?
>> 3) I only have one molecule of CO2 in there - how should I go about
>> lowering the concentration? Should I just increase my number of Ga atoms
>> and add 1 CO2 molecule?
>>
>> Best,
>>
>> Michela
>> On Thursday, October 3, 2024 at 10:20:25 AM UTC-4 Marcella Iannuzzi wrote:
>>
>>> Dear Michela,
>>>
>>> The procedure you describe does not sound very appropriate to me.
>>> You should first obtain a liquid system, without solute.
>>> I suppose you should check for density and other properties and have a
>>> sufficiently large box.
>>> Then you can create a cavity in the equilibrated liquid and insert the
>>> solute, still with the right C-O bond length.
>>> If the SCF does not converge anymore after a few steps, it is probably
>>> because of the coordinates.
>>> The concentration of CO2 seems rather high.
>>>
>>> You can use GAPW. It is more commonly used for all-electron
>>> calculations. With PPs, GPW is as accurate.
>>>
>>> Regards
>>> Marcella
>>>
>>> On Thursday, October 3, 2024 at 3:02:10 PM UTC+2 bnzmi... at gmail.com
>>> wrote:
>>>
>>>> Hi Marcella,
>>>>
>>>> Thank you for your kind response and your time.
>>>>
>>>> 1) I just switched to double-zeta quality a few hours ago, but my MD
>>>> just crashed because, weirdly, it converged for the first few SCF loops
>>>> but then stopped converging (I attached the output file here to explain).
>>>>
>>>> 2) I am using GAPW because I found that the augmented-plane-wave method
>>>> worked really well with my liquid Al systems before. That method is also
>>>> reported in the DFT literature for liquid Ga.
Rationalizing it, I think it
>>>> works because it samples regions of space with different charge densities
>>>> with more accuracy. Do you think I should consider something else?
>>>>
>>>> 3) I used a cell size that reproduces the density of liquid Ga with the
>>>> number of atoms I have. I prepared my coordinates with a Python script, then
>>>> relaxed the geometry in the Avogadro2 software and inserted CO2 at a
>>>> distance of at least 2.5 A to minimize initial repulsion with Ga atoms. Do
>>>> you have any suggestions on how to prepare a structure? I am leaving 1-2 A on all
>>>> sides from the unit cell boundaries because I have been worried about Ga
>>>> atoms being too close to neighbors across periodic boundaries.
>>>>
>>>> Michela
>>>> On Thursday, October 3, 2024 at 8:23:26 AM UTC-4 Marcella Iannuzzi
>>>> wrote:
>>>>
>>>>> Dear Michela,
>>>>>
>>>>> The basis set you are using is of poor quality.
>>>>> The coordinates you sent show a rather strange C-O bond length.
>>>>> The cell is very small, but there is still vacuum space among the
>>>>> replicas in all directions; it is a rather weird choice of coordinates.
>>>>>
>>>>> Is there a reason why you are using GAPW?
>>>>>
>>>>> Regards
>>>>> Marcella
>>>>> On Wednesday, October 2, 2024 at 7:27:59 PM UTC+2 bnzmi... at gmail.com
>>>>> wrote:
>>>>>
>>>>>> Good morning dear CP2K community,
>>>>>>
>>>>>> how are you? You may know me from previous posts on liquid Al (+CO2)
>>>>>> MD troubleshooting. All of your responses have been super helpful so far,
>>>>>> and I am coming here again for a different liquid metal.
>>>>>>
>>>>>> My simulations with pure liquid gallium have been less troublesome
>>>>>> than all of my liquid Al simulations, but the MD with 1 CO2 molecule won't
>>>>>> converge. Can I please get some help troubleshooting?
>>>>>>
>>>>>> Thank you,
>>>>>>
>>>>>> Michela
>>>>>>
-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/996b3dfe-cc2b-4b47-8de1-b3f8921b8a77n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Ga64C1_24h-pos-1.xyz Type: chemical/x-xyz Size: 58575 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: central.xyz Type: chemical/x-xyz Size: 2246 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Ga64C1_24h-1.restart Type: application/octet-stream Size: 17533 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 44822592.out Type: application/octet-stream Size: 190418 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Ga64C1.in Type: application/octet-stream Size: 2731 bytes Desc: not available URL: From bnzmichela at gmail.com Thu Oct 24 14:34:45 2024 From: bnzmichela at gmail.com (Michela Benazzi) Date: Thu, 24 Oct 2024 07:34:45 -0700 (PDT) Subject: [CP2K-user] [CP2K:20810] Configuration Interaction Singles in CP2K? Message-ID: <81cab4d8-f199-4e91-bf4f-8fd3dbf15a15n@googlegroups.com> Hello everyone, as the title states, I was wondering if you have any suggestions for evaluating CIS (https://manual.q-chem.com/5.2/Ch7.S2.SS1.html) in CP2K. Thank you :) Michela -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/81cab4d8-f199-4e91-bf4f-8fd3dbf15a15n%40googlegroups.com. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From hutter at chem.uzh.ch Thu Oct 24 15:48:09 2024 From: hutter at chem.uzh.ch (=?iso-8859-1?Q?J=FCrg_Hutter?=) Date: Thu, 24 Oct 2024 15:48:09 +0000 Subject: [CP2K-user] [CP2K:20811] Configuration Interaction Singles in CP2K? In-Reply-To: <81cab4d8-f199-4e91-bf4f-8fd3dbf15a15n@googlegroups.com> References: <81cab4d8-f199-4e91-bf4f-8fd3dbf15a15n@googlegroups.com> Message-ID: Hi yes, if you run a HF calculation with &PROPERTIES / TDDFPT you get a CIS calculation. regards JH ________________________________________ From: cp2k at googlegroups.com on behalf of Michela Benazzi Sent: Thursday, October 24, 2024 4:34 PM To: cp2k Subject: [CP2K:20810] Configuration Interaction Singles in CP2K? Hello everyone, as the title states, I was wondering if you have any suggestions for evaluating CIS (https://manual.q-chem.com/5.2/Ch7.S2.SS1.html) in CP2K. Thank you :) Michela -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/81cab4d8-f199-4e91-bf4f-8fd3dbf15a15n%40googlegroups.com. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/ZR0P278MB0759DA28E4F727FA6BAC8DEA9F4E2%40ZR0P278MB0759.CHEP278.PROD.OUTLOOK.COM. 
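In input terms, the combination described above might look like the fragment below. This is a sketch, not taken from the thread: 100% exact exchange with no density-functional part, plus the TDDFPT property section; NSTATES and the surrounding nesting are illustrative:

```
&FORCE_EVAL
  &DFT
    &XC
      &XC_FUNCTIONAL NONE
      &END XC_FUNCTIONAL
      &HF
        FRACTION 1.0    ! pure Hartree-Fock ground state
      &END HF
    &END XC
  &END DFT
  &PROPERTIES
    &TDDFPT
      NSTATES 5         ! number of excited states (illustrative)
    &END TDDFPT
  &END PROPERTIES
&END FORCE_EVAL
```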
From bamaz.97 at gmail.com Fri Oct 25 07:46:22 2024 From: bamaz.97 at gmail.com (bartosz mazur) Date: Fri, 25 Oct 2024 00:46:22 -0700 (PDT) Subject: [CP2K-user] [CP2K:20812] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: <9027c53b-4155-418c-9d08-ea77e5ea5bcfn@googlegroups.com> References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> <3c0b331a-fe25-4176-8692-ff5d7c466c44n@googlegroups.com> <9c503856-0751-48b3-8bfb-e37cf5cb91a6n@googlegroups.com> <024be246-9696-4577-a330-f5a234dc51edn@googlegroups.com> <9027c53b-4155-418c-9d08-ea77e5ea5bcfn@googlegroups.com> Message-ID: Hi Frederick, it helped with most of the tests! Now only 13 have failed. 
In the attachments you will find the full output from the regtests, and here is the output from a single job with TRACE enabled:

```
Loading intel/2024a
  Loading requirement: GCCcore/13.3.0 zlib/1.3.1-GCCcore-13.3.0
    binutils/2.42-GCCcore-13.3.0 intel-compilers/2024.2.0
    numactl/2.0.18-GCCcore-13.3.0 UCX/1.16.0-GCCcore-13.3.0
    impi/2021.13.0-intel-compilers-2024.2.0 imkl/2024.2.0 iimpi/2024a
    imkl-FFTW/2024.2.0-iimpi-2024a
Currently Loaded Modulefiles:
  1) GCCcore/13.3.0                  7) impi/2021.13.0-intel-compilers-2024.2.0
  2) zlib/1.3.1-GCCcore-13.3.0       8) imkl/2024.2.0
  3) binutils/2.42-GCCcore-13.3.0    9) iimpi/2024a
  4) intel-compilers/2024.2.0       10) imkl-FFTW/2024.2.0-iimpi-2024a
  5) numactl/2.0.18-GCCcore-13.3.0  11) intel/2024a
  6) UCX/1.16.0-GCCcore-13.3.0
 2 MPI processes with 2 OpenMP threads each
 started at Fri Oct 25 09:34:34 CEST 2024 in /lustre/tmp/slurm/3127182
SIRIUS 7.6.1, git hash: https://api.github.com/repos/electronic-structure/SIRIUS/git/ref/tags/v7.6.1
Warning! Compiled in 'debug' mode with assert statements enabled!
LIBXSMM_VERSION: develop-1.17-3834 (25693946)
CLX/DP    TRY    JIT    STA    COL
   0..13    8      8      0      0
  14..23    0      0      0      0
  24..64    0      0      0      0
Registry and code: 13 MB + 64 KB (gemm=8)
Command (PID=423503): /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i dftd3src1.inp -o dftd3src1.out
Uptime: 2.752513 s
===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 0 PID 423503 RUNNING AT r21c01b03
=   KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================
===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   RANK 1 PID 423504 RUNNING AT r21c01b03
=   KILLED BY SIGNAL: 9 (Killed)
===================================================================================
finished at Fri Oct 25 09:34:39 CEST 2024
```

and the last lines:

```
000000:000002<< 13 3 mp_sendrecv_dm2 0.000 Hostmem: 955 MB GPUmem: 0 MB
000000:000002>> 13 4 mp_sendrecv_dm2 start Hostmem: 955 MB GPUmem: 0 MB
000000:000002<< 13 4 mp_sendrecv_dm2 0.000 Hostmem: 955 MB GPUmem: 0 MB
000000:000002<< 12 2 pw_nn_compose_r 0.003 Hostmem: 955 MB GPUmem: 0 MB
000000:000002<< 11 1 xc_pw_derive 0.003 Hostmem: 955 MB GPUmem: 0 MB
000000:000002>> 11 5 pw_zero start Hostmem: 955 MB GPUmem: 0 MB
000000:000002<< 11 5 pw_zero 0.000 Hostmem: 955 MB GPUmem: 0 MB
000000:000002>> 11 2 xc_pw_derive start Hostmem: 955 MB GPUmem: 0 MB
000000:000002>> 12 3 pw_nn_compose_r start Hostmem: 955 MB GPUmem: 0 MB
000000:000002>> 13 5 mp_sendrecv_dm2 start Hostmem: 955 MB GPUmem: 0 MB
000000:000002<< 13 5 mp_sendrecv_dm2 0.000 Hostmem: 955 MB GPUmem: 0 MB
000000:000002>> 13 6 mp_sendrecv_dm2 start Hostmem: 955 MB GPUmem: 0 MB
000000:000002<< 13 6 mp_sendrecv_dm2 0.000 Hostmem: 955 MB GPUmem: 0 MB
000000:000002<< 12 3 pw_nn_compose_r 0.002 Hostmem: 955 MB GPUmem: 0 MB
000000:000002<< 11 2 xc_pw_derive 0.002 Hostmem: 955 MB GPUmem: 0 MB
000000:000002>> 11 6 pw_zero start Hostmem: 955 MB GPUmem: 0 MB
000000:000002<< 11 6 pw_zero 0.001 Hostmem: 960 MB GPUmem: 0 MB
000000:000002>> 11 3 xc_pw_derive start Hostmem: 960 MB GPUmem: 0 MB
000000:000002>> 12 4 pw_nn_compose_r start Hostmem: 960 MB GPUmem: 0 MB
000000:000002>> 13 7 mp_sendrecv_dm2 start Hostmem: 960 MB GPUmem: 0 MB
000000:000002<< 13 7 mp_sendrecv_dm2 0.000 Hostmem: 960 MB GPUmem: 0 MB
000000:000002>> 13 8 mp_sendrecv_dm2 start Hostmem: 960 MB GPUmem: 0 MB
000000:000002<< 13 8 mp_sendrecv_dm2 0.000 Hostmem: 960 MB GPUmem: 0 MB
000000:000002<< 12 4 pw_nn_compose_r 0.002 Hostmem: 960 MB GPUmem: 0 MB
000000:000002<< 11 3 xc_pw_derive 0.002 Hostmem: 960 MB GPUmem: 0 MB
000000:000002>> 11 1 pw_spline_scale_deriv start Hostmem: 960 MB GPUmem: 0 MB
000000:000002<< 11 1 pw_spline_scale_deriv 0.001 Hostmem: 960 MB GPUmem: 0 MB
000000:000002>> 11 20 pw_pool_give_back_pw start Hostmem: 965 MB GPUmem: 0 MB
000000:000002<< 11 20 pw_pool_give_back_pw 0.000 Hostmem: 965 MB GPUmem: 0 MB
000000:000002>> 11 21 pw_pool_give_back_pw start Hostmem: 965 MB GPUmem: 0 MB
000000:000002<< 11 21 pw_pool_give_back_pw 0.000 Hostmem: 965 MB GPUmem: 0 MB
000000:000002>> 11 22 pw_pool_give_back_pw start Hostmem: 965 MB GPUmem: 0 MB
000000:000002<< 11 22 pw_pool_give_back_pw 0.000 Hostmem: 965 MB GPUmem: 0 MB
000000:000002>> 11 23 pw_pool_give_back_pw start Hostmem: 965 MB GPUmem: 0 MB
000000:000002<< 11 23 pw_pool_give_back_pw 0.000 Hostmem: 965 MB GPUmem: 0 MB
000000:000002>> 11 1 xc_functional_eval start Hostmem: 965 MB GPUmem: 0 MB
000000:000002>> 12 1 b97_lda_eval start Hostmem: 965 MB GPUmem: 0 MB
000000:000002<< 12 1 b97_lda_eval 0.103 Hostmem: 979 MB GPUmem: 0 MB
000000:000002<< 11 1 xc_functional_eval 0.103 Hostmem: 979 MB GPUmem: 0 MB
000000:000002<< 10 1 xc_rho_set_and_dset_create 0.120 Hostmem: 979 MB GPUmem: 0 MB
000000:000002>> 10 1 check_for_derivatives start Hostmem: 979 MB GPUmem: 0 MB
000000:000002<< 10 1
check_for_derivatives 0.000 Hostmem: 979 MB GPUmem: 0 MB
000000:000002>> 10 14 pw_create_r3d start Hostmem: 979 MB GPUmem: 0 MB
000000:000002<< 10 14 pw_create_r3d 0.000 Hostmem: 979 MB GPUmem: 0 MB
000000:000002>> 10 15 pw_create_r3d start Hostmem: 979 MB GPUmem: 0 MB
000000:000002<< 10 15 pw_create_r3d 0.000 Hostmem: 979 MB GPUmem: 0 MB
000000:000002>> 10 16 pw_create_r3d start Hostmem: 979 MB GPUmem: 0 MB
000000:000002<< 10 16 pw_create_r3d 0.000 Hostmem: 979 MB GPUmem: 0 MB
000000:000002>> 10 17 pw_create_r3d start Hostmem: 979 MB GPUmem: 0 MB
000000:000002<< 10 17 pw_create_r3d 0.000 Hostmem: 979 MB GPUmem: 0 MB
```

Best
Bartosz

On Wednesday, 23 October 2024 at 09:15:33 UTC+2 Frederick Stein wrote:

> Dear Bartosz,
> My fix is merged. Can you switch to the CP2K master and try it again? We
> are still working on a few issues with the Intel compilers, such that we
> may eventually migrate from ifort to ifx.
> Best,
> Frederick
>
> bartosz mazur wrote on Tuesday, 22 October 2024 at 17:45:21 UTC+2:
>
>> Great! Thank you for your help.
>>
>> Best
>> Bartosz
>>
>> On Tuesday, 22 October 2024 at 15:24:04 UTC+2 Frederick Stein wrote:
>>
>>> I have a fix for it. In contrast to my first thought, it is a case of
>>> invalid type conversion from real to complex numbers (yes, Fortran is
>>> rather strict about it) in pw_derive. This may also be present in a few
>>> other spots. I am currently running more tests and will open a pull
>>> request within the next few days.
>>> Best,
>>> Frederick
>>>
>>> Frederick Stein wrote on Tuesday, 22 October 2024 at 13:12:49 UTC+2:
>>>
>>>> I can reproduce the error locally. I am investigating it now.
>>>>
>>>> bartosz mazur wrote on Tuesday, 22 October 2024 at 11:58:57 UTC+2:
>>>>
>>>>> I was loading it as it was needed for compilation.
I have unloaded the >>>>> module, but the error still occurs: >>>>> >>>>> ``` >>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>> CLX/DP TRY JIT STA COL >>>>> 0..13 2 2 0 0 >>>>> 14..23 0 0 0 0 >>>>> 24..64 0 0 0 0 >>>>> Registry and code: 13 MB + 16 KB (gemm=2) >>>>> Command (PID=15485): >>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>>>> H2O-9.inp -o H2O-9.out >>>>> Uptime: 1.757102 s >>>>> >>>>> >>>>> >>>>> =================================================================================== >>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>> = RANK 0 PID 15485 RUNNING AT r30c01b01 >>>>> >>>>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>>>> >>>>> =================================================================================== >>>>> >>>>> >>>>> =================================================================================== >>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>> = RANK 1 PID 15486 RUNNING AT r30c01b01 >>>>> >>>>> = KILLED BY SIGNAL: 9 (Killed) >>>>> >>>>> =================================================================================== >>>>> ``` >>>>> >>>>> >>>>> and the last 100 lines: >>>>> >>>>> ``` >>>>> 000000:000002>> 11 37 pw_create_c1d >>>>> start >>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 11 37 pw_create_c1d >>>>> 0.000 >>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 10 64 pw_pool_create_pw >>>>> 0.000 >>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 10 25 pw_copy start >>>>> Hostmem: >>>>> 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 10 25 pw_copy 0.001 >>>>> Hostmem: >>>>> 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 10 17 pw_axpy start >>>>> Hostmem: >>>>> 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 10 17 pw_axpy 0.001 >>>>> Hostmem: >>>>> 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 10 19 mp_sum_d >>>>> start Hostmem: >>>>> 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 10 19 mp_sum_d >>>>> 0.000 Hostmem: >>>>> 697 MB GPUmem: 0 MB >>>>> 
000000:000002>> 10 3 pw_poisson_solve >>>>> start >>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 11 3 >>>>> pw_poisson_rebuild s >>>>> tart Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 11 3 >>>>> pw_poisson_rebuild 0 >>>>> .000 Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 11 65 >>>>> pw_pool_create_pw st >>>>> art Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 38 pw_create_c1d >>>>> sta >>>>> rt Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 38 pw_create_c1d >>>>> 0.0 >>>>> 00 Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 11 65 >>>>> pw_pool_create_pw 0. >>>>> 000 Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 11 26 pw_copy >>>>> start Hostme >>>>> m: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 11 26 pw_copy >>>>> 0.001 Hostme >>>>> m: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 11 3 pw_multiply_with >>>>> sta >>>>> rt Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 11 3 pw_multiply_with >>>>> 0.0 >>>>> 01 Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 11 27 pw_copy >>>>> start Hostme >>>>> m: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 11 27 pw_copy >>>>> 0.001 Hostme >>>>> m: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 11 3 pw_integral_ab >>>>> start >>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 20 mp_sum_d >>>>> start Ho >>>>> stmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 20 mp_sum_d >>>>> 0.001 Ho >>>>> stmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 11 3 pw_integral_ab >>>>> 0.004 >>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 11 4 pw_poisson_set >>>>> start >>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 66 >>>>> pw_pool_create_pw >>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 13 39 >>>>> pw_create_c1d >>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 13 39 >>>>> pw_create_c1d >>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 66 >>>>> pw_pool_create_pw >>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 28 pw_copy 
>>>>> start Hos >>>>> tmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 28 pw_copy >>>>> 0.001 Hos >>>>> tmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 7 pw_derive >>>>> start H >>>>> ostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 7 pw_derive >>>>> 0.002 H >>>>> ostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 67 >>>>> pw_pool_create_pw >>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 13 40 >>>>> pw_create_c1d >>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 13 40 >>>>> pw_create_c1d >>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 67 >>>>> pw_pool_create_pw >>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 29 pw_copy >>>>> start Hos >>>>> tmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 29 pw_copy >>>>> 0.001 Hos >>>>> tmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 8 pw_derive >>>>> start H >>>>> ostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 8 pw_derive >>>>> 0.002 H >>>>> ostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 68 >>>>> pw_pool_create_pw >>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 13 41 >>>>> pw_create_c1d >>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 13 41 >>>>> pw_create_c1d >>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 68 >>>>> pw_pool_create_pw >>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 30 pw_copy >>>>> start Hos >>>>> tmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002<< 12 30 pw_copy >>>>> 0.001 Hos >>>>> tmem: 697 MB GPUmem: 0 MB >>>>> 000000:000002>> 12 9 pw_derive >>>>> start H >>>>> ostmem: 697 MB GPUmem: 0 MB >>>>> ``` >>>>> >>>>> This is the list of currently loaded modules (all come with intel): >>>>> >>>>> ``` >>>>> Currently Loaded Modulefiles: >>>>> 1) GCCcore/13.3.0 7) >>>>> impi/2021.13.0-intel-compilers-2024.2.0 >>>>> 2) zlib/1.3.1-GCCcore-13.3.0 8) imkl/2024.2.0 >>>>> >>>>> 3) binutils/2.42-GCCcore-13.3.0 9) iimpi/2024a >>>>> >>>>> 4) intel-compilers/2024.2.0 10) 
imkl-FFTW/2024.2.0-iimpi-2024a >>>>> >>>>> 5) numactl/2.0.18-GCCcore-13.3.0 11) intel/2024a >>>>> >>>>> 6) UCX/1.16.0-GCCcore-13.3.0 >>>>> ``` >>>>> wtorek, 22 pa?dziernika 2024 o 11:12:57 UTC+2 Frederick Stein >>>>> napisa?(a): >>>>> >>>>>> Dear Bartosz, >>>>>> I am currently running some tests with the latest Intel compiler >>>>>> myself. What bothers me about your setup is the module GCC13/13.3.0 . Why >>>>>> is it loaded? Can you unload it? This would at least reduce potential >>>>>> interferences with between the Intel and the GCC compilers. >>>>>> Best, >>>>>> Frederick >>>>>> >>>>>> bartosz mazur schrieb am Montag, 21. Oktober 2024 um 16:33:45 UTC+2: >>>>>> >>>>>>> The error for ssmp is: >>>>>>> >>>>>>> ``` >>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>>>> CLX/DP TRY JIT STA COL >>>>>>> 0..13 4 4 0 0 >>>>>>> 14..23 0 0 0 0 >>>>>>> 24..64 0 0 0 0 >>>>>>> Registry and code: 13 MB + 32 KB (gemm=4) >>>>>>> Command (PID=54845): >>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>>>> H2O-9.inp -o H2O-9.out >>>>>>> Uptime: 2.861583 s >>>>>>> /var/spool/slurmd/r30c01b15/job3120330/slurm_script: line 36: 54845 >>>>>>> Segmentation fault (core dumped) >>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>>>> H2O-9.inp -o H2O-9.out >>>>>>> ``` >>>>>>> >>>>>>> and the last 100 lines of output: >>>>>>> >>>>>>> ``` >>>>>>> 000000:000001>> 12 20 mp_sum_d >>>>>>> start Ho >>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>> 000000:000001<< 12 20 mp_sum_d >>>>>>> 0.000 Ho >>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>> 000000:000001<< 11 13 dbcsr_dot_sd >>>>>>> 0.000 H >>>>>>> ostmem: 380 MB GPUmem: 0 MB >>>>>>> 000000:000001<< 10 12 >>>>>>> calculate_ptrace_kp 0.0 >>>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>>> 000000:000001<< 9 6 >>>>>>> evaluate_core_matrix_traces >>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>> 000000:000001>> 9 6 rebuild_ks_matrix >>>>>>> start Ho >>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>> 
000000:000001>> 10 6 qs_ks_build_kohn_sham_matrix start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 11 140 pw_pool_create_pw start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 12 79 pw_create_c1d start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 12 79 pw_create_c1d 0.000 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 11 140 pw_pool_create_pw 0.000 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 11 141 pw_pool_create_pw start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 12 80 pw_create_c1d start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 12 80 pw_create_c1d 0.000 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 11 141 pw_pool_create_pw 0.000 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 11 61 pw_copy start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 11 61 pw_copy 0.004 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 11 35 pw_axpy start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 11 35 pw_axpy 0.002 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 11 6 pw_poisson_solve start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 12 6 pw_poisson_rebuild start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 12 6 pw_poisson_rebuild 0.000 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 12 142 pw_pool_create_pw start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 13 81 pw_create_c1d start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 13 81 pw_create_c1d 0.000 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 12 142 pw_pool_create_pw 0.000 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 12 62 pw_copy start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 12 62 pw_copy 0.003 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 12 6 pw_multiply_with start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 12 6 pw_multiply_with 0.002 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 12 63 pw_copy start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 12 63 pw_copy 0.003 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 12 6 pw_integral_ab start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 12 6 pw_integral_ab 0.005 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 12 7 pw_poisson_set start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 13 143 pw_pool_create_pw start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 14 82 pw_create_c1d start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 14 82 pw_create_c1d 0.000 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 13 143 pw_pool_create_pw 0.000 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 13 64 pw_copy start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 13 64 pw_copy 0.003 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 13 16 pw_derive start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 13 16 pw_derive 0.006 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 13 144 pw_pool_create_pw start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 14 83 pw_create_c1d start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 14 83 pw_create_c1d 0.000 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 13 144 pw_pool_create_pw 0.000 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 13 65 pw_copy start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001<< 13 65 pw_copy 0.004 Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> 000000:000001>> 13 17 pw_derive start Hostmem: 380 MB GPUmem: 0 MB
>>>>>>> ```
>>>>>>>
>>>>>>> for psmp the last 100 lines are:
>>>>>>>
>>>>>>> ```
>>>>>>> 000000:000002<< 9 7 evaluate_core_matrix_traces 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 9 7 rebuild_ks_matrix start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 10 7 qs_ks_build_kohn_sham_matrix start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 11 164 pw_pool_create_pw start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 12 93 pw_create_c1d start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 12 93 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 11 164 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 11 165 pw_pool_create_pw start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 12 94 pw_create_c1d start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 12 94 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 11 165 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 11 73 pw_copy start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 11 73 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 11 41 pw_axpy start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 11 41 pw_axpy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 11 52 mp_sum_d start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 11 52 mp_sum_d 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 11 7 pw_poisson_solve start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 12 7 pw_poisson_rebuild start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 12 7 pw_poisson_rebuild 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 12 166 pw_pool_create_pw start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 13 95 pw_create_c1d start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 13 95 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 12 166 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 12 74 pw_copy start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 12 74 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 12 7 pw_multiply_with start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 12 7 pw_multiply_with 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 12 75 pw_copy start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 12 75 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 12 7 pw_integral_ab start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 13 53 mp_sum_d start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 13 53 mp_sum_d 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 12 7 pw_integral_ab 0.003 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 12 8 pw_poisson_set start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 13 167 pw_pool_create_pw start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 14 96 pw_create_c1d start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 14 96 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 13 167 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 13 76 pw_copy start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 13 76 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 13 19 pw_derive start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 13 19 pw_derive 0.002 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 13 168 pw_pool_create_pw start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 14 97 pw_create_c1d start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 14 97 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 13 168 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 13 77 pw_copy start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002<< 13 77 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> 000000:000002>> 13 20 pw_derive start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>> ```
>>>>>>>
>>>>>>> Thanks
>>>>>>> Bartosz
>>>>>>>
>>>>>>> On Monday, 21 October 2024 at 08:58:34 UTC+2, Frederick Stein wrote:
>>>>>>>
>>>>>>>> Dear Bartosz,
>>>>>>>> I have no idea about the issue with LibXSMM.
>>>>>>>> Regarding the trace, I do not know either, as there is not much that could break in pw_derive (it just performs multiplications) and the sequence of operations is too unspecific. It may be that the code actually breaks somewhere else. Can you do the same with the ssmp and post the last 100 lines? This way, we remove the asynchronicity issues for backtraces with the psmp version.
>>>>>>>> Best,
>>>>>>>> Frederick
>>>>>>>>
>>>>>>>> bartosz mazur wrote on Sunday, 20 October 2024 at 16:47:15 UTC+2:
>>>>>>>>
>>>>>>>>> The error is:
>>>>>>>>>
>>>>>>>>> ```
>>>>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946)
>>>>>>>>> CLX/DP TRY JIT STA COL
>>>>>>>>> 0..13 2 2 0 0
>>>>>>>>> 14..23 0 0 0 0
>>>>>>>>> 24..64 0 0 0 0
>>>>>>>>> Registry and code: 13 MB + 16 KB (gemm=2)
>>>>>>>>> Command (PID=2607388): /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i H2O-9.inp -o H2O-9.out
>>>>>>>>> Uptime: 5.288243 s
>>>>>>>>>
>>>>>>>>> ===================================================================================
>>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>>>>>>>> = RANK 0 PID 2607388 RUNNING AT r21c01b10
>>>>>>>>> = KILLED BY SIGNAL: 11 (Segmentation fault)
>>>>>>>>> ===================================================================================
>>>>>>>>>
>>>>>>>>> ===================================================================================
>>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>>>>>>>> = RANK 1 PID 2607389 RUNNING AT r21c01b10
>>>>>>>>> = KILLED BY SIGNAL: 9 (Killed)
>>>>>>>>> ===================================================================================
>>>>>>>>> ```
>>>>>>>>>
>>>>>>>>> and the last 20 lines:
>>>>>>>>>
>>>>>>>>> ```
>>>>>>>>> 000000:000002<< 13 76 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>> 000000:000002>> 13 19 pw_derive start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>> 000000:000002<< 13 19 pw_derive 0.002 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>> 000000:000002>> 13 168 pw_pool_create_pw start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>> 000000:000002>> 14 97 pw_create_c1d start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>> 000000:000002<< 14 97 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>> 000000:000002<< 13 168 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>> 000000:000002>> 13 77 pw_copy start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>> 000000:000002<< 13 77 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>> 000000:000002>> 13 20 pw_derive start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>> ```
>>>>>>>>>
>>>>>>>>> Thanks!
>>>>>>>>> On Friday, 18 October 2024 at 17:18:39 UTC+2, Frederick Stein wrote:
>>>>>>>>>
>>>>>>>>>> Please pick one of the failing tests. Then, add the TRACE keyword to the &GLOBAL section and run the test manually. This increases the size of the output file dramatically (to several million lines). Can you send me the last ~20 lines of the output?
>>>>>>>>>> bartosz mazur wrote on Friday, 18 October 2024 at 17:09:40 UTC+2:
>>>>>>>>>>
>>>>>>>>>>> I'm using the do_regtests.py script, not `make regtesting`, but I assume it makes no difference. As I mentioned in the previous message, with `--ompthreads 1` all tests passed, both for ssmp and psmp. For ssmp with `--ompthreads 2` I observe errors similar to those for psmp with the same setting; I provide an example output as an attachment.
>>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>>> Bartosz
>>>>>>>>>>>
>>>>>>>>>>> On Friday, 18 October 2024 at 16:24:16 UTC+2, Frederick Stein wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Dear Bartosz,
>>>>>>>>>>>> What happens if you set the number of OpenMP threads to 1 (add '--ompthreads 1' to TESTOPTS)? What errors do you observe in the case of the ssmp?
>>>>>>>>>>>> Best,
>>>>>>>>>>>> Frederick
>>>>>>>>>>>>
>>>>>>>>>>>> bartosz mazur wrote on Friday, 18 October 2024 at 15:37:43 UTC+2:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Frederick,
>>>>>>>>>>>>>
>>>>>>>>>>>>> thanks again for the help. I have tested different simulation variants and I now know that the problem occurs when using OMP. For MPI calculations without OMP, all tests pass. I have also tested the effect of the `OMP_PROC_BIND` and `OMP_PLACES` parameters and, apart from the effect on simulation time, they have no significant effect on the presence of errors.
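For readers following the thread: the affinity combinations tabulated below are controlled by the standard OpenMP environment variables, set before launching the run. A minimal shell sketch; the `echo` is a placeholder standing in for the actual do_regtests.py or cp2k invocation, which is not reproduced here:

```shell
# Select one (OMP_PROC_BIND, OMP_PLACES) combination from the table below.
export OMP_NUM_THREADS=2
export OMP_PROC_BIND=close
export OMP_PLACES=cores
# Placeholder for the real test command (e.g. do_regtests.py with TESTOPTS):
echo "running with OMP_PROC_BIND=$OMP_PROC_BIND OMP_PLACES=$OMP_PLACES"
```

Unsetting both variables reproduces the "not specified" row in the psmp table.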
>>>>>>>>>>>>> Below are the results for ssmp:
>>>>>>>>>>>>>
>>>>>>>>>>>>> ```
>>>>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time
>>>>>>>>>>>>> spread, threads, 3850, 4144, 4, 290, 186min
>>>>>>>>>>>>> spread, cores, 3831, 4144, 3, 310, 183min
>>>>>>>>>>>>> spread, sockets, 3864, 4144, 3, 277, 104min
>>>>>>>>>>>>> close, threads, 3879, 4144, 3, 262, 171min
>>>>>>>>>>>>> close, cores, 3854, 4144, 0, 290, 168min
>>>>>>>>>>>>> close, sockets, 3865, 4144, 3, 276, 104min
>>>>>>>>>>>>> master, threads, 4121, 4144, 0, 23, 1002min
>>>>>>>>>>>>> master, cores, 4121, 4144, 0, 23, 986min
>>>>>>>>>>>>> master, sockets, 3942, 4144, 3, 199, 219min
>>>>>>>>>>>>> false, threads, 3918, 4144, 0, 226, 178min
>>>>>>>>>>>>> false, cores, 3919, 4144, 3, 222, 176min
>>>>>>>>>>>>> false, sockets, 3856, 4144, 4, 284, 104min
>>>>>>>>>>>>> ```
>>>>>>>>>>>>>
>>>>>>>>>>>>> and psmp:
>>>>>>>>>>>>>
>>>>>>>>>>>>> ```
>>>>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, results
>>>>>>>>>>>>> spread, threads, Summary: correct: 4097 / 4227; failed: 130; 495min
>>>>>>>>>>>>> spread, cores, 26 / 362
>>>>>>>>>>>>> spread, cores, 26 / 362
>>>>>>>>>>>>> close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min
>>>>>>>>>>>>> close, cores, 60 / 362
>>>>>>>>>>>>> close, sockets, 13 / 362
>>>>>>>>>>>>> master, threads, 13 / 362
>>>>>>>>>>>>> master, cores, 79 / 362
>>>>>>>>>>>>> master, sockets, Summary: correct: 4153 / 4227; failed: 74; 563min
>>>>>>>>>>>>> false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min
>>>>>>>>>>>>> false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min
>>>>>>>>>>>>> false, sockets, 96 / 362
>>>>>>>>>>>>> not specified, not specified, Summary: correct: 4129 / 4227; failed: 98; 263min
>>>>>>>>>>>>> ```
>>>>>>>>>>>>>
>>>>>>>>>>>>> Any ideas what I could do next to get more information about the source of the problem, or maybe you already see a potential solution at this stage? I would appreciate any further help.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Best
>>>>>>>>>>>>> Bartosz
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Friday, 11 October 2024 at 14:30:25 UTC+2, Frederick Stein wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Dear Bartosz,
>>>>>>>>>>>>>> If I am not mistaken, you used 8 OpenMP threads. The tests do not run that efficiently with such a large number of threads; 2 should be sufficient.
>>>>>>>>>>>>>> The test results suggest that most of the functionality may work, but due to a missing backtrace (or similar information), it is hard to tell why they fail. You could also try to run some of the single-node tests to assess the stability of CP2K.
>>>>>>>>>>>>>> Best,
>>>>>>>>>>>>>> Frederick
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> bartosz mazur wrote on Friday, 11 October 2024 at 13:48:42 UTC+2:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Sorry, forgot attachments.
>>>>>>>>>>>>>>>
-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/cp2k/dce4ef50-6f5f-414f-b60c-bc468afa0827n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: regtests.out.zip Type: application/x-zip Size: 105823 bytes Desc: not available URL: From bamaz.97 at gmail.com Fri Oct 25 08:07:03 2024 From: bamaz.97 at gmail.com (bartosz mazur) Date: Fri, 25 Oct 2024 01:07:03 -0700 (PDT) Subject: [CP2K-user] [CP2K:20813] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> <3c0b331a-fe25-4176-8692-ff5d7c466c44n@googlegroups.com> <9c503856-0751-48b3-8bfb-e37cf5cb91a6n@googlegroups.com> <024be246-9696-4577-a330-f5a234dc51edn@googlegroups.com> <9027c53b-4155-418c-9d08-ea77e5ea5bcfn@googlegroups.com> Message-ID: <7042b62f-62de-43ad-ad94-b940977c9e2an@googlegroups.com> I just got another error with LibXSMM, now in my regular simulation and without using OpenMP. 
This is the error: ``` [1729843139.920274] [r23c01b04:2913 :0] ib_md.c:295 UCX ERROR ibv_reg_mr(address=0x14f0b46fc080, length=7424, access=0xf) failed: Cannot allocate memory [1729843139.920290] [r23c01b04:2913 :0] ucp_mm.c:70 UCX ERROR failed to register address 0x14f0b46fc080 (host) length 7424 on md[4]=mlx5_0: Input/output error (md supports: host) LIBXSMM_VERSION: develop-1.17-3834 (25693946)[1729843139.932647] [r23c01b04:2945 :0] ib_md.c:295 UCX ERROR ibv_reg_mr(address=0x1491f069e040, length=8128, access=0xf) failed: Cannot allocate memory [1729843139.932660] [r23c01b04:2945 :0] ucp_mm.c:70 UCX ERROR failed to register address 0x1491f069e040 (host) length 8128 on md[4]=mlx5_0: Input/output error (md supports: host) CLX/DP TRY JIT STA COL 0..13 4 4 0 0 14..23 4 4 0 0 24..64 0 0 0 0 Registry and code: 13 MB + 80 KB (gemm=8) Command (PID=2913): /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i cp2k.inp -o cp2k.out Uptime: 407633.177169 s ``` and this is simulation input I'm using: ``` &GLOBAL PROJECT uam1o_npt_rms RUN_TYPE MD PRINT_LEVEL LOW PREFERRED_DIAG_LIBRARY SCALAPACK &END GLOBAL &FORCE_EVAL METHOD QUICKSTEP STRESS_TENSOR ANALYTICAL &DFT BASIS_SET_FILE_NAME BASIS_MOLOPT_UZH POTENTIAL_FILE_NAME POTENTIAL_UZH &MGRID CUTOFF 500 &END MGRID &XC &XC_FUNCTIONAL PBE &END XC_FUNCTIONAL &VDW_POTENTIAL POTENTIAL_TYPE PAIR_POTENTIAL &PAIR_POTENTIAL TYPE DFTD3(BJ) PARAMETER_FILE_NAME dftd3.dat REFERENCE_FUNCTIONAL PBE R_CUTOFF 25.0 &END PAIR_POTENTIAL &END VDW_POTENTIAL &END XC &END DFT &SUBSYS &CELL A 12.2807999 0.0000000 0.0000000 B 7.6258602 9.6257200 0.0000000 C -2.1557724 -1.0420258 18.0042801 &END CELL &COORD Zn 11.37811 4.60286 0.24515 Zn 8.15435 3.05288 8.74518 Zn 6.37590 3.97311 17.74650 Zn 9.59842 5.54014 9.24747 S 11.79344 6.72692 17.10850 S 4.06825 3.00573 9.90358 S 5.95830 1.84422 0.90027 S 13.67407 5.58944 8.10767 O 10.72408 3.58291 1.89315 O 8.51986 4.01962 1.53085 O 6.60135 3.91587 7.68572 O 7.74637 5.79259 8.21600 O 15.32810 
8.58246 5.10041 O 9.35608 2.93551 7.09500 O 10.38999 4.93007 7.45977 O 11.66491 6.35111 1.31266 O 9.48582 6.62478 0.77364 O 2.59062 2.40094 3.91496 O 7.03031 4.99173 16.09885 O 9.23544 4.56122 16.46252 O 11.14602 4.67776 10.31440 O 10.00982 2.79915 9.77218 O 2.41388 0.01898 12.91899 O 8.39375 5.66143 10.89628 O 7.36998 3.66087 10.53589 O 6.08863 2.22161 16.68336 O 8.26988 1.95313 17.21650 O 15.16937 6.16381 14.09906 N 13.25907 3.80728 0.04001 N 2.36335 -0.74130 17.33402 N 7.60676 1.08576 8.95623 N 15.77729 5.75974 9.67861 N 4.49430 4.76652 17.95756 N 15.38873 9.31230 0.67467 N 10.14308 7.50848 9.04236 N 1.96529 2.83557 8.33233 C 6.76554 5.18292 7.68414 C 14.28210 4.11624 0.86006 C 9.47998 3.39622 2.09658 C 3.20112 3.42080 0.84626 C 9.91466 1.18589 3.17244 C 9.08210 2.29987 3.02657 C 5.74710 6.04945 7.01821 C 7.83265 2.30920 3.66005 C 3.35793 2.34328 -0.04029 C 4.51663 1.46385 -0.02755 C 16.24194 7.75266 5.73606 C 4.78940 5.52817 6.14198 C 7.40810 1.21174 4.39947 C 16.18016 6.38244 5.49010 C 9.48869 0.06986 3.88005 C 11.27238 1.77457 17.14330 C 5.77166 7.43009 7.27236 C 11.14819 8.24901 17.58588 C 8.22170 0.08058 4.47135 C 0.15087 1.02286 17.07544 C 17.16180 8.28565 6.64351 C 10.57067 7.01060 1.31282 C 6.72654 0.47459 8.14002 C 10.27972 3.79035 6.89470 C 14.15006 8.72843 8.15880 C 11.73751 2.06868 5.82537 C 11.38838 3.41515 5.96966 C 10.52304 8.34339 1.98566 C 12.16584 4.39562 5.33967 C 14.89762 7.93801 9.04648 C 14.86698 6.48365 9.03575 C 2.67167 1.17044 3.27681 C 11.52468 8.76552 2.86608 C 13.29140 4.04007 4.60622 C 3.78230 0.36534 3.52266 C 12.87823 1.70260 5.12344 C 8.27761 0.34001 9.85941 C 9.42677 9.18364 1.73295 C 3.27553 4.45658 9.42657 C 13.66559 2.69775 4.53650 C 15.77023 8.59069 9.93240 C 1.68356 0.78491 2.36643 C 10.98451 3.41041 10.31327 C 3.46873 4.45681 17.14097 C 8.27403 5.18373 15.89814 C 14.54907 5.15099 17.15930 C 7.83119 7.39584 14.82858 C 8.66916 6.28563 14.97331 C 11.99928 2.54577 10.98702 C 9.92072 6.28547 14.34388 C 16.54982 7.26986 0.04271 C 
15.39103 8.14919 0.03189 C 1.50023 0.84646 12.27989 C 12.95126 3.06908 11.86817 C 10.34198 7.38826 13.61070 C 1.55836 2.21699 12.52561 C 8.25354 8.51697 14.12666 C 6.48249 6.79770 0.85630 C 11.97760 1.16465 10.73446 C 6.60385 0.32218 0.42301 C 9.52282 8.51550 13.54043 C 17.60321 7.54791 0.92891 C 0.58530 0.31102 11.36884 C 7.18362 1.56332 16.68291 C 11.01926 8.11905 9.86341 C 7.47582 4.80132 11.10039 C 3.59282 -0.13430 9.84955 C 6.01179 6.51430 12.17471 C 6.36853 5.17005 12.02942 C 7.23131 0.22715 16.01652 C 5.59963 4.18477 12.66234 C 2.84614 0.65728 8.96213 C 2.87561 2.11161 8.97508 C 15.08536 7.39548 14.73440 C 6.23001 -0.19920 15.13769 C 4.47482 4.53325 13.40042 C 13.97400 8.19851 14.48576 C 4.87173 6.87322 12.88120 C 9.47231 8.25578 8.14046 C 8.32790 -0.61137 16.27301 C 14.46698 4.13864 8.58475 C 4.09294 5.87331 13.47165 C 1.97640 0.00563 8.07267 C 16.07240 7.78504 15.64417 H 14.10215 4.93465 1.55678 H 3.98110 3.68721 1.55899 H 10.89072 1.19647 2.69205 H 7.19958 3.19021 3.56839 H 4.75923 4.45384 5.96230 H 6.45299 1.21835 4.92062 H 15.44211 6.00062 4.78824 H 17.75043 8.81610 3.97156 H 10.41563 1.57993 16.49923 H 6.49332 7.81303 7.99143 H 0.24800 0.19739 16.37425 H 9.53586 -0.26872 6.84508 H 6.19685 1.12218 7.44173 H 13.45550 8.28133 7.44815 H 11.11633 1.31384 6.30260 H 11.87413 5.44074 5.42962 H 12.38442 8.12016 3.04474 H 13.88694 4.78876 4.08791 H 4.53915 0.70283 4.22717 H 0.88557 0.65625 5.03328 H 8.96418 0.89159 10.50060 H 8.67994 8.85961 1.01083 H 16.35704 8.00331 10.63471 H 13.12606 1.45212 2.16563 H 3.64702 3.63930 16.44281 H 13.76743 4.88477 16.44833 H 6.85355 7.37827 15.30535 H 10.55820 5.40745 14.43410 H 12.97886 4.14375 12.04672 H 11.29905 7.38966 13.09313 H 2.29216 2.60091 13.23073 H -0.01303 -0.23279 14.03603 H 7.34113 6.99275 1.49776 H 11.26049 0.78023 10.01184 H 17.50743 8.37258 1.63130 H 8.21398 8.86531 11.16822 H 11.54834 7.47018 10.56097 H 4.28503 0.31205 10.56295 H 6.62643 7.27289 11.69479 H 5.89748 3.14154 12.57118 H 5.36986 0.44461 14.95599 H 
3.88656 3.78035 13.92095
H 13.21826 7.85764 13.78163
H 16.85773 7.91771 12.97237
H 8.78884 7.70469 7.49554
H 9.07452 -0.28399 16.99402
H 1.39009 0.59398 7.37083
H 4.63062 7.11938 15.84758
    &END COORD
    &KIND Zn
      BASIS_SET TZVP-MOLOPT-PBE-GTH-q12
      POTENTIAL GTH-PBE-q12
    &END KIND
    &KIND S
      BASIS_SET TZVP-MOLOPT-PBE-GTH-q6
      POTENTIAL GTH-PBE-q6
    &END KIND
    &KIND O
      BASIS_SET TZVP-MOLOPT-PBE-GTH-q6
      POTENTIAL GTH-PBE-q6
    &END KIND
    &KIND N
      BASIS_SET TZVP-MOLOPT-PBE-GTH-q5
      POTENTIAL GTH-PBE-q5
    &END KIND
    &KIND C
      BASIS_SET TZVP-MOLOPT-PBE-GTH-q4
      POTENTIAL GTH-PBE-q4
    &END KIND
    &KIND H
      BASIS_SET TZVP-MOLOPT-PBE-GTH-q1
      POTENTIAL GTH-PBE-q1
    &END KIND
  &END SUBSYS
&END FORCE_EVAL
&MOTION
  &MD
    ENSEMBLE NPT_I
    TEMPERATURE 298
    TIMESTEP 1.0
    STEPS 50000
    &THERMOSTAT
      TYPE NOSE
      &NOSE
        LENGTH 3
        YOSHIDA 3
        TIMECON 1000
      &END NOSE
    &END THERMOSTAT
    &BAROSTAT
      PRESSURE 1.0
      TIMECON 4000
    &END BAROSTAT
  &END MD
  &FREE_ENERGY
    METHOD METADYN
    &METADYN
      USE_PLUMED .TRUE.
      PLUMED_INPUT_FILE plumed.dat
    &END METADYN
  &END FREE_ENERGY
  &PRINT
    &TRAJECTORY
      &EACH
        MD 5
      &END EACH
    &END TRAJECTORY
    &FORCES
      UNIT eV*angstrom^-1
      &EACH
        MD 5
      &END EACH
    &END FORCES
    &CELL
      &EACH
        MD 5
      &END EACH
    &END CELL
  &END PRINT
&END MOTION
```

This simulation was performed with the previous version of cp2k (so without your fix).

On Friday, 25 October 2024 at 09:50:47 UTC+2, bartosz mazur wrote:

> Hi Frederick,
>
> it helped with most of the tests! Now only 13 have failed.
> In the attachments you will find the full output from the regtests, and here is the output from a single job with TRACE enabled:
>
> ```
> Loading intel/2024a
>   Loading requirement: GCCcore/13.3.0 zlib/1.3.1-GCCcore-13.3.0 binutils/2.42-GCCcore-13.3.0 intel-compilers/2024.2.0 numactl/2.0.18-GCCcore-13.3.0 UCX/1.16.0-GCCcore-13.3.0 impi/2021.13.0-intel-compilers-2024.2.0 imkl/2024.2.0 iimpi/2024a imkl-FFTW/2024.2.0-iimpi-2024a
>
> Currently Loaded Modulefiles:
>  1) GCCcore/13.3.0                  7) impi/2021.13.0-intel-compilers-2024.2.0
>  2) zlib/1.3.1-GCCcore-13.3.0       8) imkl/2024.2.0
>  3) binutils/2.42-GCCcore-13.3.0    9) iimpi/2024a
>  4) intel-compilers/2024.2.0       10) imkl-FFTW/2024.2.0-iimpi-2024a
>  5) numactl/2.0.18-GCCcore-13.3.0  11) intel/2024a
>  6) UCX/1.16.0-GCCcore-13.3.0
> 2 MPI processes with 2 OpenMP threads each
> started at Fri Oct 25 09:34:34 CEST 2024 in /lustre/tmp/slurm/3127182
> SIRIUS 7.6.1, git hash: https://api.github.com/repos/electronic-structure/SIRIUS/git/ref/tags/v7.6.1
> Warning! Compiled in 'debug' mode with assert statements enabled!
>
> LIBXSMM_VERSION: develop-1.17-3834 (25693946)
> CLX/DP TRY JIT STA COL
> 0..13 8 8 0 0
> 14..23 0 0 0 0
> 24..64 0 0 0 0
> Registry and code: 13 MB + 64 KB (gemm=8)
> Command (PID=423503): /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i dftd3src1.inp -o dftd3src1.out
> Uptime: 2.752513 s
>
> ===================================================================================
> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> = RANK 0 PID 423503 RUNNING AT r21c01b03
> = KILLED BY SIGNAL: 11 (Segmentation fault)
> ===================================================================================
>
> ===================================================================================
> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> = RANK 1 PID 423504 RUNNING AT r21c01b03
> = KILLED BY SIGNAL: 9 (Killed)
> ===================================================================================
> finished at Fri Oct 25 09:34:39 CEST 2024
> ```
>
> and the last lines:
>
> ```
> 000000:000002<< 13 3 mp_sendrecv_dm2 0.000 Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002>> 13 4 mp_sendrecv_dm2 start Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002<< 13 4 mp_sendrecv_dm2 0.000 Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002<< 12 2 pw_nn_compose_r 0.003 Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002<< 11 1 xc_pw_derive 0.003 Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002>> 11 5 pw_zero start Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002<< 11 5 pw_zero 0.000 Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002>> 11 2 xc_pw_derive start Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002>> 12 3 pw_nn_compose_r start Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002>> 13 5 mp_sendrecv_dm2 start Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002<< 13 5 mp_sendrecv_dm2 0.000 Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002>> 13 6 mp_sendrecv_dm2 start Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002<< 13 6 mp_sendrecv_dm2 0.000 Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002<< 12 3 pw_nn_compose_r 0.002 Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002<< 11 2 xc_pw_derive 0.002 Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002>> 11 6 pw_zero start Hostmem: 955 MB GPUmem: 0 MB
> 000000:000002<< 11 6 pw_zero 0.001 Hostmem: 960 MB GPUmem: 0 MB
> 000000:000002>> 11 3 xc_pw_derive start Hostmem: 960 MB GPUmem: 0 MB
> 000000:000002>> 12 4 pw_nn_compose_r start Hostmem: 960 MB GPUmem: 0 MB
> 000000:000002>> 13 7 mp_sendrecv_dm2 start Hostmem: 960 MB GPUmem: 0 MB
> 000000:000002<< 13 7 mp_sendrecv_dm2 0.000 Hostmem: 960 MB GPUmem: 0 MB
> 000000:000002>> 13 8 mp_sendrecv_dm2 start Hostmem: 960 MB GPUmem: 0 MB
> 000000:000002<< 13 8 mp_sendrecv_dm2 0.000 Hostmem: 960 MB GPUmem: 0 MB
> 000000:000002<< 12 4 pw_nn_compose_r 0.002 Hostmem: 960 MB GPUmem: 0 MB
> 000000:000002<< 11 3 xc_pw_derive 0.002 Hostmem: 960 MB GPUmem: 0 MB
> 000000:000002>> 11 1 pw_spline_scale_deriv start Hostmem: 960 MB GPUmem: 0 MB
> 000000:000002<< 11 1 pw_spline_scale_deriv 0.001 Hostmem: 960 MB GPUmem: 0 MB
> 000000:000002>> 11 20 pw_pool_give_back_pw start Hostmem: 965 MB GPUmem: 0 MB
> 000000:000002<< 11 20 pw_pool_give_back_pw 0.000 Hostmem: 965 MB GPUmem: 0 MB
> 000000:000002>> 11 21 pw_pool_give_back_pw start Hostmem: 965 MB GPUmem: 0 MB
> 000000:000002<< 11 21 pw_pool_give_back_pw 0.000 Hostmem: 965 MB GPUmem: 0 MB
> 000000:000002>> 11 22 pw_pool_give_back_pw start Hostmem: 965 MB GPUmem: 0 MB
> 000000:000002<< 11 22 pw_pool_give_back_pw 0.000 Hostmem: 965 MB GPUmem: 0 MB
> 000000:000002>> 11 23 pw_pool_give_back_pw start Hostmem: 965 MB GPUmem: 0 MB
> 000000:000002<< 11 23 pw_pool_give_back_pw 0.000 Hostmem: 965 MB GPUmem: 0 MB
> 000000:000002>> 11 1 xc_functional_eval start Hostmem: 965 MB GPUmem: 0 MB
> 000000:000002>> 12 1 b97_lda_eval start Hostmem: 965 MB GPUmem: 0 MB
> 000000:000002<< 12 1 b97_lda_eval 0.103 Hostmem: 979 MB GPUmem: 0 MB
> 000000:000002<< 11 1 xc_functional_eval 0.103 Hostmem: 979 MB GPUmem: 0 MB
> 000000:000002<< 10 1 xc_rho_set_and_dset_create 0.120 Hostmem: 979 MB GPUmem: 0 MB
> 000000:000002>> 10 1 check_for_derivatives start Hostmem: 979 MB GPUmem: 0 MB
> 000000:000002<< 10 1 check_for_derivatives 0.000 Hostmem: 979 MB GPUmem: 0 MB
> 000000:000002>> 10 14 pw_create_r3d start Hostmem: 979 MB GPUmem: 0 MB
> 000000:000002<< 10 14 pw_create_r3d 0.000 Hostmem: 979 MB GPUmem: 0 MB
> 000000:000002>> 10 15 pw_create_r3d start Hostmem: 979 MB GPUmem: 0 MB
> 000000:000002<< 10 15 pw_create_r3d 0.000 Hostmem: 979 MB GPUmem: 0 MB
> 000000:000002>> 10 16 pw_create_r3d start Hostmem: 979 MB GPUmem: 0 MB
> 000000:000002<< 10 16 pw_create_r3d 0.000 Hostmem: 979 MB GPUmem: 0 MB
> 000000:000002>> 10 17 pw_create_r3d start Hostmem: 979 MB GPUmem: 0 MB
> 000000:000002<< 10 17 pw_create_r3d 0.000 Hostmem: 979 MB GPUmem: 0 MB
> ```
>
> Best
> Bartosz
>
> On Wednesday, 23 October 2024 at 09:15:33 UTC+2, Frederick Stein wrote:
>
>> Dear Bartosz,
>> My fix is merged. Can you switch to the CP2K master and try it again? We are still working on a few issues with the Intel compilers, such that we may eventually migrate from ifort to ifx.
>> Best,
>> Frederick
>>
>> bartosz mazur wrote on Tuesday, 22 October 2024 at 17:45:21 UTC+2:
>>
>>> Great! Thank you for your help.
>>>
>>> Best
>>> Bartosz
>>>
>>> On Tuesday, 22 October 2024 at 15:24:04 UTC+2, Frederick Stein wrote:
>>>
>>>> I have a fix for it. In contrast to my first thought, it is a case of invalid type conversion from real to complex numbers (yes, Fortran is rather strict about it) in pw_derive. This may also be present in a few other spots. I am currently running more tests and I will open a pull request within the next few days.
>>>> Best,
>>>> Frederick
>>>>
>>>> Frederick Stein wrote on Tuesday, 22 October 2024 at 13:12:49 UTC+2:
>>>>
>>>>> I can reproduce the error locally. I am investigating it now.
>>>>>
>>>>> bartosz mazur wrote on Tuesday, 22 October 2024 at 11:58:57 UTC+2:
>>>>>
>>>>>> I was loading it as it was needed for compilation. I have unloaded the module, but the error still occurs:
>>>>>>
>>>>>> ```
>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946)
>>>>>> CLX/DP TRY JIT STA COL
>>>>>> 0..13 2 2 0 0
>>>>>> 14..23 0 0 0 0
>>>>>> 24..64 0 0 0 0
>>>>>> Registry and code: 13 MB + 16 KB (gemm=2)
>>>>>> Command (PID=15485): /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i H2O-9.inp -o H2O-9.out
>>>>>> Uptime: 1.757102 s
>>>>>>
>>>>>> ===================================================================================
>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>>>>> = RANK 0 PID 15485 RUNNING AT r30c01b01
>>>>>> = KILLED BY SIGNAL: 11 (Segmentation fault)
>>>>>> ===================================================================================
>>>>>>
>>>>>> ===================================================================================
>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>>>>> = RANK 1 PID 15486 RUNNING AT r30c01b01
>>>>>> = KILLED BY SIGNAL: 9 (Killed)
>>>>>> ===================================================================================
>>>>>> ```
>>>>>>
>>>>>> and the last 100 lines:
>>>>>>
>>>>>> ```
>>>>>> 000000:000002>> 11 37 pw_create_c1d start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 11 37 pw_create_c1d 0.000 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 10 64 pw_pool_create_pw 0.000 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 10 25 pw_copy start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 10 25 pw_copy 0.001 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 10 17 pw_axpy start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 10 17 pw_axpy 0.001 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 10 19 mp_sum_d start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 10 19 mp_sum_d 0.000 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 10 3 pw_poisson_solve start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 11 3 pw_poisson_rebuild start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 11 3 pw_poisson_rebuild 0.000 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 11 65 pw_pool_create_pw start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 12 38 pw_create_c1d start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 12 38 pw_create_c1d 0.000 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 11 65 pw_pool_create_pw 0.000 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 11 26 pw_copy start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 11 26 pw_copy 0.001 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 11 3 pw_multiply_with start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 11 3 pw_multiply_with 0.001 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 11 27 pw_copy start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 11 27 pw_copy 0.001 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 11 3 pw_integral_ab start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 12 20 mp_sum_d start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 12 20 mp_sum_d 0.001 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 11 3 pw_integral_ab 0.004 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 11 4 pw_poisson_set start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 12 66 pw_pool_create_pw start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 13 39 pw_create_c1d start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 13 39 pw_create_c1d 0.000 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 12 66 pw_pool_create_pw 0.000 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 12 28 pw_copy start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 12 28 pw_copy 0.001 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 12 7 pw_derive start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 12 7 pw_derive 0.002 Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 12 67 pw_pool_create_pw start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002>> 13 40 pw_create_c1d start Hostmem: 697 MB GPUmem: 0 MB
>>>>>> 000000:000002<< 13 40 pw_create_c1d
>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>> 000000:000002<< 12 67 >>>>>> pw_pool_create_pw >>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 29 pw_copy >>>>>> start Hos >>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>> 000000:000002<< 12 29 pw_copy >>>>>> 0.001 Hos >>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 8 pw_derive >>>>>> start H >>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>> 000000:000002<< 12 8 pw_derive >>>>>> 0.002 H >>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 68 >>>>>> pw_pool_create_pw >>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>> 000000:000002>> 13 41 >>>>>> pw_create_c1d >>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>> 000000:000002<< 13 41 >>>>>> pw_create_c1d >>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>> 000000:000002<< 12 68 >>>>>> pw_pool_create_pw >>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 30 pw_copy >>>>>> start Hos >>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>> 000000:000002<< 12 30 pw_copy >>>>>> 0.001 Hos >>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>> 000000:000002>> 12 9 pw_derive >>>>>> start H >>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>> ``` >>>>>> >>>>>> This is the list of currently loaded modules (all come with intel): >>>>>> >>>>>> ``` >>>>>> Currently Loaded Modulefiles: >>>>>> 1) GCCcore/13.3.0 7) >>>>>> impi/2021.13.0-intel-compilers-2024.2.0 >>>>>> 2) zlib/1.3.1-GCCcore-13.3.0 8) imkl/2024.2.0 >>>>>> >>>>>> 3) binutils/2.42-GCCcore-13.3.0 9) iimpi/2024a >>>>>> >>>>>> 4) intel-compilers/2024.2.0 10) imkl-FFTW/2024.2.0-iimpi-2024a >>>>>> >>>>>> 5) numactl/2.0.18-GCCcore-13.3.0 11) intel/2024a >>>>>> >>>>>> 6) UCX/1.16.0-GCCcore-13.3.0 >>>>>> ``` >>>>>> wtorek, 22 pa?dziernika 2024 o 11:12:57 UTC+2 Frederick Stein >>>>>> napisa?(a): >>>>>> >>>>>>> Dear Bartosz, >>>>>>> I am currently running some tests with the latest Intel compiler >>>>>>> myself. What bothers me about your setup is the module GCC13/13.3.0 . Why >>>>>>> is it loaded? Can you unload it? 
This would at least reduce potential >>>>>>> interferences with between the Intel and the GCC compilers. >>>>>>> Best, >>>>>>> Frederick >>>>>>> >>>>>>> bartosz mazur schrieb am Montag, 21. Oktober 2024 um 16:33:45 UTC+2: >>>>>>> >>>>>>>> The error for ssmp is: >>>>>>>> >>>>>>>> ``` >>>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>>>>> CLX/DP TRY JIT STA COL >>>>>>>> 0..13 4 4 0 0 >>>>>>>> 14..23 0 0 0 0 >>>>>>>> 24..64 0 0 0 0 >>>>>>>> Registry and code: 13 MB + 32 KB (gemm=4) >>>>>>>> Command (PID=54845): >>>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>>>>> H2O-9.inp -o H2O-9.out >>>>>>>> Uptime: 2.861583 s >>>>>>>> /var/spool/slurmd/r30c01b15/job3120330/slurm_script: line 36: 54845 >>>>>>>> Segmentation fault (core dumped) >>>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>>>>> H2O-9.inp -o H2O-9.out >>>>>>>> ``` >>>>>>>> >>>>>>>> and the last 100 lines of output: >>>>>>>> >>>>>>>> ``` >>>>>>>> 000000:000001>> 12 20 mp_sum_d >>>>>>>> start Ho >>>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 12 20 mp_sum_d >>>>>>>> 0.000 Ho >>>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 11 13 dbcsr_dot_sd >>>>>>>> 0.000 H >>>>>>>> ostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 10 12 >>>>>>>> calculate_ptrace_kp 0.0 >>>>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 9 6 >>>>>>>> evaluate_core_matrix_traces >>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 9 6 rebuild_ks_matrix >>>>>>>> start Ho >>>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 10 6 >>>>>>>> qs_ks_build_kohn_sham_matrix >>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 11 140 >>>>>>>> pw_pool_create_pw st >>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 12 79 >>>>>>>> pw_create_c1d sta >>>>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 12 79 >>>>>>>> pw_create_c1d 0.0 >>>>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB 
>>>>>>>> 000000:000001<< 11 140 >>>>>>>> pw_pool_create_pw 0. >>>>>>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 11 141 >>>>>>>> pw_pool_create_pw st >>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 12 80 >>>>>>>> pw_create_c1d sta >>>>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 12 80 >>>>>>>> pw_create_c1d 0.0 >>>>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 11 141 >>>>>>>> pw_pool_create_pw 0. >>>>>>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 11 61 pw_copy >>>>>>>> start Hostme >>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 11 61 pw_copy >>>>>>>> 0.004 Hostme >>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 11 35 pw_axpy >>>>>>>> start Hostme >>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 11 35 pw_axpy >>>>>>>> 0.002 Hostme >>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 11 6 >>>>>>>> pw_poisson_solve sta >>>>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 12 6 >>>>>>>> pw_poisson_rebuild >>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 12 6 >>>>>>>> pw_poisson_rebuild >>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 12 142 >>>>>>>> pw_pool_create_pw >>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 13 81 >>>>>>>> pw_create_c1d >>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 13 81 >>>>>>>> pw_create_c1d >>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 12 142 >>>>>>>> pw_pool_create_pw >>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 12 62 pw_copy >>>>>>>> start Hos >>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 12 62 pw_copy >>>>>>>> 0.003 Hos >>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 12 6 >>>>>>>> pw_multiply_with >>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 12 6 >>>>>>>> pw_multiply_with >>>>>>>> 0.002 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 
12 63 pw_copy >>>>>>>> start Hos >>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 12 63 pw_copy >>>>>>>> 0.003 Hos >>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 12 6 >>>>>>>> pw_integral_ab st >>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 12 6 >>>>>>>> pw_integral_ab 0. >>>>>>>> 005 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 12 7 >>>>>>>> pw_poisson_set st >>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 13 143 >>>>>>>> pw_pool_create_pw >>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 14 82 >>>>>>>> pw_create_c1d >>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 14 82 >>>>>>>> pw_create_c1d >>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 13 143 >>>>>>>> pw_pool_create_pw >>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 13 64 pw_copy >>>>>>>> start >>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 13 64 pw_copy >>>>>>>> 0.003 >>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 13 16 >>>>>>>> pw_derive star >>>>>>>> t Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 13 16 >>>>>>>> pw_derive 0.00 >>>>>>>> 6 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 13 144 >>>>>>>> pw_pool_create_pw >>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 14 83 >>>>>>>> pw_create_c1d >>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 14 83 >>>>>>>> pw_create_c1d >>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 13 144 >>>>>>>> pw_pool_create_pw >>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 13 65 pw_copy >>>>>>>> start >>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001<< 13 65 pw_copy >>>>>>>> 0.004 >>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> 000000:000001>> 13 17 >>>>>>>> pw_derive star >>>>>>>> t Hostmem: 380 MB GPUmem: 0 MB >>>>>>>> ``` >>>>>>>> >>>>>>>> for psmp the last 100 lines is: >>>>>>>> >>>>>>>> ``` 
>>>>>>>> 000000:000002<< 9 7 >>>>>>>> evaluate_core_matrix_traces >>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 9 7 rebuild_ks_matrix >>>>>>>> start Ho >>>>>>>> >>>>>>>> stmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 10 7 >>>>>>>> qs_ks_build_kohn_sham_matrix >>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 11 164 >>>>>>>> pw_pool_create_pw st >>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 93 >>>>>>>> pw_create_c1d sta >>>>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 93 >>>>>>>> pw_create_c1d 0.0 >>>>>>>> 00 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 11 164 >>>>>>>> pw_pool_create_pw 0. >>>>>>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 11 165 >>>>>>>> pw_pool_create_pw st >>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 94 >>>>>>>> pw_create_c1d sta >>>>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 94 >>>>>>>> pw_create_c1d 0.0 >>>>>>>> 00 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 11 165 >>>>>>>> pw_pool_create_pw 0. 
>>>>>>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 11 73 pw_copy >>>>>>>> start Hostme >>>>>>>> >>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 11 73 pw_copy >>>>>>>> 0.001 Hostme >>>>>>>> >>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 11 41 pw_axpy >>>>>>>> start Hostme >>>>>>>> >>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 11 41 pw_axpy >>>>>>>> 0.001 Hostme >>>>>>>> >>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 11 52 mp_sum_d >>>>>>>> start Hostm >>>>>>>> >>>>>>>> em: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 11 52 mp_sum_d >>>>>>>> 0.000 Hostm >>>>>>>> >>>>>>>> em: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 11 7 >>>>>>>> pw_poisson_solve sta >>>>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 7 >>>>>>>> pw_poisson_rebuild >>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 7 >>>>>>>> pw_poisson_rebuild >>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 166 >>>>>>>> pw_pool_create_pw >>>>>>>> >>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 13 95 >>>>>>>> pw_create_c1d >>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 13 95 >>>>>>>> pw_create_c1d >>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 166 >>>>>>>> pw_pool_create_pw >>>>>>>> >>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 74 pw_copy >>>>>>>> start Hos >>>>>>>> >>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 74 pw_copy >>>>>>>> 0.001 Hos >>>>>>>> >>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 7 >>>>>>>> pw_multiply_with >>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 7 >>>>>>>> pw_multiply_with >>>>>>>> 0.001 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 75 pw_copy >>>>>>>> start Hos >>>>>>>> >>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 75 pw_copy >>>>>>>> 0.001 Hos >>>>>>>> >>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>> 
000000:000002>> 12 7 >>>>>>>> pw_integral_ab st >>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 13 53 >>>>>>>> mp_sum_d start >>>>>>>> >>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 13 53 >>>>>>>> mp_sum_d 0.000 >>>>>>>> >>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 7 >>>>>>>> pw_integral_ab 0. >>>>>>>> 003 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 8 >>>>>>>> pw_poisson_set st >>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 13 167 >>>>>>>> pw_pool_create_pw >>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 14 96 >>>>>>>> pw_create_c1d >>>>>>>> >>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 14 96 >>>>>>>> pw_create_c1d >>>>>>>> >>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 13 167 >>>>>>>> pw_pool_create_pw >>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 13 76 pw_copy >>>>>>>> start >>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 13 76 pw_copy >>>>>>>> 0.001 >>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 13 19 >>>>>>>> pw_derive star >>>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 13 19 >>>>>>>> pw_derive 0.00 >>>>>>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 13 168 >>>>>>>> pw_pool_create_pw >>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 14 97 >>>>>>>> pw_create_c1d >>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 14 97 >>>>>>>> pw_create_c1d >>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 13 168 >>>>>>>> pw_pool_create_pw >>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 13 77 pw_copy >>>>>>>> start >>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 13 77 pw_copy >>>>>>>> 0.001 >>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 13 20 >>>>>>>> pw_derive star >>>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>>> ``` >>>>>>>> >>>>>>>> 
Thanks
>>>>>>>> Bartosz
>>>>>>>>
>>>>>>>> On Monday, 21 October 2024 at 08:58:34 UTC+2, Frederick Stein wrote:
>>>>>>>>
>>>>>>>>> Dear Bartosz,
>>>>>>>>> I have no idea about the issue with LibXSMM.
>>>>>>>>> Regarding the trace, I do not know either, as there is not much
>>>>>>>>> that could break in pw_derive (it just performs multiplications) and the
>>>>>>>>> sequence of operations is too unspecific. It may be that the code actually
>>>>>>>>> breaks somewhere else. Can you do the same with the ssmp and post the last
>>>>>>>>> 100 lines? This way, we remove the asynchronicity issues for backtraces
>>>>>>>>> with the psmp version.
>>>>>>>>> Best,
>>>>>>>>> Frederick
>>>>>>>>>
>>>>>>>>> bartosz mazur wrote on Sunday, 20 October 2024 at 16:47:15 UTC+2:
>>>>>>>>>
>>>>>>>>>> The error is:
>>>>>>>>>>
>>>>>>>>>> ```
>>>>>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946)
>>>>>>>>>> CLX/DP TRY JIT STA COL
>>>>>>>>>> 0..13 2 2 0 0
>>>>>>>>>> 14..23 0 0 0 0
>>>>>>>>>> 24..64 0 0 0 0
>>>>>>>>>> Registry and code: 13 MB + 16 KB (gemm=2)
>>>>>>>>>> Command (PID=2607388): /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i H2O-9.inp -o H2O-9.out
>>>>>>>>>> Uptime: 5.288243 s
>>>>>>>>>>
>>>>>>>>>> ===================================================================================
>>>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>>>>>>>>> = RANK 0 PID 2607388 RUNNING AT r21c01b10
>>>>>>>>>> = KILLED BY SIGNAL: 11 (Segmentation fault)
>>>>>>>>>> ===================================================================================
>>>>>>>>>>
>>>>>>>>>> ===================================================================================
>>>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>>>>>>>>> = RANK 1 PID 2607389 RUNNING AT r21c01b10
>>>>>>>>>> = KILLED BY SIGNAL: 9 (Killed)
>>>>>>>>>>
===================================================================================
>>>>>>>>>> ```
>>>>>>>>>>
>>>>>>>>>> and the last 20 lines:
>>>>>>>>>>
>>>>>>>>>> ```
>>>>>>>>>> 000000:000002<< 13 76 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>> 000000:000002>> 13 19 pw_derive start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>> 000000:000002<< 13 19 pw_derive 0.002 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>> 000000:000002>> 13 168 pw_pool_create_pw start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>> 000000:000002>> 14 97 pw_create_c1d start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>> 000000:000002<< 14 97 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>> 000000:000002<< 13 168 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>> 000000:000002>> 13 77 pw_copy start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>> 000000:000002<< 13 77 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>> 000000:000002>> 13 20 pw_derive start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>> ```
>>>>>>>>>>
>>>>>>>>>> Thanks!
>>>>>>>>>> On Friday, 18 October 2024 at 17:18:39 UTC+2, Frederick Stein wrote:
>>>>>>>>>>
>>>>>>>>>>> Please pick one of the failing tests. Then, add the TRACE
>>>>>>>>>>> keyword to the &GLOBAL section and then run the test manually. This
>>>>>>>>>>> increases the size of the output file dramatically (to some million lines).
>>>>>>>>>>> Can you send me the last ~20 lines of the output?
>>>>>>>>>>> bartosz mazur wrote on Friday, 18 October 2024 at 17:09:40 UTC+2:
>>>>>>>>>>>
>>>>>>>>>>>> I'm using the do_regtests.py script, not make regtesting, but I
>>>>>>>>>>>> assume it makes no difference. As I mentioned in my previous message, with
>>>>>>>>>>>> `--ompthreads 1` all tests passed both for ssmp and psmp.
For ssmp with `--ompthreads 2` I observe errors similar to those for psmp with the same
>>>>>>>>>>>> setting; I provide an example output as an attachment.
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks
>>>>>>>>>>>> Bartosz
>>>>>>>>>>>>
>>>>>>>>>>>> On Friday, 18 October 2024 at 16:24:16 UTC+2, Frederick Stein wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Dear Bartosz,
>>>>>>>>>>>>> What happens if you set the number of OpenMP threads to 1 (add
>>>>>>>>>>>>> '--ompthreads 1' to TESTOPTS)? What errors do you observe in the case of the
>>>>>>>>>>>>> ssmp?
>>>>>>>>>>>>> Best,
>>>>>>>>>>>>> Frederick
>>>>>>>>>>>>>
>>>>>>>>>>>>> bartosz mazur wrote on Friday, 18 October 2024 at 15:37:43 UTC+2:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi Frederick,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks again for the help. So I have tested different simulation
>>>>>>>>>>>>>> variants and I know that the problem occurs when using OMP. For MPI
>>>>>>>>>>>>>> calculations without OMP all tests pass. I have also tested the effect of
>>>>>>>>>>>>>> the `OMP_PROC_BIND` and `OMP_PLACES` parameters; apart
>>>>>>>>>>>>>> from their effect on simulation time, they have no significant effect on the presence of errors.
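For anyone tallying results like these, the do_regtests.py summary lines can be parsed with a few lines of Python. This is a throwaway helper, not part of CP2K; the regex simply matches the "Summary: correct: X / Y; failed: Z" format quoted in this thread:

```python
import re

# Matches do_regtests.py summary lines such as:
#   "Summary: correct: 4097 / 4227; failed: 130"
summary_re = re.compile(r"correct:\s*(\d+)\s*/\s*(\d+);\s*failed:\s*(\d+)")

def parse_summary(line):
    """Return counts and pass rate from one summary line, or None."""
    m = summary_re.search(line)
    if not m:
        return None
    correct, total, failed = map(int, m.groups())
    return {"correct": correct, "total": total,
            "failed": failed, "pass_rate": correct / total}

print(parse_summary("Summary: correct: 4097 / 4227; failed: 130"))
```

Feeding each table row through such a helper makes it easy to see at a glance which OMP_PROC_BIND/OMP_PLACES combination degrades the pass rate most.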
Below are the results for ssmp:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time
>>>>>>>>>>>>>> spread, threads, 3850, 4144, 4, 290, 186min
>>>>>>>>>>>>>> spread, cores, 3831, 4144, 3, 310, 183min
>>>>>>>>>>>>>> spread, sockets, 3864, 4144, 3, 277, 104min
>>>>>>>>>>>>>> close, threads, 3879, 4144, 3, 262, 171min
>>>>>>>>>>>>>> close, cores, 3854, 4144, 0, 290, 168min
>>>>>>>>>>>>>> close, sockets, 3865, 4144, 3, 276, 104min
>>>>>>>>>>>>>> master, threads, 4121, 4144, 0, 23, 1002min
>>>>>>>>>>>>>> master, cores, 4121, 4144, 0, 23, 986min
>>>>>>>>>>>>>> master, sockets, 3942, 4144, 3, 199, 219min
>>>>>>>>>>>>>> false, threads, 3918, 4144, 0, 226, 178min
>>>>>>>>>>>>>> false, cores, 3919, 4144, 3, 222, 176min
>>>>>>>>>>>>>> false, sockets, 3856, 4144, 4, 284, 104min
>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> and psmp:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, results
>>>>>>>>>>>>>> spread, threads, Summary: correct: 4097 / 4227; failed: 130; 495min
>>>>>>>>>>>>>> spread, cores, 26 / 362
>>>>>>>>>>>>>> spread, cores, 26 / 362
>>>>>>>>>>>>>> close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min
>>>>>>>>>>>>>> close, cores, 60 / 362
>>>>>>>>>>>>>> close, sockets, 13 / 362
>>>>>>>>>>>>>> master, threads, 13 / 362
>>>>>>>>>>>>>> master, cores, 79 / 362
>>>>>>>>>>>>>> master, sockets, Summary: correct: 4153 / 4227; failed: 74; 563min
>>>>>>>>>>>>>> false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min
>>>>>>>>>>>>>> false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min
>>>>>>>>>>>>>> false, sockets, 96 / 362
>>>>>>>>>>>>>> not specified, not specified, Summary: correct: 4129 / 4227; failed: 98; 263min
>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Any ideas what I could do next to have more information about
>>>>>>>>>>>>>> the source
of the problem, or maybe you see a potential solution at this
>>>>>>>>>>>>>> stage? I would appreciate any further help.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Best
>>>>>>>>>>>>>> Bartosz
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Friday, 11 October 2024 at 14:30:25 UTC+2, Frederick Stein wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Dear Bartosz,
>>>>>>>>>>>>>>> If I am not mistaken, you used 8 OpenMP threads. The tests do
>>>>>>>>>>>>>>> not run that efficiently with such a large number of threads; 2 should be
>>>>>>>>>>>>>>> sufficient.
>>>>>>>>>>>>>>> The test results suggest that most of the functionality may
>>>>>>>>>>>>>>> work, but due to a missing backtrace (or similar information), it is hard to
>>>>>>>>>>>>>>> tell why the tests fail. You could also try to run some of the single-node tests
>>>>>>>>>>>>>>> to assess the stability of CP2K.
>>>>>>>>>>>>>>> Best,
>>>>>>>>>>>>>>> Frederick
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> bartosz mazur wrote on Friday, 11 October 2024 at 13:48:42 UTC+2:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Sorry, forgot attachments.
>>>>>>>>>>>>>>>>

From chdlhao at gmail.com Fri Oct 25 08:30:49 2024 From: chdlhao at gmail.com (Hao Liu) Date: Fri, 25 Oct 2024 01:30:49 -0700 (PDT) Subject: [CP2K-user] [CP2K:20814] Compilation Error for GPU-Accelerated ELPA using Toolchain Message-ID: <36e25343-9b3c-4009-bbe0-9ece681a949cn@googlegroups.com>

Dear CP2K Developers,

As I mentioned earlier, I am attempting to compile version 24.3 of CP2K.
My compilation environment includes the following modules:

Currently Loaded Modulefiles:
1) compiler/intel/2021.3.0
2) mpi/intelmpi/2021.3.0
3) mathlib/fftw/3.3.10_intel21_double
4) nvidia/cuda/11.3
5) compiler/cmake/3.23.3
6) compiler/gcc/12.2.0
7) python/3.8.10

I encountered the following error:

/public/software/compiler/intel-compiler/2021.3.0/compiler/include/complex:62:3: error: #error "This Intel <complex> is for use only with the Intel C++ compilers!"
   62 | # error "This Intel <complex> is for use only with the Intel C++ compilers!"
      |   ^~~~~
/public/software/compiler/intel-compiler/2021.3.0/compiler/include/complex.h:30:3: error: #error "This Intel <complex.h> is for use with only the Intel compilers!"
   30 | # error "This Intel <complex.h> is for use with only the Intel compilers!"
      |   ^~~~~
ifort: command line warning #10148: option '-fno-lto' not supported
ifort: command line warning #10148: option '-fno-lto' not supported

It seems that the Intel libraries are being loaded when compiling the NVIDIA version of ELPA. Could you advise on how to avoid this issue? If more information is needed, please let me know.

Best regards,
Hao Liu

From pierre.andre.cazade at gmail.com Fri Oct 25 08:39:09 2024 From: pierre.andre.cazade at gmail.com (pierre.an...@gmail.com) Date: Fri, 25 Oct 2024 01:39:09 -0700 (PDT) Subject: [CP2K-user] [CP2K:20816] Static compilation Message-ID: <6025b751-5e99-45e7-b6fc-8601191269a4n@googlegroups.com>

Dear developers and users,

Is there a way to compile a static version of CP2K?
Regards,
Pierre

From f.stein at hzdr.de Fri Oct 25 08:56:56 2024 From: f.stein at hzdr.de (Frederick Stein) Date: Fri, 25 Oct 2024 01:56:56 -0700 (PDT) Subject: [CP2K-user] [CP2K:20817] #3744/#3745 Message-ID:

Some issue from GitHub by DoumAAAAAA:

I want to optimize a Ni(111) slab; it is used for an amino adsorption energy test. 24 atoms. I set the k-points to 4,4,1. The nickel magnetic moment is 1. The chosen multiplicity is 25. UKS and smearing are enabled. I use RPBE+D3(BJ) with CUTOFF 550, REL_CUTOFF 55, ALPHA 0.4, NBROYDEN 12, but the SCF does not converge. I'm using the CP2K 2024.1 version. Ni111.zip

It is not known which parameters need to be modified.
From f.stein at hzdr.de Fri Oct 25 09:46:00 2024 From: f.stein at hzdr.de (Frederick Stein) Date: Fri, 25 Oct 2024 02:46:00 -0700 (PDT) Subject: [CP2K-user] [CP2K:20818] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: <7042b62f-62de-43ad-ad94-b940977c9e2an@googlegroups.com> References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> <3c0b331a-fe25-4176-8692-ff5d7c466c44n@googlegroups.com> <9c503856-0751-48b3-8bfb-e37cf5cb91a6n@googlegroups.com> <024be246-9696-4577-a330-f5a234dc51edn@googlegroups.com> <9027c53b-4155-418c-9d08-ea77e5ea5bcfn@googlegroups.com> <7042b62f-62de-43ad-ad94-b940977c9e2an@googlegroups.com> Message-ID: <5473442a-c035-4d51-833f-4c340767ee66n@googlegroups.com>

Dear Bartosz,

I will check the other issues with your regtests. Regarding your latest issue, please provide more information, such as an output file or a hint on the context. If I am supposed to retry the calculation on my local machine, I need all additional input files, such as your PLUMED file. I can run your input file up to the point at which CP2K needs PLUMED.

Best,
Frederick

bartosz mazur wrote on Friday, 25 October 2024 at 10:15:19 UTC+2:

> I just got another error with LibXSMM, now in my regular simulation and
> without using OpenMP.
This is the error: > > ``` > [1729843139.920274] [r23c01b04:2913 :0] ib_md.c:295 UCX ERROR > ibv_reg_mr(address=0x14f0b46fc080, length=7424, access=0xf) failed: Cannot > allocate memory > [1729843139.920290] [r23c01b04:2913 :0] ucp_mm.c:70 UCX ERROR > failed to register address 0x14f0b46fc080 (host) length 7424 on > md[4]=mlx5_0: Input/output error (md supports: host) > > LIBXSMM_VERSION: develop-1.17-3834 (25693946)[1729843139.932647] > [r23c01b04:2945 :0] ib_md.c:295 UCX ERROR > ibv_reg_mr(address=0x1491f069e040, length=8128, access=0xf) failed: Cannot > allocate memory > [1729843139.932660] [r23c01b04:2945 :0] ucp_mm.c:70 UCX ERROR > failed to register address 0x1491f069e040 (host) length 8128 on > md[4]=mlx5_0: Input/output error (md supports: host) > > > CLX/DP TRY JIT STA COL > 0..13 4 4 0 0 > 14..23 4 4 0 0 > > 24..64 0 0 0 0 > Registry and code: 13 MB + 80 KB (gemm=8) > Command (PID=2913): > /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i > cp2k.inp -o cp2k.out > Uptime: 407633.177169 s > ``` > > and this is simulation input I'm using: > > ``` > &GLOBAL > PROJECT uam1o_npt_rms > RUN_TYPE MD > PRINT_LEVEL LOW > PREFERRED_DIAG_LIBRARY SCALAPACK > &END GLOBAL > > &FORCE_EVAL > METHOD QUICKSTEP > STRESS_TENSOR ANALYTICAL > &DFT > BASIS_SET_FILE_NAME BASIS_MOLOPT_UZH > POTENTIAL_FILE_NAME POTENTIAL_UZH > &MGRID > CUTOFF 500 > &END MGRID > &XC > &XC_FUNCTIONAL PBE > &END XC_FUNCTIONAL > &VDW_POTENTIAL > POTENTIAL_TYPE PAIR_POTENTIAL > &PAIR_POTENTIAL > TYPE DFTD3(BJ) > PARAMETER_FILE_NAME dftd3.dat > REFERENCE_FUNCTIONAL PBE > R_CUTOFF 25.0 > &END PAIR_POTENTIAL > &END VDW_POTENTIAL > &END XC > &END DFT > > &SUBSYS > &CELL > A 12.2807999 0.0000000 0.0000000 > B 7.6258602 9.6257200 0.0000000 > C -2.1557724 -1.0420258 18.0042801 > &END CELL > &COORD > Zn 11.37811 4.60286 0.24515 > Zn 8.15435 3.05288 8.74518 > Zn 6.37590 3.97311 17.74650 > Zn 9.59842 5.54014 9.24747 > S 11.79344 6.72692 17.10850 > S 4.06825 3.00573 9.90358 > S 5.95830 1.84422 
0.90027 > S 13.67407 5.58944 8.10767 > O 10.72408 3.58291 1.89315 > O 8.51986 4.01962 1.53085 > O 6.60135 3.91587 7.68572 > O 7.74637 5.79259 8.21600 > O 15.32810 8.58246 5.10041 > O 9.35608 2.93551 7.09500 > O 10.38999 4.93007 7.45977 > O 11.66491 6.35111 1.31266 > O 9.48582 6.62478 0.77364 > O 2.59062 2.40094 3.91496 > O 7.03031 4.99173 16.09885 > O 9.23544 4.56122 16.46252 > O 11.14602 4.67776 10.31440 > O 10.00982 2.79915 9.77218 > O 2.41388 0.01898 12.91899 > O 8.39375 5.66143 10.89628 > O 7.36998 3.66087 10.53589 > O 6.08863 2.22161 16.68336 > O 8.26988 1.95313 17.21650 > O 15.16937 6.16381 14.09906 > N 13.25907 3.80728 0.04001 > N 2.36335 -0.74130 17.33402 > N 7.60676 1.08576 8.95623 > N 15.77729 5.75974 9.67861 > N 4.49430 4.76652 17.95756 > N 15.38873 9.31230 0.67467 > N 10.14308 7.50848 9.04236 > N 1.96529 2.83557 8.33233 > C 6.76554 5.18292 7.68414 > C 14.28210 4.11624 0.86006 > C 9.47998 3.39622 2.09658 > C 3.20112 3.42080 0.84626 > C 9.91466 1.18589 3.17244 > C 9.08210 2.29987 3.02657 > C 5.74710 6.04945 7.01821 > C 7.83265 2.30920 3.66005 > C 3.35793 2.34328 -0.04029 > C 4.51663 1.46385 -0.02755 > C 16.24194 7.75266 5.73606 > C 4.78940 5.52817 6.14198 > C 7.40810 1.21174 4.39947 > C 16.18016 6.38244 5.49010 > C 9.48869 0.06986 3.88005 > C 11.27238 1.77457 17.14330 > C 5.77166 7.43009 7.27236 > C 11.14819 8.24901 17.58588 > C 8.22170 0.08058 4.47135 > C 0.15087 1.02286 17.07544 > C 17.16180 8.28565 6.64351 > C 10.57067 7.01060 1.31282 > C 6.72654 0.47459 8.14002 > C 10.27972 3.79035 6.89470 > C 14.15006 8.72843 8.15880 > C 11.73751 2.06868 5.82537 > C 11.38838 3.41515 5.96966 > C 10.52304 8.34339 1.98566 > C 12.16584 4.39562 5.33967 > C 14.89762 7.93801 9.04648 > C 14.86698 6.48365 9.03575 > C 2.67167 1.17044 3.27681 > C 11.52468 8.76552 2.86608 > C 13.29140 4.04007 4.60622 > C 3.78230 0.36534 3.52266 > C 12.87823 1.70260 5.12344 > C 8.27761 0.34001 9.85941 > C 9.42677 9.18364 1.73295 > C 3.27553 4.45658 9.42657 > C 13.66559 2.69775 4.53650 > C 
15.77023 8.59069 9.93240 > C 1.68356 0.78491 2.36643 > C 10.98451 3.41041 10.31327 > C 3.46873 4.45681 17.14097 > C 8.27403 5.18373 15.89814 > C 14.54907 5.15099 17.15930 > C 7.83119 7.39584 14.82858 > C 8.66916 6.28563 14.97331 > C 11.99928 2.54577 10.98702 > C 9.92072 6.28547 14.34388 > C 16.54982 7.26986 0.04271 > C 15.39103 8.14919 0.03189 > C 1.50023 0.84646 12.27989 > C 12.95126 3.06908 11.86817 > C 10.34198 7.38826 13.61070 > C 1.55836 2.21699 12.52561 > C 8.25354 8.51697 14.12666 > C 6.48249 6.79770 0.85630 > C 11.97760 1.16465 10.73446 > C 6.60385 0.32218 0.42301 > C 9.52282 8.51550 13.54043 > C 17.60321 7.54791 0.92891 > C 0.58530 0.31102 11.36884 > C 7.18362 1.56332 16.68291 > C 11.01926 8.11905 9.86341 > C 7.47582 4.80132 11.10039 > C 3.59282 -0.13430 9.84955 > C 6.01179 6.51430 12.17471 > C 6.36853 5.17005 12.02942 > C 7.23131 0.22715 16.01652 > C 5.59963 4.18477 12.66234 > C 2.84614 0.65728 8.96213 > C 2.87561 2.11161 8.97508 > C 15.08536 7.39548 14.73440 > C 6.23001 -0.19920 15.13769 > C 4.47482 4.53325 13.40042 > C 13.97400 8.19851 14.48576 > C 4.87173 6.87322 12.88120 > C 9.47231 8.25578 8.14046 > C 8.32790 -0.61137 16.27301 > C 14.46698 4.13864 8.58475 > C 4.09294 5.87331 13.47165 > C 1.97640 0.00563 8.07267 > C 16.07240 7.78504 15.64417 > H 14.10215 4.93465 1.55678 > H 3.98110 3.68721 1.55899 > H 10.89072 1.19647 2.69205 > H 7.19958 3.19021 3.56839 > H 4.75923 4.45384 5.96230 > H 6.45299 1.21835 4.92062 > H 15.44211 6.00062 4.78824 > H 17.75043 8.81610 3.97156 > H 10.41563 1.57993 16.49923 > H 6.49332 7.81303 7.99143 > H 0.24800 0.19739 16.37425 > H 9.53586 -0.26872 6.84508 > H 6.19685 1.12218 7.44173 > H 13.45550 8.28133 7.44815 > H 11.11633 1.31384 6.30260 > H 11.87413 5.44074 5.42962 > H 12.38442 8.12016 3.04474 > H 13.88694 4.78876 4.08791 > H 4.53915 0.70283 4.22717 > H 0.88557 0.65625 5.03328 > H 8.96418 0.89159 10.50060 > H 8.67994 8.85961 1.01083 > H 16.35704 8.00331 10.63471 > H 13.12606 1.45212 2.16563 > H 3.64702 3.63930 16.44281 > H 
13.76743 4.88477 16.44833 > H 6.85355 7.37827 15.30535 > H 10.55820 5.40745 14.43410 > H 12.97886 4.14375 12.04672 > H 11.29905 7.38966 13.09313 > H 2.29216 2.60091 13.23073 > H -0.01303 -0.23279 14.03603 > H 7.34113 6.99275 1.49776 > H 11.26049 0.78023 10.01184 > H 17.50743 8.37258 1.63130 > H 8.21398 8.86531 11.16822 > H 11.54834 7.47018 10.56097 > H 4.28503 0.31205 10.56295 > H 6.62643 7.27289 11.69479 > H 5.89748 3.14154 12.57118 > H 5.36986 0.44461 14.95599 > H 3.88656 3.78035 13.92095 > H 13.21826 7.85764 13.78163 > H 16.85773 7.91771 12.97237 > H 8.78884 7.70469 7.49554 > H 9.07452 -0.28399 16.99402 > H 1.39009 0.59398 7.37083 > H 4.63062 7.11938 15.84758 > &END COORD > &KIND Zn > BASIS_SET TZVP-MOLOPT-PBE-GTH-q12 > POTENTIAL GTH-PBE-q12 > &END KIND > &KIND S > BASIS_SET TZVP-MOLOPT-PBE-GTH-q6 > POTENTIAL GTH-PBE-q6 > &END KIND > &KIND O > BASIS_SET TZVP-MOLOPT-PBE-GTH-q6 > POTENTIAL GTH-PBE-q6 > &END KIND > &KIND N > BASIS_SET TZVP-MOLOPT-PBE-GTH-q5 > POTENTIAL GTH-PBE-q5 > &END KIND > &KIND C > BASIS_SET TZVP-MOLOPT-PBE-GTH-q4 > POTENTIAL GTH-PBE-q4 > &END KIND > &KIND H > BASIS_SET TZVP-MOLOPT-PBE-GTH-q1 > POTENTIAL GTH-PBE-q1 > &END KIND > &END SUBSYS > &END FORCE_EVAL > > &MOTION > &MD > ENSEMBLE NPT_I > TEMPERATURE 298 > TIMESTEP 1.0 > STEPS 50000 > &THERMOSTAT > TYPE NOSE > &NOSE > LENGTH 3 > YOSHIDA 3 > TIMECON 1000 > &END NOSE > &END THERMOSTAT > &BAROSTAT > PRESSURE 1.0 > TIMECON 4000 > &END BAROSTAT > &END MD > &FREE_ENERGY > METHOD METADYN > &METADYN > USE_PLUMED .TRUE. > PLUMED_INPUT_FILE plumed.dat > &END METADYN > &END FREE_ENERGY > &PRINT > &TRAJECTORY > &EACH > MD 5 > &END EACH > &END TRAJECTORY > &FORCES > UNIT eV*angstrom^-1 > &EACH > MD 5 > &END EACH > &END FORCES > &CELL > &EACH > MD 5 > &END EACH > &END CELL > &END PRINT > &END MOTION > ``` > > This simulation was performed with previous version of cp2k (so without > your fix). 
> On Friday, 25 October 2024 at 09:50:47 UTC+2, bartosz mazur wrote: > >> Hi Frederick, >> >> It helped with most of the tests! Now only 13 have failed. In the >> attachments you will find the full output from the regtests, and here is the output from a >> single job with TRACE enabled: >> >> ``` >> Loading intel/2024a >> Loading requirement: GCCcore/13.3.0 zlib/1.3.1-GCCcore-13.3.0 >> binutils/2.42-GCCcore-13.3.0 intel-compilers/2024.2.0 >> numactl/2.0.18-GCCcore-13.3.0 UCX/1.16.0-GCCcore-13.3.0 >> impi/2021.13.0-intel-compilers-2024.2.0 imkl/2024.2.0 iimpi/2024a >> imkl-FFTW/2024.2.0-iimpi-2024a >> >> Currently Loaded Modulefiles: >> 1) GCCcore/13.3.0 7) >> impi/2021.13.0-intel-compilers-2024.2.0 >> 2) zlib/1.3.1-GCCcore-13.3.0 8) imkl/2024.2.0 >> >> 3) binutils/2.42-GCCcore-13.3.0 9) iimpi/2024a >> >> 4) intel-compilers/2024.2.0 10) imkl-FFTW/2024.2.0-iimpi-2024a >> >> 5) numactl/2.0.18-GCCcore-13.3.0 11) intel/2024a >> >> 6) UCX/1.16.0-GCCcore-13.3.0 >> 2 MPI processes with 2 OpenMP threads each >> started at Fri Oct 25 09:34:34 CEST 2024 in /lustre/tmp/slurm/3127182 >> SIRIUS 7.6.1, git hash: >> https://api.github.com/repos/electronic-structure/SIRIUS/git/ref/tags/v7.6.1 >> Warning! Compiled in 'debug' mode with assert statements enabled!
>> >> >> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >> CLX/DP TRY JIT STA COL >> 0..13 8 8 0 0 >> 14..23 0 0 0 0 >> 24..64 0 0 0 0 >> Registry and code: 13 MB + 64 KB (gemm=8) >> Command (PID=423503): >> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >> dftd3src1.inp -o dftd3src1.out >> Uptime: 2.752513 s >> >> >> >> =================================================================================== >> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >> = RANK 0 PID 423503 RUNNING AT r21c01b03 >> >> = KILLED BY SIGNAL: 11 (Segmentation fault) >> >> =================================================================================== >> >> >> =================================================================================== >> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >> = RANK 1 PID 423504 RUNNING AT r21c01b03 >> >> = KILLED BY SIGNAL: 9 (Killed) >> >> =================================================================================== >> finished at Fri Oct 25 09:34:39 CEST 2024 >> ``` >> >> and the last lines: >> >> ``` >> 000000:000002<< 13 3 >> mp_sendrecv_dm2 >> 0.000 Hostmem: 955 MB GPUmem: 0 MB >> 000000:000002>> 13 4 >> mp_sendrecv_dm2 >> start Hostmem: 955 MB GPUmem: 0 MB >> 000000:000002<< 13 4 >> mp_sendrecv_dm2 >> 0.000 Hostmem: 955 MB GPUmem: 0 MB >> 000000:000002<< 12 2 pw_nn_compose_r >> 0 >> .003 Hostmem: 955 MB GPUmem: 0 MB >> 000000:000002<< 11 1 xc_pw_derive >> 0.003 H >> ostmem: 955 MB GPUmem: 0 MB >> 000000:000002>> 11 5 pw_zero start >> Hostme >> m: 955 MB GPUmem: 0 MB >> 000000:000002<< 11 5 pw_zero 0.000 >> Hostme >> m: 955 MB GPUmem: 0 MB >> 000000:000002>> 11 2 xc_pw_derive >> start H >> ostmem: 955 MB GPUmem: 0 MB >> 000000:000002>> 12 3 pw_nn_compose_r >> s >> tart Hostmem: 955 MB GPUmem: 0 MB >> 000000:000002>> 13 5 >> mp_sendrecv_dm2 >> start Hostmem: 955 MB GPUmem: 0 MB >> 000000:000002<< 13 5 >> mp_sendrecv_dm2 >> 0.000 Hostmem: 955 MB GPUmem: 0 MB >> 000000:000002>> 13 6 >> 
mp_sendrecv_dm2 >> start Hostmem: 955 MB GPUmem: 0 MB >> 000000:000002<< 13 6 >> mp_sendrecv_dm2 >> 0.000 Hostmem: 955 MB GPUmem: 0 MB >> 000000:000002<< 12 3 pw_nn_compose_r >> 0 >> .002 Hostmem: 955 MB GPUmem: 0 MB >> 000000:000002<< 11 2 xc_pw_derive >> 0.002 H >> ostmem: 955 MB GPUmem: 0 MB >> 000000:000002>> 11 6 pw_zero start >> Hostme >> m: 955 MB GPUmem: 0 MB >> 000000:000002<< 11 6 pw_zero 0.001 >> Hostme >> m: 960 MB GPUmem: 0 MB >> 000000:000002>> 11 3 xc_pw_derive >> start H >> ostmem: 960 MB GPUmem: 0 MB >> 000000:000002>> 12 4 pw_nn_compose_r >> s >> tart Hostmem: 960 MB GPUmem: 0 MB >> 000000:000002>> 13 7 >> mp_sendrecv_dm2 >> start Hostmem: 960 MB GPUmem: 0 MB >> 000000:000002<< 13 7 >> mp_sendrecv_dm2 >> 0.000 Hostmem: 960 MB GPUmem: 0 MB >> 000000:000002>> 13 8 >> mp_sendrecv_dm2 >> start Hostmem: 960 MB GPUmem: 0 MB >> 000000:000002<< 13 8 >> mp_sendrecv_dm2 >> 0.000 Hostmem: 960 MB GPUmem: 0 MB >> 000000:000002<< 12 4 pw_nn_compose_r >> 0 >> .002 Hostmem: 960 MB GPUmem: 0 MB >> 000000:000002<< 11 3 xc_pw_derive >> 0.002 H >> ostmem: 960 MB GPUmem: 0 MB >> 000000:000002>> 11 1 >> pw_spline_scale_deriv >> start Hostmem: 960 MB GPUmem: 0 MB >> 000000:000002<< 11 1 >> pw_spline_scale_deriv >> 0.001 Hostmem: 960 MB GPUmem: 0 MB >> 000000:000002>> 11 20 >> pw_pool_give_back_pw >> start Hostmem: 965 MB GPUmem: 0 MB >> 000000:000002<< 11 20 >> pw_pool_give_back_pw >> 0.000 Hostmem: 965 MB GPUmem: 0 MB >> 000000:000002>> 11 21 >> pw_pool_give_back_pw >> start Hostmem: 965 MB GPUmem: 0 MB >> 000000:000002<< 11 21 >> pw_pool_give_back_pw >> 0.000 Hostmem: 965 MB GPUmem: 0 MB >> 000000:000002>> 11 22 >> pw_pool_give_back_pw >> start Hostmem: 965 MB GPUmem: 0 MB >> 000000:000002<< 11 22 >> pw_pool_give_back_pw >> 0.000 Hostmem: 965 MB GPUmem: 0 MB >> 000000:000002>> 11 23 >> pw_pool_give_back_pw >> start Hostmem: 965 MB GPUmem: 0 MB >> 000000:000002<< 11 23 >> pw_pool_give_back_pw >> 0.000 Hostmem: 965 MB GPUmem: 0 MB >> 000000:000002>> 11 1 
xc_functional_eval >> s >> tart Hostmem: 965 MB GPUmem: 0 MB >> 000000:000002>> 12 1 b97_lda_eval >> star >> t Hostmem: 965 MB GPUmem: 0 MB >> 000000:000002<< 12 1 b97_lda_eval >> 0.10 >> 3 Hostmem: 979 MB GPUmem: 0 MB >> 000000:000002<< 11 1 xc_functional_eval >> 0 >> .103 Hostmem: 979 MB GPUmem: 0 MB >> 000000:000002<< 10 1 >> xc_rho_set_and_dset_create >> 0.120 Hostmem: 979 MB GPUmem: 0 MB >> 000000:000002>> 10 1 check_for_derivatives >> s >> tart Hostmem: 979 MB GPUmem: 0 MB >> 000000:000002<< 10 1 check_for_derivatives >> 0 >> .000 Hostmem: 979 MB GPUmem: 0 MB >> 000000:000002>> 10 14 pw_create_r3d >> start Hos >> tmem: 979 MB GPUmem: 0 MB >> 000000:000002<< 10 14 pw_create_r3d >> 0.000 Hos >> tmem: 979 MB GPUmem: 0 MB >> 000000:000002>> 10 15 pw_create_r3d >> start Hos >> tmem: 979 MB GPUmem: 0 MB >> 000000:000002<< 10 15 pw_create_r3d >> 0.000 Hos >> tmem: 979 MB GPUmem: 0 MB >> 000000:000002>> 10 16 pw_create_r3d >> start Hos >> tmem: 979 MB GPUmem: 0 MB >> 000000:000002<< 10 16 pw_create_r3d >> 0.000 Hos >> tmem: 979 MB GPUmem: 0 MB >> 000000:000002>> 10 17 pw_create_r3d >> start Hos >> tmem: 979 MB GPUmem: 0 MB >> 000000:000002<< 10 17 pw_create_r3d >> 0.000 Hos >> tmem: 979 MB GPUmem: 0 MB >> ``` >> >> Best >> Bartosz >> >> On Wednesday, 23 October 2024 at 09:15:33 UTC+2, Frederick Stein wrote: >> >>> Dear Bartosz, >>> My fix is merged. Can you switch to the CP2K master and try it again? We >>> are still working on a few issues with the Intel compilers, and we may >>> eventually migrate from ifort to ifx. >>> Best, >>> Frederick >>> >>> bartosz mazur wrote on Tuesday, 22 October 2024 at 17:45:21 UTC+2: >>> >>>> Great! Thank you for your help. >>>> >>>> Best >>>> Bartosz >>>> >>>> On Tuesday, 22 October 2024 at 15:24:04 UTC+2, Frederick Stein >>>> wrote: >>>> >>>>> I have a fix for it.
In contrast to my first thought, it is a case of >>>>> invalid type conversion from real to complex numbers (yes, Fortran is >>>>> rather strict about it) in pw_derive. This may also be present in a few >>>>> other spots. I am currently running more tests and I will open a pull >>>>> request within the next few days. >>>>> Best, >>>>> Frederick >>>>> >>>>> Frederick Stein wrote on Tuesday, 22 October 2024 at 13:12:49 >>>>> UTC+2: >>>>> >>>>>> I can reproduce the error locally. I am investigating it now. >>>>>> >>>>>> bartosz mazur wrote on Tuesday, 22 October 2024 at 11:58:57 UTC+2: >>>>>> >>>>>>> I was loading it because it was needed for compilation. I have unloaded >>>>>>> the module, but the error still occurs: >>>>>>> >>>>>>> ``` >>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>>>> CLX/DP TRY JIT STA COL >>>>>>> 0..13 2 2 0 0 >>>>>>> 14..23 0 0 0 0 >>>>>>> 24..64 0 0 0 0 >>>>>>> Registry and code: 13 MB + 16 KB (gemm=2) >>>>>>> Command (PID=15485): >>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>>>>>> H2O-9.inp -o H2O-9.out >>>>>>> Uptime: 1.757102 s >>>>>>> >>>>>>> >>>>>>> >>>>>>> =================================================================================== >>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>> = RANK 0 PID 15485 RUNNING AT r30c01b01 >>>>>>> >>>>>>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>>>>>> >>>>>>> =================================================================================== >>>>>>> >>>>>>> >>>>>>> =================================================================================== >>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>> = RANK 1 PID 15486 RUNNING AT r30c01b01 >>>>>>> >>>>>>> = KILLED BY SIGNAL: 9 (Killed) >>>>>>> >>>>>>> =================================================================================== >>>>>>> ``` >>>>>>> >>>>>>> >>>>>>> and the last 100 lines: >>>>>>> >>>>>>> ``` >>>>>>> 000000:000002>> 11 37 pw_create_c1d
>>>>>>> start >>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 11 37 pw_create_c1d >>>>>>> 0.000 >>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 10 64 pw_pool_create_pw >>>>>>> 0.000 >>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 10 25 pw_copy >>>>>>> start Hostmem: >>>>>>> 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 10 25 pw_copy >>>>>>> 0.001 Hostmem: >>>>>>> 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 10 17 pw_axpy >>>>>>> start Hostmem: >>>>>>> 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 10 17 pw_axpy >>>>>>> 0.001 Hostmem: >>>>>>> 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 10 19 mp_sum_d >>>>>>> start Hostmem: >>>>>>> 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 10 19 mp_sum_d >>>>>>> 0.000 Hostmem: >>>>>>> 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 10 3 pw_poisson_solve >>>>>>> start >>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 11 3 >>>>>>> pw_poisson_rebuild s >>>>>>> tart Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 11 3 >>>>>>> pw_poisson_rebuild 0 >>>>>>> .000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 11 65 >>>>>>> pw_pool_create_pw st >>>>>>> art Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 12 38 >>>>>>> pw_create_c1d sta >>>>>>> rt Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 12 38 >>>>>>> pw_create_c1d 0.0 >>>>>>> 00 Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 11 65 >>>>>>> pw_pool_create_pw 0. 
>>>>>>> 000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 11 26 pw_copy >>>>>>> start Hostme >>>>>>> m: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 11 26 pw_copy >>>>>>> 0.001 Hostme >>>>>>> m: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 11 3 >>>>>>> pw_multiply_with sta >>>>>>> rt Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 11 3 >>>>>>> pw_multiply_with 0.0 >>>>>>> 01 Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 11 27 pw_copy >>>>>>> start Hostme >>>>>>> m: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 11 27 pw_copy >>>>>>> 0.001 Hostme >>>>>>> m: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 11 3 pw_integral_ab >>>>>>> start >>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 12 20 mp_sum_d >>>>>>> start Ho >>>>>>> stmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 12 20 mp_sum_d >>>>>>> 0.001 Ho >>>>>>> stmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 11 3 pw_integral_ab >>>>>>> 0.004 >>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 11 4 pw_poisson_set >>>>>>> start >>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 12 66 >>>>>>> pw_pool_create_pw >>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 13 39 >>>>>>> pw_create_c1d >>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 13 39 >>>>>>> pw_create_c1d >>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 12 66 >>>>>>> pw_pool_create_pw >>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 12 28 pw_copy >>>>>>> start Hos >>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 12 28 pw_copy >>>>>>> 0.001 Hos >>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 12 7 pw_derive >>>>>>> start H >>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 12 7 pw_derive >>>>>>> 0.002 H >>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 12 67 >>>>>>> pw_pool_create_pw >>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 13 40 >>>>>>> pw_create_c1d >>>>>>> start Hostmem: 697 MB 
GPUmem: 0 MB >>>>>>> 000000:000002<< 13 40 >>>>>>> pw_create_c1d >>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 12 67 >>>>>>> pw_pool_create_pw >>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 12 29 pw_copy >>>>>>> start Hos >>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 12 29 pw_copy >>>>>>> 0.001 Hos >>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 12 8 pw_derive >>>>>>> start H >>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 12 8 pw_derive >>>>>>> 0.002 H >>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 12 68 >>>>>>> pw_pool_create_pw >>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 13 41 >>>>>>> pw_create_c1d >>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 13 41 >>>>>>> pw_create_c1d >>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 12 68 >>>>>>> pw_pool_create_pw >>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 12 30 pw_copy >>>>>>> start Hos >>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002<< 12 30 pw_copy >>>>>>> 0.001 Hos >>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>> 000000:000002>> 12 9 pw_derive >>>>>>> start H >>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>> ``` >>>>>>> >>>>>>> This is the list of currently loaded modules (all come with Intel): >>>>>>> >>>>>>> ``` >>>>>>> Currently Loaded Modulefiles: >>>>>>> 1) GCCcore/13.3.0 7) >>>>>>> impi/2021.13.0-intel-compilers-2024.2.0 >>>>>>> 2) zlib/1.3.1-GCCcore-13.3.0 8) imkl/2024.2.0 >>>>>>> >>>>>>> 3) binutils/2.42-GCCcore-13.3.0 9) iimpi/2024a >>>>>>> >>>>>>> 4) intel-compilers/2024.2.0 10) >>>>>>> imkl-FFTW/2024.2.0-iimpi-2024a >>>>>>> 5) numactl/2.0.18-GCCcore-13.3.0 11) intel/2024a >>>>>>> >>>>>>> 6) UCX/1.16.0-GCCcore-13.3.0 >>>>>>> ``` >>>>>>> On Tuesday, 22 October 2024 at 11:12:57 UTC+2, Frederick Stein >>>>>>> wrote: >>>>>>>> Dear Bartosz, >>>>>>>> I am currently running some tests with the latest Intel compiler >>>>>>>> myself.
What bothers me about your setup is the module GCC13/13.3.0. Why >>>>>>>> is it loaded? Can you unload it? This would at least reduce potential >>>>>>>> interference between the Intel and the GCC compilers. >>>>>>>> Best, >>>>>>>> Frederick >>>>>>>> >>>>>>>> bartosz mazur wrote on Monday, 21 October 2024 at 16:33:45 UTC+2: >>>>>>>> >>>>>>>>> The error for ssmp is: >>>>>>>>> >>>>>>>>> ``` >>>>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>>>>>> CLX/DP TRY JIT STA COL >>>>>>>>> 0..13 4 4 0 0 >>>>>>>>> 14..23 0 0 0 0 >>>>>>>>> 24..64 0 0 0 0 >>>>>>>>> Registry and code: 13 MB + 32 KB (gemm=4) >>>>>>>>> Command (PID=54845): >>>>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>>>>>> H2O-9.inp -o H2O-9.out >>>>>>>>> Uptime: 2.861583 s >>>>>>>>> /var/spool/slurmd/r30c01b15/job3120330/slurm_script: line 36: >>>>>>>>> 54845 Segmentation fault (core dumped) >>>>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>>>>>> H2O-9.inp -o H2O-9.out >>>>>>>>> ``` >>>>>>>>> >>>>>>>>> and the last 100 lines of output: >>>>>>>>> >>>>>>>>> ``` >>>>>>>>> 000000:000001>> 12 20 mp_sum_d >>>>>>>>> start Ho >>>>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 12 20 mp_sum_d >>>>>>>>> 0.000 Ho >>>>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 11 13 dbcsr_dot_sd >>>>>>>>> 0.000 H >>>>>>>>> ostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 10 12 >>>>>>>>> calculate_ptrace_kp 0.0 >>>>>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 9 6 >>>>>>>>> evaluate_core_matrix_traces >>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 9 6 rebuild_ks_matrix >>>>>>>>> start Ho >>>>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 10 6 >>>>>>>>> qs_ks_build_kohn_sham_matrix >>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 11 140 >>>>>>>>> pw_pool_create_pw st >>>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 12 79
>>>>>>>>> pw_create_c1d sta >>>>>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 12 79 >>>>>>>>> pw_create_c1d 0.0 >>>>>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 11 140 >>>>>>>>> pw_pool_create_pw 0. >>>>>>>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 11 141 >>>>>>>>> pw_pool_create_pw st >>>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 12 80 >>>>>>>>> pw_create_c1d sta >>>>>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 12 80 >>>>>>>>> pw_create_c1d 0.0 >>>>>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 11 141 >>>>>>>>> pw_pool_create_pw 0. >>>>>>>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 11 61 pw_copy >>>>>>>>> start Hostme >>>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 11 61 pw_copy >>>>>>>>> 0.004 Hostme >>>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 11 35 pw_axpy >>>>>>>>> start Hostme >>>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 11 35 pw_axpy >>>>>>>>> 0.002 Hostme >>>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 11 6 >>>>>>>>> pw_poisson_solve sta >>>>>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 12 6 >>>>>>>>> pw_poisson_rebuild >>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 12 6 >>>>>>>>> pw_poisson_rebuild >>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 12 142 >>>>>>>>> pw_pool_create_pw >>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 13 81 >>>>>>>>> pw_create_c1d >>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 13 81 >>>>>>>>> pw_create_c1d >>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 12 142 >>>>>>>>> pw_pool_create_pw >>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 12 62 pw_copy >>>>>>>>> start Hos >>>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 12 62 pw_copy >>>>>>>>> 0.003 Hos >>>>>>>>> tmem: 380 MB GPUmem: 0 
MB >>>>>>>>> 000000:000001>> 12 6 >>>>>>>>> pw_multiply_with >>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 12 6 >>>>>>>>> pw_multiply_with >>>>>>>>> 0.002 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 12 63 pw_copy >>>>>>>>> start Hos >>>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 12 63 pw_copy >>>>>>>>> 0.003 Hos >>>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 12 6 >>>>>>>>> pw_integral_ab st >>>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 12 6 >>>>>>>>> pw_integral_ab 0. >>>>>>>>> 005 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 12 7 >>>>>>>>> pw_poisson_set st >>>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 13 143 >>>>>>>>> pw_pool_create_pw >>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 14 82 >>>>>>>>> pw_create_c1d >>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 14 82 >>>>>>>>> pw_create_c1d >>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 13 143 >>>>>>>>> pw_pool_create_pw >>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 13 64 >>>>>>>>> pw_copy start >>>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 13 64 >>>>>>>>> pw_copy 0.003 >>>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 13 16 >>>>>>>>> pw_derive star >>>>>>>>> t Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 13 16 >>>>>>>>> pw_derive 0.00 >>>>>>>>> 6 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 13 144 >>>>>>>>> pw_pool_create_pw >>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 14 83 >>>>>>>>> pw_create_c1d >>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 14 83 >>>>>>>>> pw_create_c1d >>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 13 144 >>>>>>>>> pw_pool_create_pw >>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 13 65 >>>>>>>>> pw_copy start >>>>>>>>> Hostmem: 380 
MB GPUmem: 0 MB >>>>>>>>> 000000:000001<< 13 65 >>>>>>>>> pw_copy 0.004 >>>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> 000000:000001>> 13 17 >>>>>>>>> pw_derive star >>>>>>>>> t Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>> ``` >>>>>>>>> >>>>>>>>> for psmp the last 100 lines is: >>>>>>>>> >>>>>>>>> ``` >>>>>>>>> 000000:000002<< 9 7 >>>>>>>>> evaluate_core_matrix_traces >>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 9 7 rebuild_ks_matrix >>>>>>>>> start Ho >>>>>>>>> >>>>>>>>> stmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 10 7 >>>>>>>>> qs_ks_build_kohn_sham_matrix >>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 11 164 >>>>>>>>> pw_pool_create_pw st >>>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 93 >>>>>>>>> pw_create_c1d sta >>>>>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 93 >>>>>>>>> pw_create_c1d 0.0 >>>>>>>>> 00 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 11 164 >>>>>>>>> pw_pool_create_pw 0. >>>>>>>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 11 165 >>>>>>>>> pw_pool_create_pw st >>>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 94 >>>>>>>>> pw_create_c1d sta >>>>>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 94 >>>>>>>>> pw_create_c1d 0.0 >>>>>>>>> 00 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 11 165 >>>>>>>>> pw_pool_create_pw 0. 
>>>>>>>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 11 73 pw_copy >>>>>>>>> start Hostme >>>>>>>>> >>>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 11 73 pw_copy >>>>>>>>> 0.001 Hostme >>>>>>>>> >>>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 11 41 pw_axpy >>>>>>>>> start Hostme >>>>>>>>> >>>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 11 41 pw_axpy >>>>>>>>> 0.001 Hostme >>>>>>>>> >>>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 11 52 mp_sum_d >>>>>>>>> start Hostm >>>>>>>>> >>>>>>>>> em: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 11 52 mp_sum_d >>>>>>>>> 0.000 Hostm >>>>>>>>> >>>>>>>>> em: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 11 7 >>>>>>>>> pw_poisson_solve sta >>>>>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 7 >>>>>>>>> pw_poisson_rebuild >>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 7 >>>>>>>>> pw_poisson_rebuild >>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 166 >>>>>>>>> pw_pool_create_pw >>>>>>>>> >>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 13 95 >>>>>>>>> pw_create_c1d >>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 13 95 >>>>>>>>> pw_create_c1d >>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 166 >>>>>>>>> pw_pool_create_pw >>>>>>>>> >>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 74 pw_copy >>>>>>>>> start Hos >>>>>>>>> >>>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 74 pw_copy >>>>>>>>> 0.001 Hos >>>>>>>>> >>>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 7 >>>>>>>>> pw_multiply_with >>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 7 >>>>>>>>> pw_multiply_with >>>>>>>>> 0.001 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 75 pw_copy >>>>>>>>> start Hos >>>>>>>>> >>>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 75 pw_copy 
>>>>>>>>> 0.001 Hos >>>>>>>>> >>>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 7 >>>>>>>>> pw_integral_ab st >>>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 13 53 >>>>>>>>> mp_sum_d start >>>>>>>>> >>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 13 53 >>>>>>>>> mp_sum_d 0.000 >>>>>>>>> >>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 7 >>>>>>>>> pw_integral_ab 0. >>>>>>>>> 003 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 8 >>>>>>>>> pw_poisson_set st >>>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 13 167 >>>>>>>>> pw_pool_create_pw >>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 14 96 >>>>>>>>> pw_create_c1d >>>>>>>>> >>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 14 96 >>>>>>>>> pw_create_c1d >>>>>>>>> >>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 13 167 >>>>>>>>> pw_pool_create_pw >>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 13 76 >>>>>>>>> pw_copy start >>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 13 76 >>>>>>>>> pw_copy 0.001 >>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 13 19 >>>>>>>>> pw_derive star >>>>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 13 19 >>>>>>>>> pw_derive 0.00 >>>>>>>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 13 168 >>>>>>>>> pw_pool_create_pw >>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 14 97 >>>>>>>>> pw_create_c1d >>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 14 97 >>>>>>>>> pw_create_c1d >>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 13 168 >>>>>>>>> pw_pool_create_pw >>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 13 77 >>>>>>>>> pw_copy start >>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 13 77 >>>>>>>>> pw_copy 0.001 >>>>>>>>> Hostmem: 693 MB 
GPUmem: 0 MB >>>>>>>>> 000000:000002>> 13 20 >>>>>>>>> pw_derive star >>>>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>> ``` >>>>>>>>> >>>>>>>>> Thanks >>>>>>>>> Bartosz >>>>>>>>> >>>>>>>>> On Monday, 21 October 2024 at 08:58:34 UTC+2, Frederick >>>>>>>>> Stein wrote: >>>>>>>>> >>>>>>>>>> Dear Bartosz, >>>>>>>>>> I have no idea about the issue with LibXSMM. >>>>>>>>>> Regarding the trace, I do not know either, as there is not much >>>>>>>>>> that could break in pw_derive (it just performs multiplications) and the >>>>>>>>>> sequence of operations is too unspecific. It may be that the code actually >>>>>>>>>> breaks somewhere else. Can you do the same with the ssmp and post the last >>>>>>>>>> 100 lines? This way, we remove the asynchronicity issues for backtraces >>>>>>>>>> with the psmp version. >>>>>>>>>> Best, >>>>>>>>>> Frederick >>>>>>>>>> >>>>>>>>>> bartosz mazur wrote on Sunday, 20 October 2024 at 16:47:15 >>>>>>>>>> UTC+2: >>>>>>>>>> >>>>>>>>>>> The error is: >>>>>>>>>>> >>>>>>>>>>> ``` >>>>>>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>>>>>>>> CLX/DP TRY JIT STA COL >>>>>>>>>>> 0..13 2 2 0 0 >>>>>>>>>>> 14..23 0 0 0 0 >>>>>>>>>>> >>>>>>>>>>> 24..64 0 0 0 0 >>>>>>>>>>> Registry and code: 13 MB + 16 KB (gemm=2) >>>>>>>>>>> Command (PID=2607388): >>>>>>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>>>>>>>>>> H2O-9.inp -o H2O-9.out >>>>>>>>>>> Uptime: 5.288243 s >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> =================================================================================== >>>>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>>>>>> = RANK 0 PID 2607388 RUNNING AT r21c01b10 >>>>>>>>>>> >>>>>>>>>>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>>>>>>>>>> >>>>>>>>>>> =================================================================================== >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> ===================================================================================
>>>>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>>>>>> = RANK 1 PID 2607389 RUNNING AT r21c01b10 >>>>>>>>>>> = KILLED BY SIGNAL: 9 (Killed) >>>>>>>>>>> >>>>>>>>>>> =================================================================================== >>>>>>>>>>> ``` >>>>>>>>>>> >>>>>>>>>>> and the last 20 lines: >>>>>>>>>>> >>>>>>>>>>> ``` >>>>>>>>>>> 000000:000002<< 13 76 >>>>>>>>>>> pw_copy 0.001 >>>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 13 19 >>>>>>>>>>> pw_derive star >>>>>>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 13 19 >>>>>>>>>>> pw_derive 0.00 >>>>>>>>>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 13 168 >>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 14 97 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 14 97 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 13 168 >>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 13 77 >>>>>>>>>>> pw_copy start >>>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 13 77 >>>>>>>>>>> pw_copy 0.001 >>>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 13 20 >>>>>>>>>>> pw_derive star >>>>>>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> ``` >>>>>>>>>>> >>>>>>>>>>> Thanks! >>>>>>>>>>> pi?tek, 18 pa?dziernika 2024 o 17:18:39 UTC+2 Frederick Stein >>>>>>>>>>> napisa?(a): >>>>>>>>>>> >>>>>>>>>>>> Please pick one of the failing tests. Then, add the TRACE >>>>>>>>>>>> keyword to the &GLOBAL section and then run the test manually. This >>>>>>>>>>>> increases the size of the output file dramatically (to some million lines). >>>>>>>>>>>> Can you send me the last ~20 lines of the output? >>>>>>>>>>>> bartosz mazur schrieb am Freitag, 18. 
Oktober 2024 um 17:09:40 >>>>>>>>>>>> UTC+2: >>>>>>>>>>>> >>>>>>>>>>>>> I'm using the do_regtests.py script, not make regtesting, but I >>>>>>>>>>>>> assume it makes no difference. As I mentioned in my previous message, with >>>>>>>>>>>>> `--ompthreads 1` all tests passed for both ssmp and psmp. For ssmp >>>>>>>>>>>>> with `--ompthreads 2` I observe similar errors as for psmp with the same >>>>>>>>>>>>> setting; I provide example output as an attachment. >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks >>>>>>>>>>>>> Bartosz >>>>>>>>>>>>> >>>>>>>>>>>>> On Friday, 18 October 2024 at 16:24:16 UTC+2, Frederick Stein >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Dear Bartosz, >>>>>>>>>>>>>> What happens if you set the number of OpenMP threads to 1 >>>>>>>>>>>>>> (add '--ompthreads 1' to TESTOPTS)? What errors do you observe in case of >>>>>>>>>>>>>> the ssmp? >>>>>>>>>>>>>> Best, >>>>>>>>>>>>>> Frederick >>>>>>>>>>>>>> >>>>>>>>>>>>>> bartosz mazur wrote on Friday, 18 October 2024 at >>>>>>>>>>>>>> 15:37:43 UTC+2: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi Frederick, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> thanks again for the help. So I have tested different simulation >>>>>>>>>>>>>>> variants and I know that the problem occurs when using OMP. For MPI >>>>>>>>>>>>>>> calculations without OMP all tests pass. I have also tested the effect of >>>>>>>>>>>>>>> the `OMP_PROC_BIND` and `OMP_PLACES` parameters and apart >>>>>>>>>>>>>>> from the effect on simulation time, they have no significant effect on the >>>>>>>>>>>>>>> presence of errors.
Below are the results for ssmp: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ``` >>>>>>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, >>>>>>>>>>>>>>> time >>>>>>>>>>>>>>> spread, threads, 3850, 4144, 4, 290, 186min >>>>>>>>>>>>>>> spread, cores, 3831, 4144, 3, 310, 183min >>>>>>>>>>>>>>> spread, sockets, 3864, 4144, 3, 277, 104min >>>>>>>>>>>>>>> close, threads, 3879, 4144, 3, 262, 171min >>>>>>>>>>>>>>> close, cores, 3854, 4144, 0, 290, 168min >>>>>>>>>>>>>>> close, sockets, 3865, 4144, 3, 276, 104min >>>>>>>>>>>>>>> master, threads, 4121, 4144, 0, 23, 1002min >>>>>>>>>>>>>>> master, cores, 4121, 4144, 0, 23, 986min >>>>>>>>>>>>>>> master, sockets, 3942, 4144, 3, 199, 219min >>>>>>>>>>>>>>> false, threads, 3918, 4144, 0, 226, 178min >>>>>>>>>>>>>>> false, cores, 3919, 4144, 3, 222, 176min >>>>>>>>>>>>>>> false, sockets, 3856, 4144, 4, 284, 104min >>>>>>>>>>>>>>> ``` >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> and psmp: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ``` >>>>>>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, results >>>>>>>>>>>>>>> spread, threads, Summary: correct: 4097 / 4227; failed: 130; >>>>>>>>>>>>>>> 495min >>>>>>>>>>>>>>> spread, cores, 26 / 362 >>>>>>>>>>>>>>> spread, cores, 26 / 362 >>>>>>>>>>>>>>> close, threads, Summary: correct: 4133 / 4227; failed: 94; >>>>>>>>>>>>>>> 484min >>>>>>>>>>>>>>> close, cores, 60 / 362 >>>>>>>>>>>>>>> close, sockets, 13 / 362 >>>>>>>>>>>>>>> master, threads, 13 / 362 >>>>>>>>>>>>>>> master, cores, 79 / 362 >>>>>>>>>>>>>>> master, sockets, Summary: correct: 4153 / 4227; failed: 74; >>>>>>>>>>>>>>> 563min >>>>>>>>>>>>>>> false, threads, Summary: correct: 4153 / 4227; failed: 74; >>>>>>>>>>>>>>> 556min >>>>>>>>>>>>>>> false, cores, Summary: correct: 4106 / 4227; failed: 121; >>>>>>>>>>>>>>> 511min >>>>>>>>>>>>>>> false, sockets, 96 / 362 >>>>>>>>>>>>>>> not specified, not specified, Summary: correct: 4129 / 4227; >>>>>>>>>>>>>>> failed: 98; 263min >>>>>>>>>>>>>>> ``` >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Any ideas what I could do next to have more 
information >>>>>>>>>>>>>>> about the source of the problem or maybe you see a potential solution at >>>>>>>>>>>>>>> this stage? I would appreciate any further help. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Best >>>>>>>>>>>>>>> Bartosz >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Friday, 11 October 2024 at 14:30:25 UTC+2, Frederick >>>>>>>>>>>>>>> Stein wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Dear Bartosz, >>>>>>>>>>>>>>>> If I am not mistaken, you used 8 OpenMP threads. The tests >>>>>>>>>>>>>>>> do not run that efficiently with such a large number of threads; 2 should >>>>>>>>>>>>>>>> be sufficient. >>>>>>>>>>>>>>>> The test results suggest that most of the functionality may >>>>>>>>>>>>>>>> work, but due to a missing backtrace (or similar information) it is hard to >>>>>>>>>>>>>>>> tell why the tests fail. You could also try to run some of the single-node tests >>>>>>>>>>>>>>>> to assess the stability of CP2K. >>>>>>>>>>>>>>>> Best, >>>>>>>>>>>>>>>> Frederick >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> bartosz mazur wrote on Friday, 11 October 2024 at >>>>>>>>>>>>>>>> 13:48:42 UTC+2: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Sorry, forgot attachments. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/5473442a-c035-4d51-833f-4c340767ee66n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From f.stein at hzdr.de Fri Oct 25 12:27:36 2024 From: f.stein at hzdr.de (Frederick Stein) Date: Fri, 25 Oct 2024 05:27:36 -0700 (PDT) Subject: [CP2K-user] [CP2K:20819] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types In-Reply-To: <5473442a-c035-4d51-833f-4c340767ee66n@googlegroups.com> References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> <3c0b331a-fe25-4176-8692-ff5d7c466c44n@googlegroups.com> <9c503856-0751-48b3-8bfb-e37cf5cb91a6n@googlegroups.com> <024be246-9696-4577-a330-f5a234dc51edn@googlegroups.com> <9027c53b-4155-418c-9d08-ea77e5ea5bcfn@googlegroups.com> <7042b62f-62de-43ad-ad94-b940977c9e2an@googlegroups.com> <5473442a-c035-4d51-833f-4c340767ee66n@googlegroups.com> Message-ID: Regarding the other issues: I can confirm them but cannot provide fixes for all of them because they probably trigger bugs in ifort. Because ifort is already deprecated, these bugs will probably not be fixed. Furthermore, we do not see any issues on our Intel CI. I will fix what I can, but some of them will remain as we focus our efforts on supporting the new ifx compiler. Frederick Stein wrote on Friday, 25 October 2024 at 11:46:00 UTC+2: > Dear Bartosz, > I will check the other issues with your regtests. > Regarding your latest issue, please provide more information such as an > output file or a hint on the context. If I am supposed to retry the > calculation on my local machine, I need all additional input files such as > your plumed file. 
I can run your input file up to the point that CP2K needs > plumed. > Best, > Frederick > bartosz mazur schrieb am Freitag, 25. Oktober 2024 um 10:15:19 UTC+2: > >> I just got another error with LibXSMM, now in my regular simulation and >> without using OpenMP. This is the error: >> >> ``` >> [1729843139.920274] [r23c01b04:2913 :0] ib_md.c:295 UCX ERROR >> ibv_reg_mr(address=0x14f0b46fc080, length=7424, access=0xf) failed: Cannot >> allocate memory >> [1729843139.920290] [r23c01b04:2913 :0] ucp_mm.c:70 UCX ERROR >> failed to register address 0x14f0b46fc080 (host) length 7424 on >> md[4]=mlx5_0: Input/output error (md supports: host) >> >> LIBXSMM_VERSION: develop-1.17-3834 (25693946)[1729843139.932647] >> [r23c01b04:2945 :0] ib_md.c:295 UCX ERROR >> ibv_reg_mr(address=0x1491f069e040, length=8128, access=0xf) failed: Cannot >> allocate memory >> [1729843139.932660] [r23c01b04:2945 :0] ucp_mm.c:70 UCX ERROR >> failed to register address 0x1491f069e040 (host) length 8128 on >> md[4]=mlx5_0: Input/output error (md supports: host) >> >> >> CLX/DP TRY JIT STA COL >> 0..13 4 4 0 0 >> 14..23 4 4 0 0 >> >> 24..64 0 0 0 0 >> Registry and code: 13 MB + 80 KB (gemm=8) >> Command (PID=2913): >> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >> cp2k.inp -o cp2k.out >> Uptime: 407633.177169 s >> ``` >> >> and this is simulation input I'm using: >> >> ``` >> &GLOBAL >> PROJECT uam1o_npt_rms >> RUN_TYPE MD >> PRINT_LEVEL LOW >> PREFERRED_DIAG_LIBRARY SCALAPACK >> &END GLOBAL >> >> &FORCE_EVAL >> METHOD QUICKSTEP >> STRESS_TENSOR ANALYTICAL >> &DFT >> BASIS_SET_FILE_NAME BASIS_MOLOPT_UZH >> POTENTIAL_FILE_NAME POTENTIAL_UZH >> &MGRID >> CUTOFF 500 >> &END MGRID >> &XC >> &XC_FUNCTIONAL PBE >> &END XC_FUNCTIONAL >> &VDW_POTENTIAL >> POTENTIAL_TYPE PAIR_POTENTIAL >> &PAIR_POTENTIAL >> TYPE DFTD3(BJ) >> PARAMETER_FILE_NAME dftd3.dat >> REFERENCE_FUNCTIONAL PBE >> R_CUTOFF 25.0 >> &END PAIR_POTENTIAL >> &END VDW_POTENTIAL >> &END XC >> &END DFT >> >> &SUBSYS >> 
&CELL >> A 12.2807999 0.0000000 0.0000000 >> B 7.6258602 9.6257200 0.0000000 >> C -2.1557724 -1.0420258 18.0042801 >> &END CELL >> &COORD >> Zn 11.37811 4.60286 0.24515 >> Zn 8.15435 3.05288 8.74518 >> Zn 6.37590 3.97311 17.74650 >> Zn 9.59842 5.54014 9.24747 >> S 11.79344 6.72692 17.10850 >> S 4.06825 3.00573 9.90358 >> S 5.95830 1.84422 0.90027 >> S 13.67407 5.58944 8.10767 >> O 10.72408 3.58291 1.89315 >> O 8.51986 4.01962 1.53085 >> O 6.60135 3.91587 7.68572 >> O 7.74637 5.79259 8.21600 >> O 15.32810 8.58246 5.10041 >> O 9.35608 2.93551 7.09500 >> O 10.38999 4.93007 7.45977 >> O 11.66491 6.35111 1.31266 >> O 9.48582 6.62478 0.77364 >> O 2.59062 2.40094 3.91496 >> O 7.03031 4.99173 16.09885 >> O 9.23544 4.56122 16.46252 >> O 11.14602 4.67776 10.31440 >> O 10.00982 2.79915 9.77218 >> O 2.41388 0.01898 12.91899 >> O 8.39375 5.66143 10.89628 >> O 7.36998 3.66087 10.53589 >> O 6.08863 2.22161 16.68336 >> O 8.26988 1.95313 17.21650 >> O 15.16937 6.16381 14.09906 >> N 13.25907 3.80728 0.04001 >> N 2.36335 -0.74130 17.33402 >> N 7.60676 1.08576 8.95623 >> N 15.77729 5.75974 9.67861 >> N 4.49430 4.76652 17.95756 >> N 15.38873 9.31230 0.67467 >> N 10.14308 7.50848 9.04236 >> N 1.96529 2.83557 8.33233 >> C 6.76554 5.18292 7.68414 >> C 14.28210 4.11624 0.86006 >> C 9.47998 3.39622 2.09658 >> C 3.20112 3.42080 0.84626 >> C 9.91466 1.18589 3.17244 >> C 9.08210 2.29987 3.02657 >> C 5.74710 6.04945 7.01821 >> C 7.83265 2.30920 3.66005 >> C 3.35793 2.34328 -0.04029 >> C 4.51663 1.46385 -0.02755 >> C 16.24194 7.75266 5.73606 >> C 4.78940 5.52817 6.14198 >> C 7.40810 1.21174 4.39947 >> C 16.18016 6.38244 5.49010 >> C 9.48869 0.06986 3.88005 >> C 11.27238 1.77457 17.14330 >> C 5.77166 7.43009 7.27236 >> C 11.14819 8.24901 17.58588 >> C 8.22170 0.08058 4.47135 >> C 0.15087 1.02286 17.07544 >> C 17.16180 8.28565 6.64351 >> C 10.57067 7.01060 1.31282 >> C 6.72654 0.47459 8.14002 >> C 10.27972 3.79035 6.89470 >> C 14.15006 8.72843 8.15880 >> C 11.73751 2.06868 5.82537 >> C 11.38838 
3.41515 5.96966 >> C 10.52304 8.34339 1.98566 >> C 12.16584 4.39562 5.33967 >> C 14.89762 7.93801 9.04648 >> C 14.86698 6.48365 9.03575 >> C 2.67167 1.17044 3.27681 >> C 11.52468 8.76552 2.86608 >> C 13.29140 4.04007 4.60622 >> C 3.78230 0.36534 3.52266 >> C 12.87823 1.70260 5.12344 >> C 8.27761 0.34001 9.85941 >> C 9.42677 9.18364 1.73295 >> C 3.27553 4.45658 9.42657 >> C 13.66559 2.69775 4.53650 >> C 15.77023 8.59069 9.93240 >> C 1.68356 0.78491 2.36643 >> C 10.98451 3.41041 10.31327 >> C 3.46873 4.45681 17.14097 >> C 8.27403 5.18373 15.89814 >> C 14.54907 5.15099 17.15930 >> C 7.83119 7.39584 14.82858 >> C 8.66916 6.28563 14.97331 >> C 11.99928 2.54577 10.98702 >> C 9.92072 6.28547 14.34388 >> C 16.54982 7.26986 0.04271 >> C 15.39103 8.14919 0.03189 >> C 1.50023 0.84646 12.27989 >> C 12.95126 3.06908 11.86817 >> C 10.34198 7.38826 13.61070 >> C 1.55836 2.21699 12.52561 >> C 8.25354 8.51697 14.12666 >> C 6.48249 6.79770 0.85630 >> C 11.97760 1.16465 10.73446 >> C 6.60385 0.32218 0.42301 >> C 9.52282 8.51550 13.54043 >> C 17.60321 7.54791 0.92891 >> C 0.58530 0.31102 11.36884 >> C 7.18362 1.56332 16.68291 >> C 11.01926 8.11905 9.86341 >> C 7.47582 4.80132 11.10039 >> C 3.59282 -0.13430 9.84955 >> C 6.01179 6.51430 12.17471 >> C 6.36853 5.17005 12.02942 >> C 7.23131 0.22715 16.01652 >> C 5.59963 4.18477 12.66234 >> C 2.84614 0.65728 8.96213 >> C 2.87561 2.11161 8.97508 >> C 15.08536 7.39548 14.73440 >> C 6.23001 -0.19920 15.13769 >> C 4.47482 4.53325 13.40042 >> C 13.97400 8.19851 14.48576 >> C 4.87173 6.87322 12.88120 >> C 9.47231 8.25578 8.14046 >> C 8.32790 -0.61137 16.27301 >> C 14.46698 4.13864 8.58475 >> C 4.09294 5.87331 13.47165 >> C 1.97640 0.00563 8.07267 >> C 16.07240 7.78504 15.64417 >> H 14.10215 4.93465 1.55678 >> H 3.98110 3.68721 1.55899 >> H 10.89072 1.19647 2.69205 >> H 7.19958 3.19021 3.56839 >> H 4.75923 4.45384 5.96230 >> H 6.45299 1.21835 4.92062 >> H 15.44211 6.00062 4.78824 >> H 17.75043 8.81610 3.97156 >> H 10.41563 1.57993 16.49923 >> H 
6.49332 7.81303 7.99143 >> H 0.24800 0.19739 16.37425 >> H 9.53586 -0.26872 6.84508 >> H 6.19685 1.12218 7.44173 >> H 13.45550 8.28133 7.44815 >> H 11.11633 1.31384 6.30260 >> H 11.87413 5.44074 5.42962 >> H 12.38442 8.12016 3.04474 >> H 13.88694 4.78876 4.08791 >> H 4.53915 0.70283 4.22717 >> H 0.88557 0.65625 5.03328 >> H 8.96418 0.89159 10.50060 >> H 8.67994 8.85961 1.01083 >> H 16.35704 8.00331 10.63471 >> H 13.12606 1.45212 2.16563 >> H 3.64702 3.63930 16.44281 >> H 13.76743 4.88477 16.44833 >> H 6.85355 7.37827 15.30535 >> H 10.55820 5.40745 14.43410 >> H 12.97886 4.14375 12.04672 >> H 11.29905 7.38966 13.09313 >> H 2.29216 2.60091 13.23073 >> H -0.01303 -0.23279 14.03603 >> H 7.34113 6.99275 1.49776 >> H 11.26049 0.78023 10.01184 >> H 17.50743 8.37258 1.63130 >> H 8.21398 8.86531 11.16822 >> H 11.54834 7.47018 10.56097 >> H 4.28503 0.31205 10.56295 >> H 6.62643 7.27289 11.69479 >> H 5.89748 3.14154 12.57118 >> H 5.36986 0.44461 14.95599 >> H 3.88656 3.78035 13.92095 >> H 13.21826 7.85764 13.78163 >> H 16.85773 7.91771 12.97237 >> H 8.78884 7.70469 7.49554 >> H 9.07452 -0.28399 16.99402 >> H 1.39009 0.59398 7.37083 >> H 4.63062 7.11938 15.84758 >> &END COORD >> &KIND Zn >> BASIS_SET TZVP-MOLOPT-PBE-GTH-q12 >> POTENTIAL GTH-PBE-q12 >> &END KIND >> &KIND S >> BASIS_SET TZVP-MOLOPT-PBE-GTH-q6 >> POTENTIAL GTH-PBE-q6 >> &END KIND >> &KIND O >> BASIS_SET TZVP-MOLOPT-PBE-GTH-q6 >> POTENTIAL GTH-PBE-q6 >> &END KIND >> &KIND N >> BASIS_SET TZVP-MOLOPT-PBE-GTH-q5 >> POTENTIAL GTH-PBE-q5 >> &END KIND >> &KIND C >> BASIS_SET TZVP-MOLOPT-PBE-GTH-q4 >> POTENTIAL GTH-PBE-q4 >> &END KIND >> &KIND H >> BASIS_SET TZVP-MOLOPT-PBE-GTH-q1 >> POTENTIAL GTH-PBE-q1 >> &END KIND >> &END SUBSYS >> &END FORCE_EVAL >> >> &MOTION >> &MD >> ENSEMBLE NPT_I >> TEMPERATURE 298 >> TIMESTEP 1.0 >> STEPS 50000 >> &THERMOSTAT >> TYPE NOSE >> &NOSE >> LENGTH 3 >> YOSHIDA 3 >> TIMECON 1000 >> &END NOSE >> &END THERMOSTAT >> &BAROSTAT >> PRESSURE 1.0 >> TIMECON 4000 >> &END BAROSTAT >> &END MD >> 
&FREE_ENERGY >> METHOD METADYN >> &METADYN >> USE_PLUMED .TRUE. >> PLUMED_INPUT_FILE plumed.dat >> &END METADYN >> &END FREE_ENERGY >> &PRINT >> &TRAJECTORY >> &EACH >> MD 5 >> &END EACH >> &END TRAJECTORY >> &FORCES >> UNIT eV*angstrom^-1 >> &EACH >> MD 5 >> &END EACH >> &END FORCES >> &CELL >> &EACH >> MD 5 >> &END EACH >> &END CELL >> &END PRINT >> &END MOTION >> ``` >> >> This simulation was performed with previous version of cp2k (so without >> your fix). >> pi?tek, 25 pa?dziernika 2024 o 09:50:47 UTC+2 bartosz mazur napisa?(a): >> >>> Hi Frederick, >>> >>> it helped with most of the tests! Now only 13 have failed. In the >>> attachments you will find full output from regtests and here is output from >>> single job with TRACE enabled: >>> >>> ``` >>> Loading intel/2024a >>> Loading requirement: GCCcore/13.3.0 zlib/1.3.1-GCCcore-13.3.0 >>> binutils/2.42-GCCcore-13.3.0 intel-compilers/2024.2.0 >>> numactl/2.0.18-GCCcore-13.3.0 UCX/1.16.0-GCCcore-13.3.0 >>> impi/2021.13.0-intel-compilers-2024.2.0 imkl/2024.2.0 iimpi/2024a >>> imkl-FFTW/2024.2.0-iimpi-2024a >>> >>> Currently Loaded Modulefiles: >>> 1) GCCcore/13.3.0 7) >>> impi/2021.13.0-intel-compilers-2024.2.0 >>> 2) zlib/1.3.1-GCCcore-13.3.0 8) imkl/2024.2.0 >>> >>> 3) binutils/2.42-GCCcore-13.3.0 9) iimpi/2024a >>> >>> 4) intel-compilers/2024.2.0 10) imkl-FFTW/2024.2.0-iimpi-2024a >>> >>> 5) numactl/2.0.18-GCCcore-13.3.0 11) intel/2024a >>> >>> 6) UCX/1.16.0-GCCcore-13.3.0 >>> 2 MPI processes with 2 OpenMP threads each >>> started at Fri Oct 25 09:34:34 CEST 2024 in /lustre/tmp/slurm/3127182 >>> SIRIUS 7.6.1, git hash: >>> https://api.github.com/repos/electronic-structure/SIRIUS/git/ref/tags/v7.6.1 >>> Warning! Compiled in 'debug' mode with assert statements enabled! 
>>> >>> >>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>> CLX/DP TRY JIT STA COL >>> 0..13 8 8 0 0 >>> 14..23 0 0 0 0 >>> 24..64 0 0 0 0 >>> Registry and code: 13 MB + 64 KB (gemm=8) >>> Command (PID=423503): >>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>> dftd3src1.inp -o dftd3src1.out >>> Uptime: 2.752513 s >>> >>> >>> >>> =================================================================================== >>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>> = RANK 0 PID 423503 RUNNING AT r21c01b03 >>> >>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>> >>> =================================================================================== >>> >>> >>> =================================================================================== >>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>> = RANK 1 PID 423504 RUNNING AT r21c01b03 >>> >>> = KILLED BY SIGNAL: 9 (Killed) >>> >>> =================================================================================== >>> finished at Fri Oct 25 09:34:39 CEST 2024 >>> ``` >>> >>> and the last lines: >>> >>> ``` >>> 000000:000002<< 13 3 >>> mp_sendrecv_dm2 >>> 0.000 Hostmem: 955 MB GPUmem: 0 MB >>> 000000:000002>> 13 4 >>> mp_sendrecv_dm2 >>> start Hostmem: 955 MB GPUmem: 0 MB >>> 000000:000002<< 13 4 >>> mp_sendrecv_dm2 >>> 0.000 Hostmem: 955 MB GPUmem: 0 MB >>> 000000:000002<< 12 2 pw_nn_compose_r >>> 0 >>> .003 Hostmem: 955 MB GPUmem: 0 MB >>> 000000:000002<< 11 1 xc_pw_derive >>> 0.003 H >>> ostmem: 955 MB GPUmem: 0 MB >>> 000000:000002>> 11 5 pw_zero >>> start Hostme >>> m: 955 MB GPUmem: 0 MB >>> 000000:000002<< 11 5 pw_zero >>> 0.000 Hostme >>> m: 955 MB GPUmem: 0 MB >>> 000000:000002>> 11 2 xc_pw_derive >>> start H >>> ostmem: 955 MB GPUmem: 0 MB >>> 000000:000002>> 12 3 pw_nn_compose_r >>> s >>> tart Hostmem: 955 MB GPUmem: 0 MB >>> 000000:000002>> 13 5 >>> mp_sendrecv_dm2 >>> start Hostmem: 955 MB GPUmem: 0 MB >>> 000000:000002<< 13 5 >>> mp_sendrecv_dm2 >>> 0.000 
Hostmem: 955 MB GPUmem: 0 MB >>> 000000:000002>> 13 6 >>> mp_sendrecv_dm2 >>> start Hostmem: 955 MB GPUmem: 0 MB >>> 000000:000002<< 13 6 >>> mp_sendrecv_dm2 >>> 0.000 Hostmem: 955 MB GPUmem: 0 MB >>> 000000:000002<< 12 3 pw_nn_compose_r >>> 0 >>> .002 Hostmem: 955 MB GPUmem: 0 MB >>> 000000:000002<< 11 2 xc_pw_derive >>> 0.002 H >>> ostmem: 955 MB GPUmem: 0 MB >>> 000000:000002>> 11 6 pw_zero >>> start Hostme >>> m: 955 MB GPUmem: 0 MB >>> 000000:000002<< 11 6 pw_zero >>> 0.001 Hostme >>> m: 960 MB GPUmem: 0 MB >>> 000000:000002>> 11 3 xc_pw_derive >>> start H >>> ostmem: 960 MB GPUmem: 0 MB >>> 000000:000002>> 12 4 pw_nn_compose_r >>> s >>> tart Hostmem: 960 MB GPUmem: 0 MB >>> 000000:000002>> 13 7 >>> mp_sendrecv_dm2 >>> start Hostmem: 960 MB GPUmem: 0 MB >>> 000000:000002<< 13 7 >>> mp_sendrecv_dm2 >>> 0.000 Hostmem: 960 MB GPUmem: 0 MB >>> 000000:000002>> 13 8 >>> mp_sendrecv_dm2 >>> start Hostmem: 960 MB GPUmem: 0 MB >>> 000000:000002<< 13 8 >>> mp_sendrecv_dm2 >>> 0.000 Hostmem: 960 MB GPUmem: 0 MB >>> 000000:000002<< 12 4 pw_nn_compose_r >>> 0 >>> .002 Hostmem: 960 MB GPUmem: 0 MB >>> 000000:000002<< 11 3 xc_pw_derive >>> 0.002 H >>> ostmem: 960 MB GPUmem: 0 MB >>> 000000:000002>> 11 1 >>> pw_spline_scale_deriv >>> start Hostmem: 960 MB GPUmem: 0 MB >>> 000000:000002<< 11 1 >>> pw_spline_scale_deriv >>> 0.001 Hostmem: 960 MB GPUmem: 0 MB >>> 000000:000002>> 11 20 >>> pw_pool_give_back_pw >>> start Hostmem: 965 MB GPUmem: 0 MB >>> 000000:000002<< 11 20 >>> pw_pool_give_back_pw >>> 0.000 Hostmem: 965 MB GPUmem: 0 MB >>> 000000:000002>> 11 21 >>> pw_pool_give_back_pw >>> start Hostmem: 965 MB GPUmem: 0 MB >>> 000000:000002<< 11 21 >>> pw_pool_give_back_pw >>> 0.000 Hostmem: 965 MB GPUmem: 0 MB >>> 000000:000002>> 11 22 >>> pw_pool_give_back_pw >>> start Hostmem: 965 MB GPUmem: 0 MB >>> 000000:000002<< 11 22 >>> pw_pool_give_back_pw >>> 0.000 Hostmem: 965 MB GPUmem: 0 MB >>> 000000:000002>> 11 23 >>> pw_pool_give_back_pw >>> start Hostmem: 965 MB GPUmem: 0 MB 
>>> 000000:000002<< 11 23 >>> pw_pool_give_back_pw >>> 0.000 Hostmem: 965 MB GPUmem: 0 MB >>> 000000:000002>> 11 1 xc_functional_eval >>> s >>> tart Hostmem: 965 MB GPUmem: 0 MB >>> 000000:000002>> 12 1 b97_lda_eval >>> star >>> t Hostmem: 965 MB GPUmem: 0 MB >>> 000000:000002<< 12 1 b97_lda_eval >>> 0.10 >>> 3 Hostmem: 979 MB GPUmem: 0 MB >>> 000000:000002<< 11 1 xc_functional_eval >>> 0 >>> .103 Hostmem: 979 MB GPUmem: 0 MB >>> 000000:000002<< 10 1 >>> xc_rho_set_and_dset_create >>> 0.120 Hostmem: 979 MB GPUmem: 0 MB >>> 000000:000002>> 10 1 check_for_derivatives >>> s >>> tart Hostmem: 979 MB GPUmem: 0 MB >>> 000000:000002<< 10 1 check_for_derivatives >>> 0 >>> .000 Hostmem: 979 MB GPUmem: 0 MB >>> 000000:000002>> 10 14 pw_create_r3d >>> start Hos >>> tmem: 979 MB GPUmem: 0 MB >>> 000000:000002<< 10 14 pw_create_r3d >>> 0.000 Hos >>> tmem: 979 MB GPUmem: 0 MB >>> 000000:000002>> 10 15 pw_create_r3d >>> start Hos >>> tmem: 979 MB GPUmem: 0 MB >>> 000000:000002<< 10 15 pw_create_r3d >>> 0.000 Hos >>> tmem: 979 MB GPUmem: 0 MB >>> 000000:000002>> 10 16 pw_create_r3d >>> start Hos >>> tmem: 979 MB GPUmem: 0 MB >>> 000000:000002<< 10 16 pw_create_r3d >>> 0.000 Hos >>> tmem: 979 MB GPUmem: 0 MB >>> 000000:000002>> 10 17 pw_create_r3d >>> start Hos >>> tmem: 979 MB GPUmem: 0 MB >>> 000000:000002<< 10 17 pw_create_r3d >>> 0.000 Hos >>> tmem: 979 MB GPUmem: 0 MB >>> ``` >>> >>> Best >>> Bartosz >>> >>> ?roda, 23 pa?dziernika 2024 o 09:15:33 UTC+2 Frederick Stein napisa?(a): >>> >>>> Dear Bartosz, >>>> My fix is merged. Can you switch to the CP2K master and try it again? >>>> We are still working on a few issues with the Intel compilers such that we >>>> may eventually migrate from ifort to ifx. >>>> Best, >>>> Frederick >>>> >>>> bartosz mazur schrieb am Dienstag, 22. Oktober 2024 um 17:45:21 UTC+2: >>>> >>>>> Great! Thank you for your help. 
>>>>> >>>>> Best >>>>> Bartosz >>>>> >>>>> On Tuesday, 22 October 2024 at 15:24:04 UTC+2, Frederick Stein >>>>> wrote: >>>>> >>>>>> I have a fix for it. In contrast to my first thought, it is a case of >>>>>> invalid type conversion from real to complex numbers (yes, Fortran is >>>>>> rather strict about it) in pw_derive. This may also be present in a few >>>>>> other spots. I am currently running more tests and I will open a pull >>>>>> request within the next few days. >>>>>> Best, >>>>>> Frederick >>>>>> >>>>>> Frederick Stein wrote on Tuesday, 22 October 2024 at 13:12:49 >>>>>> UTC+2: >>>>>> >>>>>>> I can reproduce the error locally. I am investigating it now. >>>>>>> >>>>>>> bartosz mazur wrote on Tuesday, 22 October 2024 at 11:58:57 >>>>>>> UTC+2: >>>>>>> >>>>>>>> I was loading it as it was needed for compilation. I have unloaded >>>>>>>> the module, but the error still occurs: >>>>>>>> >>>>>>>> ``` >>>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>>>>> CLX/DP TRY JIT STA COL >>>>>>>> 0..13 2 2 0 0 >>>>>>>> 14..23 0 0 0 0 >>>>>>>> 24..64 0 0 0 0 >>>>>>>> Registry and code: 13 MB + 16 KB (gemm=2) >>>>>>>> Command (PID=15485): >>>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>>>>>>> H2O-9.inp -o H2O-9.out >>>>>>>> Uptime: 1.757102 s >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> =================================================================================== >>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>>> = RANK 0 PID 15485 RUNNING AT r30c01b01 >>>>>>>> >>>>>>>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>>>>>>> >>>>>>>> =================================================================================== >>>>>>>> >>>>>>>> >>>>>>>> =================================================================================== >>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>>> = RANK 1 PID 15486 RUNNING AT r30c01b01 >>>>>>>> >>>>>>>> = KILLED BY SIGNAL: 9 (Killed) >>>>>>>> >>>>>>>> 
=================================================================================== >>>>>>>> ``` >>>>>>>> >>>>>>>> >>>>>>>> and the last 100 lines: >>>>>>>> >>>>>>>> ``` >>>>>>>> 000000:000002>> 11 37 pw_create_c1d >>>>>>>> start >>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 11 37 pw_create_c1d >>>>>>>> 0.000 >>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 10 64 >>>>>>>> pw_pool_create_pw 0.000 >>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 10 25 pw_copy >>>>>>>> start Hostmem: >>>>>>>> 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 10 25 pw_copy >>>>>>>> 0.001 Hostmem: >>>>>>>> 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 10 17 pw_axpy >>>>>>>> start Hostmem: >>>>>>>> 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 10 17 pw_axpy >>>>>>>> 0.001 Hostmem: >>>>>>>> 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 10 19 mp_sum_d >>>>>>>> start Hostmem: >>>>>>>> 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 10 19 mp_sum_d >>>>>>>> 0.000 Hostmem: >>>>>>>> 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 10 3 pw_poisson_solve >>>>>>>> start >>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 11 3 >>>>>>>> pw_poisson_rebuild s >>>>>>>> tart Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 11 3 >>>>>>>> pw_poisson_rebuild 0 >>>>>>>> .000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 11 65 >>>>>>>> pw_pool_create_pw st >>>>>>>> art Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 38 >>>>>>>> pw_create_c1d sta >>>>>>>> rt Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 38 >>>>>>>> pw_create_c1d 0.0 >>>>>>>> 00 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 11 65 >>>>>>>> pw_pool_create_pw 0. 
>>>>>>>> 000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 11 26 pw_copy >>>>>>>> start Hostme >>>>>>>> m: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 11 26 pw_copy >>>>>>>> 0.001 Hostme >>>>>>>> m: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 11 3 >>>>>>>> pw_multiply_with sta >>>>>>>> rt Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 11 3 >>>>>>>> pw_multiply_with 0.0 >>>>>>>> 01 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 11 27 pw_copy >>>>>>>> start Hostme >>>>>>>> m: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 11 27 pw_copy >>>>>>>> 0.001 Hostme >>>>>>>> m: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 11 3 >>>>>>>> pw_integral_ab start >>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 20 mp_sum_d >>>>>>>> start Ho >>>>>>>> stmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 20 mp_sum_d >>>>>>>> 0.001 Ho >>>>>>>> stmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 11 3 >>>>>>>> pw_integral_ab 0.004 >>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 11 4 >>>>>>>> pw_poisson_set start >>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 66 >>>>>>>> pw_pool_create_pw >>>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 13 39 >>>>>>>> pw_create_c1d >>>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 13 39 >>>>>>>> pw_create_c1d >>>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 66 >>>>>>>> pw_pool_create_pw >>>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 28 pw_copy >>>>>>>> start Hos >>>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 28 pw_copy >>>>>>>> 0.001 Hos >>>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 7 pw_derive >>>>>>>> start H >>>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 7 pw_derive >>>>>>>> 0.002 H >>>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 67 >>>>>>>> pw_pool_create_pw >>>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 
13 40 >>>>>>>> pw_create_c1d >>>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 13 40 >>>>>>>> pw_create_c1d >>>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 67 >>>>>>>> pw_pool_create_pw >>>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 29 pw_copy >>>>>>>> start Hos >>>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 29 pw_copy >>>>>>>> 0.001 Hos >>>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 8 pw_derive >>>>>>>> start H >>>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 8 pw_derive >>>>>>>> 0.002 H >>>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 68 >>>>>>>> pw_pool_create_pw >>>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 13 41 >>>>>>>> pw_create_c1d >>>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 13 41 >>>>>>>> pw_create_c1d >>>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 68 >>>>>>>> pw_pool_create_pw >>>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 30 pw_copy >>>>>>>> start Hos >>>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002<< 12 30 pw_copy >>>>>>>> 0.001 Hos >>>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>>> 000000:000002>> 12 9 pw_derive >>>>>>>> start H >>>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>>> ``` >>>>>>>> >>>>>>>> This is the list of currently loaded modules (all come with intel): >>>>>>>> >>>>>>>> ``` >>>>>>>> Currently Loaded Modulefiles: >>>>>>>> 1) GCCcore/13.3.0 7) >>>>>>>> impi/2021.13.0-intel-compilers-2024.2.0 >>>>>>>> 2) zlib/1.3.1-GCCcore-13.3.0 8) imkl/2024.2.0 >>>>>>>> >>>>>>>> 3) binutils/2.42-GCCcore-13.3.0 9) iimpi/2024a >>>>>>>> >>>>>>>> 4) intel-compilers/2024.2.0 10) >>>>>>>> imkl-FFTW/2024.2.0-iimpi-2024a >>>>>>>> 5) numactl/2.0.18-GCCcore-13.3.0 11) intel/2024a >>>>>>>> >>>>>>>> 6) UCX/1.16.0-GCCcore-13.3.0 >>>>>>>> ``` >>>>>>>> wtorek, 22 pa?dziernika 2024 o 11:12:57 UTC+2 Frederick Stein >>>>>>>> napisa?(a): >>>>>>>> 
>>>>>>>>> Dear Bartosz, >>>>>>>>> I am currently running some tests with the latest Intel compiler >>>>>>>>> myself. What bothers me about your setup is the module GCC13/13.3.0 . Why >>>>>>>>> is it loaded? Can you unload it? This would at least reduce potential >>>>>>>>> interferences with between the Intel and the GCC compilers. >>>>>>>>> Best, >>>>>>>>> Frederick >>>>>>>>> >>>>>>>>> bartosz mazur schrieb am Montag, 21. Oktober 2024 um 16:33:45 >>>>>>>>> UTC+2: >>>>>>>>> >>>>>>>>>> The error for ssmp is: >>>>>>>>>> >>>>>>>>>> ``` >>>>>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>>>>>>> CLX/DP TRY JIT STA COL >>>>>>>>>> 0..13 4 4 0 0 >>>>>>>>>> 14..23 0 0 0 0 >>>>>>>>>> 24..64 0 0 0 0 >>>>>>>>>> Registry and code: 13 MB + 32 KB (gemm=4) >>>>>>>>>> Command (PID=54845): >>>>>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>>>>>>> H2O-9.inp -o H2O-9.out >>>>>>>>>> Uptime: 2.861583 s >>>>>>>>>> /var/spool/slurmd/r30c01b15/job3120330/slurm_script: line 36: >>>>>>>>>> 54845 Segmentation fault (core dumped) >>>>>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>>>>>>> H2O-9.inp -o H2O-9.out >>>>>>>>>> ``` >>>>>>>>>> >>>>>>>>>> and the last 100 lines of output: >>>>>>>>>> >>>>>>>>>> ``` >>>>>>>>>> 000000:000001>> 12 20 mp_sum_d >>>>>>>>>> start Ho >>>>>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 12 20 mp_sum_d >>>>>>>>>> 0.000 Ho >>>>>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 11 13 >>>>>>>>>> dbcsr_dot_sd 0.000 H >>>>>>>>>> ostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 10 12 >>>>>>>>>> calculate_ptrace_kp 0.0 >>>>>>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 9 6 >>>>>>>>>> evaluate_core_matrix_traces >>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 9 6 rebuild_ks_matrix >>>>>>>>>> start Ho >>>>>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 10 6 >>>>>>>>>> qs_ks_build_kohn_sham_matrix >>>>>>>>>> 
start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 11 140 >>>>>>>>>> pw_pool_create_pw st >>>>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 12 79 >>>>>>>>>> pw_create_c1d sta >>>>>>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 12 79 >>>>>>>>>> pw_create_c1d 0.0 >>>>>>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 11 140 >>>>>>>>>> pw_pool_create_pw 0. >>>>>>>>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 11 141 >>>>>>>>>> pw_pool_create_pw st >>>>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 12 80 >>>>>>>>>> pw_create_c1d sta >>>>>>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 12 80 >>>>>>>>>> pw_create_c1d 0.0 >>>>>>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 11 141 >>>>>>>>>> pw_pool_create_pw 0. >>>>>>>>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 11 61 pw_copy >>>>>>>>>> start Hostme >>>>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 11 61 pw_copy >>>>>>>>>> 0.004 Hostme >>>>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 11 35 pw_axpy >>>>>>>>>> start Hostme >>>>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 11 35 pw_axpy >>>>>>>>>> 0.002 Hostme >>>>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 11 6 >>>>>>>>>> pw_poisson_solve sta >>>>>>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 12 6 >>>>>>>>>> pw_poisson_rebuild >>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 12 6 >>>>>>>>>> pw_poisson_rebuild >>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 12 142 >>>>>>>>>> pw_pool_create_pw >>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 13 81 >>>>>>>>>> pw_create_c1d >>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 13 81 >>>>>>>>>> pw_create_c1d >>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 12 142 >>>>>>>>>> pw_pool_create_pw 
>>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 12 62 pw_copy >>>>>>>>>> start Hos >>>>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 12 62 pw_copy >>>>>>>>>> 0.003 Hos >>>>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 12 6 >>>>>>>>>> pw_multiply_with >>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 12 6 >>>>>>>>>> pw_multiply_with >>>>>>>>>> 0.002 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 12 63 pw_copy >>>>>>>>>> start Hos >>>>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 12 63 pw_copy >>>>>>>>>> 0.003 Hos >>>>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 12 6 >>>>>>>>>> pw_integral_ab st >>>>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 12 6 >>>>>>>>>> pw_integral_ab 0. >>>>>>>>>> 005 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 12 7 >>>>>>>>>> pw_poisson_set st >>>>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 13 143 >>>>>>>>>> pw_pool_create_pw >>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 14 82 >>>>>>>>>> pw_create_c1d >>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 14 82 >>>>>>>>>> pw_create_c1d >>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 13 143 >>>>>>>>>> pw_pool_create_pw >>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 13 64 >>>>>>>>>> pw_copy start >>>>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 13 64 >>>>>>>>>> pw_copy 0.003 >>>>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 13 16 >>>>>>>>>> pw_derive star >>>>>>>>>> t Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 13 16 >>>>>>>>>> pw_derive 0.00 >>>>>>>>>> 6 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 13 144 >>>>>>>>>> pw_pool_create_pw >>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 14 83 >>>>>>>>>> pw_create_c1d >>>>>>>>>> start Hostmem: 380 MB 
GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 14 83 >>>>>>>>>> pw_create_c1d >>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 13 144 >>>>>>>>>> pw_pool_create_pw >>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 13 65 >>>>>>>>>> pw_copy start >>>>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001<< 13 65 >>>>>>>>>> pw_copy 0.004 >>>>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> 000000:000001>> 13 17 >>>>>>>>>> pw_derive star >>>>>>>>>> t Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>> ``` >>>>>>>>>> >>>>>>>>>> for psmp the last 100 lines is: >>>>>>>>>> >>>>>>>>>> ``` >>>>>>>>>> 000000:000002<< 9 7 >>>>>>>>>> evaluate_core_matrix_traces >>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 9 7 rebuild_ks_matrix >>>>>>>>>> start Ho >>>>>>>>>> >>>>>>>>>> stmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 10 7 >>>>>>>>>> qs_ks_build_kohn_sham_matrix >>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 11 164 >>>>>>>>>> pw_pool_create_pw st >>>>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 12 93 >>>>>>>>>> pw_create_c1d sta >>>>>>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 12 93 >>>>>>>>>> pw_create_c1d 0.0 >>>>>>>>>> 00 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 11 164 >>>>>>>>>> pw_pool_create_pw 0. >>>>>>>>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 11 165 >>>>>>>>>> pw_pool_create_pw st >>>>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 12 94 >>>>>>>>>> pw_create_c1d sta >>>>>>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 12 94 >>>>>>>>>> pw_create_c1d 0.0 >>>>>>>>>> 00 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 11 165 >>>>>>>>>> pw_pool_create_pw 0. 
>>>>>>>>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 11 73 pw_copy >>>>>>>>>> start Hostme >>>>>>>>>> >>>>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 11 73 pw_copy >>>>>>>>>> 0.001 Hostme >>>>>>>>>> >>>>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 11 41 pw_axpy >>>>>>>>>> start Hostme >>>>>>>>>> >>>>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 11 41 pw_axpy >>>>>>>>>> 0.001 Hostme >>>>>>>>>> >>>>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 11 52 mp_sum_d >>>>>>>>>> start Hostm >>>>>>>>>> >>>>>>>>>> em: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 11 52 mp_sum_d >>>>>>>>>> 0.000 Hostm >>>>>>>>>> >>>>>>>>>> em: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 11 7 >>>>>>>>>> pw_poisson_solve sta >>>>>>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 12 7 >>>>>>>>>> pw_poisson_rebuild >>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 12 7 >>>>>>>>>> pw_poisson_rebuild >>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 12 166 >>>>>>>>>> pw_pool_create_pw >>>>>>>>>> >>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 13 95 >>>>>>>>>> pw_create_c1d >>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 13 95 >>>>>>>>>> pw_create_c1d >>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 12 166 >>>>>>>>>> pw_pool_create_pw >>>>>>>>>> >>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 12 74 pw_copy >>>>>>>>>> start Hos >>>>>>>>>> >>>>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 12 74 pw_copy >>>>>>>>>> 0.001 Hos >>>>>>>>>> >>>>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 12 7 >>>>>>>>>> pw_multiply_with >>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 12 7 >>>>>>>>>> pw_multiply_with >>>>>>>>>> 0.001 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 12 75 pw_copy >>>>>>>>>> start Hos >>>>>>>>>> >>>>>>>>>> tmem: 
693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 12 75 pw_copy >>>>>>>>>> 0.001 Hos >>>>>>>>>> >>>>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 12 7 >>>>>>>>>> pw_integral_ab st >>>>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 13 53 >>>>>>>>>> mp_sum_d start >>>>>>>>>> >>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 13 53 >>>>>>>>>> mp_sum_d 0.000 >>>>>>>>>> >>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 12 7 >>>>>>>>>> pw_integral_ab 0. >>>>>>>>>> 003 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 12 8 >>>>>>>>>> pw_poisson_set st >>>>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 13 167 >>>>>>>>>> pw_pool_create_pw >>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 14 96 >>>>>>>>>> pw_create_c1d >>>>>>>>>> >>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 14 96 >>>>>>>>>> pw_create_c1d >>>>>>>>>> >>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 13 167 >>>>>>>>>> pw_pool_create_pw >>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 13 76 >>>>>>>>>> pw_copy start >>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 13 76 >>>>>>>>>> pw_copy 0.001 >>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 13 19 >>>>>>>>>> pw_derive star >>>>>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 13 19 >>>>>>>>>> pw_derive 0.00 >>>>>>>>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 13 168 >>>>>>>>>> pw_pool_create_pw >>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 14 97 >>>>>>>>>> pw_create_c1d >>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 14 97 >>>>>>>>>> pw_create_c1d >>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 13 168 >>>>>>>>>> pw_pool_create_pw >>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 13 77 >>>>>>>>>> pw_copy start 
>>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002<< 13 77 >>>>>>>>>> pw_copy 0.001 >>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> 000000:000002>> 13 20 >>>>>>>>>> pw_derive star >>>>>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>> ``` >>>>>>>>>> >>>>>>>>>> Thanks >>>>>>>>>> Bartosz >>>>>>>>>> >>>>>>>>>> poniedzia?ek, 21 pa?dziernika 2024 o 08:58:34 UTC+2 Frederick >>>>>>>>>> Stein napisa?(a): >>>>>>>>>> >>>>>>>>>>> Dear Bartosz, >>>>>>>>>>> I have no idea about the issue with LibXSMM. >>>>>>>>>>> Regarding the trace, I do not know either as there is not much >>>>>>>>>>> that could break in pw_derive (it just performs multiplications) and the >>>>>>>>>>> sequence of operations is to unspecific. It may be that the code actually >>>>>>>>>>> breaks somewhere else. Can you do the same with the ssmp and post the last >>>>>>>>>>> 100 lines? This way, we remove the asynchronicity issues for backtraces >>>>>>>>>>> with the psmp version. >>>>>>>>>>> Best, >>>>>>>>>>> Frederick >>>>>>>>>>> >>>>>>>>>>> bartosz mazur schrieb am Sonntag, 20. 
Oktober 2024 um 16:47:15 >>>>>>>>>>> UTC+2: >>>>>>>>>>> >>>>>>>>>>>> The error is: >>>>>>>>>>>> >>>>>>>>>>>> ``` >>>>>>>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>>>>>>>>> CLX/DP TRY JIT STA COL >>>>>>>>>>>> 0..13 2 2 0 0 >>>>>>>>>>>> 14..23 0 0 0 0 >>>>>>>>>>>> >>>>>>>>>>>> 24..64 0 0 0 0 >>>>>>>>>>>> Registry and code: 13 MB + 16 KB (gemm=2) >>>>>>>>>>>> Command (PID=2607388): >>>>>>>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>>>>>>>>>>> H2O-9.inp -o H2O-9.out >>>>>>>>>>>> Uptime: 5.288243 s >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> =================================================================================== >>>>>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>>>>>>> = RANK 0 PID 2607388 RUNNING AT r21c01b10 >>>>>>>>>>>> >>>>>>>>>>>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>>>>>>>>>>> >>>>>>>>>>>> =================================================================================== >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> =================================================================================== >>>>>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>>>>>>> = RANK 1 PID 2607389 RUNNING AT r21c01b10 >>>>>>>>>>>> = KILLED BY SIGNAL: 9 (Killed) >>>>>>>>>>>> >>>>>>>>>>>> =================================================================================== >>>>>>>>>>>> ``` >>>>>>>>>>>> >>>>>>>>>>>> and the last 20 lines: >>>>>>>>>>>> >>>>>>>>>>>> ``` >>>>>>>>>>>> 000000:000002<< 13 76 >>>>>>>>>>>> pw_copy 0.001 >>>>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>>> 000000:000002>> 13 19 >>>>>>>>>>>> pw_derive star >>>>>>>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>>> 000000:000002<< 13 19 >>>>>>>>>>>> pw_derive 0.00 >>>>>>>>>>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>>> 000000:000002>> 13 168 >>>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>>> 000000:000002>> 14 97 >>>>>>>>>>>> pw_create_c1d >>>>>>>>>>>> start 
Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>>> 000000:000002<< 14 97 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>>> 000000:000002<< 13 168 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>>> 000000:000002>> 13 77 pw_copy start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>>> 000000:000002<< 13 77 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>>> 000000:000002>> 13 20 pw_derive start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>>> ```
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks!
>>>>>>>>>>>> On Friday, 18 October 2024 at 17:18:39 UTC+2 Frederick Stein wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Please pick one of the failing tests. Then add the TRACE keyword to the &GLOBAL section and run the test manually. This increases the size of the output file dramatically (to some million lines). Can you send me the last ~20 lines of the output?
>>>>>>>>>>>>> On Friday, 18 October 2024 at 17:09:40 UTC+2 bartosz mazur wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> I'm using the do_regtests.py script, not make regtesting, but I assume it makes no difference. As I mentioned in my previous message, with `--ompthreads 1` all tests passed, both for ssmp and psmp. For ssmp with `--ompthreads 2` I observe errors similar to those for psmp with the same setting; I provide an example output as an attachment.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>> Bartosz
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Friday, 18 October 2024 at 16:24:16 UTC+2 Frederick Stein wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Dear Bartosz,
>>>>>>>>>>>>>>> What happens if you set the number of OpenMP threads to 1 (add '--ompthreads 1' to TESTOPTS)? What errors do you observe in the case of the ssmp?
>>>>>>>>>>>>>>> Best,
>>>>>>>>>>>>>>> Frederick
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Friday, 18 October 2024 at 15:37:43 UTC+2 bartosz mazur wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hi Frederick,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> thanks again for the help. I have tested different simulation variants and I know that the problem occurs when using OMP. For MPI calculations without OMP, all tests pass. I have also tested the effect of the `OMP_PROC_BIND` and `OMP_PLACES` parameters and, apart from the effect on simulation time, they have no significant effect on the presence of errors. Below are the results for ssmp:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time
>>>>>>>>>>>>>>>> spread, threads, 3850, 4144, 4, 290, 186min
>>>>>>>>>>>>>>>> spread, cores, 3831, 4144, 3, 310, 183min
>>>>>>>>>>>>>>>> spread, sockets, 3864, 4144, 3, 277, 104min
>>>>>>>>>>>>>>>> close, threads, 3879, 4144, 3, 262, 171min
>>>>>>>>>>>>>>>> close, cores, 3854, 4144, 0, 290, 168min
>>>>>>>>>>>>>>>> close, sockets, 3865, 4144, 3, 276, 104min
>>>>>>>>>>>>>>>> master, threads, 4121, 4144, 0, 23, 1002min
>>>>>>>>>>>>>>>> master, cores, 4121, 4144, 0, 23, 986min
>>>>>>>>>>>>>>>> master, sockets, 3942, 4144, 3, 199, 219min
>>>>>>>>>>>>>>>> false, threads, 3918, 4144, 0, 226, 178min
>>>>>>>>>>>>>>>> false, cores, 3919, 4144, 3, 222, 176min
>>>>>>>>>>>>>>>> false, sockets, 3856, 4144, 4, 284, 104min
>>>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> and psmp:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, results
>>>>>>>>>>>>>>>> spread, threads, Summary: correct: 4097 / 4227; failed: 130; 495min
>>>>>>>>>>>>>>>> spread, cores, 26 / 362
>>>>>>>>>>>>>>>> spread, cores, 26 / 362
>>>>>>>>>>>>>>>> close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min
>>>>>>>>>>>>>>>> close, cores, 60 / 362
>>>>>>>>>>>>>>>> close, sockets, 13 / 362
>>>>>>>>>>>>>>>> master, threads, 13 / 362
>>>>>>>>>>>>>>>> master, cores, 79 / 362
>>>>>>>>>>>>>>>> master, sockets, Summary: correct: 4153 / 4227; failed: 74; 563min
>>>>>>>>>>>>>>>> false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min
>>>>>>>>>>>>>>>> false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min
>>>>>>>>>>>>>>>> false, sockets, 96 / 362
>>>>>>>>>>>>>>>> not specified, not specified, Summary: correct: 4129 / 4227; failed: 98; 263min
>>>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Any ideas what I could do next to get more information about the source of the problem, or maybe you see a potential solution at this stage? I would appreciate any further help.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Best
>>>>>>>>>>>>>>>> Bartosz
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Friday, 11 October 2024 at 14:30:25 UTC+2 Frederick Stein wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Dear Bartosz,
>>>>>>>>>>>>>>>>> If I am not mistaken, you used 8 OpenMP threads. The tests do not run that efficiently with such a large number of threads; 2 should be sufficient.
>>>>>>>>>>>>>>>>> The test results suggest that most of the functionality may work, but due to a missing backtrace (or similar information) it is hard to tell why the tests fail. You could also try to run some of the single-node tests to assess the stability of CP2K.
>>>>>>>>>>>>>>>>> Best,
>>>>>>>>>>>>>>>>> Frederick
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Friday, 11 October 2024 at 13:48:42 UTC+2 bartosz mazur wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Sorry, forgot attachments.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
-- 
You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/f1ea9911-f9bd-4dea-ab06-55ac0171df9fn%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marci.akira at gmail.com  Sat Oct 26 15:13:25 2024
From: marci.akira at gmail.com (Marcella Iannuzzi)
Date: Sat, 26 Oct 2024 08:13:25 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20820] Re: Gallium + CO2 lack of convergence
In-Reply-To: <996b3dfe-cc2b-4b47-8de1-b3f8921b8a77n@googlegroups.com>
References: <01922cb9-0de8-4887-bacb-da4a2ed78a28n@googlegroups.com>
 <6d229766-9b30-422f-8ce5-154ace7da6f8n@googlegroups.com>
 <1342769e-8153-4387-b5a8-b18c6b084635n@googlegroups.com>
 <30f8cec8-8b86-4dc1-a499-4f2af91e947an@googlegroups.com>
 <335efca4-9014-4d60-8c0e-42d4ab964000n@googlegroups.com>
 <8cb7ec66-c7da-4798-b626-d9f93489be9en@googlegroups.com>
 <996b3dfe-cc2b-4b47-8de1-b3f8921b8a77n@googlegroups.com>
Message-ID: <1f787ddc-2f7a-4294-bd34-92ae21762ae8n@googlegroups.com>

Dear Michela,

You mentioned a liquid, but the xyz you posted shows a sort of lattice forming a cubic cage containing the CO2. For sure the system size is very small, and I doubt it represents a reliable model for liquid Ga with CO2. Ga crystallizes in a base-centred orthorhombic structure; maybe you can start by verifying whether you can reproduce that.

From the output, one observes that the pressure is rather negative (compressive); indeed, the volume shrinks very rapidly and the temperature of the barostat becomes crazy. To generate a reasonable liquid model, rather long sampling is required, and some structural parameters (e.g., correlation functions) and electronic parameters should be used to verify the reliability of the model.
Once the model of the liquid is OK, one can think of inserting a solute and re-equilibrating.

Regards
Marcella

On Thursday, October 24, 2024 at 3:47:21 PM UTC+2 bnzmi... at gmail.com wrote:

> Hi Marcella and everyone,
>
> I wanted to follow up on your advice. I used an equilibrated structure of 64 Ga atoms, eliminated ~8 central Ga atoms and added the CO2 molecule. The MD steps converged until step #14, then the SCF loop did not converge. I wanted to update anyone who may be looking at this thread for guidance: I will be sizing up my cell dimensions.
>
> Thank you,
>
> Michela
>
> On Thursday, October 3, 2024 at 1:22:48 PM UTC-4 Marcella Iannuzzi wrote:
>
>> Hi
>> If you have a well-equilibrated Ga liquid box (right density and low stress tensor), I wouldn't run GEO_OPT.
>> Obviously introducing CO2 is going to change the conditions, but I would anyway start with an NVT run for a first equilibration and then run NPT to re-equilibrate the volume. I would add the CO2 molecule, remove all the Ga atoms within a certain radius from the center of mass of the molecule, and then run the equilibrations as described above.
>> The only way to lower the concentration is to increase the amount of Ga, i.e., increase the box.
>> Regards
>> Marcella
>>
>> On Thursday, October 3, 2024 at 5:02:46 PM UTC+2 bnzmi... at gmail.com wrote:
>>
>>> Hi Marcella,
>>>
>>> Thank you again!
>>>
>>> 1) Should I run GEO_OPT on the pure Ga liquid, then add CO2?
>>> 2) How should I form a cavity artificially without disrupting the newly equilibrated Ga structure?
>>> 3) I only have one molecule of CO2 in there - how should I go about lowering the concentration? Should I just increase my number of Ga atoms and add 1 CO2 molecule?
>>>
>>> Best,
>>>
>>> Michela
>>> On Thursday, October 3, 2024 at 10:20:25 AM UTC-4 Marcella Iannuzzi wrote:
>>>
>>>> Dear Michela,
>>>>
>>>> The procedure you describe does not sound very appropriate to me.
>>>> You should first obtain a liquid system, without solute.
>>>> I suppose you should check the density and other properties and have a sufficiently large box.
>>>> Then you can create a cavity in the equilibrated liquid and insert the solute, still with the right C-O bond length.
>>>> If the SCF does not converge anymore after a few steps, it is probably because of the coordinates.
>>>> The concentration of CO2 seems rather high.
>>>>
>>>> You can use GAPW. It is more commonly used for all-electron calculations. With PPs, GPW is just as accurate.
>>>>
>>>> Regards
>>>> Marcella
>>>>
>>>> On Thursday, October 3, 2024 at 3:02:10 PM UTC+2 bnzmi... at gmail.com wrote:
>>>>
>>>>> Hi Marcella,
>>>>>
>>>>> thank you for your kind response and your time.
>>>>>
>>>>> 1) I just switched to double-zeta quality a few hours ago, but my MD just crashed because, weirdly, it converged for the first few SCF loops but then stopped converging (I attached the output file here to explain).
>>>>>
>>>>> 2) I am using GAPW because I found that the augmented plane wave method worked really well with my liquid Al systems before. That method is also reported in the DFT literature for liquid Ga. Rationalizing it, I think it works because it samples regions of space with different charge densities with more accuracy. Do you think I should consider something else?
>>>>>
>>>>> 3) I used a cell size that represents the density of liquid Ga with the number of atoms I have. I prepared my coordinates with a Python script, then relaxed the geometry in the Avogadro2 software and inserted CO2 such that it was at a distance of min 2.5 A to minimize initial repulsion with Ga atoms. Do you have any suggestions for preparing a structure? I am leaving 1-2 A on all sides from the unit cell boundaries because I have been worried about Ga atoms being too close to neighbors across periodic boundaries.
>>>>>
>>>>> Michela
>>>>> On Thursday, October 3, 2024 at 8:23:26 AM UTC-4 Marcella Iannuzzi wrote:
>>>>>
>>>>>> Dear Michela,
>>>>>>
>>>>>> The basis set you are using is of poor quality.
>>>>>> The coordinates you sent show a rather strange C-O bond length.
>>>>>> The cell is very small, but still there is vacuum space among the replicas in all directions; it is a rather weird choice of coordinates.
>>>>>>
>>>>>> Is there a reason why you are using GAPW?
>>>>>>
>>>>>> Regards
>>>>>> Marcella
>>>>>> On Wednesday, October 2, 2024 at 7:27:59 PM UTC+2 bnzmi... at gmail.com wrote:
>>>>>>
>>>>>>> Good morning dear CP2K community,
>>>>>>>
>>>>>>> how are you? You may know me from previous posts on liquid Al (+CO2) MD troubleshooting. All of your responses have been super helpful so far, and I am coming here again for a different liquid metal.
>>>>>>>
>>>>>>> My simulations with pure liquid Gallium have been less troublesome than all of my liquid Al simulations, but the MD with 1 CO2 molecule won't converge. Can I please get some help troubleshooting?
>>>>>>>
>>>>>>> Thank you,
>>>>>>>
>>>>>>> Michela
>>>>>>>
-- 
You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/1f787ddc-2f7a-4294-bd34-92ae21762ae8n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From oq.adewuyi at gmail.com  Sat Oct 26 18:17:09 2024
From: oq.adewuyi at gmail.com (Quadri Olakunle Adewuyi)
Date: Sat, 26 Oct 2024 13:17:09 -0500
Subject: [CP2K-user] [CP2K:20820] SCCS Convergence issue
Message-ID: 

Dear CP2K Experts,

I run a geometry optimization with SCCS implicit solvent, but SCCS does not converge after 200 iterations.
Kindly help me check if I am missing something. Here are my input and output details below.

&GLOBAL
  PROJECT POMI
  RUN_TYPE GEO_OPT
  PRINT_LEVEL MEDIUM
&END GLOBAL
&FORCE_EVAL
  METHOD Quickstep
  &DFT
    BASIS_SET_FILE_NAME BASIS_MOLOPT
    POTENTIAL_FILE_NAME GTH_POTENTIALS
    ! Charge and Multiplicity
    CHARGE 0
    MULTIPLICITY 1
    &MGRID
      CUTOFF 700
      NGRIDS 4
      REL_CUTOFF 60
    &END MGRID
    &QS
      EPS_DEFAULT 1.0E-10
    &END QS
    &SCF
      SCF_GUESS ATOMIC
      EPS_SCF 1.0E-5
      MAX_SCF 500
      &OT ON
        MINIMIZER DIIS
      &END OT
    &END SCF
    &POISSON
      PERIODIC NONE
      POISSON_SOLVER MT
    &END POISSON
    &XC
      &XC_FUNCTIONAL
        &PBE
          PARAMETRIZATION revPBE
        &END PBE
      &END XC_FUNCTIONAL
      &VDW_POTENTIAL
        POTENTIAL_TYPE PAIR_POTENTIAL
        &PAIR_POTENTIAL
          PARAMETER_FILE_NAME dftd3.dat
          TYPE DFTD3(BJ)
          REFERENCE_FUNCTIONAL PBE
          R_CUTOFF [angstrom] 16
        &END PAIR_POTENTIAL
      &END VDW_POTENTIAL
    &END XC
    &SCCS ON
      DERIVATIVE_METHOD FFT
      DIELECTRIC_CONSTANT 78.36
      EPS_SCCS 1.0E-6
      EPS_SCF 1.0E-5
      MAX_ITER 200
      METHOD ANDREUSSI
      !METHOD FATTEBERT-GYGI
      MIXING 0.2
      &ANDREUSSI
        RHO_MAX 0.001
        RHO_MIN 0.0001
      &END ANDREUSSI
      !&FATTEBERT-GYGI
      !  BETA 1.3
      !  RHO_ZERO 0.0004
      !&END FATTEBERT-GYGI
    &END SCCS
  &END DFT
  &SUBSYS
    &TOPOLOGY
      COORD_FILE_NAME POMl.xyz
      COORD_FILE_FORMAT XYZ
    &END TOPOLOGY
    &CELL
      ! unit cells that are orthorhombic are more efficient with CP2K
      A [angstrom] 36.00000000 0.000000000 0.000000000
      B [angstrom] 0.000000000 36.00000000 0.000000000
      C [angstrom] 0.000000000 0.000000000 36.00000000
      PERIODIC NONE
    &END CELL
    &KIND W
      ELEMENT W
      BASIS_SET DZVP-MOLOPT-SR-GTH
      POTENTIAL GTH-PBE-q14
    &END KIND
    &KIND P
      ELEMENT P
      BASIS_SET DZVP-MOLOPT-SR-GTH
      POTENTIAL GTH-PBE-q5
    &END KIND
    &KIND O
      ELEMENT O
      BASIS_SET DZVP-MOLOPT-SR-GTH
      POTENTIAL GTH-PBE-q6
    &END KIND
    &KIND H
      ELEMENT H
      BASIS_SET DZVP-MOLOPT-SR-GTH
      POTENTIAL GTH-PBE-q1
    &END KIND
  &END SUBSYS
  &PRINT
    &TOTAL_NUMBERS ON
    &END TOTAL_NUMBERS
  &END PRINT
&END FORCE_EVAL
&MOTION
  &GEO_OPT
    TYPE MINIMIZATION
    MAX_ITER 200
    OPTIMIZER LBFGS
    &LBFGS
      MAX_F_PER_ITER 20
    &END LBFGS
  &END GEO_OPT
&END MOTION

Here is the output:

-----------------------------------------------------------------------------
     1 OT CG  0.15E+00  295.6  0.00073221  -3858.9269373219 -3.86E+03

 *** WARNING in qs_sccs.F:594 :: The SCCS iteration cycle did not converge ***
 ***                             in 200 steps                              ***

     2 OT LS  0.60E+00  791.6              -3860.5237639880
     3 OT CG  0.60E+00  347.1  0.00088457  -3858.9883305609 -6.14E-02
     4 OT LS  0.60E-02  330.8              -3858.0304951763
     5 OT CG  0.60E-02  233.0  0.00088692  -3858.9868935523  1.44E-03
     6 OT LS  0.11E-01  250.0              -3858.9914525675
     7 OT CG  0.11E-01  270.8  0.00084581  -3858.9954528356 -8.56E-03
     8 OT LS  0.20E-01  462.9              -3859.0030589695
     9 OT CG  0.20E-01  471.2  0.00079042  -3859.0171304820 -2.17E-02

 *** WARNING in qs_sccs.F:594 :: The SCCS iteration cycle did not converge ***
 ***                             in 200 steps                              ***

    10 OT LS  0.79E-01  756.4              -3859.3170317584

 *** WARNING in qs_sccs.F:594 :: The SCCS iteration cycle did not converge ***
 ***                             in 200 steps                              ***

    11 OT CG  0.79E-01  789.4  351.12465571  -4029.5115025971 -1.70E+02
    12 OT LS  0.40E-01  280.1              -3798.2663029849
    13 OT CG  0.40E-01  213.4  0.01133245  -3806.9423627571  2.23E+02
    14 OT LS  0.16E+00  162.5              -3812.8065913366
    15 OT CG  0.16E+00  210.6  0.00805198  -3826.1089223976 -1.92E+01
    16 OT LS  0.63E+00  189.0              -3838.1840259780
    17 OT CG  0.63E+00  139.3  0.01014323  -3836.3256620356 -1.02E+01
    18 OT LS  0.69E-01  178.3              -3783.3719988474
    19 OT CG  0.69E-01  191.3  0.00797083  -3836.4177607144 -9.21E-02

However, when I changed the EPS settings under the SCCS section as below, it did converge. Still, it did not print the solvation free energy and polarisation energy, which makes the runs with and without solvent give the same energy.

    &SCCS ON
      DERIVATIVE_METHOD FFT
      DIELECTRIC_CONSTANT 78.36
      EPS_SCCS 1.0E-7
      EPS_SCF 1.0E-6

Output:

  Electronic density on regular grids:      -1119.9999756896        0.0000243104
  Core density on regular grids:             1119.9999999412       -0.0000000588
  Total charge density on r-space grids:        0.0000242516
  Total charge density g-space grids:           0.0000242516

  Overlap energy of the core charge distribution:         0.00003421286733
  Self energy of the core charge distribution:        -7742.74303176718968
  Core Hamiltonian energy:                             1994.07265119910176
  Hartree energy:                                      2522.08940846651103
  Exchange-correlation energy:                         -632.17490075971477
  Dispersion energy:                                     -0.55798902100351
  SCCS| Hartree energy of solute and solvent [Hartree]  2522.08940846651103
  SCCS| Hartree energy of the solute only [Hartree]     2522.08940846651103
  SCCS| Polarisation energy [Hartree]                      0.00000000000000
  SCCS|                     [kcal/mol]                                0.000
  SCCS| Cavitation energy [Hartree]                        0.00000000000000
  SCCS|                   [kcal/mol]                                 0.000
  SCCS| Dispersion free energy [Hartree]                   0.00000000000000
  SCCS|                        [kcal/mol]                            0.000
  SCCS| Repulsion free energy [Hartree]                    0.00000000000000
  SCCS|                       [kcal/mol]                             0.000
  SCCS| Solvation free energy [Hartree]                    0.00000000000000
  SCCS|                       [kcal/mol]                             0.000
  Total energy:                                        -3859.31382766942852

I will appreciate your kind suggestions and support. Thank you in advance.

Quadri Adewuyi
PhD student

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/CAMAC%2BZqkBtE-Nh6Q-dhG9oHmkL9JxOaSj%3D1MSxiXVuRPhSO0tg%40mail.gmail.com.
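As background for the RHO_MAX and RHO_MIN keywords in the &ANDREUSSI section of the input above: in the SCCS scheme of Andreussi, Dabo and Marzari, the permittivity switches smoothly from 1 inside the solute (electron density above RHO_MAX) to the bulk dielectric constant in the solvent region (density below RHO_MIN). A minimal Python sketch of that switching function follows; this is my own illustration of the published functional form, not CP2K source code, and the default argument values simply mirror DIELECTRIC_CONSTANT, RHO_MIN and RHO_MAX from the input.

```python
import math

def sccs_epsilon(rho, eps0=78.36, rho_min=1.0e-4, rho_max=1.0e-3):
    """Smooth dielectric switching function of the Andreussi SCCS model.

    Returns 1.0 where the electron density is high (inside the solute,
    rho >= rho_max) and eps0 where it is low (bulk solvent,
    rho <= rho_min), interpolating log-smoothly in between.
    """
    if rho >= rho_max:
        return 1.0
    if rho <= rho_min:
        return eps0
    # Fraction of the transition region covered, on a logarithmic density scale.
    x = 2.0 * math.pi * (math.log(rho_max) - math.log(rho)) \
        / (math.log(rho_max) - math.log(rho_min))
    # (x - sin x) rises smoothly from 0 to 2*pi, so eps goes from 1 to eps0
    # with zero slope at both ends of the transition region.
    t = math.log(eps0) * (x - math.sin(x)) / (2.0 * math.pi)
    return math.exp(t)

print(sccs_epsilon(1.0e-2))  # solute region -> 1.0
print(sccs_epsilon(1.0e-5))  # bulk solvent  -> 78.36
```

The "SCCS iteration cycle did not converge in 200 steps" warnings in the output above mean that the self-consistent polarisation cycle built on this dielectric function hit MAX_ITER before reaching the EPS_SCCS threshold.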
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From captainmozak at gmail.com  Sat Oct 26 19:51:15 2024
From: captainmozak at gmail.com (Muhammad Saleh)
Date: Sun, 27 Oct 2024 04:51:15 +0900
Subject: [CP2K-user] [CP2K:20821] SCCS Convergence issue
In-Reply-To: 
References: 
Message-ID: 

Hi,

Try increasing the maximum number of iterations, say to 800. I know that is quite long, but a first run can take many iterations to reach convergence, especially if your initial coordinates are far from equilibrium. The other option is to start with looser thresholds, for instance 1.0E-2 or 1.0E-3 (you have set EPS_SCF and EPS_SCCS to 1.0E-5 and 1.0E-6). Once it converges, take the last converged coordinates and restart the calculation, this time with tighter thresholds such as 1.0E-6 or 1.0E-7.

Hope it helps

Best
MuS

On Sun, Oct 27, 2024 at 3:17 AM Quadri Olakunle Adewuyi <oq.adewuyi at gmail.com> wrote:

> Dear CP2K Experts,
>
> I run geometry optimization with SCCS implicit solvent, but SCCS does not
> converge after 200 iterations.
> Kindly help me check if I am missing something.
> Here are my input and output details below.
>
> [input file and output snipped; quoted in full above]

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/cp2k/CAPD_w6wwbH2mN%3DQStUg-ADigHhstgGmowO6vz6BYvAG8RvTzHw%40mail.gmail.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From oq.adewuyi at gmail.com  Sat Oct 26 22:05:39 2024
From: oq.adewuyi at gmail.com (Quadri Olakunle Adewuyi)
Date: Sat, 26 Oct 2024 17:05:39 -0500
Subject: [CP2K-user] [CP2K:20822] SCCS Convergence issue
In-Reply-To: 
References: 
Message-ID: 

Dear Muhammad,

Thanks for the suggestion, I appreciate it. I will try increasing the number of iterations and update you.

Thanks so much
Quadri

On Sat, 26 Oct 2024 at 14:34, Muhammad Saleh wrote:

> Hi
>
> Try increasing the maximum number of iterations, say to 800. I know that is
> quite long, but a first run can take many iterations to reach convergence,
> especially if your initial coordinates are far from equilibrium. The other
> option is to start with looser thresholds, for instance 1.0E-2 or 1.0E-3
> (you have set EPS_SCF and EPS_SCCS to 1.0E-5 and 1.0E-6). Once it converges,
> take the last converged coordinates and restart the calculation, this time
> with tighter thresholds such as 1.0E-6 or 1.0E-7.
> Hope it helps
>
> Best
> MuS
>
> On Sun, Oct 27, 2024 at 3:17 AM Quadri Olakunle Adewuyi <oq.adewuyi at gmail.com> wrote:
>
>> Dear CP2K Experts,
>>
>> I run geometry optimization with SCCS implicit solvent, but SCCS does not
>> converge after 200 iterations.
>> [rest of the original message snipped; quoted in full above]

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/cp2k/CAMAC%2BZq8seDfwfU%2B77tuNneo3s5VFNh-_-U78v%3DyTOtrH6doqg%40mail.gmail.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sethinaina336 at gmail.com  Sun Oct 27 17:47:30 2024
From: sethinaina336 at gmail.com (Naina Sethi)
Date: Sun, 27 Oct 2024 10:47:30 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20823] Memory error in CP2K
Message-ID: <69c1e5f4-b464-4750-83aa-c6511e7df686n@googlegroups.com>

Hey CP2K users,

I am facing a memory issue while trying to optimise a molecule using CP2K. With an energy cutoff of 450 Ry the calculation runs smoothly, whereas with 800 Ry it does not even start the first step of the optimization cycle. Can someone please suggest a keyword that would help run the calculation with 800 Ry? I have attached the input for reference. The following error persists:

In file 'ps_wavelet_kernel.F90', around line 1160: Error allocating 1526169600 bytes: Cannot allocate memory
Error termination. Backtrace:
In file 'ps_wavelet_kernel.F90', around line 1160: Error allocating 1526169600 bytes: Cannot allocate memory
Error termination. Backtrace:
--------------------------------------------------------------------------

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/cp2k/69c1e5f4-b464-4750-83aa-c6511e7df686n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Dy.inp
Type: chemical/x-gamess-input
Size: 10119 bytes
Desc: not available
URL: 

From sastargetx at gmail.com  Mon Oct 28 01:53:48 2024
From: sastargetx at gmail.com (Rashid Riboul)
Date: Sun, 27 Oct 2024 18:53:48 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20824] OpenBLAS wget error
Message-ID: 

Hi,

I'm trying to install this new build of CP2K and it errors out whenever it tries to get my architecture info using OpenBLAS:

wget --quiet https://www.cp2k.org/static/downloads/OpenBLAS-0.3.27.tar.gz
usage: sha256sum [-bctwz] [files ...]
OpenBLAS-0.3.27.tar.gz
ERROR: (/Users/caracallynx/MD/cp2k/tools/toolchain/scripts/get_openblas_arch.sh) Checksum of OpenBLAS-0.3.27.tar.gz could not be verified, abort.
ERROR: (./scripts/stage0/setup_buildtools.sh, line 60) Non-zero exit code detected.

All I did was download the file from GitHub and run it as is. I am attempting to install this on my M1 Max MacBook Pro. Thanks in advance.

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/cp2k/f63cafe7-9a29-4ffa-b5e1-38b3b43316d0n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hutter at chem.uzh.ch  Mon Oct 28 08:24:09 2024
From: hutter at chem.uzh.ch (Jürg Hutter)
Date: Mon, 28 Oct 2024 08:24:09 +0000
Subject: [CP2K-user] [CP2K:20825] Memory error in CP2K
In-Reply-To: <69c1e5f4-b464-4750-83aa-c6511e7df686n@googlegroups.com>
References: <69c1e5f4-b464-4750-83aa-c6511e7df686n@googlegroups.com>
Message-ID: 

Hi,

apparently, you don't have enough memory available. You can either use more resources (with more memory), reduce the size of the box (your 40 A are on the large side; probably 30-32 A would do), or go for a lower cutoff.
regards
JH

________________________________________
From: cp2k at googlegroups.com on behalf of Naina Sethi
Sent: Sunday, October 27, 2024 6:47 PM
To: cp2k
Subject: [CP2K:20823] Memory error in CP2K

[original message quoted in full above]

-- 
You received this message because you are subscribed to the Google Groups "cp2k" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/cp2k/ZR0P278MB07590DC6B8416D51770277B29F4A2%40ZR0P278MB0759.CHEP278.PROD.OUTLOOK.COM.
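[The trade-off JH describes can be put into rough numbers with plane-wave scaling: the number of real-space grid points per dimension grows like L*sqrt(E_cut), so one double-precision grid array needs about (L*G_max/pi)^3 * 8 bytes. The sketch below is a back-of-the-envelope estimate only; the actual solver allocates several such arrays, so real usage is a multiple of this, and the function names are illustrative.]

```python
import math

ANGSTROM_TO_BOHR = 1.0 / 0.52917721  # 1 angstrom in bohr

def grid_points(box_angstrom, cutoff_ry):
    """Rough grid points per dimension for a cubic box: Nyquist criterion
    N >= L * G_max / pi, with G_max = sqrt(E_cut) in Rydberg atomic units."""
    box_bohr = box_angstrom * ANGSTROM_TO_BOHR
    g_max = math.sqrt(cutoff_ry)  # bohr^-1
    return math.ceil(box_bohr * g_max / math.pi)

def grid_gib(box_angstrom, cutoff_ry):
    """Memory of ONE double-precision real-space grid, in GiB."""
    n = grid_points(box_angstrom, cutoff_ry)
    return n**3 * 8 / 2**30

# Raising the cutoff from 450 to 800 Ry costs (800/450)**1.5 ~ 2.4x;
# shrinking the box from 40 to 30 A wins that factor back, (40/30)**3 ~ 2.4x.
for box, cutoff in [(40, 450), (40, 800), (30, 800)]:
    print(f"{box} A box, {cutoff} Ry: ~{grid_gib(box, cutoff):.2f} GiB per grid")
```

[This is why either a 30 A box at 800 Ry or a 40 A box at 450 Ry fits where a 40 A box at 800 Ry does not.]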
From bamaz.97 at gmail.com  Mon Oct 28 08:29:16 2024
From: bamaz.97 at gmail.com (bartosz mazur)
Date: Mon, 28 Oct 2024 01:29:16 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20826] Re: compilation problems - LHS and RHS of an assignment statement have incompatible types
In-Reply-To: 
References: <17064317-728e-4164-b086-edd664bd8d28n@googlegroups.com> <21825b1f-9a66-4d63-a223-e958db74d714n@googlegroups.com> <00fddcc1-3a8d-41fa-a5bf-d30c5b637b45n@googlegroups.com> <74d53224-f8e3-4c5f-9a23-90fd2b4e81edn@googlegroups.com> <791c7cac-d72c-4d79-b7ff-c9581366eed0n@googlegroups.com> <888943c2-f7b4-4de1-9ab1-338e4d786528n@googlegroups.com> <2ef8c5e3-56e5-4574-a1f3-95ffe2d6fd2en@googlegroups.com> <463cb4b0-c840-4e7d-9bca-09f007a69925n@googlegroups.com> <73ca2ac6-0c07-403c-b08e-d1d44cfcfdddn@googlegroups.com> <3c0b331a-fe25-4176-8692-ff5d7c466c44n@googlegroups.com> <9c503856-0751-48b3-8bfb-e37cf5cb91a6n@googlegroups.com> <024be246-9696-4577-a330-f5a234dc51edn@googlegroups.com> <9027c53b-4155-418c-9d08-ea77e5ea5bcfn@googlegroups.com> <7042b62f-62de-43ad-ad94-b940977c9e2an@googlegroups.com> <5473442a-c035-4d51-833f-4c340767ee66n@googlegroups.com>
Message-ID: 

Many thanks, Frederick, for your help!

On Friday, 25 October 2024 at 14:27:36 UTC+2, Frederick Stein wrote:

> Regarding the other issues:
> I can confirm them but cannot provide fixes for all of them because they
> probably trigger bugs in ifort. Because ifort is already deprecated, these
> bugs will probably not be fixed. Furthermore, we do not see any issues on
> our Intel CI. I will fix what I can, but some of them will be left, as we
> will focus our efforts on the support of the new ifx compiler.
>
> On Friday, 25 October 2024 at 11:46:00 UTC+2, Frederick Stein wrote:
>
>> Dear Bartosz,
>> I will check the other issues with your regtests.
>> Regarding your latest issue, please provide more information such as an
>> output file or a hint on the context.
If I am supposed to retry the >> calculation on my local machine, I need all additional input files such as >> your plumed file. I can run your input file up to the point that CP2K needs >> plumed. >> Best, >> Frederick >> bartosz mazur schrieb am Freitag, 25. Oktober 2024 um 10:15:19 UTC+2: >> >>> I just got another error with LibXSMM, now in my regular simulation and >>> without using OpenMP. This is the error: >>> >>> ``` >>> [1729843139.920274] [r23c01b04:2913 :0] ib_md.c:295 UCX >>> ERROR ibv_reg_mr(address=0x14f0b46fc080, length=7424, access=0xf) failed: >>> Cannot allocate memory >>> [1729843139.920290] [r23c01b04:2913 :0] ucp_mm.c:70 UCX >>> ERROR failed to register address 0x14f0b46fc080 (host) length 7424 on >>> md[4]=mlx5_0: Input/output error (md supports: host) >>> >>> LIBXSMM_VERSION: develop-1.17-3834 (25693946)[1729843139.932647] >>> [r23c01b04:2945 :0] ib_md.c:295 UCX ERROR >>> ibv_reg_mr(address=0x1491f069e040, length=8128, access=0xf) failed: Cannot >>> allocate memory >>> [1729843139.932660] [r23c01b04:2945 :0] ucp_mm.c:70 UCX >>> ERROR failed to register address 0x1491f069e040 (host) length 8128 on >>> md[4]=mlx5_0: Input/output error (md supports: host) >>> >>> >>> CLX/DP TRY JIT STA COL >>> 0..13 4 4 0 0 >>> 14..23 4 4 0 0 >>> >>> 24..64 0 0 0 0 >>> Registry and code: 13 MB + 80 KB (gemm=8) >>> Command (PID=2913): >>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>> cp2k.inp -o cp2k.out >>> Uptime: 407633.177169 s >>> ``` >>> >>> and this is simulation input I'm using: >>> >>> ``` >>> &GLOBAL >>> PROJECT uam1o_npt_rms >>> RUN_TYPE MD >>> PRINT_LEVEL LOW >>> PREFERRED_DIAG_LIBRARY SCALAPACK >>> &END GLOBAL >>> >>> &FORCE_EVAL >>> METHOD QUICKSTEP >>> STRESS_TENSOR ANALYTICAL >>> &DFT >>> BASIS_SET_FILE_NAME BASIS_MOLOPT_UZH >>> POTENTIAL_FILE_NAME POTENTIAL_UZH >>> &MGRID >>> CUTOFF 500 >>> &END MGRID >>> &XC >>> &XC_FUNCTIONAL PBE >>> &END XC_FUNCTIONAL >>> &VDW_POTENTIAL >>> POTENTIAL_TYPE PAIR_POTENTIAL >>> 
&PAIR_POTENTIAL >>> TYPE DFTD3(BJ) >>> PARAMETER_FILE_NAME dftd3.dat >>> REFERENCE_FUNCTIONAL PBE >>> R_CUTOFF 25.0 >>> &END PAIR_POTENTIAL >>> &END VDW_POTENTIAL >>> &END XC >>> &END DFT >>> >>> &SUBSYS >>> &CELL >>> A 12.2807999 0.0000000 0.0000000 >>> B 7.6258602 9.6257200 0.0000000 >>> C -2.1557724 -1.0420258 18.0042801 >>> &END CELL >>> &COORD >>> Zn 11.37811 4.60286 0.24515 >>> Zn 8.15435 3.05288 8.74518 >>> Zn 6.37590 3.97311 17.74650 >>> Zn 9.59842 5.54014 9.24747 >>> S 11.79344 6.72692 17.10850 >>> S 4.06825 3.00573 9.90358 >>> S 5.95830 1.84422 0.90027 >>> S 13.67407 5.58944 8.10767 >>> O 10.72408 3.58291 1.89315 >>> O 8.51986 4.01962 1.53085 >>> O 6.60135 3.91587 7.68572 >>> O 7.74637 5.79259 8.21600 >>> O 15.32810 8.58246 5.10041 >>> O 9.35608 2.93551 7.09500 >>> O 10.38999 4.93007 7.45977 >>> O 11.66491 6.35111 1.31266 >>> O 9.48582 6.62478 0.77364 >>> O 2.59062 2.40094 3.91496 >>> O 7.03031 4.99173 16.09885 >>> O 9.23544 4.56122 16.46252 >>> O 11.14602 4.67776 10.31440 >>> O 10.00982 2.79915 9.77218 >>> O 2.41388 0.01898 12.91899 >>> O 8.39375 5.66143 10.89628 >>> O 7.36998 3.66087 10.53589 >>> O 6.08863 2.22161 16.68336 >>> O 8.26988 1.95313 17.21650 >>> O 15.16937 6.16381 14.09906 >>> N 13.25907 3.80728 0.04001 >>> N 2.36335 -0.74130 17.33402 >>> N 7.60676 1.08576 8.95623 >>> N 15.77729 5.75974 9.67861 >>> N 4.49430 4.76652 17.95756 >>> N 15.38873 9.31230 0.67467 >>> N 10.14308 7.50848 9.04236 >>> N 1.96529 2.83557 8.33233 >>> C 6.76554 5.18292 7.68414 >>> C 14.28210 4.11624 0.86006 >>> C 9.47998 3.39622 2.09658 >>> C 3.20112 3.42080 0.84626 >>> C 9.91466 1.18589 3.17244 >>> C 9.08210 2.29987 3.02657 >>> C 5.74710 6.04945 7.01821 >>> C 7.83265 2.30920 3.66005 >>> C 3.35793 2.34328 -0.04029 >>> C 4.51663 1.46385 -0.02755 >>> C 16.24194 7.75266 5.73606 >>> C 4.78940 5.52817 6.14198 >>> C 7.40810 1.21174 4.39947 >>> C 16.18016 6.38244 5.49010 >>> C 9.48869 0.06986 3.88005 >>> C 11.27238 1.77457 17.14330 >>> C 5.77166 7.43009 7.27236 >>> C 11.14819 
8.24901 17.58588 >>> C 8.22170 0.08058 4.47135 >>> C 0.15087 1.02286 17.07544 >>> C 17.16180 8.28565 6.64351 >>> C 10.57067 7.01060 1.31282 >>> C 6.72654 0.47459 8.14002 >>> C 10.27972 3.79035 6.89470 >>> C 14.15006 8.72843 8.15880 >>> C 11.73751 2.06868 5.82537 >>> C 11.38838 3.41515 5.96966 >>> C 10.52304 8.34339 1.98566 >>> C 12.16584 4.39562 5.33967 >>> C 14.89762 7.93801 9.04648 >>> C 14.86698 6.48365 9.03575 >>> C 2.67167 1.17044 3.27681 >>> C 11.52468 8.76552 2.86608 >>> C 13.29140 4.04007 4.60622 >>> C 3.78230 0.36534 3.52266 >>> C 12.87823 1.70260 5.12344 >>> C 8.27761 0.34001 9.85941 >>> C 9.42677 9.18364 1.73295 >>> C 3.27553 4.45658 9.42657 >>> C 13.66559 2.69775 4.53650 >>> C 15.77023 8.59069 9.93240 >>> C 1.68356 0.78491 2.36643 >>> C 10.98451 3.41041 10.31327 >>> C 3.46873 4.45681 17.14097 >>> C 8.27403 5.18373 15.89814 >>> C 14.54907 5.15099 17.15930 >>> C 7.83119 7.39584 14.82858 >>> C 8.66916 6.28563 14.97331 >>> C 11.99928 2.54577 10.98702 >>> C 9.92072 6.28547 14.34388 >>> C 16.54982 7.26986 0.04271 >>> C 15.39103 8.14919 0.03189 >>> C 1.50023 0.84646 12.27989 >>> C 12.95126 3.06908 11.86817 >>> C 10.34198 7.38826 13.61070 >>> C 1.55836 2.21699 12.52561 >>> C 8.25354 8.51697 14.12666 >>> C 6.48249 6.79770 0.85630 >>> C 11.97760 1.16465 10.73446 >>> C 6.60385 0.32218 0.42301 >>> C 9.52282 8.51550 13.54043 >>> C 17.60321 7.54791 0.92891 >>> C 0.58530 0.31102 11.36884 >>> C 7.18362 1.56332 16.68291 >>> C 11.01926 8.11905 9.86341 >>> C 7.47582 4.80132 11.10039 >>> C 3.59282 -0.13430 9.84955 >>> C 6.01179 6.51430 12.17471 >>> C 6.36853 5.17005 12.02942 >>> C 7.23131 0.22715 16.01652 >>> C 5.59963 4.18477 12.66234 >>> C 2.84614 0.65728 8.96213 >>> C 2.87561 2.11161 8.97508 >>> C 15.08536 7.39548 14.73440 >>> C 6.23001 -0.19920 15.13769 >>> C 4.47482 4.53325 13.40042 >>> C 13.97400 8.19851 14.48576 >>> C 4.87173 6.87322 12.88120 >>> C 9.47231 8.25578 8.14046 >>> C 8.32790 -0.61137 16.27301 >>> C 14.46698 4.13864 8.58475 >>> C 4.09294 5.87331 13.47165 
>>> C 1.97640 0.00563 8.07267 >>> C 16.07240 7.78504 15.64417 >>> H 14.10215 4.93465 1.55678 >>> H 3.98110 3.68721 1.55899 >>> H 10.89072 1.19647 2.69205 >>> H 7.19958 3.19021 3.56839 >>> H 4.75923 4.45384 5.96230 >>> H 6.45299 1.21835 4.92062 >>> H 15.44211 6.00062 4.78824 >>> H 17.75043 8.81610 3.97156 >>> H 10.41563 1.57993 16.49923 >>> H 6.49332 7.81303 7.99143 >>> H 0.24800 0.19739 16.37425 >>> H 9.53586 -0.26872 6.84508 >>> H 6.19685 1.12218 7.44173 >>> H 13.45550 8.28133 7.44815 >>> H 11.11633 1.31384 6.30260 >>> H 11.87413 5.44074 5.42962 >>> H 12.38442 8.12016 3.04474 >>> H 13.88694 4.78876 4.08791 >>> H 4.53915 0.70283 4.22717 >>> H 0.88557 0.65625 5.03328 >>> H 8.96418 0.89159 10.50060 >>> H 8.67994 8.85961 1.01083 >>> H 16.35704 8.00331 10.63471 >>> H 13.12606 1.45212 2.16563 >>> H 3.64702 3.63930 16.44281 >>> H 13.76743 4.88477 16.44833 >>> H 6.85355 7.37827 15.30535 >>> H 10.55820 5.40745 14.43410 >>> H 12.97886 4.14375 12.04672 >>> H 11.29905 7.38966 13.09313 >>> H 2.29216 2.60091 13.23073 >>> H -0.01303 -0.23279 14.03603 >>> H 7.34113 6.99275 1.49776 >>> H 11.26049 0.78023 10.01184 >>> H 17.50743 8.37258 1.63130 >>> H 8.21398 8.86531 11.16822 >>> H 11.54834 7.47018 10.56097 >>> H 4.28503 0.31205 10.56295 >>> H 6.62643 7.27289 11.69479 >>> H 5.89748 3.14154 12.57118 >>> H 5.36986 0.44461 14.95599 >>> H 3.88656 3.78035 13.92095 >>> H 13.21826 7.85764 13.78163 >>> H 16.85773 7.91771 12.97237 >>> H 8.78884 7.70469 7.49554 >>> H 9.07452 -0.28399 16.99402 >>> H 1.39009 0.59398 7.37083 >>> H 4.63062 7.11938 15.84758 >>> &END COORD >>> &KIND Zn >>> BASIS_SET TZVP-MOLOPT-PBE-GTH-q12 >>> POTENTIAL GTH-PBE-q12 >>> &END KIND >>> &KIND S >>> BASIS_SET TZVP-MOLOPT-PBE-GTH-q6 >>> POTENTIAL GTH-PBE-q6 >>> &END KIND >>> &KIND O >>> BASIS_SET TZVP-MOLOPT-PBE-GTH-q6 >>> POTENTIAL GTH-PBE-q6 >>> &END KIND >>> &KIND N >>> BASIS_SET TZVP-MOLOPT-PBE-GTH-q5 >>> POTENTIAL GTH-PBE-q5 >>> &END KIND >>> &KIND C >>> BASIS_SET TZVP-MOLOPT-PBE-GTH-q4 >>> POTENTIAL GTH-PBE-q4 >>> 
&END KIND >>> &KIND H >>> BASIS_SET TZVP-MOLOPT-PBE-GTH-q1 >>> POTENTIAL GTH-PBE-q1 >>> &END KIND >>> &END SUBSYS >>> &END FORCE_EVAL >>> >>> &MOTION >>> &MD >>> ENSEMBLE NPT_I >>> TEMPERATURE 298 >>> TIMESTEP 1.0 >>> STEPS 50000 >>> &THERMOSTAT >>> TYPE NOSE >>> &NOSE >>> LENGTH 3 >>> YOSHIDA 3 >>> TIMECON 1000 >>> &END NOSE >>> &END THERMOSTAT >>> &BAROSTAT >>> PRESSURE 1.0 >>> TIMECON 4000 >>> &END BAROSTAT >>> &END MD >>> &FREE_ENERGY >>> METHOD METADYN >>> &METADYN >>> USE_PLUMED .TRUE. >>> PLUMED_INPUT_FILE plumed.dat >>> &END METADYN >>> &END FREE_ENERGY >>> &PRINT >>> &TRAJECTORY >>> &EACH >>> MD 5 >>> &END EACH >>> &END TRAJECTORY >>> &FORCES >>> UNIT eV*angstrom^-1 >>> &EACH >>> MD 5 >>> &END EACH >>> &END FORCES >>> &CELL >>> &EACH >>> MD 5 >>> &END EACH >>> &END CELL >>> &END PRINT >>> &END MOTION >>> ``` >>> >>> This simulation was performed with a previous version of cp2k (so without >>> your fix). >>> On Friday, 25 October 2024 at 09:50:47 UTC+2, bartosz mazur wrote: >>> >>>> Hi Frederick, >>>> >>>> it helped with most of the tests! Now only 13 have failed.
In the attachments you will find the full output from the regtests, and here is
>>>> the output from a single job with TRACE enabled:
>>>>
>>>> ```
>>>> Loading intel/2024a
>>>> Loading requirement: GCCcore/13.3.0 zlib/1.3.1-GCCcore-13.3.0
>>>> binutils/2.42-GCCcore-13.3.0 intel-compilers/2024.2.0
>>>> numactl/2.0.18-GCCcore-13.3.0 UCX/1.16.0-GCCcore-13.3.0
>>>> impi/2021.13.0-intel-compilers-2024.2.0 imkl/2024.2.0 iimpi/2024a
>>>> imkl-FFTW/2024.2.0-iimpi-2024a
>>>>
>>>> Currently Loaded Modulefiles:
>>>> 1) GCCcore/13.3.0                 7) impi/2021.13.0-intel-compilers-2024.2.0
>>>> 2) zlib/1.3.1-GCCcore-13.3.0      8) imkl/2024.2.0
>>>> 3) binutils/2.42-GCCcore-13.3.0   9) iimpi/2024a
>>>> 4) intel-compilers/2024.2.0      10) imkl-FFTW/2024.2.0-iimpi-2024a
>>>> 5) numactl/2.0.18-GCCcore-13.3.0 11) intel/2024a
>>>> 6) UCX/1.16.0-GCCcore-13.3.0
>>>> 2 MPI processes with 2 OpenMP threads each
>>>> started at Fri Oct 25 09:34:34 CEST 2024 in /lustre/tmp/slurm/3127182
>>>> SIRIUS 7.6.1, git hash:
>>>> https://api.github.com/repos/electronic-structure/SIRIUS/git/ref/tags/v7.6.1
>>>> Warning! Compiled in 'debug' mode with assert statements enabled!
>>>> >>>> >>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>> CLX/DP TRY JIT STA COL >>>> 0..13 8 8 0 0 >>>> 14..23 0 0 0 0 >>>> 24..64 0 0 0 0 >>>> Registry and code: 13 MB + 64 KB (gemm=8) >>>> Command (PID=423503): >>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>>> dftd3src1.inp -o dftd3src1.out >>>> Uptime: 2.752513 s >>>> >>>> >>>> >>>> =================================================================================== >>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>> = RANK 0 PID 423503 RUNNING AT r21c01b03 >>>> >>>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>>> >>>> =================================================================================== >>>> >>>> >>>> =================================================================================== >>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>> = RANK 1 PID 423504 RUNNING AT r21c01b03 >>>> >>>> = KILLED BY SIGNAL: 9 (Killed) >>>> >>>> =================================================================================== >>>> finished at Fri Oct 25 09:34:39 CEST 2024 >>>> ``` >>>> >>>> and the last lines: >>>> >>>> ``` >>>> 000000:000002<< 13 3 >>>> mp_sendrecv_dm2 >>>> 0.000 Hostmem: 955 MB GPUmem: 0 MB >>>> 000000:000002>> 13 4 >>>> mp_sendrecv_dm2 >>>> start Hostmem: 955 MB GPUmem: 0 MB >>>> 000000:000002<< 13 4 >>>> mp_sendrecv_dm2 >>>> 0.000 Hostmem: 955 MB GPUmem: 0 MB >>>> 000000:000002<< 12 2 >>>> pw_nn_compose_r 0 >>>> .003 Hostmem: 955 MB GPUmem: 0 MB >>>> 000000:000002<< 11 1 xc_pw_derive >>>> 0.003 H >>>> ostmem: 955 MB GPUmem: 0 MB >>>> 000000:000002>> 11 5 pw_zero >>>> start Hostme >>>> m: 955 MB GPUmem: 0 MB >>>> 000000:000002<< 11 5 pw_zero >>>> 0.000 Hostme >>>> m: 955 MB GPUmem: 0 MB >>>> 000000:000002>> 11 2 xc_pw_derive >>>> start H >>>> ostmem: 955 MB GPUmem: 0 MB >>>> 000000:000002>> 12 3 >>>> pw_nn_compose_r s >>>> tart Hostmem: 955 MB GPUmem: 0 MB >>>> 000000:000002>> 13 5 >>>> mp_sendrecv_dm2 >>>> start Hostmem: 955 MB 
GPUmem: 0 MB >>>> 000000:000002<< 13 5 >>>> mp_sendrecv_dm2 >>>> 0.000 Hostmem: 955 MB GPUmem: 0 MB >>>> 000000:000002>> 13 6 >>>> mp_sendrecv_dm2 >>>> start Hostmem: 955 MB GPUmem: 0 MB >>>> 000000:000002<< 13 6 >>>> mp_sendrecv_dm2 >>>> 0.000 Hostmem: 955 MB GPUmem: 0 MB >>>> 000000:000002<< 12 3 >>>> pw_nn_compose_r 0 >>>> .002 Hostmem: 955 MB GPUmem: 0 MB >>>> 000000:000002<< 11 2 xc_pw_derive >>>> 0.002 H >>>> ostmem: 955 MB GPUmem: 0 MB >>>> 000000:000002>> 11 6 pw_zero >>>> start Hostme >>>> m: 955 MB GPUmem: 0 MB >>>> 000000:000002<< 11 6 pw_zero >>>> 0.001 Hostme >>>> m: 960 MB GPUmem: 0 MB >>>> 000000:000002>> 11 3 xc_pw_derive >>>> start H >>>> ostmem: 960 MB GPUmem: 0 MB >>>> 000000:000002>> 12 4 >>>> pw_nn_compose_r s >>>> tart Hostmem: 960 MB GPUmem: 0 MB >>>> 000000:000002>> 13 7 >>>> mp_sendrecv_dm2 >>>> start Hostmem: 960 MB GPUmem: 0 MB >>>> 000000:000002<< 13 7 >>>> mp_sendrecv_dm2 >>>> 0.000 Hostmem: 960 MB GPUmem: 0 MB >>>> 000000:000002>> 13 8 >>>> mp_sendrecv_dm2 >>>> start Hostmem: 960 MB GPUmem: 0 MB >>>> 000000:000002<< 13 8 >>>> mp_sendrecv_dm2 >>>> 0.000 Hostmem: 960 MB GPUmem: 0 MB >>>> 000000:000002<< 12 4 >>>> pw_nn_compose_r 0 >>>> .002 Hostmem: 960 MB GPUmem: 0 MB >>>> 000000:000002<< 11 3 xc_pw_derive >>>> 0.002 H >>>> ostmem: 960 MB GPUmem: 0 MB >>>> 000000:000002>> 11 1 >>>> pw_spline_scale_deriv >>>> start Hostmem: 960 MB GPUmem: 0 MB >>>> 000000:000002<< 11 1 >>>> pw_spline_scale_deriv >>>> 0.001 Hostmem: 960 MB GPUmem: 0 MB >>>> 000000:000002>> 11 20 >>>> pw_pool_give_back_pw >>>> start Hostmem: 965 MB GPUmem: 0 MB >>>> 000000:000002<< 11 20 >>>> pw_pool_give_back_pw >>>> 0.000 Hostmem: 965 MB GPUmem: 0 MB >>>> 000000:000002>> 11 21 >>>> pw_pool_give_back_pw >>>> start Hostmem: 965 MB GPUmem: 0 MB >>>> 000000:000002<< 11 21 >>>> pw_pool_give_back_pw >>>> 0.000 Hostmem: 965 MB GPUmem: 0 MB >>>> 000000:000002>> 11 22 >>>> pw_pool_give_back_pw >>>> start Hostmem: 965 MB GPUmem: 0 MB >>>> 000000:000002<< 11 22 >>>> 
pw_pool_give_back_pw 0.000 Hostmem: 965 MB GPUmem: 0 MB
>>>> 000000:000002>> 11 23 pw_pool_give_back_pw start Hostmem: 965 MB GPUmem: 0 MB
>>>> 000000:000002<< 11 23 pw_pool_give_back_pw 0.000 Hostmem: 965 MB GPUmem: 0 MB
>>>> 000000:000002>> 11 1 xc_functional_eval start Hostmem: 965 MB GPUmem: 0 MB
>>>> 000000:000002>> 12 1 b97_lda_eval start Hostmem: 965 MB GPUmem: 0 MB
>>>> 000000:000002<< 12 1 b97_lda_eval 0.103 Hostmem: 979 MB GPUmem: 0 MB
>>>> 000000:000002<< 11 1 xc_functional_eval 0.103 Hostmem: 979 MB GPUmem: 0 MB
>>>> 000000:000002<< 10 1 xc_rho_set_and_dset_create 0.120 Hostmem: 979 MB GPUmem: 0 MB
>>>> 000000:000002>> 10 1 check_for_derivatives start Hostmem: 979 MB GPUmem: 0 MB
>>>> 000000:000002<< 10 1 check_for_derivatives 0.000 Hostmem: 979 MB GPUmem: 0 MB
>>>> 000000:000002>> 10 14 pw_create_r3d start Hostmem: 979 MB GPUmem: 0 MB
>>>> 000000:000002<< 10 14 pw_create_r3d 0.000 Hostmem: 979 MB GPUmem: 0 MB
>>>> 000000:000002>> 10 15 pw_create_r3d start Hostmem: 979 MB GPUmem: 0 MB
>>>> 000000:000002<< 10 15 pw_create_r3d 0.000 Hostmem: 979 MB GPUmem: 0 MB
>>>> 000000:000002>> 10 16 pw_create_r3d start Hostmem: 979 MB GPUmem: 0 MB
>>>> 000000:000002<< 10 16 pw_create_r3d 0.000 Hostmem: 979 MB GPUmem: 0 MB
>>>> 000000:000002>> 10 17 pw_create_r3d start Hostmem: 979 MB GPUmem: 0 MB
>>>> 000000:000002<< 10 17 pw_create_r3d 0.000 Hostmem: 979 MB GPUmem: 0 MB
>>>> ```
>>>>
>>>> Best
>>>> Bartosz
>>>>
>>>> On Wednesday, 23 October 2024 at 09:15:33 UTC+2, Frederick Stein wrote:
>>>>
>>>>> Dear Bartosz,
>>>>> My fix is merged. Can you switch to the CP2K master and try it again?
>>>>> We are still working on a few issues with the Intel compilers, such that we
>>>>> may eventually migrate from ifort to ifx.
>>>>> Best,
>>>>> Frederick
>>>>>
>>>>> bartosz mazur wrote on Tuesday, 22 October 2024 at 17:45:21 UTC+2:
>>>>>
>>>>>> Great! Thank you for your help.
>>>>>>
>>>>>> Best
>>>>>> Bartosz
>>>>>>
>>>>>> On Tuesday, 22 October 2024 at 15:24:04 UTC+2, Frederick Stein wrote:
>>>>>>
>>>>>>> I have a fix for it. In contrast to my first thought, it is a case
>>>>>>> of invalid type conversion from real to complex numbers (yes, Fortran is
>>>>>>> rather strict about it) in pw_derive. This may also be present in a few
>>>>>>> other spots. I am currently running more tests and I will open a pull
>>>>>>> request within the next few days.
>>>>>>> Best,
>>>>>>> Frederick
>>>>>>>
>>>>>>> Frederick Stein wrote on Tuesday, 22 October 2024 at 13:12:49 UTC+2:
>>>>>>>
>>>>>>>> I can reproduce the error locally. I am investigating it now.
>>>>>>>>
>>>>>>>> bartosz mazur wrote on Tuesday, 22 October 2024 at 11:58:57 UTC+2:
>>>>>>>>
>>>>>>>>> I was loading it as it was needed for compilation.
I have unloaded >>>>>>>>> the module, but the error still occurs: >>>>>>>>> >>>>>>>>> ``` >>>>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>>>>>> CLX/DP TRY JIT STA COL >>>>>>>>> 0..13 2 2 0 0 >>>>>>>>> 14..23 0 0 0 0 >>>>>>>>> 24..64 0 0 0 0 >>>>>>>>> Registry and code: 13 MB + 16 KB (gemm=2) >>>>>>>>> Command (PID=15485): >>>>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>>>>>>>> H2O-9.inp -o H2O-9.out >>>>>>>>> Uptime: 1.757102 s >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> =================================================================================== >>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>>>> = RANK 0 PID 15485 RUNNING AT r30c01b01 >>>>>>>>> >>>>>>>>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>>>>>>>> >>>>>>>>> =================================================================================== >>>>>>>>> >>>>>>>>> >>>>>>>>> =================================================================================== >>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>>>> = RANK 1 PID 15486 RUNNING AT r30c01b01 >>>>>>>>> >>>>>>>>> = KILLED BY SIGNAL: 9 (Killed) >>>>>>>>> >>>>>>>>> =================================================================================== >>>>>>>>> ``` >>>>>>>>> >>>>>>>>> >>>>>>>>> and the last 100 lines: >>>>>>>>> >>>>>>>>> ``` >>>>>>>>> 000000:000002>> 11 37 >>>>>>>>> pw_create_c1d start >>>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 11 37 >>>>>>>>> pw_create_c1d 0.000 >>>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 10 64 >>>>>>>>> pw_pool_create_pw 0.000 >>>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 10 25 pw_copy >>>>>>>>> start Hostmem: >>>>>>>>> 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 10 25 pw_copy >>>>>>>>> 0.001 Hostmem: >>>>>>>>> 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 10 17 pw_axpy >>>>>>>>> start Hostmem: >>>>>>>>> 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 10 17 pw_axpy 
>>>>>>>>> 0.001 Hostmem: >>>>>>>>> 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 10 19 mp_sum_d >>>>>>>>> start Hostmem: >>>>>>>>> 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 10 19 mp_sum_d >>>>>>>>> 0.000 Hostmem: >>>>>>>>> 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 10 3 >>>>>>>>> pw_poisson_solve start >>>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 11 3 >>>>>>>>> pw_poisson_rebuild s >>>>>>>>> tart Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 11 3 >>>>>>>>> pw_poisson_rebuild 0 >>>>>>>>> .000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 11 65 >>>>>>>>> pw_pool_create_pw st >>>>>>>>> art Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 38 >>>>>>>>> pw_create_c1d sta >>>>>>>>> rt Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 38 >>>>>>>>> pw_create_c1d 0.0 >>>>>>>>> 00 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 11 65 >>>>>>>>> pw_pool_create_pw 0. >>>>>>>>> 000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 11 26 pw_copy >>>>>>>>> start Hostme >>>>>>>>> m: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 11 26 pw_copy >>>>>>>>> 0.001 Hostme >>>>>>>>> m: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 11 3 >>>>>>>>> pw_multiply_with sta >>>>>>>>> rt Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 11 3 >>>>>>>>> pw_multiply_with 0.0 >>>>>>>>> 01 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 11 27 pw_copy >>>>>>>>> start Hostme >>>>>>>>> m: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 11 27 pw_copy >>>>>>>>> 0.001 Hostme >>>>>>>>> m: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 11 3 >>>>>>>>> pw_integral_ab start >>>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 20 mp_sum_d >>>>>>>>> start Ho >>>>>>>>> stmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 20 mp_sum_d >>>>>>>>> 0.001 Ho >>>>>>>>> stmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 11 3 >>>>>>>>> pw_integral_ab 0.004 >>>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 11 4 
>>>>>>>>> pw_poisson_set start >>>>>>>>> Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 66 >>>>>>>>> pw_pool_create_pw >>>>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 13 39 >>>>>>>>> pw_create_c1d >>>>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 13 39 >>>>>>>>> pw_create_c1d >>>>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 66 >>>>>>>>> pw_pool_create_pw >>>>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 28 pw_copy >>>>>>>>> start Hos >>>>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 28 pw_copy >>>>>>>>> 0.001 Hos >>>>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 7 pw_derive >>>>>>>>> start H >>>>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 7 pw_derive >>>>>>>>> 0.002 H >>>>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 67 >>>>>>>>> pw_pool_create_pw >>>>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 13 40 >>>>>>>>> pw_create_c1d >>>>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 13 40 >>>>>>>>> pw_create_c1d >>>>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 67 >>>>>>>>> pw_pool_create_pw >>>>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 29 pw_copy >>>>>>>>> start Hos >>>>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 29 pw_copy >>>>>>>>> 0.001 Hos >>>>>>>>> tmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 8 pw_derive >>>>>>>>> start H >>>>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 12 8 pw_derive >>>>>>>>> 0.002 H >>>>>>>>> ostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 12 68 >>>>>>>>> pw_pool_create_pw >>>>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002>> 13 41 >>>>>>>>> pw_create_c1d >>>>>>>>> start Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 13 41 >>>>>>>>> pw_create_c1d >>>>>>>>> 0.000 Hostmem: 697 MB GPUmem: 0 MB >>>>>>>>> 000000:000002<< 
12 68 pw_pool_create_pw 0.000 Hostmem: 697 MB GPUmem: 0 MB
>>>>>>>>> 000000:000002>> 12 30 pw_copy start Hostmem: 697 MB GPUmem: 0 MB
>>>>>>>>> 000000:000002<< 12 30 pw_copy 0.001 Hostmem: 697 MB GPUmem: 0 MB
>>>>>>>>> 000000:000002>> 12 9 pw_derive start Hostmem: 697 MB GPUmem: 0 MB
>>>>>>>>> ```
>>>>>>>>>
>>>>>>>>> This is the list of currently loaded modules (all come with intel):
>>>>>>>>>
>>>>>>>>> ```
>>>>>>>>> Currently Loaded Modulefiles:
>>>>>>>>> 1) GCCcore/13.3.0                 7) impi/2021.13.0-intel-compilers-2024.2.0
>>>>>>>>> 2) zlib/1.3.1-GCCcore-13.3.0      8) imkl/2024.2.0
>>>>>>>>> 3) binutils/2.42-GCCcore-13.3.0   9) iimpi/2024a
>>>>>>>>> 4) intel-compilers/2024.2.0      10) imkl-FFTW/2024.2.0-iimpi-2024a
>>>>>>>>> 5) numactl/2.0.18-GCCcore-13.3.0 11) intel/2024a
>>>>>>>>> 6) UCX/1.16.0-GCCcore-13.3.0
>>>>>>>>> ```
>>>>>>>>> On Tuesday, 22 October 2024 at 11:12:57 UTC+2, Frederick Stein wrote:
>>>>>>>>>
>>>>>>>>>> Dear Bartosz,
>>>>>>>>>> I am currently running some tests with the latest Intel compiler
>>>>>>>>>> myself. What bothers me about your setup is the module GCC13/13.3.0. Why
>>>>>>>>>> is it loaded? Can you unload it? This would at least reduce potential
>>>>>>>>>> interference between the Intel and the GCC compilers.
>>>>>>>>>> Best,
>>>>>>>>>> Frederick
>>>>>>>>>>
>>>>>>>>>> bartosz mazur schrieb am Montag, 21.
Oktober 2024 um 16:33:45 >>>>>>>>>> UTC+2: >>>>>>>>>> >>>>>>>>>>> The error for ssmp is: >>>>>>>>>>> >>>>>>>>>>> ``` >>>>>>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>>>>>>>> CLX/DP TRY JIT STA COL >>>>>>>>>>> 0..13 4 4 0 0 >>>>>>>>>>> 14..23 0 0 0 0 >>>>>>>>>>> 24..64 0 0 0 0 >>>>>>>>>>> Registry and code: 13 MB + 32 KB (gemm=4) >>>>>>>>>>> Command (PID=54845): >>>>>>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>>>>>>>> H2O-9.inp -o H2O-9.out >>>>>>>>>>> Uptime: 2.861583 s >>>>>>>>>>> /var/spool/slurmd/r30c01b15/job3120330/slurm_script: line 36: >>>>>>>>>>> 54845 Segmentation fault (core dumped) >>>>>>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.ssmp -i >>>>>>>>>>> H2O-9.inp -o H2O-9.out >>>>>>>>>>> ``` >>>>>>>>>>> >>>>>>>>>>> and the last 100 lines of output: >>>>>>>>>>> >>>>>>>>>>> ``` >>>>>>>>>>> 000000:000001>> 12 20 >>>>>>>>>>> mp_sum_d start Ho >>>>>>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 12 20 >>>>>>>>>>> mp_sum_d 0.000 Ho >>>>>>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 11 13 >>>>>>>>>>> dbcsr_dot_sd 0.000 H >>>>>>>>>>> ostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 10 12 >>>>>>>>>>> calculate_ptrace_kp 0.0 >>>>>>>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 9 6 >>>>>>>>>>> evaluate_core_matrix_traces >>>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 9 6 >>>>>>>>>>> rebuild_ks_matrix start Ho >>>>>>>>>>> stmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 10 6 >>>>>>>>>>> qs_ks_build_kohn_sham_matrix >>>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 11 140 >>>>>>>>>>> pw_pool_create_pw st >>>>>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 12 79 >>>>>>>>>>> pw_create_c1d sta >>>>>>>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 12 79 >>>>>>>>>>> pw_create_c1d 0.0 >>>>>>>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 
000000:000001<< 11 140 >>>>>>>>>>> pw_pool_create_pw 0. >>>>>>>>>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 11 141 >>>>>>>>>>> pw_pool_create_pw st >>>>>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 12 80 >>>>>>>>>>> pw_create_c1d sta >>>>>>>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 12 80 >>>>>>>>>>> pw_create_c1d 0.0 >>>>>>>>>>> 00 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 11 141 >>>>>>>>>>> pw_pool_create_pw 0. >>>>>>>>>>> 000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 11 61 pw_copy >>>>>>>>>>> start Hostme >>>>>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 11 61 pw_copy >>>>>>>>>>> 0.004 Hostme >>>>>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 11 35 pw_axpy >>>>>>>>>>> start Hostme >>>>>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 11 35 pw_axpy >>>>>>>>>>> 0.002 Hostme >>>>>>>>>>> m: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 11 6 >>>>>>>>>>> pw_poisson_solve sta >>>>>>>>>>> rt Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 12 6 >>>>>>>>>>> pw_poisson_rebuild >>>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 12 6 >>>>>>>>>>> pw_poisson_rebuild >>>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 12 142 >>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 13 81 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 13 81 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 12 142 >>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 12 62 pw_copy >>>>>>>>>>> start Hos >>>>>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 12 62 pw_copy >>>>>>>>>>> 0.003 Hos >>>>>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 12 6 >>>>>>>>>>> pw_multiply_with >>>>>>>>>>> 
start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 12 6 >>>>>>>>>>> pw_multiply_with >>>>>>>>>>> 0.002 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 12 63 pw_copy >>>>>>>>>>> start Hos >>>>>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 12 63 pw_copy >>>>>>>>>>> 0.003 Hos >>>>>>>>>>> tmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 12 6 >>>>>>>>>>> pw_integral_ab st >>>>>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 12 6 >>>>>>>>>>> pw_integral_ab 0. >>>>>>>>>>> 005 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 12 7 >>>>>>>>>>> pw_poisson_set st >>>>>>>>>>> art Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 13 143 >>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 14 82 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 14 82 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 13 143 >>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 13 64 >>>>>>>>>>> pw_copy start >>>>>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 13 64 >>>>>>>>>>> pw_copy 0.003 >>>>>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 13 16 >>>>>>>>>>> pw_derive star >>>>>>>>>>> t Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 13 16 >>>>>>>>>>> pw_derive 0.00 >>>>>>>>>>> 6 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 13 144 >>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 14 83 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> start Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 14 83 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 13 144 >>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>> 0.000 Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 13 65 
>>>>>>>>>>> pw_copy start >>>>>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001<< 13 65 >>>>>>>>>>> pw_copy 0.004 >>>>>>>>>>> Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000001>> 13 17 >>>>>>>>>>> pw_derive star >>>>>>>>>>> t Hostmem: 380 MB GPUmem: 0 MB >>>>>>>>>>> ``` >>>>>>>>>>> >>>>>>>>>>> for psmp the last 100 lines is: >>>>>>>>>>> >>>>>>>>>>> ``` >>>>>>>>>>> 000000:000002<< 9 7 >>>>>>>>>>> evaluate_core_matrix_traces >>>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 9 7 >>>>>>>>>>> rebuild_ks_matrix start Ho >>>>>>>>>>> >>>>>>>>>>> stmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 10 7 >>>>>>>>>>> qs_ks_build_kohn_sham_matrix >>>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 11 164 >>>>>>>>>>> pw_pool_create_pw st >>>>>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 12 93 >>>>>>>>>>> pw_create_c1d sta >>>>>>>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 12 93 >>>>>>>>>>> pw_create_c1d 0.0 >>>>>>>>>>> 00 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 11 164 >>>>>>>>>>> pw_pool_create_pw 0. >>>>>>>>>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 11 165 >>>>>>>>>>> pw_pool_create_pw st >>>>>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 12 94 >>>>>>>>>>> pw_create_c1d sta >>>>>>>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 12 94 >>>>>>>>>>> pw_create_c1d 0.0 >>>>>>>>>>> 00 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 11 165 >>>>>>>>>>> pw_pool_create_pw 0. 
>>>>>>>>>>> 000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 11 73 pw_copy >>>>>>>>>>> start Hostme >>>>>>>>>>> >>>>>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 11 73 pw_copy >>>>>>>>>>> 0.001 Hostme >>>>>>>>>>> >>>>>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 11 41 pw_axpy >>>>>>>>>>> start Hostme >>>>>>>>>>> >>>>>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 11 41 pw_axpy >>>>>>>>>>> 0.001 Hostme >>>>>>>>>>> >>>>>>>>>>> m: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 11 52 mp_sum_d >>>>>>>>>>> start Hostm >>>>>>>>>>> >>>>>>>>>>> em: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 11 52 mp_sum_d >>>>>>>>>>> 0.000 Hostm >>>>>>>>>>> >>>>>>>>>>> em: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 11 7 >>>>>>>>>>> pw_poisson_solve sta >>>>>>>>>>> rt Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 12 7 >>>>>>>>>>> pw_poisson_rebuild >>>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 12 7 >>>>>>>>>>> pw_poisson_rebuild >>>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 12 166 >>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>> >>>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 13 95 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 13 95 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 12 166 >>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>> >>>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 12 74 pw_copy >>>>>>>>>>> start Hos >>>>>>>>>>> >>>>>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 12 74 pw_copy >>>>>>>>>>> 0.001 Hos >>>>>>>>>>> >>>>>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 12 7 >>>>>>>>>>> pw_multiply_with >>>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 12 7 >>>>>>>>>>> pw_multiply_with >>>>>>>>>>> 0.001 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 
12 75 pw_copy >>>>>>>>>>> start Hos >>>>>>>>>>> >>>>>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 12 75 pw_copy >>>>>>>>>>> 0.001 Hos >>>>>>>>>>> >>>>>>>>>>> tmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 12 7 >>>>>>>>>>> pw_integral_ab st >>>>>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 13 53 >>>>>>>>>>> mp_sum_d start >>>>>>>>>>> >>>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 13 53 >>>>>>>>>>> mp_sum_d 0.000 >>>>>>>>>>> >>>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 12 7 >>>>>>>>>>> pw_integral_ab 0. >>>>>>>>>>> 003 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 12 8 >>>>>>>>>>> pw_poisson_set st >>>>>>>>>>> art Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 13 167 >>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 14 96 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> >>>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 14 96 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> >>>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 13 167 >>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 13 76 >>>>>>>>>>> pw_copy start >>>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 13 76 >>>>>>>>>>> pw_copy 0.001 >>>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 13 19 >>>>>>>>>>> pw_derive star >>>>>>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 13 19 >>>>>>>>>>> pw_derive 0.00 >>>>>>>>>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 13 168 >>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002>> 14 97 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 14 97 >>>>>>>>>>> pw_create_c1d >>>>>>>>>>> 0.000 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>> 000000:000002<< 13 168 >>>>>>>>>>> 
pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>> 000000:000002>> 13 77 pw_copy start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>> 000000:000002<< 13 77 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>> 000000:000002>> 13 20 pw_derive start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>> ```
>>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>>> Bartosz
>>>>>>>>>>>
>>>>>>>>>>> On Monday, 21 October 2024 at 08:58:34 UTC+2, Frederick Stein wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Dear Bartosz,
>>>>>>>>>>>> I have no idea about the issue with LibXSMM.
>>>>>>>>>>>> Regarding the trace, I do not know either, as there is not much
>>>>>>>>>>>> that could break in pw_derive (it just performs multiplications) and the
>>>>>>>>>>>> sequence of operations is too unspecific. It may be that the code actually
>>>>>>>>>>>> breaks somewhere else. Can you do the same with the ssmp and post the last
>>>>>>>>>>>> 100 lines? This way, we remove the asynchronicity issues for backtraces
>>>>>>>>>>>> with the psmp version.
>>>>>>>>>>>> Best,
>>>>>>>>>>>> Frederick
>>>>>>>>>>>>
>>>>>>>>>>>> bartosz mazur schrieb am Sonntag, 20.
Oktober 2024 um 16:47:15 >>>>>>>>>>>> UTC+2: >>>>>>>>>>>> >>>>>>>>>>>>> The error is: >>>>>>>>>>>>> >>>>>>>>>>>>> ``` >>>>>>>>>>>>> LIBXSMM_VERSION: develop-1.17-3834 (25693946) >>>>>>>>>>>>> CLX/DP TRY JIT STA COL >>>>>>>>>>>>> 0..13 2 2 0 0 >>>>>>>>>>>>> 14..23 0 0 0 0 >>>>>>>>>>>>> >>>>>>>>>>>>> 24..64 0 0 0 0 >>>>>>>>>>>>> Registry and code: 13 MB + 16 KB (gemm=2) >>>>>>>>>>>>> Command (PID=2607388): >>>>>>>>>>>>> /lustre/pd01/hpc-kuchta-1716987452/software/cp2k/exe/local/cp2k.psmp -i >>>>>>>>>>>>> H2O-9.inp -o H2O-9.out >>>>>>>>>>>>> Uptime: 5.288243 s >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> =================================================================================== >>>>>>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>>>>>>>> = RANK 0 PID 2607388 RUNNING AT r21c01b10 >>>>>>>>>>>>> >>>>>>>>>>>>> = KILLED BY SIGNAL: 11 (Segmentation fault) >>>>>>>>>>>>> >>>>>>>>>>>>> =================================================================================== >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> =================================================================================== >>>>>>>>>>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES >>>>>>>>>>>>> = RANK 1 PID 2607389 RUNNING AT r21c01b10 >>>>>>>>>>>>> = KILLED BY SIGNAL: 9 (Killed) >>>>>>>>>>>>> >>>>>>>>>>>>> =================================================================================== >>>>>>>>>>>>> ``` >>>>>>>>>>>>> >>>>>>>>>>>>> and the last 20 lines: >>>>>>>>>>>>> >>>>>>>>>>>>> ``` >>>>>>>>>>>>> 000000:000002<< 13 76 >>>>>>>>>>>>> pw_copy 0.001 >>>>>>>>>>>>> Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>>>> 000000:000002>> 13 19 >>>>>>>>>>>>> pw_derive star >>>>>>>>>>>>> t Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>>>> 000000:000002<< 13 19 >>>>>>>>>>>>> pw_derive 0.00 >>>>>>>>>>>>> 2 Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>>>> 000000:000002>> 13 168 >>>>>>>>>>>>> pw_pool_create_pw >>>>>>>>>>>>> start Hostmem: 693 MB GPUmem: 0 MB >>>>>>>>>>>>> 000000:000002>> 14 
97 pw_create_c1d start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>>>> 000000:000002<< 14 97 pw_create_c1d 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>>>> 000000:000002<< 13 168 pw_pool_create_pw 0.000 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>>>> 000000:000002>> 13 77 pw_copy start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>>>> 000000:000002<< 13 77 pw_copy 0.001 Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>>>> 000000:000002>> 13 20 pw_derive start Hostmem: 693 MB GPUmem: 0 MB
>>>>>>>>>>>>> ```
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>> On Friday, 18 October 2024 at 17:18:39 UTC+2, Frederick Stein wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Please pick one of the failing tests. Then, add the TRACE
>>>>>>>>>>>>>> keyword to the &GLOBAL section and run the test manually. This
>>>>>>>>>>>>>> increases the size of the output file dramatically (to some millions of lines).
>>>>>>>>>>>>>> Can you send me the last ~20 lines of the output?
>>>>>>>>>>>>>> bartosz mazur wrote on Friday, 18 October 2024 at 17:09:40 UTC+2:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I'm using the do_regtests.py script, not make regtesting, but I
>>>>>>>>>>>>>>> assume it makes no difference. As I mentioned in my previous message, for
>>>>>>>>>>>>>>> `--ompthreads 1` all tests passed, both for ssmp and psmp. For ssmp
>>>>>>>>>>>>>>> with `--ompthreads 2` I observe errors similar to those for psmp with the same
>>>>>>>>>>>>>>> setting; I provide an example output as an attachment.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks
>>>>>>>>>>>>>>> Bartosz
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Friday, 18 October 2024 at 16:24:16 UTC+2, Frederick Stein wrote:
>>>>>>>>>>>>>>>> Dear Bartosz,
>>>>>>>>>>>>>>>> What happens if you set the number of OpenMP threads to 1
>>>>>>>>>>>>>>>> (add '--ompthreads 1' to TESTOPTS)?
What errors do you observe in the case of the ssmp?
>>>>>>>>>>>>>>>> Best,
>>>>>>>>>>>>>>>> Frederick
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> bartosz mazur wrote on Friday, 18 October 2024 at 15:37:43 UTC+2:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hi Frederick,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> thanks again for the help. So I have tested different simulation variants and I know that the problem occurs when using OMP. For MPI calculations without OMP, all tests pass. I have also tested the effect of the `OMP_PROC_BIND` and `OMP_PLACES` parameters and, apart from the effect on simulation time, they have no significant effect on the presence of errors. Below are the results for ssmp:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, correct, total, wrong, failed, time
>>>>>>>>>>>>>>>>> spread, threads, 3850, 4144, 4, 290, 186min
>>>>>>>>>>>>>>>>> spread, cores, 3831, 4144, 3, 310, 183min
>>>>>>>>>>>>>>>>> spread, sockets, 3864, 4144, 3, 277, 104min
>>>>>>>>>>>>>>>>> close, threads, 3879, 4144, 3, 262, 171min
>>>>>>>>>>>>>>>>> close, cores, 3854, 4144, 0, 290, 168min
>>>>>>>>>>>>>>>>> close, sockets, 3865, 4144, 3, 276, 104min
>>>>>>>>>>>>>>>>> master, threads, 4121, 4144, 0, 23, 1002min
>>>>>>>>>>>>>>>>> master, cores, 4121, 4144, 0, 23, 986min
>>>>>>>>>>>>>>>>> master, sockets, 3942, 4144, 3, 199, 219min
>>>>>>>>>>>>>>>>> false, threads, 3918, 4144, 0, 226, 178min
>>>>>>>>>>>>>>>>> false, cores, 3919, 4144, 3, 222, 176min
>>>>>>>>>>>>>>>>> false, sockets, 3856, 4144, 4, 284, 104min
>>>>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> and psmp:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>>>>> OMP_PROC_BIND, OMP_PLACES, results
>>>>>>>>>>>>>>>>> spread, threads, Summary: correct: 4097 / 4227; failed: 130; 495min
>>>>>>>>>>>>>>>>> spread, cores, 26 / 362
>>>>>>>>>>>>>>>>> spread,
cores, 26 / 362
>>>>>>>>>>>>>>>>> close, threads, Summary: correct: 4133 / 4227; failed: 94; 484min
>>>>>>>>>>>>>>>>> close, cores, 60 / 362
>>>>>>>>>>>>>>>>> close, sockets, 13 / 362
>>>>>>>>>>>>>>>>> master, threads, 13 / 362
>>>>>>>>>>>>>>>>> master, cores, 79 / 362
>>>>>>>>>>>>>>>>> master, sockets, Summary: correct: 4153 / 4227; failed: 74; 563min
>>>>>>>>>>>>>>>>> false, threads, Summary: correct: 4153 / 4227; failed: 74; 556min
>>>>>>>>>>>>>>>>> false, cores, Summary: correct: 4106 / 4227; failed: 121; 511min
>>>>>>>>>>>>>>>>> false, sockets, 96 / 362
>>>>>>>>>>>>>>>>> not specified, not specified, Summary: correct: 4129 / 4227; failed: 98; 263min
>>>>>>>>>>>>>>>>> ```
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Any ideas what I could do next to get more information about the source of the problem, or maybe you see a potential solution at this stage? I would appreciate any further help.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Best
>>>>>>>>>>>>>>>>> Bartosz
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Friday, 11 October 2024 at 14:30:25 UTC+2, Frederick Stein wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Dear Bartosz,
>>>>>>>>>>>>>>>>>> If I am not mistaken, you used 8 OpenMP threads. The tests do not run that efficiently with such a large number of threads; 2 should be sufficient.
>>>>>>>>>>>>>>>>>> The test results suggest that most of the functionality may work, but due to a missing backtrace (or similar information), it is hard to tell why the tests fail. You could also try to run some of the single-node tests to assess the stability of CP2K.
>>>>>>>>>>>>>>>>>> Best,
>>>>>>>>>>>>>>>>>> Frederick
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> bartosz mazur wrote on Friday, 11 October 2024 at 13:48:42 UTC+2:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Sorry, forgot attachments.
>>>>>>>>>>>>>>>>>>>
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/dc826aea-b9a5-4f40-be62-bc82e31bf99en%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthias.krack at psi.ch Mon Oct 28 08:56:42 2024
From: matthias.krack at psi.ch (Krack Matthias)
Date: Mon, 28 Oct 2024 08:56:42 +0000
Subject: [CP2K-user] [CP2K:20827] OpenBLAS wget error
In-Reply-To:
References:
Message-ID:

Hi, did you install homebrew including the package coreutils ('brew install coreutils')?

From: cp2k at googlegroups.com on behalf of Rashid Riboul
Date: Monday, 28 October 2024 at 06:21
To: cp2k
Subject: [CP2K:20824] OpenBLAS wget error

Hi; I'm trying to install this new build of CP2K and it errors out whenever it tries to get my architecture info using OpenBLAS:

wget --quiet https://www.cp2k.org/static/downloads/OpenBLAS-0.3.27.tar.gz
usage: sha256sum [-bctwz] [files ...]
OpenBLAS-0.3.27.tar.gz
ERROR: (/Users/caracallynx/MD/cp2k/tools/toolchain/scripts/get_openblas_arch.sh) Checksum of OpenBLAS-0.3.27.tar.gz could not be verified, abort.
ERROR: (./scripts/stage0/setup_buildtools.sh, line 60) Non-zero exit code detected.

All I did was download the file from GitHub and run it as is. Attempting to install this on my M1 Max MacBook Pro.

Thanks in advance.

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
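The checksum failure above comes from the verification step in the toolchain. For context, the illustrative sketch below shows the GNU coreutils style of checksum verification that the scripts rely on; the file names are made up for the demonstration, and the dummy file stands in for the real OpenBLAS tarball:

```shell
# Create a dummy file standing in for the downloaded tarball (demonstration only).
printf 'dummy payload\n' > OpenBLAS-example.tar.gz

# Record its SHA-256 checksum, then verify it the GNU coreutils way.
sha256sum OpenBLAS-example.tar.gz > checksum.txt
sha256sum --check checksum.txt   # with GNU coreutils this prints "OpenBLAS-example.tar.gz: OK"
```

On macOS, the BSD `sha256sum` found in the default PATH does not behave like the GNU version, which is why installing coreutils via homebrew (and making sure it precedes the system tools in PATH) is the suggestion in this thread.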
To view this discussion visit https://groups.google.com/d/msgid/cp2k/f63cafe7-9a29-4ffa-b5e1-38b3b43316d0n%40googlegroups.com.
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/ZRAP278MB08276447C3E22DC4F59BEEECF44A2%40ZRAP278MB0827.CHEP278.PROD.OUTLOOK.COM.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pierreb24 at gmail.com Mon Oct 28 09:55:40 2024
From: pierreb24 at gmail.com (Pierre Beaujean)
Date: Mon, 28 Oct 2024 02:55:40 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20828] AIMD with NPT (NPT_I?) gets quickly stuck
Message-ID:

Hello, I have an issue while running AIMD with NPT (here with NPT_I): on some systems, after a few steps (about ~200, but that can vary), the calculation just stops doing anything useful (the output does not change, nor do the .dcd or .ener files). However, when SSHing into the calculation nodes, CPU usage is still high, so it may be stuck in a loop somewhere. More information:
- Here is an example input: input.txt. I would rather not share the geometry publicly, but let's say that it is a polymer chain with a few molecules of organic solvent (ethylene carbonate, in fact), and it contains nothing exotic (but what is exotic and what is not is probably a matter of taste :p ). The corresponding output is there.
- The system was previously relaxed with a 10 ps NVT run that went smoothly. I first tried to restart the calculation from the last NVT step, adding the barostat: same result.
- Some systems of similar size and composition (actually, with other solvents) run smoothly.
- I had already posted this problem on GitHub (see https://github.com/cp2k/cp2k/issues/3361), but there it was occurring with NPT_V, while NPT_I was working.
My convergence criteria were indeed a bit loose back then, so this time I tightened them by two orders of magnitude (EPS_SCF is now 1e-8 and EPS_DEFAULT is 1e-12), but ... no luck there :(
- I'm currently using version 9.1 (changing the version during a project is not always a good idea), but I find this error with 2023.1 as well (I could try the latest version, but I would need to go to the trouble of making a new EasyBuild file). I use cp2k.psmp on 256 cores with MPI (OMP_NUM_THREADS is set to 1).
- I tried on another supercomputer: same problem.
- It *always* stops at the same point (see the output there), right before a new MD step: the "Total energy" line is always the last thing printed (this was already the case in my GitHub message).

If you have any insight, thanks :)

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/e02be921-0c06-45b0-9564-a4e6e4ff1964n%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hsupright at gmail.com Tue Oct 29 01:19:57 2024
From: hsupright at gmail.com (sh X)
Date: Mon, 28 Oct 2024 18:19:57 -0700 (PDT)
Subject: [CP2K-user] [CP2K:20829] Free energy post-processing of cp2k metadynamics
Message-ID:

Hello, when I try to calculate the free energy of a metadynamics run with graph.psmp:

graph.psmp -ndim 4 -ndw 1 2 -file ANM-100K-1.restart -cp2k

the free energy values in the fes.dat file are always 0. What could be the problem?

-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/c6ab9f08-ca30-4270-90be-770cd8b8c793n%40googlegroups.com.
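For symptoms like the all-zero fes.dat above, a quick scan of the file can confirm whether every free-energy value really is zero before digging deeper. The snippet below is an illustrative sketch only: the assumed layout (plain whitespace-separated columns with the free energy in the last column, `#` for comment lines) and the file name `fes_example.dat` are assumptions for the demonstration, not taken from the graph.psmp documentation.

```shell
# Create a small synthetic fes.dat-style file (two collective-variable
# columns plus a free-energy column) purely for demonstration.
printf '0.0 0.0 0.0\n0.1 0.0 0.0\n0.2 0.0 -1.5\n' > fes_example.dat

# Count non-zero entries in the last column, skipping blanks and comment lines.
awk 'NF && $1 !~ /^#/ { if ($NF + 0 != 0) nonzero++ }
     END { print (nonzero ? "non-zero free energies found" : "all free energies are zero") }' fes_example.dat
```

If the same command on a real fes.dat prints the all-zero message, the problem is likely in how the surface was generated (e.g. the chosen -ndw collective variables or the restart file) rather than in the plotting step.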
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ANM-100K-1.restart Type: application/octet-stream Size: 12569 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: fes.dat Type: application/octet-stream Size: 610000 bytes Desc: not available URL: From sastargetx at gmail.com Tue Oct 29 05:31:30 2024 From: sastargetx at gmail.com (Rashid Riboul) Date: Mon, 28 Oct 2024 22:31:30 -0700 (PDT) Subject: [CP2K-user] [CP2K:20830] OpenBLAS wget error In-Reply-To: References: Message-ID: Yes; it installed/updated with the command from the arch file. On Monday, October 28, 2024 at 1:56:58?AM UTC-7 Krack Matthias wrote: > Hi, did you install homebrew including the package coreutils (?brew > install coreutils?)? > > > > *From: *cp... at googlegroups.com on behalf of > Rashid Riboul > *Date: *Monday, 28 October 2024 at 06:21 > *To: *cp2k > *Subject: *[CP2K:20824] OpenBLAS wget error > > Hi; I'm trying to install this new build of CP2K and it errors out > whenever it tries to get my architecture info using OpenBLAS: > > wget --quiet https://www.cp2k.org/static/downloads/OpenBLAS-0.3.27.tar.gz > usage: sha256sum [-bctwz] [files ...] > OpenBLAS-0.3.27.tar.gz > ERROR: > (/Users/caracallynx/MD/cp2k/tools/toolchain/scripts/get_openblas_arch.sh) > Checksum of OpenBLAS-0.3.27.tar.gz could not be verified, abort. > > ERROR: (./scripts/stage0/setup_buildtools.sh, line 60) Non-zero exit code > detected. > > > > All I did was download the file from the GitHub and run as is. Attempting > to install this on my M1 Max MacBook Pro. > > > > Thanks in advance. > > -- > You received this message because you are subscribed to the Google Groups > "cp2k" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to cp2k+uns... at googlegroups.com. 
> To view this discussion visit
> https://groups.google.com/d/msgid/cp2k/f63cafe7-9a29-4ffa-b5e1-38b3b43316d0n%40googlegroups.com.
-- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/dc0d112a-0dc3-4bd3-90cd-13e86eadb68dn%40googlegroups.com.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthias.krack at psi.ch Tue Oct 29 07:44:32 2024
From: matthias.krack at psi.ch (Krack Matthias)
Date: Tue, 29 Oct 2024 07:44:32 +0000
Subject: [CP2K-user] [CP2K:20831] OpenBLAS wget error
In-Reply-To:
References:
Message-ID:

Is "/opt/homebrew/bin" in your PATH and does it precede "/sbin" and "/usr/bin" therein? (Check with "echo $PATH"; "which sha256sum" should return "/opt/homebrew/bin/sha256sum".)

From: cp2k at googlegroups.com on behalf of Rashid Riboul
Date: Tuesday, 29 October 2024 at 06:55
To: cp2k
Subject: Re: [CP2K:20830] OpenBLAS wget error

Yes; it installed/updated with the command from the arch file.

On Monday, October 28, 2024 at 1:56:58 AM UTC-7 Krack Matthias wrote:

Hi, did you install homebrew including the package coreutils ('brew install coreutils')?

From: cp... at googlegroups.com on behalf of Rashid Riboul
Date: Monday, 28 October 2024 at 06:21
To: cp2k
Subject: [CP2K:20824] OpenBLAS wget error

Hi; I'm trying to install this new build of CP2K and it errors out whenever it tries to get my architecture info using OpenBLAS:

wget --quiet https://www.cp2k.org/static/downloads/OpenBLAS-0.3.27.tar.gz
usage: sha256sum [-bctwz] [files ...]
OpenBLAS-0.3.27.tar.gz
ERROR: (/Users/caracallynx/MD/cp2k/tools/toolchain/scripts/get_openblas_arch.sh) Checksum of OpenBLAS-0.3.27.tar.gz could not be verified, abort.
ERROR: (./scripts/stage0/setup_buildtools.sh, line 60) Non-zero exit code detected.

All I did was download the file from GitHub and run it as is. Attempting to install this on my M1 Max MacBook Pro.

Thanks in advance.
All I did was download the file from the GitHub and run as is. Attempting to install this on my M1 Max MacBook Pro. Thanks in advance. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+uns... at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/f63cafe7-9a29-4ffa-b5e1-38b3b43316d0n%40googlegroups.com. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/dc0d112a-0dc3-4bd3-90cd-13e86eadb68dn%40googlegroups.com. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/ZRAP278MB082774057475D8D1C36CB22FF44B2%40ZRAP278MB0827.CHEP278.PROD.OUTLOOK.COM. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sastargetx at gmail.com Wed Oct 30 05:08:10 2024 From: sastargetx at gmail.com (Rashid Riboul) Date: Tue, 29 Oct 2024 22:08:10 -0700 (PDT) Subject: [CP2K-user] [CP2K:20831] OpenBLAS wget error In-Reply-To: References: Message-ID: <2e536901-5616-41b0-884f-4966c9a7037en@googlegroups.com> Yes; "/opt/homebrew/bin" is the first one in PATH for me. On Tuesday, October 29, 2024 at 12:44:46?AM UTC-7 Krack Matthias wrote: > Is ?/opt/homebrew/bin? in your PATH and does it precede ?/sbin? and > ?/usr/bin? therein (check with ?echo $PATH? and ?which sha256sum? should > return ?/opt/homebrew/bin/sha256sum?). > > > > *From: *cp... 
at googlegroups.com on behalf of > Rashid Riboul > *Date: *Tuesday, 29 October 2024 at 06:55 > *To: *cp2k > *Subject: *Re: [CP2K:20830] OpenBLAS wget error > > Yes; it installed/updated with the command from the arch file. > > > > On Monday, October 28, 2024 at 1:56:58?AM UTC-7 Krack Matthias wrote: > > Hi, did you install homebrew including the package coreutils (?brew > install coreutils?)? > > > > *From: *cp... at googlegroups.com on behalf of > Rashid Riboul > *Date: *Monday, 28 October 2024 at 06:21 > *To: *cp2k > *Subject: *[CP2K:20824] OpenBLAS wget error > > Hi; I'm trying to install this new build of CP2K and it errors out > whenever it tries to get my architecture info using OpenBLAS: > > wget --quiet https://www.cp2k.org/static/downloads/OpenBLAS-0.3.27.tar.gz > usage: sha256sum [-bctwz] [files ...] > OpenBLAS-0.3.27.tar.gz > ERROR: > (/Users/caracallynx/MD/cp2k/tools/toolchain/scripts/get_openblas_arch.sh) > Checksum of OpenBLAS-0.3.27.tar.gz could not be verified, abort. > > ERROR: (./scripts/stage0/setup_buildtools.sh, line 60) Non-zero exit code > detected. > > > > All I did was download the file from the GitHub and run as is. Attempting > to install this on my M1 Max MacBook Pro. > > > > Thanks in advance. > > -- > You received this message because you are subscribed to the Google Groups > "cp2k" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to cp2k+uns... at googlegroups.com. > To view this discussion visit > https://groups.google.com/d/msgid/cp2k/f63cafe7-9a29-4ffa-b5e1-38b3b43316d0n%40googlegroups.com > > . > > -- > You received this message because you are subscribed to the Google Groups > "cp2k" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to cp2k+uns... at googlegroups.com. > > To view this discussion visit > https://groups.google.com/d/msgid/cp2k/dc0d112a-0dc3-4bd3-90cd-13e86eadb68dn%40googlegroups.com > > . 
> -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/2e536901-5616-41b0-884f-4966c9a7037en%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthias.krack at psi.ch Wed Oct 30 06:23:24 2024 From: matthias.krack at psi.ch (Krack Matthias) Date: Wed, 30 Oct 2024 06:23:24 +0000 Subject: [CP2K-user] [CP2K:20832] OpenBLAS wget error In-Reply-To: <2e536901-5616-41b0-884f-4966c9a7037en@googlegroups.com> References: <2e536901-5616-41b0-884f-4966c9a7037en@googlegroups.com> Message-ID: Does this patch solve the problem? From: cp2k at googlegroups.com on behalf of Rashid Riboul Date: Wednesday, 30 October 2024 at 06:34 To: cp2k Subject: Re: [CP2K:20831] OpenBLAS wget error Yes; "/opt/homebrew/bin" is the first one in PATH for me. On Tuesday, October 29, 2024 at 12:44:46?AM UTC-7 Krack Matthias wrote: Is ?/opt/homebrew/bin? in your PATH and does it precede ?/sbin? and ?/usr/bin? therein (check with ?echo $PATH? and ?which sha256sum? should return ?/opt/homebrew/bin/sha256sum?). From: cp... at googlegroups.com on behalf of Rashid Riboul Date: Tuesday, 29 October 2024 at 06:55 To: cp2k Subject: Re: [CP2K:20830] OpenBLAS wget error Yes; it installed/updated with the command from the arch file. On Monday, October 28, 2024 at 1:56:58?AM UTC-7 Krack Matthias wrote: Hi, did you install homebrew including the package coreutils (?brew install coreutils?)? From: cp... 
at googlegroups.com on behalf of Rashid Riboul Date: Monday, 28 October 2024 at 06:21 To: cp2k Subject: [CP2K:20824] OpenBLAS wget error Hi; I'm trying to install this new build of CP2K and it errors out whenever it tries to get my architecture info using OpenBLAS: wget --quiet https://www.cp2k.org/static/downloads/OpenBLAS-0.3.27.tar.gz usage: sha256sum [-bctwz] [files ...] OpenBLAS-0.3.27.tar.gz ERROR: (/Users/caracallynx/MD/cp2k/tools/toolchain/scripts/get_openblas_arch.sh) Checksum of OpenBLAS-0.3.27.tar.gz could not be verified, abort. ERROR: (./scripts/stage0/setup_buildtools.sh, line 60) Non-zero exit code detected. All I did was download the file from the GitHub and run as is. Attempting to install this on my M1 Max MacBook Pro. Thanks in advance. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+uns... at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/f63cafe7-9a29-4ffa-b5e1-38b3b43316d0n%40googlegroups.com. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+uns... at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/dc0d112a-0dc3-4bd3-90cd-13e86eadb68dn%40googlegroups.com. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/2e536901-5616-41b0-884f-4966c9a7037en%40googlegroups.com. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. 
To view this discussion visit https://groups.google.com/d/msgid/cp2k/ZRAP278MB0827F5FC68E5B08E1F3899D6F4542%40ZRAP278MB0827.CHEP278.PROD.OUTLOOK.COM. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sastargetx at gmail.com Wed Oct 30 07:06:37 2024 From: sastargetx at gmail.com (Rashid Riboul) Date: Wed, 30 Oct 2024 00:06:37 -0700 (PDT) Subject: [CP2K-user] [CP2K:20833] OpenBLAS wget error In-Reply-To: References: <2e536901-5616-41b0-884f-4966c9a7037en@googlegroups.com> Message-ID: I re-downloaded the master version to check if the patch was utilized; it was, but I still get that error after trying to compile. On Tuesday, October 29, 2024 at 11:23:36?PM UTC-7 Krack Matthias wrote: > Does this patch > > solve the problem? > > > > *From: *cp... at googlegroups.com on behalf of > Rashid Riboul > *Date: *Wednesday, 30 October 2024 at 06:34 > *To: *cp2k > *Subject: *Re: [CP2K:20831] OpenBLAS wget error > > Yes; "/opt/homebrew/bin" is the first one in PATH for me. > > On Tuesday, October 29, 2024 at 12:44:46?AM UTC-7 Krack Matthias wrote: > > Is ?/opt/homebrew/bin? in your PATH and does it precede ?/sbin? and > ?/usr/bin? therein (check with ?echo $PATH? and ?which sha256sum? should > return ?/opt/homebrew/bin/sha256sum?). > > > > *From: *cp... at googlegroups.com on behalf of > Rashid Riboul > *Date: *Tuesday, 29 October 2024 at 06:55 > *To: *cp2k > *Subject: *Re: [CP2K:20830] OpenBLAS wget error > > Yes; it installed/updated with the command from the arch file. > > > > On Monday, October 28, 2024 at 1:56:58?AM UTC-7 Krack Matthias wrote: > > Hi, did you install homebrew including the package coreutils (?brew > install coreutils?)? > > > > *From: *cp... 
at googlegroups.com on behalf of > Rashid Riboul > *Date: *Monday, 28 October 2024 at 06:21 > *To: *cp2k > *Subject: *[CP2K:20824] OpenBLAS wget error > > Hi; I'm trying to install this new build of CP2K and it errors out > whenever it tries to get my architecture info using OpenBLAS: > > wget --quiet https://www.cp2k.org/static/downloads/OpenBLAS-0.3.27.tar.gz > usage: sha256sum [-bctwz] [files ...] > OpenBLAS-0.3.27.tar.gz > ERROR: > (/Users/caracallynx/MD/cp2k/tools/toolchain/scripts/get_openblas_arch.sh) > Checksum of OpenBLAS-0.3.27.tar.gz could not be verified, abort. > > ERROR: (./scripts/stage0/setup_buildtools.sh, line 60) Non-zero exit code > detected. > > > > All I did was download the file from the GitHub and run as is. Attempting > to install this on my M1 Max MacBook Pro. > > > > Thanks in advance. > > -- > You received this message because you are subscribed to the Google Groups > "cp2k" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to cp2k+uns... at googlegroups.com. > To view this discussion visit > https://groups.google.com/d/msgid/cp2k/f63cafe7-9a29-4ffa-b5e1-38b3b43316d0n%40googlegroups.com > > . > > -- > You received this message because you are subscribed to the Google Groups > "cp2k" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to cp2k+uns... at googlegroups.com. > > To view this discussion visit > https://groups.google.com/d/msgid/cp2k/dc0d112a-0dc3-4bd3-90cd-13e86eadb68dn%40googlegroups.com > > . > > -- > You received this message because you are subscribed to the Google Groups > "cp2k" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to cp2k+uns... at googlegroups.com. > > To view this discussion visit > https://groups.google.com/d/msgid/cp2k/2e536901-5616-41b0-884f-4966c9a7037en%40googlegroups.com > > . > -- You received this message because you are subscribed to the Google Groups "cp2k" group. 
To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/b2a27352-38d8-4342-9b07-4187b3a2c186n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthias.krack at psi.ch Wed Oct 30 08:22:31 2024 From: matthias.krack at psi.ch (Krack Matthias) Date: Wed, 30 Oct 2024 08:22:31 +0000 Subject: [CP2K-user] [CP2K:20834] OpenBLAS wget error In-Reply-To: References: <2e536901-5616-41b0-884f-4966c9a7037en@googlegroups.com> Message-ID: The patch is not yet in the master version. From: cp2k at googlegroups.com on behalf of Rashid Riboul Date: Wednesday, 30 October 2024 at 09:21 To: cp2k Subject: Re: [CP2K:20833] OpenBLAS wget error I re-downloaded the master version to check if the patch was utilized; it was, but I still get that error after trying to compile. On Tuesday, October 29, 2024 at 11:23:36?PM UTC-7 Krack Matthias wrote: Does this patch solve the problem? From: cp... at googlegroups.com on behalf of Rashid Riboul Date: Wednesday, 30 October 2024 at 06:34 To: cp2k Subject: Re: [CP2K:20831] OpenBLAS wget error Yes; "/opt/homebrew/bin" is the first one in PATH for me. On Tuesday, October 29, 2024 at 12:44:46?AM UTC-7 Krack Matthias wrote: Is ?/opt/homebrew/bin? in your PATH and does it precede ?/sbin? and ?/usr/bin? therein (check with ?echo $PATH? and ?which sha256sum? should return ?/opt/homebrew/bin/sha256sum?). From: cp... at googlegroups.com on behalf of Rashid Riboul Date: Tuesday, 29 October 2024 at 06:55 To: cp2k Subject: Re: [CP2K:20830] OpenBLAS wget error Yes; it installed/updated with the command from the arch file. On Monday, October 28, 2024 at 1:56:58?AM UTC-7 Krack Matthias wrote: Hi, did you install homebrew including the package coreutils (?brew install coreutils?)? From: cp... 
at googlegroups.com on behalf of Rashid Riboul Date: Monday, 28 October 2024 at 06:21 To: cp2k Subject: [CP2K:20824] OpenBLAS wget error Hi; I'm trying to install this new build of CP2K and it errors out whenever it tries to get my architecture info using OpenBLAS: wget --quiet https://www.cp2k.org/static/downloads/OpenBLAS-0.3.27.tar.gz usage: sha256sum [-bctwz] [files ...] OpenBLAS-0.3.27.tar.gz ERROR: (/Users/caracallynx/MD/cp2k/tools/toolchain/scripts/get_openblas_arch.sh) Checksum of OpenBLAS-0.3.27.tar.gz could not be verified, abort. ERROR: (./scripts/stage0/setup_buildtools.sh, line 60) Non-zero exit code detected. All I did was download the file from the GitHub and run as is. Attempting to install this on my M1 Max MacBook Pro. Thanks in advance. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+uns... at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/f63cafe7-9a29-4ffa-b5e1-38b3b43316d0n%40googlegroups.com. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+uns... at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/dc0d112a-0dc3-4bd3-90cd-13e86eadb68dn%40googlegroups.com. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+uns... at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/2e536901-5616-41b0-884f-4966c9a7037en%40googlegroups.com. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. 
To view this discussion visit https://groups.google.com/d/msgid/cp2k/b2a27352-38d8-4342-9b07-4187b3a2c186n%40googlegroups.com. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/ZRAP278MB0827EA7208FE101FD7058D6AF4542%40ZRAP278MB0827.CHEP278.PROD.OUTLOOK.COM. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sastargetx at gmail.com Wed Oct 30 16:52:16 2024 From: sastargetx at gmail.com (Rashid Riboul) Date: Wed, 30 Oct 2024 09:52:16 -0700 (PDT) Subject: [CP2K-user] [CP2K:20835] OpenBLAS wget error In-Reply-To: References: <2e536901-5616-41b0-884f-4966c9a7037en@googlegroups.com> Message-ID: Oh; I'm not aware of how to download the patch, then. I will attempt later tonight and let you know the results! On Wednesday, October 30, 2024 at 1:22:49?AM UTC-7 Krack Matthias wrote: > The patch is not yet in the master version. > > > > *From: *cp... at googlegroups.com on behalf of > Rashid Riboul > *Date: *Wednesday, 30 October 2024 at 09:21 > *To: *cp2k > *Subject: *Re: [CP2K:20833] OpenBLAS wget error > > I re-downloaded the master version to check if the patch was utilized; it > was, but I still get that error after trying to compile. > > On Tuesday, October 29, 2024 at 11:23:36?PM UTC-7 Krack Matthias wrote: > > Does this patch > > solve the problem? > > > > *From: *cp... at googlegroups.com on behalf of > Rashid Riboul > *Date: *Wednesday, 30 October 2024 at 06:34 > *To: *cp2k > *Subject: *Re: [CP2K:20831] OpenBLAS wget error > > Yes; "/opt/homebrew/bin" is the first one in PATH for me. > > On Tuesday, October 29, 2024 at 12:44:46?AM UTC-7 Krack Matthias wrote: > > Is ?/opt/homebrew/bin? in your PATH and does it precede ?/sbin? and > ?/usr/bin? therein (check with ?echo $PATH? and ?which sha256sum? 
should > return ?/opt/homebrew/bin/sha256sum?). > > > > *From: *cp... at googlegroups.com on behalf of > Rashid Riboul > *Date: *Tuesday, 29 October 2024 at 06:55 > *To: *cp2k > *Subject: *Re: [CP2K:20830] OpenBLAS wget error > > Yes; it installed/updated with the command from the arch file. > > > > On Monday, October 28, 2024 at 1:56:58?AM UTC-7 Krack Matthias wrote: > > Hi, did you install homebrew including the package coreutils (?brew > install coreutils?)? > > > > *From: *cp... at googlegroups.com on behalf of > Rashid Riboul > *Date: *Monday, 28 October 2024 at 06:21 > *To: *cp2k > *Subject: *[CP2K:20824] OpenBLAS wget error > > Hi; I'm trying to install this new build of CP2K and it errors out > whenever it tries to get my architecture info using OpenBLAS: > > wget --quiet https://www.cp2k.org/static/downloads/OpenBLAS-0.3.27.tar.gz > usage: sha256sum [-bctwz] [files ...] > OpenBLAS-0.3.27.tar.gz > ERROR: > (/Users/caracallynx/MD/cp2k/tools/toolchain/scripts/get_openblas_arch.sh) > Checksum of OpenBLAS-0.3.27.tar.gz could not be verified, abort. > > ERROR: (./scripts/stage0/setup_buildtools.sh, line 60) Non-zero exit code > detected. > > > > All I did was download the file from the GitHub and run as is. Attempting > to install this on my M1 Max MacBook Pro. > > > > Thanks in advance. > > -- > You received this message because you are subscribed to the Google Groups > "cp2k" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to cp2k+uns... at googlegroups.com. > To view this discussion visit > https://groups.google.com/d/msgid/cp2k/f63cafe7-9a29-4ffa-b5e1-38b3b43316d0n%40googlegroups.com > > . > > -- > You received this message because you are subscribed to the Google Groups > "cp2k" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to cp2k+uns... at googlegroups.com. 
> > To view this discussion visit > https://groups.google.com/d/msgid/cp2k/dc0d112a-0dc3-4bd3-90cd-13e86eadb68dn%40googlegroups.com > > . > > -- > You received this message because you are subscribed to the Google Groups > "cp2k" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to cp2k+uns... at googlegroups.com. > > To view this discussion visit > https://groups.google.com/d/msgid/cp2k/2e536901-5616-41b0-884f-4966c9a7037en%40googlegroups.com > > . > > -- > You received this message because you are subscribed to the Google Groups > "cp2k" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to cp2k+uns... at googlegroups.com. > > To view this discussion visit > https://groups.google.com/d/msgid/cp2k/b2a27352-38d8-4342-9b07-4187b3a2c186n%40googlegroups.com > > . > -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/ebeaa504-2ec0-41d4-83ad-fe510f61221cn%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuncerhasan20 at gmail.com Wed Oct 30 21:04:57 2024 From: tuncerhasan20 at gmail.com (=?UTF-8?Q?Hasan_Tun=C3=A7er?=) Date: Wed, 30 Oct 2024 14:04:57 -0700 (PDT) Subject: [CP2K-user] [CP2K:20836] DFT-MD Error in CP2K Message-ID: Hi, I am trying to do DFT-MD. However, although the job finishes successfully, my atoms end up extremely far apart from each other. What might be the reason? Please see the input file as attached. Thanks, Hasan -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
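A general note for this kind of "exploding geometry" in Born-Oppenheimer MD: the usual suspects are a too-large TIMESTEP, a missing thermostat during equilibration, or noisy forces from a loosely converged SCF. The fragment below is an editorial sketch of conservative &MOTION settings, not the poster's attached Ru_md (1).inp; all values are illustrative:

```
! Editorial sketch only (not the attached input file); values are illustrative.
&MOTION
  &MD
    ENSEMBLE NVT
    STEPS 1000
    TIMESTEP 0.5          ! fs; too large a step is a common cause of atoms flying apart
    TEMPERATURE 300.0     ! K
    &THERMOSTAT
      TYPE NOSE
    &END THERMOSTAT
  &END MD
&END MOTION
```

Tightening EPS_SCF (and EPS_DEFAULT) is also worth checking, since forces from an unconverged SCF heat the system artificially.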
To view this discussion visit https://groups.google.com/d/msgid/cp2k/ba62e0ed-8ffb-49ab-b58c-0cd5e790c271n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Ru_md (1).inp Type: chemical/x-gamess-input Size: 1565 bytes Desc: not available URL: From l13douma at 163.com Thu Oct 31 03:04:37 2024 From: l13douma at 163.com (=?GBK?B?8t3y9g==?=) Date: Thu, 31 Oct 2024 11:04:37 +0800 (CST) Subject: [CP2K-user] [CP2K:20837] ScF is a convergent trend, but rebound in Ni(111)Optimization. Message-ID: <2644929c.385d.192e0883b17.Coremail.l13douma@163.com> Dear CP2K experts, I want to optimize a Ni(111) slab of 144 atoms for an amino adsorption energy test. The magnetic moment of nickel is 1, so the chosen multiplicity is 145, with UKS and smearing enabled. I use RevPBE+D3(BJ), CUTOFF 650, REL_CUTOFF 55, ALPHA 0.4, and NBROYDEN 12. The SCF shows a convergent trend but then rebounds. I am using CP2K version 2024.1 and do not know which parameters need to be modified. I hope to get some suggestions to make the calculation go smoothly. Thanks, everyone. A new CP2K user. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/2644929c.385d.192e0883b17.Coremail.l13douma%40163.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 23Ni111.inp Type: application/octet-stream Size: 12295 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: slurm-74628887.out Type: application/octet-stream Size: 18179 bytes Desc: not available URL: From sastargetx at gmail.com Thu Oct 31 06:57:21 2024 From: sastargetx at gmail.com (Rashid Riboul) Date: Wed, 30 Oct 2024 23:57:21 -0700 (PDT) Subject: [CP2K-user] [CP2K:20838] OpenBLAS wget error In-Reply-To: References: <2e536901-5616-41b0-884f-4966c9a7037en@googlegroups.com> Message-ID: Yes; the patch fixed the issue! Thanks for the help! -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/ebc61459-7250-4f59-8a09-bc55228909d8n%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marci.akira at gmail.com Thu Oct 31 07:30:34 2024 From: marci.akira at gmail.com (Marcella Iannuzzi) Date: Thu, 31 Oct 2024 00:30:34 -0700 (PDT) Subject: [CP2K-user] [CP2K:20840] Re: ScF is a convergent trend, but rebound in Ni(111)Optimization. In-Reply-To: <2644929c.385d.192e0883b17.Coremail.l13douma@163.com> References: <2644929c.385d.192e0883b17.Coremail.l13douma@163.com> Message-ID: <19debd92-3dfb-4145-9d95-a14fb317204fn@googlegroups.com> Hi .. Have you already tried to increase the maximum number of SCF iterations? Regards Marcella On Thursday, October 31, 2024 at 5:46:32 AM UTC+1 l13d...
at 163.com wrote: > Dear CP2K experts, > > I want to optimize a Ni(111) slab of 144 atoms for an amino adsorption energy test. The magnetic moment of nickel is 1, so the chosen multiplicity is 145, with UKS and smearing enabled. I use RevPBE+D3(BJ), CUTOFF 650, REL_CUTOFF 55, ALPHA 0.4, and NBROYDEN 12. The SCF shows a convergent trend but then rebounds. I am using CP2K version 2024.1 and do not know which parameters need to be modified. I hope to get some suggestions to make the calculation go smoothly. > > Thanks, everyone. > A new CP2K user. > -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com. To view this discussion visit https://groups.google.com/d/msgid/cp2k/19debd92-3dfb-4145-9d95-a14fb317204fn%40googlegroups.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From l13douma at gmail.com Thu Oct 31 12:41:44 2024 From: l13douma at gmail.com (=?UTF-8?B?5Ya35a+55LiH5aSr?=) Date: Thu, 31 Oct 2024 05:41:44 -0700 Subject: [CP2K-user] [CP2K:20840] ScF is a convergent trend, but rebound in Ni(111)Optimization. Message-ID: Dear Marcella Iannuzzi, Due to my e-mail being restricted, I registered for Google Mail. I thought the e-mail was sent, but it was blocked. My problem is that the SCF for the Ni(111) slab optimization does not converge. You asked me if I had increased the maximum number of SCF iterations; first, thanks for your letter. I have tried setting the SCF limit to 128 steps, and I have also tried changing ALPHA to 0.05 and the smearing temperature to 3000 K, but the situation is unchanged. -- You received this message because you are subscribed to the Google Groups "cp2k" group. To unsubscribe from this group and stop receiving emails from it, send an email to cp2k+unsubscribe at googlegroups.com.
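For archive readers: SCF convergence of metallic slabs such as Ni(111) usually comes down to the smearing and mixing setup rather than the iteration count alone. The fragment below sketches a commonly recommended combination (diagonalization, Fermi-Dirac smearing with added MOs, damped Broyden mixing); it is an editorial illustration with placeholder values, not the poster's attached 23Ni111.inp:

```
! Editorial sketch only; values are placeholders, not the attached input.
&SCF
  MAX_SCF 100
  EPS_SCF 1.0E-6
  ADDED_MOS 300               ! unoccupied states are required for smearing
  &DIAGONALIZATION
    ALGORITHM STANDARD
  &END DIAGONALIZATION
  &SMEAR
    METHOD FERMI_DIRAC
    ELECTRONIC_TEMPERATURE [K] 300
  &END SMEAR
  &MIXING
    METHOD BROYDEN_MIXING
    ALPHA 0.1                 ! smaller damping than 0.4 often stops the rebound
    NBROYDEN 8
  &END MIXING
&END SCF
```

If the energy still oscillates, reducing ALPHA further or increasing ADDED_MOS are the usual next steps.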
To view this discussion visit https://groups.google.com/d/msgid/cp2k/CAFtS%2B2h3BGc--Abn%3DkF%2B3U5Ja_fhcasANKyGB_bz9S%3D9zduR%2BQ%40mail.gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: