[CP2K-user] [CP2K:20258] QM/MM cp2k/gmx SCF convergence issue with MGGA_XC_M06_L functionals
Emma Stevens
eestevens7 at gmail.com
Thu May 30 22:14:30 UTC 2024
Hi all,
I'm trying to run QM/MM simulations on a solvated system with 40 atoms in
the QM region using the M06L functional. I've run the same calculation
using BLYP with no error, but when I specify M06L in my input file, the SCF
calculations fail to converge after 21 iterations. The same convergence
error occurs when I use revM06L.
I've also tried using a finer grid with the additional parameters below, but I get an error saying "tau with finer grids not implemented":
&XC_GRID
  USE_FINER_GRID
&END XC_GRID
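(The alternative I was planning to try next is raising the multigrid cutoffs directly instead of USE_FINER_GRID. A minimal sketch of the &MGRID block I have in mind is below; the CUTOFF/REL_CUTOFF/NGRIDS values are only guesses on my part and would still need a convergence test for this system.)

&MGRID
  ! hypothetical values, not converged for my system
  CUTOFF 600
  REL_CUTOFF 60
  NGRIDS 5
&END MGRID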
Since I'm new to cp2k, I'm guessing there is an error in my input. Based on examples I've seen on here, I've tried both of the formats below for specifying the functional (the latter gives an error that LIBXC is not recognized as a subsection of XC_FUNCTIONAL).
&XC
  DENSITY_CUTOFF 1.0E-12
  GRADIENT_CUTOFF 1.0E-12
  TAU_CUTOFF 1.0E-12
  &XC_FUNCTIONAL
    &MGGA_X_M06_L
    &END MGGA_X_M06_L
    &MGGA_C_M06_L
    &END MGGA_C_M06_L
  &END XC_FUNCTIONAL
&END XC
--------------------------------------------------------------
&XC
  DENSITY_CUTOFF 1.0E-12
  GRADIENT_CUTOFF 1.0E-12
  TAU_CUTOFF 1.0E-12
  &XC_FUNCTIONAL
    &LIBXC
      FUNCTIONAL MGGA_X_M06_L
    &END LIBXC
    &LIBXC
      FUNCTIONAL MGGA_C_M06_L
    &END LIBXC
  &END XC_FUNCTIONAL
&END XC
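(For completeness, the SCF settings in my input are essentially the defaults. Below is a sketch of the kind of &SCF block I've been experimenting with to push convergence; the MAX_SCF/EPS_SCF values and the choice of OT minimizer/preconditioner are just my guesses, not settings I know are appropriate for M06L.)

&SCF
  MAX_SCF 50
  EPS_SCF 1.0E-6          ! guess; tighten or loosen as needed
  SCF_GUESS RESTART       ! reuse the previous MD step's wavefunction if the .wfn file exists
  &OT
    MINIMIZER DIIS        ! CG is sometimes more robust for difficult cases
    PRECONDITIONER FULL_ALL
  &END OT
  &OUTER_SCF
    MAX_SCF 20
    EPS_SCF 1.0E-6
  &END OUTER_SCF
&END SCF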
I also consistently get a warning that my restart file doesn't exist, even though it does seem to be written during the calculation. I've attached my input, output, md.log, and mdp files, and the commands I'm running are below. Any tips would be greatly appreciated!
I'm running GROMACS 2022.5, CP2K 2024.1, and PLUMED 2.9.
Thanks,
Emma
gmx_mpi_d grompp -f rxn_QMMM_DEF.mdp -c confout.gro -t state.cpt \
    -n index.ndx -p topol.top -qmi rxn_QMMM_DEF_cp2k.inp -o rxn_QMMM_DEF.tpr

srun --mpi=pmi2 -n $SLURM_TASKS_PER_NODE gmx_mpi_d mdrun \
    -ntomp $SLURM_CPUS_PER_TASK -s rxn_QMMM_DEF.tpr -plumed plumed.dat
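(The Slurm variables in the srun line come from my batch script. A rough sketch of the resource request is below; the node/task counts are reconstructed from the attached md.log, which shows 2 MPI ranks with 16 OpenMP threads each, rather than copied verbatim from the actual script.)

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2      # -> 2 MPI ranks, matching the md.log
#SBATCH --cpus-per-task=16       # -> 16 OpenMP threads per rank
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# ...followed by the grompp and srun commands shown above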
-------------- next part (md.log, inlined as text) --------------
:-) GROMACS - gmx mdrun, 2022.5-plumed_2.9.0 (double precision) (-:
Copyright 1991-2022 The GROMACS Authors.
GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.
Current GROMACS contributors:
Mark Abraham Andrey Alekseenko Cathrine Bergh
Christian Blau Eliane Briand Kevin Boyd
Oliver Fleetwood Stefan Fleischmann Vytas Gapsys
Gaurav Garg Gilles Gouaillardet Alan Gray
Victor Holanda M. Eric Irrgang Joe Jordan
Christoph Junghans Prashanth Kanduri Sebastian Kehl
Sebastian Keller Carsten Kutzner Magnus Lundborg
Pascal Merz Dmitry Morozov Szilard Pall
Roland Schulz Michael Shirts David van der Spoel
Alessandra Villa Sebastian Wingbermuehle Artem Zhmurov
Previous GROMACS contributors:
Emile Apol Rossen Apostolov James Barnett
Herman J.C. Berendsen Par Bjelkmar Viacheslav Bolnykh
Aldert van Buuren Carlo Camilloni Rudi van Drunen
Anton Feenstra Gerrit Groenhof Bert de Groot
Anca Hamuraru Vincent Hindriksen Aleksei Iupinov
Dimitrios Karkoulis Peter Kasson Jiri Kraus
Per Larsson Justin A. Lemkul Viveca Lindahl
Erik Marklund Pieter Meulenhoff Vedran Miletic
Teemu Murtola Sander Pronk Alexey Shvetsov
Alfons Sijbers Peter Tieleman Jon Vincent
Teemu Virolainen Christian Wennberg Maarten Wolf
Coordinated by the GROMACS project leaders:
Paul Bauer, Berk Hess, and Erik Lindahl
GROMACS: gmx mdrun, version 2022.5-plumed_2.9.0 (double precision)
Executable: /work/donglab/software/gromacs-2022.5_cp2k-2024.1_plumed-2.9/bin/gmx_mpi_d
Data prefix: /work/donglab/software/gromacs-2022.5_cp2k-2024.1_plumed-2.9
Working dir: /work/donglab/stevens.emm/NP3_MINP_Prep/1a2b_reaction/prep_MD/D2F/rxn_QMMM
Process ID: 53942
Command line:
gmx_mpi_d mdrun -ntomp 16 -s rxn_QMMM_DEF.tpr -plumed plumed.dat
GROMACS version: 2022.5-plumed_2.9.0
Precision: double
Memory model: 64 bit
MPI library: MPI
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support: disabled
SIMD instructions: AVX_512
CPU FFT library: fftw-3.3.10-avx-avx2-avx2_128-avx512
GPU FFT library: none
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /shared/centos7/gcc/11.1.0/bin/gcc GNU 11.1.0
C compiler flags: -mavx512f -mfma -mavx512vl -mavx512dq -mavx512bw -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -pthread -O3 -DNDEBUG
C++ compiler: /shared/centos7/gcc/11.1.0/bin/g++ GNU 11.1.0
C++ compiler flags: -mavx512f -mfma -mavx512vl -mavx512dq -mavx512bw -pthread -Wno-missing-field-initializers -fexcess-precision=fast -funroll-all-loops -pthread -fopenmp -O3 -DNDEBUG
Running on 1 node with total 32 cores, 32 processing units
Hardware detected on host d3190 (the node of MPI rank 0):
CPU info:
Vendor: Intel
Brand: Intel(R) Xeon(R) Gold 6348 CPU @ 2.60GHz
Family: 6 Model: 106 Stepping: 6
Features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl avx512secondFMA clfsh cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sha sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
Number of AVX-512 FMA units: 2
Hardware topology: Basic
Packages, cores, and logical processors:
[indices refer to OS logical processors]
Package 1: [ 1] [ 3] [ 5] [ 7] [ 9] [ 11] [ 13] [ 15] [ 17] [ 19] [ 21] [ 23] [ 25] [ 27] [ 29] [ 31] [ 33] [ 35] [ 37] [ 39]
Package 0: [ 32] [ 34] [ 36] [ 38] [ 40] [ 42] [ 44] [ 46] [ 48] [ 50] [ 52] [ 54]
CPU limit set by OS: -1 Recommended max number of threads: 32
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, E.
Lindahl
GROMACS: High performance molecular simulations through multi-level
parallelism from laptops to supercomputers
SoftwareX 1 (2015) pp. 19-25
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl
Tackling Exascale Software Challenges in Molecular Dynamics Simulations with
GROMACS
In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale 8759 (2015) pp. 3-27
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R.
Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl
GROMACS 4.5: a high-throughput and highly parallel open source molecular
simulation toolkit
Bioinformatics 29 (2013) pp. 845-54
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------
The number of OpenMP threads was set by environment variable OMP_NUM_THREADS to 16 (and the command-line setting agreed with that)
Input Parameters:
integrator = md
tinit = 0
dt = 0.001
nsteps = 50
init-step = 0
simulation-part = 1
mts = false
comm-mode = Linear
nstcomm = 100
bd-fric = 0
ld-seed = 1811413247
emtol = 10
emstep = 0.01
niter = 20
fcstep = 0
nstcgsteep = 1000
nbfgscorr = 10
rtpi = 0.05
nstxout = 0
nstvout = 0
nstfout = 0
nstlog = 1
nstcalcenergy = 1
nstenergy = 1
nstxout-compressed = 1
compressed-x-precision = 1000
cutoff-scheme = Verlet
nstlist = 10
pbc = xyz
periodic-molecules = false
verlet-buffer-tolerance = 0.005
rlist = 1
coulombtype = PME
coulomb-modifier = Potential-shift
rcoulomb-switch = 0
rcoulomb = 1
epsilon-r = 1
epsilon-rf = inf
vdw-type = Cut-off
vdw-modifier = Potential-shift
rvdw-switch = 0
rvdw = 1
DispCorr = EnerPres
table-extension = 1
fourierspacing = 0.12
fourier-nx = 32
fourier-ny = 32
fourier-nz = 32
pme-order = 4
ewald-rtol = 1e-05
ewald-rtol-lj = 0.001
lj-pme-comb-rule = Geometric
ewald-geometry = 3d
epsilon-surface = 0
tcoupl = V-rescale
nsttcouple = 10
nh-chain-length = 0
print-nose-hoover-chain-variables = false
pcoupl = C-rescale
pcoupltype = Isotropic
nstpcouple = 10
tau-p = 2
compressibility (3x3):
compressibility[ 0]={ 4.50000e-05, 0.00000e+00, 0.00000e+00}
compressibility[ 1]={ 0.00000e+00, 4.50000e-05, 0.00000e+00}
compressibility[ 2]={ 0.00000e+00, 0.00000e+00, 4.50000e-05}
ref-p (3x3):
ref-p[ 0]={ 1.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p[ 1]={ 0.00000e+00, 1.00000e+00, 0.00000e+00}
ref-p[ 2]={ 0.00000e+00, 0.00000e+00, 1.00000e+00}
refcoord-scaling = COM
posres-com (3):
posres-com[0]= 0.00000e+00
posres-com[1]= 0.00000e+00
posres-com[2]= 0.00000e+00
posres-comB (3):
posres-comB[0]= 0.00000e+00
posres-comB[1]= 0.00000e+00
posres-comB[2]= 0.00000e+00
QMMM = false
qm-opts:
ngQM = 0
constraint-algorithm = Lincs
continuation = true
Shake-SOR = false
shake-tol = 0.0001
lincs-order = 4
lincs-iter = 1
lincs-warnangle = 30
nwall = 0
wall-type = 9-3
wall-r-linpot = -1
wall-atomtype[0] = -1
wall-atomtype[1] = -1
wall-density[0] = 0
wall-density[1] = 0
wall-ewald-zfac = 3
pull = false
awh = false
rotation = false
interactiveMD = false
disre = No
disre-weighting = Conservative
disre-mixed = false
dr-fc = 1000
dr-tau = 0
nstdisreout = 100
orire-fc = 0
orire-tau = 0
nstorireout = 100
free-energy = no
cos-acceleration = 0
deform (3x3):
deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
simulated-tempering = false
swapcoords = no
userint1 = 0
userint2 = 0
userint3 = 0
userint4 = 0
userreal1 = 0
userreal2 = 0
userreal3 = 0
userreal4 = 0
applied-forces:
electric-field:
x:
E0 = 0
omega = 0
t0 = 0
sigma = 0
y:
E0 = 0
omega = 0
t0 = 0
sigma = 0
z:
E0 = 0
omega = 0
t0 = 0
sigma = 0
density-guided-simulation:
active = false
group = protein
similarity-measure = inner-product
atom-spreading-weight = unity
force-constant = 1e+09
gaussian-transform-spreading-width = 0.2
gaussian-transform-spreading-range-in-multiples-of-width = 4
reference-density-filename = reference.mrc
nst = 1
normalize-densities = true
adaptive-force-scaling = false
adaptive-force-scaling-time-constant = 4
shift-vector =
transformation-matrix =
qmmm-cp2k:
active = true
qmgroup = QM
qmmethod = INPUT
qmfilenames =
qmcharge = 0
qmmultiplicity = 1
grpopts:
nrdf: 110.96 8205.04
ref-t: 300 300
tau-t: 0.1 0.1
annealing: No No
annealing-npoints: 0 0
acc: 0 0 0
nfreeze: N N N
energygrp-flags[ 0]: 0
Changing nstlist from 10 to 100, rlist from 1 to 1.111
Initializing Domain Decomposition on 2 ranks
Dynamic load balancing: auto
Using update groups, nr 1407, average size 2.9 atoms, max. radius 0.078 nm
Minimum cell size due to atom displacement: 0.897 nm
Initial maximum distances in bonded interactions:
two-body bonded interactions: 0.391 nm, Exclusion, atoms 3 28
Minimum cell size due to bonded interactions: 0.000 nm
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Using 0 separate PME ranks because: Separate PME-only ranks are not compatible with QMMM MdModule; there are too few total ranks for efficient splitting
Optimizing the DD grid for 2 cells with a minimum initial size of 1.121 nm
The maximum allowed number of cells is: X 3 Y 3 Z 3
Domain decomposition grid 2 x 1 x 1, separate PME ranks 0
PME domain decomposition: 2 x 1 x 1
Domain decomposition rank 0, coordinates 0 0 0
The initial number of communication pulses is: X 1
The initial domain decomposition cell size is: X 1.74 nm
The maximum allowed distance for atom groups involved in interactions is:
non-bonded interactions 1.267 nm
(the following are initial values, they could change due to box deformation)
two-body bonded interactions (-rdd) 1.267 nm
multi-body bonded interactions (-rdd) 1.267 nm
When dynamic load balancing gets turned on, these settings will change to:
The maximum number of communication pulses is: X 1
The minimum size for domain decomposition cells is 1.267 nm
The requested allowed shrink of DD cells (option -dds) is: 0.80
The allowed shrink of domain decomposition cells is: X 0.73
The maximum allowed distance for atom groups involved in interactions is:
non-bonded interactions 1.267 nm
two-body bonded interactions (-rdd) 1.267 nm
multi-body bonded interactions (-rdd) 1.267 nm
Using 2 MPI processes
Non-default thread affinity set, disabling internal thread affinity
Using 16 OpenMP threads per MPI process
Note: Your choice of number of MPI ranks and amount of resources results in using 16 OpenMP threads per rank, which is most likely inefficient. The optimum is usually between 1 and 8 threads per rank.
System total charge: 0.000
Will do PME sum in reciprocal space for electrostatic interactions.
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------
Using a Gaussian width (1/beta) of 0.320163 nm for Ewald
Potential shift: LJ r^-12: -1.000e+00 r^-6: -1.000e+00, Ewald -1.000e-05
Initialized non-bonded Coulomb Ewald tables, spacing: 9.33e-04 size: 1073
Long Range LJ corr.: <C6> 2.7987e-04
Using SIMD 4x8 nonbonded short-range kernels
Using a dual 4x8 pair-list setup updated with dynamic pruning:
outer list: updated every 100 steps, buffer 0.111 nm, rlist 1.111 nm
inner list: updated every 21 steps, buffer 0.001 nm, rlist 1.001 nm
At tolerance 0.005 kJ/mol/ps per atom, equivalent classical 1x1 list would be:
outer list: updated every 100 steps, buffer 0.237 nm, rlist 1.237 nm
inner list: updated every 21 steps, buffer 0.042 nm, rlist 1.042 nm
Using Lorentz-Berthelot Lennard-Jones combination rule
Linking all bonded interactions to atoms
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Miyamoto and P. A. Kollman
SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithms for Rigid
Water Models
J. Comp. Chem. 13 (1992) pp. 952-962
-------- -------- --- Thank You --- -------- --------
Intra-simulation communication will occur every 1 steps.
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
G. Bussi, D. Donadio and M. Parrinello
Canonical sampling through velocity rescaling
J. Chem. Phys. 126 (2007) pp. 014101
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
M. Bernetti, G. Bussi
Pressure control using stochastic cell rescaling
J. Chem. Phys. 153 (2020) pp. 114107
-------- -------- --- Thank You --- -------- --------
There are: 4141 Atoms
Atom distribution over 2 domains: av 2070 stddev 49 min 2053 max 2088
Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
0: rest
PLUMED: PLUMED is starting
PLUMED: Version: 2.9.0 (git: Unknown) compiled on May 17 2024 at 14:22:40
PLUMED: Please cite these papers when using PLUMED [1][2]
PLUMED: For further information see the PLUMED web page at http://www.plumed.org
PLUMED: Root: /work/donglab/software/plumed2.9_gcc11.1_openmpi4.1.2/lib/plumed
PLUMED: For installed feature, see /work/donglab/software/plumed2.9_gcc11.1_openmpi4.1.2/lib/plumed/src/config/config.txt
PLUMED: Molecular dynamics engine: gromacs
PLUMED: Precision of reals: 8
PLUMED: Running over 2 nodes
PLUMED: Number of threads: 16
PLUMED: Cache line size: 512
PLUMED: Number of atoms: 4141
PLUMED: File suffix:
PLUMED: FILE: plumed.dat
PLUMED: Action DISTANCE
PLUMED: with label dist01
PLUMED: between atoms 32 31
PLUMED: using periodic boundary conditions
PLUMED: Action DISTANCE
PLUMED: with label dist02
PLUMED: between atoms 32 3965
PLUMED: using periodic boundary conditions
PLUMED: Action MATHEVAL
PLUMED: with label RC
PLUMED: with arguments dist01 dist02
PLUMED: with function : d1-d2
PLUMED: with variables : d1 d2
PLUMED: function as parsed by lepton: (d1)-(d2)
PLUMED: derivatives as computed by lepton:
PLUMED: 1
PLUMED: -1
PLUMED: Action MATHEVAL
PLUMED: with label SUM
PLUMED: with arguments dist01 dist02
PLUMED: with function : d1+d2
PLUMED: with variables : d1 d2
PLUMED: function as parsed by lepton: (d1)+(d2)
PLUMED: derivatives as computed by lepton:
PLUMED: 1
PLUMED: 1
PLUMED: Action UPPER_WALLS
PLUMED: with label SUM-uwall
PLUMED: with arguments SUM
PLUMED: added component to this action: SUM-uwall.bias
PLUMED: at 0.700000
PLUMED: with an offset 0.000000
PLUMED: with force constant 500000.000000
PLUMED: and exponent 2.000000
PLUMED: rescaled 1.000000
PLUMED: added component to this action: SUM-uwall.force2
PLUMED: Action RESTRAINT
PLUMED: with label RESTRAIN
PLUMED: with arguments RC
PLUMED: added component to this action: RESTRAIN.bias
PLUMED: at -0.300000
PLUMED: with harmonic force constant 500.000000
PLUMED: and linear force constant 0.000000
PLUMED: added component to this action: RESTRAIN.force2
PLUMED: Action PRINT
PLUMED: with label @6
PLUMED: with stride 1
PLUMED: with arguments RC RESTRAIN.bias
PLUMED: on file COLVAR
PLUMED: with format %f
PLUMED: Action ENDPLUMED
PLUMED: with label @7
PLUMED: END FILE: plumed.dat
PLUMED: Timestep: 0.001000
PLUMED: KbT: 2.494339
PLUMED: Relevant bibliography:
PLUMED: [1] The PLUMED consortium, Nat. Methods 16, 670 (2019)
PLUMED: [2] Tribello, Bonomi, Branduardi, Camilloni, and Bussi, Comput. Phys. Commun. 185, 604 (2014)
PLUMED: Please read and cite where appropriate!
PLUMED: Finished setup
Started mdrun on rank 0 Thu May 30 12:33:56 2024
Step Time
0 0.00000
-------------- next part --------------
Non-text attachments were scrubbed:
  rxn_QMMM_DEF_cp2k.out (165054 bytes):
    https://lists.cp2k.org/archives/cp2k-user/attachments/20240530/fd0ebb9d/attachment-0002.obj
  rxn_QMMM_DEF.mdp (2954 bytes):
    https://lists.cp2k.org/archives/cp2k-user/attachments/20240530/fd0ebb9d/attachment-0003.obj
  rxn_QMMM_DEF_cp2k.inp (2709 bytes):
    https://lists.cp2k.org/archives/cp2k-user/attachments/20240530/fd0ebb9d/attachment-0001.inp