<div dir="ltr"><div dir="ltr">
<p>Dear all,</p>
<p><br></p><p>A colleague in my lab found an unexpected dependence of the DFT-D3 dispersion energy on the amount of vacuum in a slab calculation, which could indicate a bug in the DFT-D3 implementation.</p>
<p><br></p><p>Here is the message from Sasha:</p><p>I have a slab geometry for a slab that is ~20 A thick (slightly less than 20 A) and I perform a fully periodic calculation.</p>
<p>To test the convergence of the vacuum region, I performed ENERGY calculations with a cell of 40 A (i.e. 20 A of vacuum) and, afterwards, the same ENERGY calculation with a cell thickness of 60 A (i.e. 40 A of vacuum).</p>
<p><br></p>
<p>I set R_CUTOFF for the vdW interaction to 8 A, i.e. less than half of the vacuum region. I was therefore expecting the dispersion energy to be exactly the same in both cells.</p>
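<p>For reference, the dispersion setup in the attached inputs follows the usual CP2K pattern; the fragment below is only an illustrative sketch (keyword names as in the CP2K input reference, values as described above):</p>

```
&XC
  &VDW_POTENTIAL
    POTENTIAL_TYPE PAIR_POTENTIAL
    &PAIR_POTENTIAL
      TYPE DFTD3
      PARAMETER_FILE_NAME dftd3.dat
      REFERENCE_FUNCTIONAL PBE
      ! cutoff for the pair sum, chosen smaller than half the vacuum
      R_CUTOFF 8.0
    &END PAIR_POTENTIAL
  &END VDW_POTENTIAL
&END XC
```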
<p><br></p>
<p>The ENERGY calculations for the SAME geometry (file p.xyz) give a difference of 1.8 eV in the dispersion energy term of the DFT-D3 calculation when I change the cell size.</p>
<p>I repeated the same calculations with DFT-D2, and in this case there is no difference in the dispersion energy (as I would have expected).</p>
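<p>The expectation itself can be checked with a toy model (a sketch, not the actual D2/D3 code): a plain pairwise -C6/r^6 sum with a hard cutoff, summed over periodic images along z only. As long as the cutoff is smaller than half the vacuum, no pair reaches across the vacuum gap, so the result should not depend on the cell length:</p>

```python
import math

def pair_dispersion(positions, cell_z, r_cut=8.0, c6=1.0):
    """Toy pairwise -C6/r^6 energy with a hard cutoff r_cut,
    summed over periodic images along z only (1D stand-in for
    a truncated D2/D3-style pair term)."""
    n_img = math.ceil(r_cut / cell_z) + 1  # enough images to cover r_cut
    e = 0.0
    for i, zi in enumerate(positions):
        for j, zj in enumerate(positions):
            for k in range(-n_img, n_img + 1):
                if k == 0 and i == j:
                    continue  # skip self-interaction in the home cell
                r = abs(zi - (zj + k * cell_z))
                if 0.0 < r <= r_cut:
                    e += -0.5 * c6 / r**6  # 0.5: each pair counted twice
    return e

# Toy "slab" 20 A thick, placed in a 40 A and a 60 A cell
slab = [0.0, 5.0, 10.0, 15.0, 20.0]
e40 = pair_dispersion(slab, 40.0)  # 20 A of vacuum
e60 = pair_dispersion(slab, 60.0)  # 40 A of vacuum
```

<p>With r_cut = 8 A and at least 20 A of vacuum, e40 and e60 agree to machine precision, which is the behaviour DFT-D2 shows here and DFT-D3 does not.</p>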
<p><br></p>
<p>In the tar file you can find the inputs I used for the four different calculations (DFT-D3 with 20 and 40 A of vacuum, and DFT-D2 for the same two cases), as well as the output files and the geometry p.xyz.</p>
<p>Does anybody have an idea on what could be wrong?<br></p>
<p><br></p>
<p>Thanks in advance for your help</p>
<p><br></p><p>Kind regards,<br></p><p>Sasha (and Leopold)</p><p><br></p><p>P.S. I used cp2k revision svn:14377</p></div></div>