Chinese Physics Letters, 2020, Vol. 37, No. 12, Article code 126301
Accuracy of Machine Learning Potential for Predictions of Multiple-Target Physical Properties
Yulou Ouyang (欧阳宇楼)1, Zhongwei Zhang (张忠卫)1, Cuiqian Yu (俞崔前)1, Jia He (何佳)1, Gang Yan (严钢)1,2, and Jie Chen (陈杰)1*
Affiliations:
1 Center for Phononics and Thermal Energy Science, China–EU Joint Lab for Nanophononics, School of Physics Science and Engineering, Tongji University, Shanghai 200092, China
2 Shanghai Institute of Intelligent Science and Technology, Tongji University, Shanghai 200092, China
Received 26 September 2020; accepted 27 October 2020; published online 8 December 2020
Supported by the National Natural Science Foundation of China (Grant Nos. 12075168 and 11890703), the Science and Technology Commission of Shanghai Municipality (Grant Nos. 19ZR1478600, 18ZR1442000 and 18JC1410900), the Fundamental Research Funds for the Central Universities (Grant No. 22120200069), and the Open Fund of Hunan Provincial Key Laboratory of Advanced Materials for New Energy Storage and Conversion (Grant No. 2018TP1037_201901).
*Corresponding author. Email: jie@tongji.edu.cn
Citation Text: Ouyang Y, Zhang Z W, Yu C Q, He J and Yan G et al. 2020 Chin. Phys. Lett. 37 126301

Abstract: The accurate and rapid prediction of materials' physical properties, such as thermal transport and mechanical properties, is of particular importance for potential applications of novel materials. We demonstrate, using graphene as an example, how machine learning potential, combined with the Boltzmann transport equation and molecular dynamics simulations, can simultaneously provide an accurate prediction of multiple-target physical properties, with an accuracy comparable to that of density functional theory calculations and/or experimental measurements. Benchmarked quantities include the Grüneisen parameter, the thermal expansion coefficient, Young's modulus, Poisson's ratio, and thermal conductivity. Moreover, the transferability of commonly used empirical potentials in predicting multiple-target physical properties is also examined. Our study suggests that atomic simulation, in conjunction with machine learning potential, represents a promising method of exploring the various physical properties of novel materials.

DOI: 10.1088/0256-307X/37/12/126301    PACS: 63.20.dk, 63.20.kg    © 2020 Chinese Physics Society

Article Text

Molecular dynamics (MD) simulation is a powerful atomic-level technique for studying the various physical properties of materials.[1–6] It can model a wide range of systems, from gases and liquids to solids. In addition, it naturally takes into account complex factors that often exist under real experimental conditions, such as defects, disorder, and anharmonicity, which are difficult to handle in other atomic-level approaches, such as first-principles calculations. In MD simulation, the interatomic interaction is modeled by an empirical potential (also known as a force field); the accuracy of the empirical potential is therefore a crucial factor determining the prediction accuracy of physical properties in this type of simulation. Moreover, empirical potentials have been widely used to study dynamic behaviors in various fields, including biological medicine,[7] drug discovery,[8] food carbohydrates,[9] toxicology,[10] chemistry,[11] sensing,[12] thermal transport,[13–15] and nanoparticle self-assembly.[16] Many forms of empirical potential have been developed in the literature, such as the Stillinger–Weber (SW) potential,[17] the Tersoff potential,[18–20] the embedded-atom method (EAM),[21] the Morse potential,[22] and the reactive empirical bond order (REBO) potential.[23,24] However, all of these assume specific mathematical forms with certain parameters, and the empirical parameters are generally obtained by fitting a very limited collection of experimental data for a given structure, such as the lattice constant and elastic constants. This results in poor transferability of empirical potentials in predicting other material properties, or other allotropes of the same element that were not used in the fitting. For example, the original Tersoff potential[19,20] was developed to describe the structural properties and energetics of covalently bonded diamond-structured crystals (including carbon, silicon and germanium), based on a bond-order formalism. However, for other carbon allotropes, such as two-dimensional graphene, the original Tersoff potential provides a poor description of the phonon dispersion relation,[18] leading to inaccurate predictions of thermal conductivity $\kappa$.
To overcome this deficiency, Lindsay et al.[18] developed the optimized Tersoff potential by fitting the experimentally measured phonon dispersion, and demonstrated that their optimized Tersoff potential provides a more accurate prediction of $\kappa$ for both carbon nanotubes and graphene. Very recently, using the ReaxFF force field combined with MD simulations, Ding et al.[25] found an anomalous dependence of thermal conductivity on biaxial strain in monolayer silicene that had not been observed with other empirical potentials, such as the Tersoff or modified EAM potentials. This phenomenon stems from the fact that these empirical potentials were originally obtained by fitting the physical properties of bulk silicon, with the result that they cannot describe the geometric evolution and bonding dynamics of low-buckled monolayer silicene under extreme strain. Their results suggest that the physical properties predicted by MD simulations depend critically on the selected empirical potential. These examples highlight the fact that a variety of physical properties should be used in the fitting of a given empirical potential, in order to ensure its transferability in predicting other physical properties. Unfortunately, this requirement often cannot be satisfied in the development of empirical potentials.

On the other hand, machine learning (ML) algorithms provide completely new frameworks for predicting the physical properties of materials.[26–29] There are two main approaches to ML-based prediction of physical properties. One is to train an ML model with appropriate algorithms and descriptors to directly predict physical properties without empirical potentials. The other is to use a machine learning potential (MLP) in conjunction with traditional transport calculation methods, such as molecular dynamics simulations, to calculate the physical properties of materials. An ML model trained with appropriate algorithms and descriptors has proven capable of accurately and efficiently predicting various physical properties of materials. For instance, Wang et al.[26] systematically summarized recent advances in the development of thermoelectric materials using ML models. However, the underlying physical mechanisms of thermal transport, such as the phonon group velocity or the three-phonon scattering process, cannot be obtained by an ML model alone. Such deficiencies can be overcome by combining an MLP with traditional transport calculations. No specific potential form is assumed a priori in an MLP, and the total energies and atomic forces calculated via density functional theory (DFT) are used for training, rather than only certain experimental data, as is the case for conventional empirical potentials. Hence, MLPs can provide a comprehensive description of the potential energy surface (PES) of materials. Owing to these advantages, many different types of MLP have been developed to date, such as the high-dimensional neural network potential (NNP),[30] support vector machines,[31] the Gaussian approximation potential (GAP),[32–34] the spectral neighbor analysis potential (SNAP),[35] and the moment tensor potential.[36] Previous studies have shown that these MLPs can accurately describe the PES of materials.
For example, the GAP has been applied to silicon[34] and tungsten,[37] and the NNP has been employed in a variety of systems, such as the methanol molecule,[38] the ethane molecule[39] and ZnO.[40] Since MLP can effectively describe the PES of materials, a natural question arises: can MLP accurately predict the physical properties of materials? Indeed, some studies[37,40,41] have shown that, compared with empirical potentials, MLP offers a superior description of phonon dispersion. However, in terms of the multiple macroscopic physical properties of materials, a key measure of the transferability of the developed potential, the validity of MLP predictions warrants further investigation. In this work, we develop an MLP parameter set to describe the energies and atomic forces in graphene, and validate the accuracy of the developed MLP in modeling the various physical properties of graphene by comparing the prediction results with DFT calculations for Young's modulus, Poisson's ratio, the thermal expansion coefficient, the Grüneisen parameter, and thermal conductivity. Moreover, we perform a detailed analysis to compare the accuracy of the MLP with that of an empirical potential in describing the three-phonon scattering process. Our results suggest that MLP, combined with atomic simulation, represents a promising method for exploring the multiple-target physical properties of novel materials.

Methods. (A) The Gaussian Approximation Potential. In this work, we use the Gaussian approximation potential developed by Bartók, Csányi and co-workers,[32–34] which offers the highest accuracy in predicting interatomic forces among the various MLPs.[42] In the GAP, the total energy of a system, $E_{\rm total}$, is the summation of individual atomic contributions:[32]
$$ E_{\rm total}=\sum\limits_i {e_{i}(\boldsymbol{q}_{i})},~~ \tag {1} $$
where $e_{i}$ denotes the energy contribution from the $i$th atom, and $\boldsymbol{q}_{i}$ is the descriptor vector encoding the atomic neighborhood environment of the $i$th atom. The local energy contribution, $e_{i}(\boldsymbol{q}_{i})$, is given by a linear combination of kernel functions:[43]
$$ e_{i}(\boldsymbol{q}_{i})=\sum\limits_j {\alpha_{j}K(\boldsymbol{q}_{i},\boldsymbol{q}_{j})},~~ \tag {2} $$
where the summation over $j$ runs over all atomic neighborhood environments included in the training datasets. The coefficient $\alpha_{j}$ is determined by the training procedure, and the kernel function $K(\boldsymbol{q}_{i},\boldsymbol{q}_{j})$ provides a measure of the similarity between two atomic neighborhood environments. In our study, the datasets (atomic structures, total energies and atomic forces) were obtained from ab initio molecular dynamics (AIMD) simulations, using the Vienna ab initio simulation package (VASP)[44,45] with plane-wave basis sets and projector-augmented wave pseudopotentials. The pseudopotentials used in our work were taken from the potpaw_PBE.54 library of VASP. In the AIMD simulations, we used a plane-wave basis with a cutoff energy of 600 eV and a $3\times 3 \times 1$ Monkhorst–Pack grid for sampling electronic states. The AIMD simulations were performed in the NVT ensemble, with a time step of 1 fs. The final datasets comprised 2000 snapshots of atomic configurations of 245 carbon atoms, at temperatures between 0 and 2000 K.
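To make Eqs. (1) and (2) concrete, the following minimal Python sketch assembles a total energy from kernel comparisons against a set of sparse training environments. The descriptor vectors, the coefficients $\alpha_j$, and the squared-exponential kernel used here are hypothetical stand-ins for the two-body/three-body/SOAP kernels actually used in the GAP fit, so the sketch only illustrates the structure of the regression, not the trained potential itself.

```python
import numpy as np

def kernel(q_i, q_j, theta=0.5):
    """Squared-exponential similarity between two descriptor vectors
    (a stand-in for the 2b/3b/SOAP kernels actually used in GAP)."""
    d2 = np.sum((q_i - q_j) ** 2)
    return np.exp(-d2 / (2.0 * theta ** 2))

def local_energy(q_i, q_train, alpha):
    """Eq. (2): e_i = sum_j alpha_j * K(q_i, q_j)."""
    return sum(a * kernel(q_i, q_j) for a, q_j in zip(alpha, q_train))

def total_energy(q_atoms, q_train, alpha):
    """Eq. (1): E_total = sum_i e_i(q_i)."""
    return sum(local_energy(q_i, q_train, alpha) for q_i in q_atoms)

# Hypothetical example: 4 atoms, 3 sparse training environments,
# 5-dimensional descriptors, random regression coefficients alpha_j.
rng = np.random.default_rng(0)
q_atoms = rng.normal(size=(4, 5))   # descriptors of the atoms in one snapshot
q_train = rng.normal(size=(3, 5))   # descriptors of the sparse training points
alpha = rng.normal(size=3)          # coefficients obtained from training
print(total_energy(q_atoms, q_train, alpha))
```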
(B) Mechanical Properties. MD simulations based on the trained GAP model were conducted to study Young's modulus and Poisson's ratio of single-layer graphene, using the LAMMPS[46] package, with a time step of 0.5 fs. Periodic boundary conditions (PBCs) were applied in the in-plane ($x$ and $y$) directions. The initial structure was fully relaxed before loading. To obtain Young's modulus, the canonical (NVT) ensemble was used to control the system temperature, and uniaxial strain was applied along the $x$ direction at a constant strain rate. Young's modulus $E$ is given by
$$ E=\frac{1}{V}\frac{\partial^{2}U}{\partial \varepsilon^{2}},~~ \tag {3} $$
where $V$ denotes the volume of the graphene, based on a thickness of 0.34 nm, $\varepsilon =\Delta l/l$ represents the strain, and $U$ is the total energy. Young's modulus can then be obtained by fitting the $\frac{\partial U}{\partial \varepsilon} \sim \varepsilon$ relation. Poisson's ratio $P$ is defined as
$$ P = -\frac{\varepsilon_{y}}{\varepsilon_{x}},~~ \tag {4} $$
where $\varepsilon_{x}$ and $\varepsilon_{y}$ represent the applied longitudinal ($x$-direction) and the resulting transverse ($y$-direction) strains, respectively. We applied a small strain along the $x$ direction, and recorded the corresponding strain along the $y$ direction. The isothermal-isobaric (NPT) ensemble was employed in this simulation, and the simulation box was permitted to vary in the $y$ direction to release the stress.[47] Poisson's ratio is computed by finite differences from the $\varepsilon_{y}\sim \varepsilon_{x}$ relation. The methods mentioned above are widely used to study the mechanical properties of materials.[48–52]

(C) Thermal Properties. Phonon dispersion: the PHONOPY code[53] and the thirdorder.py script[54] were used to generate a series of graphene supercells containing $7\times 7 \times 1$ primitive cells for the calculation of the second-order (harmonic) and third-order (anharmonic) interatomic force constants (IFCs), respectively. Next, DFT calculations and calculations with the machine learning and empirical potentials were performed, respectively, to obtain the IFCs. When employing VASP to calculate the IFCs in graphene, a Monkhorst–Pack $k$-mesh of $5\times 5 \times 1$ was used to sample the whole Brillouin zone, and the cutoff energy for the plane-wave basis was set to 600 eV. The phonon dispersions were obtained using PHONOPY. The phonon group velocity was then obtained by calculating the slope of the phonon dispersion relation. Thermal conductivity: the ShengBTE package[54] was used to calculate $\kappa$, the three-phonon scattering rate, the Grüneisen parameter, and the phonon relaxation times, which required the second- and third-order IFCs as input. In our work, an interaction cutoff extending to the 9$^{\rm th}$ nearest neighbors was used in the calculation of the anharmonic IFCs. A $40\times 40 \times 1$ $q$ mesh was used in the ShengBTE package to calculate the value of $\kappa$.
These key parameters were deemed sufficient to obtain converged $\kappa$ values in graphene.[55] Based on an iterative solution of the Peierls–Boltzmann transport equation (PBTE), the lattice thermal conductivity tensor $\kappa^{\alpha \beta}$ could then be obtained as follows:[54]
$$ \kappa^{\alpha \beta }=\frac{1}{k_{\rm B}T^{2}\varOmega N}\sum\limits_\lambda {n_{0}(n_{0}+1){(\hbar \omega_{\lambda })}^{2}v_{\lambda }^{\alpha }F_{\lambda }^{\beta }},~~ \tag {5} $$
where $k_{\rm B}$, $T$, $\varOmega$ and $N$ represent the Boltzmann constant, the temperature, the volume of the unit cell, and the number of ${\boldsymbol q}$ points in the first Brillouin zone, respectively; $\lambda$ denotes a phonon mode (${\boldsymbol q}, s$) with a given wave vector ${\boldsymbol q}$ and a branch index $s$; $n_{0}$ is the equilibrium Bose–Einstein distribution function, $\hbar$ is the reduced Planck constant, $v_{\lambda }^{\alpha}$ represents the phonon group velocity in the $\alpha$-direction, and $F_{\lambda }^{\beta }= \tau_{\lambda }^{0}(v_{\lambda }^{\beta }+\varDelta_{\lambda})$, where $\tau_{\lambda }^{0}$ is the relaxation time of mode $\lambda$, and $\varDelta_{\lambda}$ is a correction term incorporating the inelastic three-phonon scattering processes. The reported result for $\kappa$ is averaged over the three diagonal terms of the $\kappa$ tensor. The Grüneisen parameter $\gamma ({\boldsymbol q}, s)$ for the phonon mode (${\boldsymbol q}, s$) is given by[53]
$$\begin{alignat}{1} \gamma (\boldsymbol{q},s)&=-\frac{\varOmega }{\omega_{\boldsymbol{q},s}}\frac{{\partial}\omega_{\boldsymbol{q},s}}{{\partial }\varOmega }\\ &=-\frac{\varOmega }{2\omega_{\boldsymbol{q},s}^{2}}\Big\langle\boldsymbol{e}(\boldsymbol{q},s)\Big|\frac{{\partial }D(\boldsymbol{q})}{{\partial }\varOmega }\Big|\boldsymbol{e}(\boldsymbol{q},s)\Big\rangle,~~ \tag {6} \end{alignat} $$
where $\omega_{\boldsymbol{q},s}$ and $\boldsymbol{e}(\boldsymbol{q},s)$ denote the eigen-frequency and eigen-vector of the phonon mode $(\boldsymbol{q},s)$, respectively, and $D(\boldsymbol{q})$ is the dynamical matrix. Thermal expansion coefficient: we adopted the quasi-harmonic approximation (QHA) to calculate the thermal expansion coefficient (TEC). First, a series of phonon spectra were calculated using different lattice constants. For each lattice constant, the Helmholtz free energy was obtained as follows:
$$\begin{alignat}{1} F(a_{n},T)={}&E(a_{n})+\sum\limits_{\boldsymbol{q},s}\Big\{ \frac{\hbar \omega_{\boldsymbol{q},s}(a_{n})}{2}\\ &+k_{\rm B}T\ln \Big[1-{\exp}\Big(-\frac{\hbar \omega_{\boldsymbol{q},s}(a_{n})}{k_{\rm B}T}\Big)\Big]\Big\},~~ \tag {7} \end{alignat} $$
where $n$ denotes the structural index, and $a_{n}$ and $E(a_{n})$ are the lattice parameter and the electronic energy of structure $n$, respectively; ${\omega }_{\boldsymbol{q},s}(a_{n})$ represents the vibrational frequency corresponding to wave vector ${\boldsymbol q}$ and branch index $s$. In the QHA, $\omega_{\boldsymbol{q},s}(a_{n})$ depends only on volume, so the temperature-dependent lattice constant $a(T)$ can be obtained by fitting the Helmholtz free energy with the Birch–Murnaghan equation of state; $a(T)$ corresponds to the state with the minimum Helmholtz free energy. The fitting process can be implemented using the Phonopy–QHA script.[53,56] Finally, the TEC $\xi (T)$ can be obtained via
$$ \xi (T)=\frac{1}{a(T)}\frac{da(T)}{dT}.~~ \tag {8} $$
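As an illustration of the QHA workflow behind Eqs. (7) and (8), the sketch below evaluates the Helmholtz free energy on a grid of lattice constants, locates its minimum, and differentiates $a(T)$ numerically. The input arrays are hypothetical placeholders, and a simple centered quadratic fit stands in for the Birch–Murnaghan equation of state used by the Phonopy–QHA script.

```python
import numpy as np

kB = 8.617333e-5      # Boltzmann constant (eV/K)
hbar = 6.582120e-16   # reduced Planck constant (eV*s)

def helmholtz_free_energy(E_el, omega, T):
    """Eq. (7): F = E(a_n) + sum_qs [ hbar*w/2 + kB*T*ln(1 - exp(-hbar*w/(kB*T))) ].
    omega: angular frequencies (rad/s) of all modes for one lattice constant."""
    x = hbar * omega / (kB * T)
    return E_el + np.sum(0.5 * hbar * omega + kB * T * np.log1p(-np.exp(-x)))

def lattice_constant_at_T(a_grid, E_el_grid, omega_grid, T):
    """Locate the free-energy minimum by a centered quadratic fit over the sampled
    lattice constants (a stand-in for the Birch-Murnaghan fit of Phonopy-QHA)."""
    F = np.array([helmholtz_free_energy(E, w, T)
                  for E, w in zip(E_el_grid, omega_grid)])
    a0 = np.mean(a_grid)
    c2, c1, _ = np.polyfit(a_grid - a0, F, 2)
    return a0 - c1 / (2.0 * c2)      # vertex of the fitted parabola

def thermal_expansion_coefficient(a_grid, E_el_grid, omega_grid, T, dT=10.0):
    """Eq. (8): xi(T) = (1/a) da/dT, via a central finite difference."""
    a_plus = lattice_constant_at_T(a_grid, E_el_grid, omega_grid, T + dT)
    a_minus = lattice_constant_at_T(a_grid, E_el_grid, omega_grid, T - dT)
    a_T = lattice_constant_at_T(a_grid, E_el_grid, omega_grid, T)
    return (a_plus - a_minus) / (2.0 * dT) / a_T

# Hypothetical demo (lattice constants in Å; any consistent length unit works):
a_grid = np.linspace(2.44, 2.48, 5)
E_el_grid = 1.0e3 * (a_grid - 2.46) ** 2                       # toy electronic energy (eV)
omega_grid = [2 * np.pi * 2e13 * np.arange(1, 7) * (2.46 / a) for a in a_grid]
print(thermal_expansion_coefficient(a_grid, E_el_grid, omega_grid, T=300.0))  # in K^-1
```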
Results and Discussion. Figure 1 shows the schematic routine used in this work to test the validity of MLP for the prediction of multiple-target physical properties. To begin with, we perform DFT calculations and AIMD simulations to generate a sufficient quantity of accurate data (atomic forces and energies) to form the datasets (see Methods). All generated data are then randomly divided into three datasets: a training dataset (70%), a validation dataset (20%), and a testing dataset (10%). Such a partition is widely used to train MLPs in the literature.[32–34] The training dataset is used to train the GAP model, and the training process is performed using the QUIP package.[57] The validation dataset is used to verify the performance of the initial GAP model in terms of its predictions of atomic forces and energies. If the discrepancy between the prediction of the GAP model and the validation data is greater than the preset tolerance, we adjust the hyper-parameters and repeat the training–validation process.
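A minimal sketch of the 70%/20%/10% random partition described above is given below; the snapshot count matches the 2000 AIMD configurations mentioned in the Methods, while the function name and the use of index arrays are our own illustrative choices.

```python
import numpy as np

def split_dataset(n_snapshots, fractions=(0.70, 0.20, 0.10), seed=0):
    """Randomly assign snapshot indices to training/validation/testing subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_snapshots)
    n_train = int(fractions[0] * n_snapshots)
    n_valid = int(fractions[1] * n_snapshots)
    return (idx[:n_train],                      # training   (70%)
            idx[n_train:n_train + n_valid],     # validation (20%)
            idx[n_train + n_valid:])            # testing    (10%)

train, valid, test = split_dataset(2000)    # 2000 AIMD snapshots, as in the text
print(len(train), len(valid), len(test))    # -> 1400 400 200
```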
cpl-37-12-126301-fig1.png
Fig. 1. Schematic routine used in this work to test the validity of multiple-target physical property prediction, based on machine learning potential.
After the training process, the testing dataset, which is not included in the training and validation datasets, is used to validate the accuracy of the final GAP model. Figure 2 shows the accuracy of the GAP model in predicting energies and forces for the testing dataset. The root-mean-square errors (RMSEs) in the energy and force predicted by the GAP model are 0.0097 eV/atom and 0.036 eV/Å, respectively, indicating that the developed GAP model can reproduce both the energies and atomic forces with the same degree of accuracy as the ab initio calculations. The final hyper-parameters obtained from the above procedure are listed in Table 1. The meanings of all symbols can be found in Ref. [58], and further details on the construction of the GAP model can be found in Refs. [33,34,58,59]. The developed MLP is then used to provide input parameters (energy, atomic forces) for the calculation of physical properties, as shown in Fig. 1. In order to specifically test the accuracy of MLP in predicting multiple-target physical properties, we use the same theoretical framework (such as PBTE or MD simulations) to predict the various physical properties, with only the input parameters taken from different sources (e.g., MLP, DFT, or empirical potentials).
cpl-37-12-126301-fig2.png
Fig. 2. Comparisons between MLP and DFT calculations for (a) energy per atom, and (b) atomic force.
Table 1. Hyper-parameters obtained after training the GAP model.
Hyper-parameter 2$b$ 3$b$ SOAP
$\delta$ (eV) 8.0 4.0 2.0
$r_{\rm cut}$ (Å) 6.0 6.0 6.0
$w_{\rm cut}$ (Å) 1.0 1.0 1.0
$n_{{\max}}$ – – 12
$l_{{\max}}$ – – 12
$N_{t}$ 200 400 1200
Sparse_method uniform uniform cur_points
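Once trained, the GAP parameter file can be loaded as a calculator in atomistic simulation packages. The sketch below assumes the quippy Python bindings (quippy-ase) and a hypothetical parameter file name gap_graphene.xml; the exact constructor arguments may differ between QUIP versions, so this is only a schematic of how energies and forces are obtained from the fitted potential.

```python
import numpy as np
from ase import Atoms
from quippy.potential import Potential   # assumes the quippy-ase bindings are installed

# Primitive graphene cell (a = 2.46 Å, C-C bond length ≈ 1.42 Å, 20 Å vacuum).
a = 2.46
cell = [[a, 0.0, 0.0],
        [-a / 2.0, a * np.sqrt(3.0) / 2.0, 0.0],
        [0.0, 0.0, 20.0]]
atoms = Atoms("C2",
              scaled_positions=[[0.0, 0.0, 0.0], [1.0 / 3.0, 2.0 / 3.0, 0.0]],
              cell=cell, pbc=[True, True, False])

# Load the fitted GAP (hypothetical file name) and attach it as an ASE calculator.
atoms.calc = Potential(param_filename="gap_graphene.xml")

energy = atoms.get_potential_energy()   # total energy (eV), cf. Eq. (1)
forces = atoms.get_forces()             # atomic forces (eV/Å)
print(energy, forces.shape)
```

For the MD runs themselves, the same parameter file can be read by LAMMPS through its QUIP pair style.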
With our developed MLP, we first calculate Young's modulus of graphene via MD simulations. The lengths of graphene in the $x$ and $y$ directions are $L_{x} = 8.58$ nm and $L_{y} = 4.96$ nm [Fig. 3(a)]. To predict Young's modulus, uniaxial strain varying from 0 to 0.02 is applied along the $x$ direction, and the resulting energy-versus-strain curves are recorded; different strain rates are tested to ensure the convergence of the simulation results [Fig. 3(b)]. As listed in Table 2, the predicted value of Young's modulus, based on the MD simulation, is 0.98 TPa, which is in excellent agreement with both the experimental result[60] of $\sim $1.0 TPa and the DFT results.[61,62]
Table 2. Comparison between MLP and DFT calculations, in relation to predictions of Young's modulus, Poisson's ratio, and thermal expansion coefficient for graphene.
Physical properties MLP DFT
Young's modulus $E$ (TPa) 0.98 1.11[61] 1.05[62]
Poisson's ratio 0.193 0.186[62] 0.173[63]
Thermal expansion coefficient (10$^{-6}$ K$^{-1}$) $-3.18$ $-3.6$[64] $-3.26$[65]
Poisson's ratio is another fundamental property describing the elastic deformation behavior of materials. We therefore investigate Poisson's ratio of graphene based on the MLP. Both the $\varepsilon_{y}\sim \varepsilon_{x}$ relation for graphene [Fig. 3(c)] and the evolution of Poisson's ratio as a function of strain [Fig. 3(d)] are recorded. In the small strain limit of $\sim $0.01, Poisson's ratio calculated using MLP is about 0.193, which agrees well with the findings based on first-principles calculations[62,63] (see Table 2). To probe the material's intrinsic mechanical properties, the above physical properties are obtained at low temperature, so as to avoid thermal fluctuations.
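As a rough illustration of how $E$ and $P$ are extracted from the recorded curves according to Eqs. (3) and (4), the following sketch fits synthetic energy–strain and strain–strain data; the arrays are hypothetical placeholders constructed to mimic a 1 TPa modulus and a Poisson's ratio of 0.19, not the actual simulation output.

```python
import numpy as np

eV_per_A3_to_TPa = 160.2177 / 1000.0    # 1 eV/Å^3 = 160.2177 GPa

def youngs_modulus(strain, energy, volume_A3):
    """Eq. (3): E = (1/V) d^2U/de^2, from a quadratic fit of U(e); energies in eV."""
    c2 = np.polyfit(strain, energy, 2)[0]             # U ~ c2*e^2 + c1*e + c0
    return 2.0 * c2 / volume_A3 * eV_per_A3_to_TPa    # modulus in TPa

def poissons_ratio(strain_x, strain_y):
    """Eq. (4): P = -de_y/de_x, from a linear fit of the e_y(e_x) relation."""
    slope = np.polyfit(strain_x, strain_y, 1)[0]
    return -slope

# Hypothetical data mimicking the recorded curves (E ≈ 1 TPa, P ≈ 0.19).
V = 85.8 * 49.6 * 3.4                                 # sheet volume in Å^3 (thickness 0.34 nm)
eps = np.linspace(0.0, 0.02, 11)
U = 0.5 * (1.0 / eV_per_A3_to_TPa) * V * eps**2       # U = (1/2) E V e^2 with E = 1 TPa
print(youngs_modulus(eps, U, V))                      # -> ~1.0 (TPa)
print(poissons_ratio(eps, -0.19 * eps))               # -> 0.19
```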
cpl-37-12-126301-fig3.png
Fig. 3. Molecular dynamics simulations of graphene using the GAP. (a) Simulation model of a graphene sheet. (b) Energy $U$ versus uniaxial strain $\varepsilon_{x}$ for graphene under different strain rates, with an NVT ensemble. The inset shows the linear fitting line to the $\frac{\partial U}{\partial \varepsilon_{x}} \sim \varepsilon_{x}$ relation. (c) Resultant strain in the $y$ direction ($\varepsilon_{y}$) versus applied strain in the $x$ direction ($\varepsilon_{x}$) with an NPT ensemble. (d) Poisson's ratio, extracted from the strain–strain relation in panel (c).
So far, we have verified that the developed MLP can accurately predict the mechanical properties of graphene. To further test the validity of MLP in describing other physical properties involving temperature effects, we calculate the TEC of graphene (i.e., the temperature dependence of the lattice constant) within the QHA (see Methods). The temperature-dependent TECs of graphene obtained from MLP and DFT show a similar trend (Fig. 4). Moreover, the room-temperature TEC obtained via MLP also agrees quantitatively with previous results from DFT calculations[64,65] (see Table 2). Across the whole temperature range, there is a slight deviation between the TECs calculated by MLP and DFT. This may be due to the fact that the NVT ensemble is used to generate the training data for fitting the MLP during the AIMD simulations, so that temperature-induced volume changes are not sufficiently represented in the construction of the MLP. The use of an NPT ensemble, which takes into account the volume change caused by temperature, in future AIMD simulations may further improve the accuracy of the GAP in predicting the TEC. This requires further investigation.
cpl-37-12-126301-fig4.png
Fig. 4. Thermal expansion coefficients $\xi$ as a function of temperature, calculated via MLP and DFT, respectively.
In addition, to further test the effectiveness of MLP in predicting phonon properties, we calculate the phonon dispersion of graphene using the MLP, DFT, and the optimized Tersoff potential, as shown in Fig. 5(a). The phonon dispersion predicted via MLP shows excellent agreement with both the DFT calculation and the experimental results, for both acoustic and optical phonon branches. On the other hand, the calculation results based on the optimized Tersoff potential reproduce only the acoustic phonon branches, and exhibit substantial deviation from the DFT and experimental curves for the optical phonons. The phonon dispersion relation reflects only the harmonic phonon properties of the crystal. In Fig. 5(b), we compare the predictions of phonon group velocity derived from the MLP, DFT, and the optimized Tersoff potential. The phonon group velocity predicted by MLP shows good agreement with the DFT calculations over the entire frequency range. However, the optimized Tersoff potential agrees with the DFT calculations for phonon group velocity only for the low-frequency acoustic phonons. For thermal transport, anharmonicity-induced phonon–phonon scattering is crucial for determining the thermal conductivity of pristine crystals.[66–68] Although optical phonons contribute negligibly to the thermal conductivity of bulk crystals, due to their relatively flat phonon band structure, they can participate in the three-phonon scattering process, which indirectly affects thermal transport. Therefore, an accurate description of the optical phonons in the dispersion relation is key to ensuring the correct phonon–phonon scattering channels in reciprocal space. In this regard, the MLP developed in this study has a particular advantage over the optimized Tersoff potential, in the sense that the MLP accurately describes the phonon dispersion over the entire Brillouin zone.
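Since the group velocity plotted in Fig. 5(b) is simply the slope of the dispersion, it can be recovered from any tabulated $\omega(q)$ branch by finite differences, as in the short sketch below; the toy dispersion is a hypothetical placeholder, and in the actual workflow the velocities are taken from PHONOPY/ShengBTE.

```python
import numpy as np

def group_velocity(q_path, omega_branch):
    """v(q) = d(omega)/dq along a 1D Brillouin-zone path, via central finite differences.
    q_path: distances along the path (1/m); omega_branch: angular frequencies (rad/s)."""
    return np.gradient(omega_branch, q_path)     # m/s

# Hypothetical acoustic-like branch: nearly linear near the zone center, softening toward
# the zone boundary (values are placeholders, not the calculated graphene dispersion).
q = np.linspace(1e8, 1.5e10, 200)                 # wave vectors (1/m)
omega = 2.1e4 * q * (1.0 - 0.02 * q / q[-1])      # toy dispersion (rad/s)
v = group_velocity(q, omega)
print(v[0], v[-1])                                # slope decreases toward the zone boundary
```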
cpl-37-12-126301-fig5.png
Fig. 5. (a) Comparison of the phonon dispersion of graphene in the first Brillouin zone. Solid lines: MLP results; dashed lines: DFT results; dot-dashed lines: optimized Tersoff results; empty circles: experimental results. (b) Phonon group velocity $\upsilon$ as a function of frequency, predicted by MLP (filled symbols), DFT (starred symbols) and the optimized Tersoff potential (empty symbols).
In order to explicitly verify the validity of MLP in predicting thermal transport properties, we calculate $\kappa$ based on the iterative solution of the PBTE, using second- and third-order IFCs from both MLP and DFT calculations. To compare with the predictions of the empirical potential, we follow the same calculation protocol used in the optimized Tersoff approach developed by Lindsay et al.,[18,69] further considering isotope scattering (natural concentration of 1.1% $^{13}$C) and the boundary scattering term $1/\tau_{\lambda }^{b}=\vert \nu_{\lambda }\vert /L$, where $L$ represents the sample length and $\nu_{\lambda}$ is the phonon group velocity of mode $\lambda$. Figure 6(a) shows the calculated temperature dependence of $\kappa$ in graphene for a finite length $L=10$ µm. The prediction made via MLP shows excellent agreement with the DFT results. Moreover, our calculation results based on the optimized Tersoff potential are in good agreement with the results of Lindsay et al.,[18,69] and both sets of optimized Tersoff predictions are notably higher than the DFT results for temperatures above 200 K. For instance, the optimized Tersoff potential overestimates $\kappa$ by 21.1% at 300 K, as compared to the DFT result. When further considering an infinitely long sample, the good agreement between the DFT and MLP results remains, but the overestimation of $\kappa$ obtained via the optimized Tersoff potential becomes even greater, reaching 24.3% at 300 K. To identify the origin of this overestimation of $\kappa$ by the optimized Tersoff potential, we further investigate the anharmonic phonon properties by examining the three-phonon scattering process for an infinitely long sample. As shown in Fig. 6(b), the three-phonon scattering phase space ($P_{3}$) predicted by MLP overlaps well with the DFT result, while the $P_{3}$ predicted by the optimized Tersoff potential is noticeably smaller than the DFT result in the low-frequency (1–10 THz) and intermediate-frequency (25–40 THz) regions, as highlighted by the dashed ellipse in Fig. 6(b), which means fewer available channels for the three-phonon scattering process.
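For orientation, the sketch below shows how the boundary term $1/\tau_{\lambda}^{b}=|\nu_{\lambda}|/L$ is combined with the intrinsic scattering rates via Matthiessen's rule and fed into a relaxation-time approximation of Eq. (5) (i.e., $F_{\lambda}\approx\tau_{\lambda}v_{\lambda}$). The mode data are hypothetical placeholders; the results reported here come from the full iterative ShengBTE solution, not from this simplified estimate.

```python
import numpy as np

kB = 1.380649e-23        # Boltzmann constant (J/K)
hbar = 1.054571817e-34   # reduced Planck constant (J*s)

def bose_einstein(omega, T):
    """Equilibrium occupation n0 for angular frequency omega (rad/s)."""
    return 1.0 / np.expm1(hbar * omega / (kB * T))

def kappa_rta(omega, v, tau_intr, T, V, N, L=None):
    """Relaxation-time approximation to Eq. (5) with F_lambda ≈ tau_lambda * v_lambda.
    omega: mode angular frequencies (rad/s); v: group velocities (m/s);
    tau_intr: intrinsic (phonon-phonon + isotope) relaxation times (s);
    V: unit-cell volume (m^3); N: number of q points; L: sample length (m) or None."""
    inv_tau = 1.0 / tau_intr
    if L is not None:
        inv_tau = inv_tau + np.abs(v) / L          # Matthiessen's rule: add boundary term
    tau = 1.0 / inv_tau
    n0 = bose_einstein(omega, T)
    return np.sum(n0 * (n0 + 1.0) * (hbar * omega) ** 2 * v * tau * v) / (kB * T ** 2 * V * N)

# Hypothetical three modes (placeholder values, not the actual graphene data).
omega = 2 * np.pi * np.array([5e12, 2e13, 6e13])   # rad/s
v = np.array([2.0e4, 1.5e4, 5.0e3])                # m/s
tau = np.array([2e-10, 5e-11, 1e-11])              # s
print(kappa_rta(omega, v, tau, T=300.0, V=8.8e-30, N=1600, L=10e-6))   # W/(m K)
```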
cpl-37-12-126301-fig6.png
Fig. 6. (a) The predicted thermal conductivity ($\kappa$) of graphene with finite length $L=10$ µm, and with infinite length, as a function of temperature, using different approaches. References: Lindsay et al.,[18,69] Peng et al.[70] and Feng et al.[71] Panels (b)–(d) show, respectively, the calculated three-phonon scattering phase space ($P_{3}$), the relaxation time ($\tau$) at 300 K, and the mode-dependent Grüneisen parameters ($\gamma$) for graphene with infinite length, calculated via MLP (filled symbols), DFT (starred symbols), and the optimized Tersoff potential (empty symbols). The dashed ellipses in (b) and (c) highlight the areas of interest for comparison.
The energy and momentum conservation laws restrict the permitted three-phonon channels. Generally, there are three typical three-phonon scattering processes in solids: acoustic $+$ acoustic $\leftrightarrow$ acoustic, acoustic $+$ acoustic $\leftrightarrow$ optical, and acoustic $+$ optical $\leftrightarrow$ optical. As shown in Fig. 5(a), the dispersion relation for optical phonons predicted by the optimized Tersoff potential deviates significantly from the dispersions predicted by the MLP and DFT calculations. For instance, the flexural optical (ZO) branch appears at a far higher frequency than that given by the MLP and DFT calculations. The enlarged energy gap between acoustic and optical phonons in the optimized Tersoff potential leaves fewer phonon scattering processes that satisfy the energy conservation law, particularly for the low-frequency acoustic phonons involved in three-phonon processes, giving rise to the reduced $P_{3}$ values of the optimized Tersoff potential highlighted by the dashed ellipse in Fig. 6(b). Consequently, the optimized Tersoff potential significantly overestimates the phonon relaxation time $\tau$ in the dashed ellipse region, as shown in Fig. 6(c). Since the optimized Tersoff potential predicts the phonon group velocity reasonably well [Fig. 5(b)], at least for the acoustic branches, we can conclude that the overestimation of $\kappa$ found when using the optimized Tersoff potential is caused by the overestimated phonon relaxation time, as compared to the DFT result, particularly for the low-frequency acoustic phonons. In addition, we calculate the Grüneisen parameter $\gamma$ of graphene, which is usually used to describe the anharmonicity of crystals. Again, we observe good agreement in the mode-dependent Grüneisen parameter between the MLP and DFT calculations, while the prediction of $\gamma$ with the optimized Tersoff potential shows obvious deviation from the DFT result for the TA and LA phonons at low frequencies [Fig. 6(d)]. Such failings of the optimized Tersoff potential may be attributed to the fact that quantities related to anharmonicity were not included when fitting the empirical potential.[18]

Finally, we briefly summarize the accuracy of MLP and DFT in predicting harmonic and anharmonic properties. For harmonic properties such as Young's modulus, the experimentally measured value is $\sim $1.0 TPa.[60] In our work, the predicted value of Young's modulus of graphene, using MLP combined with MD simulations, is 0.98 TPa, while the values found in the literature for Young's modulus, as calculated by DFT, are in the range of 1.05–1.11 TPa.[61,62] Both MLP and DFT calculations exhibit good agreement with the experimental data. Similarly good agreement between MLP/DFT calculations and experimental measurements is also demonstrated for the phonon dispersion relation, as shown in Fig. 5(a). Therefore, predictions by MLP combined with MD simulations demonstrate a high level of accuracy for harmonic properties, similar to DFT calculations. For the anharmonic properties, we compare the Grüneisen parameter and the TEC between MLP and DFT calculations by computing the relative error, defined as
$$W=\frac{1}{N}\sum\limits_{i=1}^N \frac{{\rm MLP}_{i}-{\rm DFT}_{i}}{{\rm DFT}_{i}}.~~ \tag {9} $$
The relative error $W$ is 0.41% and 5.67% for the Grüneisen parameter and the TEC, respectively.
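Eq. (9) is simply a mean relative deviation over the sampled modes (or temperature points); a brief sketch with hypothetical arrays in place of the actual MLP and DFT values:

```python
import numpy as np

def relative_error(mlp, dft):
    """Eq. (9): W = (1/N) * sum_i (MLP_i - DFT_i) / DFT_i."""
    mlp, dft = np.asarray(mlp), np.asarray(dft)
    return np.mean((mlp - dft) / dft)

# Hypothetical mode-resolved values (placeholders, not the actual data).
dft = np.array([1.10, 0.95, 1.30, 1.05])
mlp = np.array([1.11, 0.94, 1.31, 1.06])
print(f"W = {100 * relative_error(mlp, dft):.2f}%")
```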
Therefore, MLP combined with MD simulation has been shown to predict both harmonic and anharmonic properties at a level of accuracy comparable to that derived from DFT calculations. Moreover, we further compare the efficiency of MLP and DFT calculations in terms of computational time. For instance, in this work, the DFT calculations of the anharmonic IFCs for graphene (98 atoms in a supercell) take $\sim $5.5 days using 40 CPU cores. Remarkably, the computational time for the same task with the MLP is reduced to $\sim $5 min, using 10 CPU cores. More importantly, MLP combined with MD simulation has notable advantages over DFT calculations in dealing with the dynamic processes of materials with complex features. For example, when studying the thermal properties of materials, MLP combined with MD simulation can carry out large-scale dynamic simulations of complex and large systems, accounting for anharmonic effects to all orders as well as the effects of boundaries, doping, and defects on thermal transport properties, which are difficult to handle using DFT calculations. In summary, we have demonstrated that machine learning potential is capable of simultaneously predicting multiple-target physical properties, including Young's modulus, Poisson's ratio, the thermal expansion coefficient, the Grüneisen parameter, and thermal conductivity, with the same level of accuracy as DFT calculations. Moreover, our work shows that a machine learning potential trained on datasets comprising energies and atomic forces can accurately describe phonon properties and their scattering processes. In contrast, commonly used empirical potentials, such as the optimized Tersoff potential, show poor transferability in predicting multiple physical properties not specifically included in the fitting process, which seriously limits their application in the accurate characterization of a material's physical properties. Combined with atomic simulation tools such as molecular dynamics simulations, machine learning potential represents a promising candidate for the study of various physical properties and processes in novel materials.
References
[1] Zhang Z et al. 2020 Phys. Rep. 860 1
[2] Xu W X and Liang X G 2020 Chin. Phys. Lett. 37 046601
[3] Zhang Z and Chen J 2018 Chin. Phys. B 27 035101
[4] Zhang Z et al. 2020 Phys. Rev. B 101 081402(R)
[5] Ouyang Y et al. 2019 Ann. Phys. (Berlin) 531 1800437
[6] Jiang P et al. 2020 J. Appl. Phys. 127 235101
[7] Karplus M and McCammon J A 2002 Nat. Struct. Biol. 9 646
[8] Durrant J D and McCammon J A 2011 BMC Biol. 9 71
[9] Feng T et al. 2015 Innovative Food Sci. Emerging Technol. 31 1
[10] Selvaraj C et al. 2018 Food Chem. Toxicol. 112 495
[11] Chenoweth K, van Duin A C T and Goddard W A 2008 J. Phys. Chem. A 112 1040
[12] Ishitani R et al. 2008 Proc. Natl. Acad. Sci. USA 105 15393
[13] Ma Y et al. 2018 Carbon 135 263
[14] Hu S et al. 2019 Nanoscale 11 11839
[15] Zhang Z, Chen J and Li B 2017 Nanoscale 9 14208
[16] Luo M and Dai L L 2007 J. Phys.: Condens. Matter 19 375109
[17] Stillinger F H and Weber T A 1985 Phys. Rev. B 31 5262
[18] Lindsay L and Broido D A 2010 Phys. Rev. B 81 205441
[19] Tersoff J 1988 Phys. Rev. B 37 6991
[20] Tersoff J 1989 Phys. Rev. B 39 5566
[21] Daw M S and Baskes M I 1984 Phys. Rev. B 29 6443
[22] MacDonald R A and MacDonald W M 1981 Phys. Rev. B 24 1715
[23] Brenner D W et al. 2002 J. Phys.: Condens. Matter 14 783
[24] Dyson A J and Smith P V 1996 Surf. Sci. 355 140
[25] Ding B et al. 2020 Natl. Sci. Rev. nwaa220 (accepted)
[26] Wang T et al. 2019 Adv. Funct. Mater. 30 1906041
[27] Ju S and Shiomi J 2019 Nanoscale Microscale Thermophys. Eng. 23 157
[28] Ju S et al. 2018 Phys. Rev. B 97 184305
[29] Ju S et al. 2017 Phys. Rev. X 7 021024
[30] Behler J 2017 Angew. Chem. Int. Ed. 56 12828
[31] Balabin R M and Lomakina E I 2011 Phys. Chem. Chem. Phys. 13 11710
[32] Bartók A P, Kondor R and Csányi G 2013 Phys. Rev. B 87 184115
[33] Deringer V L and Csányi G 2017 Phys. Rev. B 95 094203
[34] Bartók A P et al. 2018 Phys. Rev. X 8 041048
[35] Thompson A P et al. 2015 J. Comput. Phys. 285 316
[36] Shapeev A V 2016 Multiscale Model. Simul. 14 1153
[37] Szlachta W J, Bartók A P and Csányi G 2014 Phys. Rev. B 90 104108
[38] Jose K V J, Artrith N and Behler J 2012 J. Chem. Phys. 136 194111
[39] Peterson A A 2016 J. Chem. Phys. 145 074106
[40] Artrith N, Morawietz T and Behler J 2011 Phys. Rev. B 83 153101
[41] Mortazavi B et al. 2020 J. Phys.: Mater. 3 02LT02
[42] Zuo Y et al. 2020 J. Phys. Chem. A 124 731
[43] Qian X and Yang R 2018 Phys. Rev. B 98 224108
[44] Kresse G and Furthmüller J 1996 Comput. Mater. Sci. 6 15
[45] Kresse G and Joubert D 1999 Phys. Rev. B 59 1758
[46] Plimpton S 1995 J. Comput. Phys. 117 1
[47] Chen J, Walther J H and Koumoutsakos P 2014 Nano Lett. 14 819
[48] Grima J N et al. 2015 Adv. Mater. 27 1455
[49] Jiang J W and Park H S 2016 Nano Lett. 16 2657
[50] Qin H et al. 2017 Nanoscale 9 4135
[51] Zhang Y Y et al. 2011 Carbon 49 4511
[52] Jing N et al. 2012 RSC Adv. 2 9124
[53] Togo A and Tanaka I 2015 Scr. Mater. 108 1
[54] Li W et al. 2014 Comput. Phys. Commun. 185 1747
[55] Qin G and Hu M 2018 npj Comput. Mater. 4 1
[56] Togo A, Oba F and Tanaka I 2008 Phys. Rev. B 78 134106
[57] The QUIP package is open to academic users at http://www.libatoms.org/home/software
[58] Qian X et al. 2019 Mater. Today Phys. 10 100140
[59] Rowe P et al. 2018 Phys. Rev. B 97 054303
[60] Lee C et al. 2008 Science 321 385
[61] Van Lier G et al. 2000 Chem. Phys. Lett. 326 181
[62] Liu F, Ming P and Li J 2007 Phys. Rev. B 76 064120
[63] Gui G, Li J and Zhong J 2008 Phys. Rev. B 78 075435
[64] Sevik C 2014 Phys. Rev. B 89 035422
[65] Mann S, Kumar R and Jindal V K 2017 RSC Adv. 7 22378
[66] Lindsay L and Broido D A 2008 J. Phys.: Condens. Matter 20 165209
[67] Holland M G 1964 Phys. Rev. 134 A471
[68] Zhang Z et al. 2018 Carbon 139 289
[69] Lindsay L et al. 2014 Phys. Rev. B 89 155426
[70] Peng B et al. 2017 Nanoscale 9 7397
[71] Feng T and Ruan X 2018 Phys. Rev. B 97 045202