Force-Aware Neural Tangent Kernels for Scalable and Robust Active Learning of MLIPs
Pith reviewed 2026-05-14 19:07 UTC · model grok-4.3
The pith
Mixed parameter-coordinate derivatives of the NTK yield natural similarity metrics for force-aware active learning in MLIPs.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We extend the Neural Tangent Kernel to a force-aware setting via mixed parameter-coordinate derivatives, yielding a force NTK and a joint energy-force NTK that provide natural similarity metrics for vector-field prediction. Paired with a linearly scaling acquisition framework based on chunked feature-space posterior-variance shortlisting, the method screens approximately 200k structures in hours, achieves the lowest energy and force MAE and RMSE on OC20, remains competitive on T1x/PMechDB/RGD, and exhibits lower variance than committee methods when candidate pools are shifted relative to the target distribution.
What carries the argument
The joint energy-force NTK formed by mixed derivatives of the NTK with respect to parameters and atomic coordinates; it supplies a similarity metric for both scalar energies and vector forces without extra fitting.
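The construction can be illustrated on a toy linear-in-parameters model, where the mixed parameter-coordinate derivative of the NTK reduces to a product of coordinate Jacobians of the feature map. Everything below — the feature map `phi`, its Jacobian, and the function names — is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

# Toy linear-in-parameters energy model E(x; theta) = theta . phi(x).
# Its parameter-gradient is phi(x), so the energy NTK is phi(x1).phi(x2),
# and the mixed parameter-coordinate derivative (the force-NTK building
# block) reduces to the coordinate Jacobian of phi.

def phi(x):
    # simple nonlinear feature map R^d -> R^p (illustrative choice)
    return np.concatenate([np.sin(x), np.cos(x), x**2])

def dphi_dx(x):
    # analytic Jacobian of phi, shape (p, d)
    return np.vstack([np.diag(np.cos(x)),
                      np.diag(-np.sin(x)),
                      np.diag(2 * x)])

def energy_ntk(x1, x2):
    # K_E(x1, x2) = grad_theta E(x1) . grad_theta E(x2)
    return phi(x1) @ phi(x2)

def force_ntk(x1, x2):
    # K_F(x1, x2)[a, b] = d^2 K_E / (dx1_a dx2_b)
    #                   = sum_p dphi_p/dx1_a * dphi_p/dx2_b
    return dphi_dx(x1).T @ dphi_dx(x2)

x1, x2 = np.array([0.1, 0.2]), np.array([0.3, -0.1])
K_E = energy_ntk(x1, x2)   # scalar similarity for energies
K_F = force_ntk(x1, x2)    # (d, d) similarity for force vectors
```

For a real MLIP the parameter gradients would come from automatic differentiation of the network rather than an analytic feature map, but the kernel algebra is the same.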
If this is right
- A single pretrained MLIP can drive the entire active-learning loop without maintaining an ensemble.
- Screening pools of 200k structures becomes feasible on modest hardware because only small kernel blocks are ever formed.
- Force-aware selection improves both energy and force accuracy on datasets where forces dominate the learning signal.
- Acquisition remains stable when the unlabeled pool is drawn from a different distribution than the target.
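The chunked posterior-variance shortlisting can be sketched for a Bayesian linear model on frozen features: precomputing one small (p x p) inverse lets each candidate chunk be scored without ever forming the full candidate kernel. The function name, shapes, and regularisation here are illustrative assumptions rather than the paper's code:

```python
import numpy as np

# Sketch of chunked feature-space posterior-variance shortlisting.
# With fixed features (e.g. a frozen pretrained MLIP's embeddings), the
# posterior variance of a Bayesian linear model is
#   var(x) = phi(x)^T (Phi_train^T Phi_train + lam*I)^{-1} phi(x).
# Only O(chunk * p) memory is ever needed, so pool size scales linearly.

def shortlist_by_variance(Phi_train, Phi_pool, k, lam=1e-3, chunk=1000):
    p = Phi_train.shape[1]
    A_inv = np.linalg.inv(Phi_train.T @ Phi_train + lam * np.eye(p))
    scores = np.empty(Phi_pool.shape[0])
    for start in range(0, Phi_pool.shape[0], chunk):
        C = Phi_pool[start:start + chunk]          # one (chunk, p) block
        scores[start:start + chunk] = np.einsum('ip,pq,iq->i', C, A_inv, C)
    return np.argsort(scores)[::-1][:k]            # top-k most uncertain

rng = np.random.default_rng(0)
Phi_train = rng.normal(size=(50, 8))
Phi_pool = rng.normal(size=(5000, 8))
top = shortlist_by_variance(Phi_train, Phi_pool, k=10)
```

The same chunking applies to any acquisition score that is a quadratic form in per-candidate features, which is why the framework generalises beyond this particular variance criterion.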
Where Pith is reading between the lines
- The same mixed-derivative construction could supply uncertainty estimates for any vector or tensor output in physics-informed models.
- Because the method re-uses a frozen pretrained network as a feature extractor, it lowers the barrier to applying active learning to large foundation-model fine-tuning campaigns.
- The chunked posterior-variance approach may transfer directly to other kernel-based acquisition strategies that currently materialise full candidate kernels.
Load-bearing premise
The mixed derivatives of the NTK produce similarity scores that correctly rank uncertainty in force predictions for structures outside the training distribution.
What would settle it
An experiment on a held-out test set comparing batches selected by force-aware NTK acquisition against energy-only NTK and random selection: if adding the force-aware batches does not reduce force RMSE more than the alternatives, the premise is refuted.
Figures
Original abstract
Active learning for machine-learning interatomic potentials (MLIPs) must address several challenges to be practical: scaling to large candidate pools, leveraging energy-force supervision, and maintaining robustness when candidate pools are biased relative to the target distribution. In this work, we jointly address these challenges. We first introduce a linearly scaling acquisition framework based on chunked feature-space posterior-variance shortlisting. By avoiding materialisation of the candidate and train set kernels, this approach enables screening of ~200k structures within hours and applies broadly to acquisition strategies that score candidates based on molecular similarity metrics. We then extend the Neural Tangent Kernel (NTK) to a force-aware setting via mixed parameter-coordinate derivatives, yielding a force NTK and a joint energy-force NTK that provide natural similarity metrics for vector-field prediction. We demonstrate the effectiveness of the joint energy-force NTK on the OC20 dataset, where force-aware acquisition is crucial: it achieves the lowest energy and force MAE and RMSE across all metrics and distribution splits. Across T1x, PMechDB, and RGD benchmarks, our force NTK methods remain competitive with established baselines while being significantly more efficient than committee-based approaches. Under a controlled candidate-pool shift case study on T1x, acquisition based on pretrained MLIP embeddings and NTKs remains robust, whereas committee-based methods exhibit higher variance. Overall, these results show that a single pretrained MLIP can enable scalable, force-aware, and distribution-robust active learning for foundation-model fine-tuning.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces a linearly scaling acquisition framework for active learning of MLIPs based on chunked feature-space posterior-variance shortlisting that avoids materializing full kernels, enabling screening of ~200k structures. It extends the NTK to force-aware settings via mixed parameter-coordinate derivatives, defining a force NTK and joint energy-force NTK as natural similarity metrics for vector-field prediction in pretrained models without extra fitting. On OC20 the joint NTK yields the lowest energy and force MAE/RMSE across metrics and splits; on T1x/PMechDB/RGD it remains competitive with baselines while being more efficient than committees; under controlled distribution shift on T1x the NTK methods show greater robustness than committees.
Significance. If the force NTK construction is shown to reliably capture force-field similarity, the work supplies a practical, single-model, committee-free route to scalable and distribution-robust active learning for foundation-model fine-tuning of MLIPs. The linear scaling and explicit handling of force supervision address two central practical bottlenecks in the field.
major comments (3)
- [§3.2 (force NTK definition)] The central claim that mixed parameter-coordinate derivatives of the NTK supply effective natural similarity metrics for force prediction (without additional fitting or validation) is load-bearing for the OC20 superiority and robustness results, yet no correlation analysis between the derived kernel values and actual force errors or uncertainty estimates is reported.
- [§4 (experiments)] The abstract and results sections report lowest MAE/RMSE on OC20 and competitiveness elsewhere, but provide no details on baseline implementations, number of independent runs, statistical significance tests, or hyperparameter controls; these omissions leave open the possibility that observed gains arise from implementation specifics rather than the NTK construction itself.
- [§4.3] In the T1x distribution-shift case study, the claim that committee methods exhibit higher variance while NTK methods remain robust is stated qualitatively; quantitative variance or standard-error values across repeated trials are needed to substantiate the robustness advantage.
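One minimal form of the correlation analysis requested in the first comment — rank-correlating kernel-derived uncertainty scores against realized force errors on a held-out set — could be sketched as follows. The data here are synthetic placeholders and `spearman` is a hypothetical helper, not from the paper:

```python
import numpy as np

# Sketch of the requested diagnostic: do per-structure uncertainty
# scores (e.g. force-NTK posterior variances) rank-correlate with the
# force RMSE actually observed on held-out structures?

def spearman(a, b):
    # Spearman rank correlation for tie-free samples:
    # Pearson correlation of the two rank vectors.
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(1)
variance_scores = rng.uniform(0.0, 1.0, size=200)          # kernel-derived
force_rmse = variance_scores + 0.1 * rng.normal(size=200)  # noisy stand-in
rho = spearman(variance_scores, force_rmse)  # near 1 if scores rank errors well
```

A high rank correlation on real data would directly support reading the force NTK as an uncertainty proxy; a weak one would suggest the OC20 gains come from elsewhere.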
minor comments (2)
- [Abstract] The phrase 'significantly more efficient' in the abstract should be accompanied by concrete wall-clock or scaling numbers relative to the committee baselines.
- [§3.2] Notation for the mixed derivatives in the force NTK should be introduced with an explicit equation reference to avoid ambiguity when the joint energy-force kernel is later defined.
Simulated Author's Rebuttal
We thank the referee for their constructive feedback and positive assessment of the significance of our work. We address each major comment point by point below, with planned revisions where appropriate to strengthen the manuscript.
Point-by-point responses
Referee: [§3.2 (force NTK definition)] The central claim that mixed parameter-coordinate derivatives of the NTK supply effective natural similarity metrics for force prediction (without additional fitting or validation) is load-bearing for the OC20 superiority and robustness results, yet no correlation analysis between the derived kernel values and actual force errors or uncertainty estimates is reported.
Authors: We agree that a direct correlation analysis between force NTK values and observed force errors would provide valuable supporting evidence for the interpretation of the joint NTK as a natural similarity metric. While the OC20 results (lowest MAE/RMSE across metrics) and T1x robustness already offer empirical validation of the construction, we will add a supplementary correlation plot and analysis in the revised manuscript to directly address this point. revision: yes
Referee: [§4 (experiments)] The abstract and results sections report lowest MAE/RMSE on OC20 and competitiveness elsewhere, but provide no details on baseline implementations, number of independent runs, statistical significance tests, or hyperparameter controls; these omissions leave open the possibility that observed gains arise from implementation specifics rather than the NTK construction itself.
Authors: We acknowledge the need for greater experimental transparency. In the revised manuscript we will expand §4 to include: (i) precise descriptions of all baseline implementations (including committee sizes and training protocols), (ii) the number of independent runs performed for each method, (iii) results of statistical significance tests (e.g., paired t-tests with p-values), and (iv) a consolidated table of hyperparameter settings to ensure the reported gains can be attributed to the NTK construction rather than implementation details. revision: yes
Referee: [§4.3] In the T1x distribution-shift case study, the claim that committee methods exhibit higher variance while NTK methods remain robust is stated qualitatively; quantitative variance or standard-error values across repeated trials are needed to substantiate the robustness advantage.
Authors: We concur that quantitative measures are required to substantiate the robustness claim. We will revise §4.3 to report standard deviations or standard errors across repeated trials for both NTK and committee methods, and will add error bars to the relevant figures so that the variance comparison is presented quantitatively rather than qualitatively. revision: yes
Circularity Check
No circularity: the force NTK is derived via standard mixed derivatives of the NTK definition
full rationale
The paper's central construction applies mixed parameter-coordinate derivatives directly to the standard Neural Tangent Kernel definition to obtain the force NTK and joint energy-force NTK. This is a mathematical extension independent of any fitted parameters, empirical predictions, or self-citations within the work. The chunked variance shortlisting is a computational scaling method that does not reduce to the kernel values by construction. Empirical results on OC20, T1x, etc., function as external validation rather than inputs to the derivation. No load-bearing steps match the enumerated circularity patterns; the derivation chain is self-contained against the NTK literature.
Axiom & Free-Parameter Ledger
axioms (1)
- (domain assumption) The Neural Tangent Kernel regime applies to the architectures of the pretrained MLIPs
invented entities (1)
- force NTK (no independent evidence)