E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials
9 Pith papers cite this work, alongside 1,470 external citations. Polarity classification is still indexing.
9 representative citing papers (2026, verdicts pending)
- Force-Aware Neural Tangent Kernels for Scalable and Robust Active Learning of MLIPs
  Force-aware NTKs and chunked acquisition enable scalable, robust active learning for MLIPs, achieving the lowest energy and force errors on OC20 while remaining competitive on other benchmarks.
- Chem-GMNet: A Sphere-Native Geometric Transformer for Molecular Property Prediction
  Chem-GMNet uses sphere-native embeddings, DualSKA attention, and SH-FFN layers to match or beat ChemBERTa-2 on MoleculeNet tasks with fewer parameters and sometimes no pretraining.
- PiGGO: Physics-Guided Learnable Graph Kalman Filters for Virtual Sensing of Nonlinear Dynamic Structures under Uncertainty
  PiGGO embeds a learned graph neural ODE as the continuous-time dynamics model of an extended Kalman filter, enabling online virtual sensing and uncertainty-aware state estimation for nonlinear dynamic systems with unknown model form and sparse sensing; a toy sketch of this filtering scheme appears after the list.
- A unified microscopic picture of cation and anion migration in MAPbI$_3$
  Molecular dynamics simulations find that both I and MA defects in MAPbI3 diffuse rapidly at room temperature with barriers of 0.15-0.20 eV; MA interstitials move via concerted mechanisms, and no MA vacancy migration is observed.
- Enhancing molecular dynamics with equivariant machine-learned densities
  DenSNet learns the Hohenberg-Kohn map to the electron density with equivariant networks and delta-learning, then maps density to energy, producing stable MD trajectories whose infrared spectra match experiment and DFT on ethanol, ethanethiol, resorcinol, and polythiophene oligomers.
- Data-Driven Thermal and Mechanical Modeling of Defective Covalent Organic Frameworks
  QCOF ML potentials tuned on COF data outperform general MACE models for defective systems and reveal higher thermal defect sensitivity in CTF-1 than in COF-LZU1, while low-strain mechanics remain nearly invariant.
- Autonomous Emergence of Hamiltonian in Deep Generative Models
  A neural network trained on thermal snapshots of a frustrated spin system recovers its microscopic Hamiltonian parameters with 99.7% cosine similarity via linear inversion of the learned force field, without any energetic priors; a toy least-squares version of this inversion appears after the list.
- Making Room for AI: Multi-GPU Molecular Dynamics with Deep Potentials in GROMACS
  GROMACS now runs multi-GPU DeePMD inference for molecular dynamics, reaching 40-66% strong-scaling efficiency on up to 32 devices for a 15k-atom protein system, with over 90% of runtime spent in inference.
- SMC-AI: Scaling Monte Carlo Simulation to Four Trillion Atoms with AI Accelerators
  SMC-AI scales Monte Carlo simulations to 4 trillion atoms on AI hardware clusters, simulating systems 32 times larger and achieving 1.3 times higher throughput than prior records while decoupling the ML models from the simulation core.
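
For readers unfamiliar with the filtering scheme named in the PiGGO entry, here is a minimal, self-contained sketch of an extended Kalman filter whose state-transition model comes from a learned continuous-time dynamics function. This is not PiGGO's implementation: the placeholder f_theta, the explicit-Euler integrator, the finite-difference Jacobian, the state dimension, and the linear sparse-sensing measurement model are all illustrative assumptions.

```python
# Minimal sketch (assumptions only, not the PiGGO code): an EKF whose
# transition model integrates a "learned" continuous-time dynamics f_theta.
import numpy as np

def f_theta(x):
    # Stand-in for a learned derivative dx/dt = f_theta(x); a fixed random
    # linear map plays the role of trained weights in this toy example.
    W = np.random.default_rng(0).normal(scale=0.1, size=(x.size, x.size))
    return W @ x

def step(x, dt):
    # One explicit-Euler integration step of the learned dynamics.
    return x + dt * f_theta(x)

def jacobian(fun, x, eps=1e-6):
    # Finite-difference Jacobian of the discrete transition, used to
    # propagate covariance in the EKF predict step.
    J = np.zeros((x.size, x.size))
    fx = fun(x)
    for i in range(x.size):
        dx = np.zeros(x.size); dx[i] = eps
        J[:, i] = (fun(x + dx) - fx) / eps
    return J

def ekf_predict(x, P, Q, dt):
    F = jacobian(lambda s: step(s, dt), x)
    return step(x, dt), F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    # Linear measurement z = H x + noise; H selects only the measured
    # degrees of freedom, mimicking sparse sensing.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(P.shape[0]) - K @ H) @ P
    return x, P

# Tiny demo: estimate a 4-dimensional state from 2 measured channels.
rng = np.random.default_rng(1)
x, P = np.zeros(4), np.eye(4)
Q, R = 1e-4 * np.eye(4), 1e-2 * np.eye(2)
H = np.eye(2, 4)                       # only 2 of 4 states are observed
for _ in range(10):
    x, P = ekf_predict(x, P, Q, dt=0.01)
    z = H @ x + rng.normal(scale=0.1, size=2)   # synthetic measurement
    x, P = ekf_update(x, P, z, H, R)
```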
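The "linear inversion of the learned force field" mentioned in the Autonomous Emergence of Hamiltonian entry can be illustrated with a toy least-squares problem. The sketch below synthesizes effective fields from known spin couplings and recovers them by regression; in the cited work the fields come from a trained generative model, and the 99.7% figure is theirs, not reproduced here. System size, sample count, and the random data are assumptions.

```python
# Toy illustration (not the paper's method): recover pairwise couplings J_ij
# of H = -sum_ij J_ij s_i s_j by linear least squares, given effective fields
# h_i = sum_j J_ij s_j evaluated on sampled spin configurations.
import numpy as np

rng = np.random.default_rng(1)
n_spins, n_samples = 8, 500

# Ground-truth symmetric couplings with zero diagonal (assumed for the demo).
J_true = rng.normal(size=(n_spins, n_spins))
J_true = 0.5 * (J_true + J_true.T)
np.fill_diagonal(J_true, 0.0)

S = rng.choice([-1.0, 1.0], size=(n_samples, n_spins))   # spin configurations
H_fields = S @ J_true.T                                   # effective fields h_i

# Linear inversion: for each spin i, regress h_i on the configuration vector.
J_fit = np.zeros_like(J_true)
for i in range(n_spins):
    J_fit[i], *_ = np.linalg.lstsq(S, H_fields[:, i], rcond=None)

# Cosine similarity between true and recovered couplings (noiseless toy,
# so recovery is essentially exact here).
cos_sim = np.dot(J_true.ravel(), J_fit.ravel()) / (
    np.linalg.norm(J_true) * np.linalg.norm(J_fit))
print(f"cosine similarity: {cos_sim:.4f}")
```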