Orb-v3: atomistic simulation at scale
8 Pith papers cite this work.
Representative citing papers
- SLayerGen: a Crystal Generative Model for all Space and Layer Groups
  SLayerGen generates crystals that respect any prescribed space or layer group via autoregressive lattice and Wyckoff sampling plus equivariant diffusion, achieving gains over bulk models on diperiodic materials after correcting a prior loss inconsistency for hexagonal groups.
- Force-Aware Neural Tangent Kernels for Scalable and Robust Active Learning of MLIPs
  Force-aware NTKs and chunked acquisition enable scalable, robust active learning for MLIPs, achieving the lowest energy and force errors on OC20 and remaining competitive on other benchmarks (see the acquisition sketch after this list).
- Pretrained Model Representations as Acquisition Signals for Active Learning of MLIPs
  Latent spaces of pretrained MLIPs yield NTK and activation kernels that outperform standard acquisition functions in active learning for reactive MLIP training, cutting required labels by 38% for energy errors and 28% for force errors (the acquisition sketch after this list applies here as well).
- Fast and Accurate Prediction of Lattice Thermal Conductivity via Machine Learning Surrogates
  Machine learning models, especially certain deep neural networks, can predict lattice thermal conductivity with useful accuracy across different generalization tests while being orders of magnitude faster than first-principles calculations.
- CrystalREPA: Transferring Physical Priors from Universal MLIPs to Crystal Generative Models
  CrystalREPA closes the representation gap between crystal generators and universal MLIPs via contrastive alignment, yielding more stable and valid generated crystals while revealing that MLIP teacher quality is better predicted by representation distinguishability than by leaderboard accuracy (see the alignment sketch after this list).
- Compact SO(3) Equivariant Atomistic Foundation Models via Structural Pruning
  Structural pruning of SO(3) equivariant atomistic models from large checkpoints yields 1.5-4x fewer parameters and 2.5-4x less pre-training compute than small models trained from scratch, while outperforming them on most Matbench Discovery metrics and downstream tasks (see the pruning sketch after this list).
- A Lightweight Universal Machine-Learning Interatomic Potential via Knowledge Distillation for Scalable Atomistic Simulations
  SevenNet-Nano is a lightweight universal ML interatomic potential distilled from a larger multi-task foundation model, delivering high accuracy, transferability, and over a 10x computational speedup for scalable atomistic simulations (see the distillation sketch after this list).
- From Knowledge to Action: Outcomes of the 2025 Large Language Model (LLM) Hackathon for Applications in Materials Science and Chemistry
  Hackathon submissions indicate LLMs are moving from general assistants toward composable multi-agent systems for structuring scientific knowledge and automating tasks in materials science and chemistry.
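
The two active-learning entries above share a common mechanic: score unlabeled structures with a kernel built from a pretrained model's features, then pick a diverse batch. Below is a minimal sketch of that idea under loose assumptions; `feature_fn` stands in for whatever featurizer is used (last-layer gradients for an NTK-style kernel, plain activations for an activation kernel), and the greedy k-center rule is one common diversity criterion, not necessarily the selection rule either paper uses.

```python
# Hypothetical sketch of kernel-based batch acquisition for MLIP active
# learning: featurize an unlabeled pool chunk by chunk, then greedily pick
# a diverse batch to label. Names and the selection rule are assumptions.
import numpy as np

def greedy_kcenter(features: np.ndarray, n_select: int) -> list[int]:
    """Greedy k-center: repeatedly pick the point farthest from the current
    selection, favoring poorly covered regions of feature space."""
    selected = [0]  # arbitrary seed
    d = np.linalg.norm(features - features[0], axis=1)  # dist to nearest pick
    for _ in range(n_select - 1):
        idx = int(np.argmax(d))
        selected.append(idx)
        d = np.minimum(d, np.linalg.norm(features - features[idx], axis=1))
    return selected

def acquire_in_chunks(feature_fn, pool, n_select: int, chunk: int = 1024):
    """Featurize in chunks so memory scales with the chunk, not the pool,
    and never materialize a full pool-by-pool kernel matrix."""
    feats = np.vstack([feature_fn(pool[i:i + chunk])
                       for i in range(0, len(pool), chunk)])
    feats /= np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12  # cosine geometry
    return greedy_kcenter(feats, n_select)
```

Chunked featurization is what makes acquisition tractable at pool scale: the cost of the kernel never exceeds one chunk's worth of features at a time.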
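For CrystalREPA, the summary names contrastive alignment between a generator's representations and a frozen MLIP's. The sketch below shows one standard way to write such a loss (InfoNCE with a learned projection head); the function name, projection, and temperature are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch of a contrastive alignment loss: pull each generator latent
# toward the frozen MLIP embedding of the same structure and away from the
# embeddings of other structures in the batch (assumed loss form).
import torch
import torch.nn.functional as F

def alignment_loss(gen_latents: torch.Tensor,   # (B, Dg) from the generator
                   mlip_embeds: torch.Tensor,   # (B, Dm) from the frozen MLIP
                   proj: torch.nn.Linear,       # learned map Dg -> Dm
                   tau: float = 0.1) -> torch.Tensor:
    z = F.normalize(proj(gen_latents), dim=-1)
    t = F.normalize(mlip_embeds.detach(), dim=-1)  # teacher stays frozen
    logits = z @ t.T / tau                         # (B, B) similarity matrix
    labels = torch.arange(z.size(0), device=z.device)
    return F.cross_entropy(logits, labels)         # match each row to its diagonal
```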
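The structural-pruning entry removes whole channels from a large checkpoint rather than zeroing individual weights. The sketch below illustrates the generic version on a plain linear layer with an assumed L2-norm importance score; pruning an SO(3) equivariant model additionally has to drop features per irreducible representation, which this toy version omits.

```python
# Hedged sketch of magnitude-based structural pruning: score each output
# channel by its weight norm, keep the top fraction, and rebuild a
# physically smaller layer (generic channel pruning, not the paper's
# irrep-aware procedure).
import torch

def prune_linear(layer: torch.nn.Linear, keep_frac: float):
    scores = layer.weight.norm(dim=1)                   # one score per output channel
    k = max(1, int(keep_frac * layer.out_features))
    keep = torch.topk(scores, k).indices.sort().values  # preserve channel order
    new = torch.nn.Linear(layer.in_features, k, bias=layer.bias is not None)
    with torch.no_grad():
        new.weight.copy_(layer.weight[keep])
        if layer.bias is not None:
            new.bias.copy_(layer.bias[keep])
    return new, keep  # `keep` lets the next layer drop matching input channels
```

Because whole channels disappear, the pruned model is genuinely smaller and faster, unlike unstructured sparsity, which usually needs special kernels to realize any speedup.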
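SevenNet-Nano's summary describes knowledge distillation from a larger foundation model. A typical objective for interatomic potentials matches the teacher's energies and forces, with student forces taken as the negative position gradient of the predicted energy; the loss below is an assumed generic form with hypothetical argument names, not the published training recipe.

```python
# Illustrative energy-and-force distillation objective for a small student
# potential trained against a large teacher's labels (assumed form).
import torch
import torch.nn.functional as F

def distill_loss(student, positions: torch.Tensor, species: torch.Tensor,
                 e_teacher: torch.Tensor, f_teacher: torch.Tensor,
                 w_f: float = 10.0) -> torch.Tensor:
    positions = positions.requires_grad_(True)
    e_pred = student(positions, species)  # per-structure energies, shape (B,)
    # Forces as the negative gradient of energy w.r.t. atomic positions,
    # keeping the graph so the force error can be backpropagated.
    f_pred = -torch.autograd.grad(e_pred.sum(), positions, create_graph=True)[0]
    return F.mse_loss(e_pred, e_teacher) + w_f * F.mse_loss(f_pred, f_teacher)
```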