pith · machine review for the scientific record

arxiv: 1710.10324 · v3 · submitted 2017-10-27 · ❄️ cond-mat.mtrl-sci


Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties

Authors on Pith: no claims yet
classification: ❄️ cond-mat.mtrl-sci
keywords: crystal, properties, interpretable, materials, accurate, chemical, convolutional, crystalline
original abstract

The use of machine learning methods for accelerating the design of crystalline materials usually requires manually constructed feature vectors or complex transformation of atom coordinates to input the crystal structure, which either constrains the model to certain crystal types or makes it difficult to provide chemical insights. Here, we develop a crystal graph convolutional neural networks framework to directly learn material properties from the connection of atoms in the crystal, providing a universal and interpretable representation of crystalline materials. Our method provides a highly accurate prediction of density functional theory calculated properties for eight different properties of crystals with various structure types and compositions after being trained with $10^4$ data points. Further, our framework is interpretable because one can extract the contributions from local chemical environments to global properties. Using an example of perovskites, we show how this information can be utilized to discover empirical rules for materials design.
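The abstract's core idea is a convolution over the crystal graph: each atom's feature vector is updated by aggregating messages from its bonded neighbors, gated so the network can weight different local chemical environments. The sketch below is an illustrative numpy implementation of one such gated update, not the authors' code; the weight names (`W_f`, `W_s`), shapes, and the softplus/sigmoid choice are assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    return np.log1p(np.exp(x))

def crystal_graph_conv(v, u, neighbors, W_f, b_f, W_s, b_s):
    """One gated crystal-graph convolution step (illustrative sketch).

    v:         (n_atoms, d_v) atom feature vectors
    u:         dict mapping (i, j) -> (d_u,) bond feature vector
    neighbors: list of neighbor index lists, one per atom
    W_f, W_s:  (2*d_v + d_u, d_v) weight matrices (assumed shapes)
    b_f, b_s:  (d_v,) bias vectors
    """
    v_new = v.copy()
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            # Concatenate the two atom vectors with the bond vector.
            z = np.concatenate([v[i], v[j], u[(i, j)]])
            gate = sigmoid(z @ W_f + b_f)   # learned per-neighbor gate
            core = softplus(z @ W_s + b_s)  # learned message
            v_new[i] += gate * core         # gated residual update
    return v_new
```

Because the update only touches an atom and its bonded neighbors, the learned gate values offer one route to the interpretability the abstract claims: the contribution of each local chemical environment to the pooled, crystal-level prediction can be read off per atom.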

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Scale-Dependent Input Representation and Confidence Estimation for LLMs in Materials Property Prediction

    cond-mat.mtrl-sci · 2026-05 · conditional novelty 5.0

    Larger LLMs handle detailed crystal descriptions better than small ones, and mean negative log-likelihood of predicted numbers tracks prediction error after fine-tuning.