| authors | title | journal-ref | doi | report-no | categories | abstract | versions | update_date |
|---|---|---|---|---|---|---|---|---|
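The `versions` column in each row stores a Python-literal list of dicts, e.g. `[{'version': 'v1', 'created': 'Wed, 31 Jan 2024 12:31:52 GMT'}]`. A minimal standard-library sketch for parsing such a cell (the helper name `parse_versions` is hypothetical, not part of any dataset tooling):

```python
import ast
from datetime import datetime

def parse_versions(field: str):
    # Parse one `versions` cell, e.g.
    # "[{'version': 'v1', 'created': 'Wed, 31 Jan 2024 12:31:52 GMT'}]",
    # into (version-tag, datetime) pairs.
    entries = ast.literal_eval(field)
    fmt = "%a, %d %b %Y %H:%M:%S %Z"
    return [(e["version"], datetime.strptime(e["created"], fmt))
            for e in entries]

versions = parse_versions(
    "[{'version': 'v1', 'created': 'Wed, 31 Jan 2024 12:31:52 GMT'}]")
```

`ast.literal_eval` is used instead of `eval` so only literal Python structures are accepted.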
Siyu Liu, Tongqi Wen, A. S. L. Subrahmanyam Pattamatta, and David J.
Srolovitz
|
A Prompt-Engineered Large Language Model, Deep Learning Workflow for
Materials Classification
| null |
10.1016/j.mattod.2024.08.028
| null |
cond-mat.mtrl-sci
|
Large language models (LLMs) have demonstrated rapid progress across a wide
array of domains. Owing to the very large number of parameters and training
data in LLMs, these models inherently encompass an expansive and comprehensive
materials knowledge database, far exceeding the capabilities of any individual
researcher. Nonetheless, devising methods to harness the knowledge embedded
within LLMs for the design and discovery of novel materials remains a
formidable challenge. We introduce a general approach for addressing materials
classification problems, which incorporates LLMs, prompt engineering, and deep
learning. Utilizing a dataset of metallic glasses as a case study, our
methodology achieved an improvement of up to 463% in prediction accuracy
compared to conventional classification models. These findings underscore the
potential of leveraging textual knowledge generated by LLMs for materials,
especially in the common situation where datasets are sparse, thereby promoting
innovation in materials discovery and design.
|
[{'version': 'v1', 'created': 'Wed, 31 Jan 2024 12:31:52 GMT'}, {'version': 'v2', 'created': 'Wed, 27 Mar 2024 13:22:22 GMT'}]
|
2024-11-20
|
Hoang-Giang Nguyen, Thanh-Dung Le
|
Predictive Models based on Deep Learning Algorithms for Tensile
Deformation of AlCoCuCrFeNi High-entropy alloy
| null | null | null |
cond-mat.mtrl-sci eess.SP
|
High-entropy alloys (HEAs) stand out among multi-component alloys due to
their attractive microstructures and mechanical properties. In this
investigation, molecular dynamics (MD) simulation and machine learning were
used to ascertain the deformation mechanism of AlCoCuCrFeNi HEAs under the
influence of temperature, strain rate, and grain sizes. First, the MD
simulation shows that the yield stress decreases significantly as the strain
and temperature increase. In contrast, changes in strain rate and grain size
have a smaller effect on mechanical properties than changes in strain and
temperature. The alloys exhibited superplastic behavior under all test
conditions. The deformation mechanism reveals that strain and temperature are
the main factors initiating deformation, and the shear bands move along the
uniaxial tensile axis inside the workpiece. Furthermore, the fast phase shift
of inclusions under mild strain indicates the relative instability of the HCP
inclusion phase. Ultimately, the dislocation evolution mechanism shows
that the dislocations are transported to free surfaces under increased strain
when they nucleate around the grain boundary. Surprisingly, the ML prediction
results also confirm the same characteristics as those confirmed from the MD
simulation. Hence, the combination of MD and ML reinforces the confidence in
the findings of mechanical characteristics of HEA. Consequently, this
combination fills the gaps between MD and ML, significantly saving the time,
human power, and cost of conducting real experiments to test HEA deformation
in practice.
|
[{'version': 'v1', 'created': 'Fri, 2 Feb 2024 17:17:30 GMT'}]
|
2024-02-05
|
Yutack Park, Jaesun Kim, Seungwoo Hwang, and Seungwu Han
|
Scalable Parallel Algorithm for Graph Neural Network Interatomic
Potentials in Molecular Dynamics Simulations
|
Journal of Chemical Theory and Computation 20 (2024) 4857-4868
|
10.1021/acs.jctc.4c00190
| null |
cond-mat.mtrl-sci
|
Message-passing graph neural network interatomic potentials (GNN-IPs),
particularly those with equivariant representations such as NequIP, are
attracting significant attention due to their data efficiency and high
accuracy. However, parallelizing GNN-IPs poses challenges because multiple
message-passing layers complicate data communication within the spatial
decomposition method, which is preferred by many molecular dynamics (MD)
packages. In this article, we propose an efficient parallelization scheme
compatible with GNN-IPs and develop a package, SevenNet (Scalable
EquiVariance-Enabled Neural NETwork), based on the NequIP architecture. For MD
simulations, SevenNet interfaces with the LAMMPS package. Through benchmark
tests on a 32-GPU cluster with examples of SiO$_2$, SevenNet achieves over 80%
parallel efficiency in weak-scaling scenarios and exhibits nearly ideal
strong-scaling performance as long as GPUs are fully utilized. However, the
strong-scaling performance significantly declines with suboptimal GPU
utilization, particularly affecting parallel efficiency in cases involving
lightweight models or simulations with small numbers of atoms. We also
pre-train SevenNet with a vast dataset from the Materials Project (dubbed
`SevenNet-0') and assess its performance on generating amorphous Si$_3$N$_4$
containing more than 100,000 atoms. By developing scalable GNN-IPs, this work
aims to bridge the gap between advanced machine learning models and large-scale
MD simulations, offering researchers a powerful tool to explore complex
material systems with high accuracy and efficiency.
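The weak- and strong-scaling figures quoted in this abstract follow the standard parallel-efficiency definitions. A minimal sketch with synthetic timings (illustrative numbers, not SevenNet measurements):

```python
def strong_scaling_efficiency(t1: float, tn: float, n: int) -> float:
    # Fixed total problem size: ideal time on n GPUs is t1 / n.
    return t1 / (n * tn)

def weak_scaling_efficiency(t1: float, tn: float) -> float:
    # Problem size grows with n: ideal time on n GPUs stays t1.
    return t1 / tn

# Synthetic example: 100 s on 1 GPU, 3.5 s on 32 GPUs (strong scaling)
eff = strong_scaling_efficiency(100.0, 3.5, 32)  # ~0.89, i.e. ~89%
```

Strong-scaling efficiency drops whenever per-GPU work becomes too small to keep the device busy, which is the regime of lightweight models or small atom counts described above.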
|
[{'version': 'v1', 'created': 'Tue, 6 Feb 2024 08:10:02 GMT'}]
|
2024-06-13
|
Zilong Yuan, Zhiming Xu, He Li, Xinle Cheng, Honggeng Tao, Zechen
Tang, Zhiyuan Zhou, Wenhui Duan, Yong Xu
|
Equivariant Neural Network Force Fields for Magnetic Materials
| null | null | null |
cond-mat.mtrl-sci
|
Neural network force fields have significantly advanced ab initio atomistic
simulations across diverse fields. However, their application in the realm of
magnetic materials is still in its early stage due to challenges posed by the
subtle magnetic energy landscape and the difficulty of obtaining training data.
Here we introduce a data-efficient neural network architecture to represent
density functional theory total energy, atomic forces, and magnetic forces as
functions of atomic and magnetic structures. Our approach incorporates the
principle of equivariance under the three-dimensional Euclidean group into the
neural network model. Through systematic experiments on various systems,
including monolayer magnets, curved nanotube magnets, and moiré-twisted
bilayer magnets of $\text{CrI}_{3}$, we showcase the method's high efficiency
and accuracy, as well as exceptional generalization ability. The work creates
opportunities for exploring magnetic phenomena in large-scale materials
systems.
|
[{'version': 'v1', 'created': 'Wed, 7 Feb 2024 13:59:47 GMT'}]
|
2024-02-08
|
Elena Stellino, Beatrice D'Alò, Elena Blundo, Paolo Postorino,
Antonio Polimeni
|
Fine-Tuning of the Excitonic Response in Monolayer WS2 Domes via Coupled
Pressure and Strain Variation
| null | null | null |
cond-mat.mtrl-sci
|
We present a spectroscopic investigation into the vibrational and
optoelectronic properties of WS2 domes in the 0-0.65 GPa range. The pressure
evolution of the system morphology, deduced from the combined analysis of Raman
and photoluminescence spectra, revealed a significant variation in the dome's
aspect ratio. The modification of the dome shape caused major changes in the
mechanical properties of the system resulting in a sizable increase of the
out-of-plane compressive strain while keeping the in-plane tensile strain
unchanged. The variation of the strain gradients drives a non-linear behavior
in both the exciton energy and radiative recombination intensity, interpreted
as the consequence of a hybridization mechanism between the electronic states
of two distinct minima in the conduction band. Our results indicate that
pressure and strain can be efficiently combined in low dimensional systems with
unconventional morphology to obtain modulations of the electronic band
structure not achievable in planar crystals.
|
[{'version': 'v1', 'created': 'Wed, 7 Feb 2024 14:09:44 GMT'}]
|
2024-02-08
|
Miao Liu, Sheng Meng
|
Recent Breakthrough in AI-Driven Materials Science: Tech Giants
Introduce Groundbreaking Models
|
Mater. Futures 3 027501 (2024)
|
10.1088/2752-5724/ad2e0c
| null |
cond-mat.mtrl-sci
|
A close look at Google's GNoME inorganic materials dataset [Nature 624, 80
(2023)], and 11 things you would like to know.
|
[{'version': 'v1', 'created': 'Thu, 8 Feb 2024 16:39:26 GMT'}]
|
2024-03-13
|
Francis G. VanGessel, Efrem Perry, Salil Mohan, Oliver M. Barham, Mark
Cavolowsky
|
NLP for Knowledge Discovery and Information Extraction from Energetics
Corpora
| null | null | null |
cs.CL cond-mat.mtrl-sci
|
We present a demonstration of the utility of NLP for aiding research into
energetic materials and associated systems. The NLP method enables machine
understanding of textual data, offering an automated route to knowledge
discovery and information extraction from energetics text. We apply three
established unsupervised NLP models: Latent Dirichlet Allocation, Word2Vec, and
the Transformer to a large curated dataset of energetics-related scientific
articles. We demonstrate that each NLP algorithm is capable of identifying
energetic topics and concepts, generating a language model which aligns with
Subject Matter Expert knowledge. Furthermore, we present a document
classification pipeline for energetics text. Our classification pipeline
achieves 59-76% accuracy depending on the NLP model used, with the highest
performing Transformer model rivaling inter-annotator agreement metrics. The
NLP approaches studied in this work can identify concepts germane to energetics
and therefore hold promise as a tool for accelerating energetics research
efforts and energetics material development.
|
[{'version': 'v1', 'created': 'Sat, 10 Feb 2024 14:43:08 GMT'}]
|
2024-02-13
|
Xiang Huang, C. Y. Zhao, Hong Wang, Shenghong Ju
|
AI-assisted inverse design of sequence-ordered high intrinsic thermal
conductivity polymers
|
Materials Today Physics 44, 101438, 2024
|
10.1016/j.mtphys.2024.101438
| null |
cond-mat.soft cond-mat.mtrl-sci physics.app-ph physics.comp-ph
|
Artificial intelligence (AI) promotes the polymer design paradigm from a
traditional trial-and-error approach to a data-driven style. Achieving high
thermal conductivity (TC) for intrinsic polymers is urgent because of their
importance in the thermal management of many industrial applications such as
microelectronic devices and integrated circuits. In this work, we have proposed
a robust AI-assisted workflow for the inverse design of high TC polymers. By
using 1144 polymers with known computational TCs, we construct a surrogate deep
neural network model for TC prediction and extract a polymer-unit library with
32 sequences. Two state-of-the-art multi-objective optimization algorithms of
unified non-dominated sorting genetic algorithm III (U-NSGA-III) and q-noisy
expected hypervolume improvement (qNEHVI) are employed for sequence-ordered
polymer design with both high TC and synthetic possibility. For triblock
polymer design, the result indicates that qNEHVI is capable of exploring a
diversity of optimal polymers at the Pareto front, but the uncertainty in
Quasi-Monte Carlo sampling makes the trials costly. The performance of
U-NSGA-III is affected by the initial random structures and usually falls into
a locally optimal solution, but it takes fewer attempts with lower costs. 20
parallel U-NSGA-III runs are conducted to design the pentablock polymers with
high TC, and half of the candidates among 1921 generated polymers achieve the
targets (TC > 0.4 W/(mK) and SA < 3.0). Ultimately, we check the TC of 50
promising polymers through molecular dynamics simulations and reveal the
intrinsic connections between microstructures and TCs. Our developed
AI-assisted inverse design approach for polymers is flexible and universal, and
can be extended to the design of polymers with other target properties.
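The multi-objective search above seeks polymers on the Pareto front of high TC and low synthetic-accessibility score (SA). An illustrative non-dominated filter (a stand-in for the full U-NSGA-III / qNEHVI machinery, with synthetic (TC, SA) values):

```python
def pareto_front(candidates):
    # Return non-dominated (TC, SA) pairs: maximize TC, minimize SA.
    # A point is dominated if another point is at least as good on both
    # objectives and strictly better on at least one.
    front = []
    for i, (tc_i, sa_i) in enumerate(candidates):
        dominated = any(
            tc_j >= tc_i and sa_j <= sa_i and (tc_j > tc_i or sa_j < sa_i)
            for j, (tc_j, sa_j) in enumerate(candidates) if j != i)
        if not dominated:
            front.append((tc_i, sa_i))
    return front

cands = [(0.45, 2.5), (0.40, 2.0), (0.30, 2.8), (0.50, 3.5)]
front = pareto_front(cands)  # (0.30, 2.8) is dominated by (0.45, 2.5)
```

Real multi-objective optimizers maintain such a front across generations rather than filtering a fixed candidate list.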
|
[{'version': 'v1', 'created': 'Sun, 18 Feb 2024 14:34:57 GMT'}]
|
2024-05-01
|
Binh Duong Nguyen, Johannes Steiner, Peter Wellmann, Stefan Sandfeld
|
Combining unsupervised and supervised learning in microscopy enables
defect analysis of a full 4H-SiC wafer
| null | null | null |
cs.CV cond-mat.mtrl-sci cs.LG
|
Detecting and analyzing various defect types in semiconductor materials is an
important prerequisite for understanding the underlying mechanisms as well as
tailoring the production processes. Analysis of microscopy images that reveal
defects typically requires image analysis tasks such as segmentation and object
detection. With the permanently increasing amount of data that is produced by
experiments, handling these tasks manually becomes increasingly infeasible. In
this work, we combine various image analysis and data mining techniques for
creating a robust and accurate, automated image analysis pipeline. This allows
for extracting the type and position of all defects in a microscopy image of a
KOH-etched 4H-SiC wafer that was stitched together from approximately 40,000
individual images.
|
[{'version': 'v1', 'created': 'Tue, 20 Feb 2024 20:04:23 GMT'}]
|
2024-02-22
|
Bashir Kazimi and Karina Ruzaeva and Stefan Sandfeld
|
Self-Supervised Learning with Generative Adversarial Networks for
Electron Microscopy
| null | null | null |
cs.CV cond-mat.mtrl-sci cs.AI cs.LG
|
In this work, we explore the potential of self-supervised learning with
Generative Adversarial Networks (GANs) for electron microscopy datasets. We
show how self-supervised pretraining facilitates efficient fine-tuning for a
spectrum of downstream tasks, including semantic segmentation, denoising, noise
& background removal, and super-resolution. Experimentation with varying model
complexities and receptive field sizes reveals the remarkable phenomenon that
fine-tuned models of lower complexity consistently outperform more complex
models with random weight initialization. We demonstrate the versatility of
self-supervised pretraining across various downstream tasks in the context of
electron microscopy, allowing faster convergence and better performance. We
conclude that self-supervised pretraining serves as a powerful catalyst, being
especially advantageous when limited annotated data are available and efficient
scaling of computational cost is important.
|
[{'version': 'v1', 'created': 'Wed, 28 Feb 2024 12:25:01 GMT'}, {'version': 'v2', 'created': 'Thu, 18 Jul 2024 09:58:03 GMT'}]
|
2024-07-19
|
Dongchen Huang, Junde Liu, Tian Qian, and Hongming Weng
|
Training-set-free two-stage deep learning for spectroscopic data
de-noising
| null | null | null |
cond-mat.mtrl-sci cs.LG physics.data-an
|
De-noising is a prominent step in the spectra post-processing procedure.
Previous machine learning-based methods are fast but mostly based on supervised
learning and require a training set that is typically expensive to obtain in
real experimental measurements. Unsupervised learning-based algorithms are slow and
require many iterations to achieve convergence. Here, we bridge this gap by
proposing a training-set-free two-stage deep learning method. We show that the
fuzzy fixed input in previous methods can be improved by introducing an
adaptive prior. Combined with more advanced optimization techniques, our
approach can achieve five times acceleration compared to previous work.
Theoretically, we study the landscape of a corresponding non-convex linear
problem, and our results indicate that this problem has benign geometry for
first-order algorithms to converge.
|
[{'version': 'v1', 'created': 'Thu, 29 Feb 2024 03:31:41 GMT'}, {'version': 'v2', 'created': 'Tue, 5 Mar 2024 12:39:23 GMT'}]
|
2024-03-06
|
Fankai Xie, Tenglong Lu, Sheng Meng, Miao Liu
|
GPTFF: A high-accuracy out-of-the-box universal AI force field for
arbitrary inorganic materials
|
Science Bulletin, 10.1016/j.scib.2024.08.039
|
10.1016/j.scib.2024.08.039
| null |
cond-mat.mtrl-sci
|
This study introduces a novel AI force field, namely graph-based pre-trained
transformer force field (GPTFF), which can simulate arbitrary inorganic systems
with good precision and generalizability. Harnessing a large trove of the data
and the attention mechanism of transformer algorithms, the model can accurately
predict energy, atomic forces, and stress with Mean Absolute Error (MAE) values
of 32 meV/atom, 71 meV/Å, and 0.365 GPa, respectively. The dataset used to
train the model includes 37.8 million single-point energies, 11.7 billion force
pairs, and 340.2 million stresses. We also demonstrated that GPTFF can be
universally used to simulate various physical systems, such as crystal
structure optimization, phase transition simulations, and mass transport.
|
[{'version': 'v1', 'created': 'Thu, 29 Feb 2024 16:30:07 GMT'}]
|
2024-09-04
|
Vahe Gharakhanyan, Luke J. Wirth, Jose A. Garrido Torres, Ethan
Eisenberg, Ting Wang, Dallas R. Trinkle, Snigdhansu Chatterjee and Alexander
Urban
|
Discovering Melting Temperature Prediction Models of Inorganic Solids by
Combining Supervised and Unsupervised Learning
| null | null | null |
cond-mat.mtrl-sci
|
The melting temperature is important for materials design because of its
relationship with thermal stability, synthesis, and processing conditions.
Current empirical and computational melting point estimation techniques are
limited in scope, computational feasibility, or interpretability. We report the
development of a machine learning methodology for predicting melting
temperatures of binary ionic solid materials. We evaluated different
machine-learning models trained on a data set of the melting points of 476
non-metallic crystalline binary compounds, using materials embeddings
constructed from elemental properties and density-functional theory
calculations as model inputs. A direct supervised-learning approach yields a
mean absolute error of around 180 K but suffers from low interpretability. We
find that the fidelity of predictions can further be improved by introducing an
additional unsupervised-learning step that first classifies the materials
before the melting-point regression. Not only does this two-step model exhibit
improved accuracy, but the approach also provides a level of interpretability
with insights into feature importance and different types of melting that
depend on the specific atomic bonding inside a material. Motivated by this
finding, we used a symbolic learning approach to find interpretable physical
models for the melting temperature, which recovered the best-performing
features from both prior models and provided additional interpretability.
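The two-step classify-then-regress idea can be sketched in a few lines. Here a simple feature threshold stands in for the paper's unsupervised classification step, followed by a separate 1-D least-squares fit per class; the data are synthetic, not the 476-compound melting-point set:

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b (closed form, one feature).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def two_step_predict(train, x_new, threshold):
    # Step 1: classify materials (here, a threshold split stands in for
    # the unsupervised clustering step).
    # Step 2: fit and apply a separate regression for each class.
    lo = [(x, y) for x, y in train if x < threshold]
    hi = [(x, y) for x, y in train if x >= threshold]
    cls = lo if x_new < threshold else hi
    a, b = fit_line(*zip(*cls))
    return a * x_new + b

# Two synthetic "melting regimes" with different trends
train = [(1, 100), (2, 200), (3, 300), (10, 50), (11, 60), (12, 70)]
```

Fitting per class lets each regime keep its own slope, which is the interpretability gain the two-step model provides over a single global regression.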
|
[{'version': 'v1', 'created': 'Tue, 5 Mar 2024 16:23:37 GMT'}]
|
2024-03-06
|
Yingjie Zhao and Hongbo Zhou and Zian Zhang and Zhenxing Bo and Baoan
Sun and Minqiang Jiang and Zhiping Xu
|
Discovering High-Strength Alloys via Physics-Transfer Learning
| null | null | null |
cond-mat.mtrl-sci cs.LG physics.comp-ph
|
Predicting the strength of materials requires considering various length and
time scales, striking a balance between accuracy and efficiency. Peierls stress
measures material strength by evaluating dislocation resistance to plastic
flow, reliant on elastic lattice responses and crystal slip energy landscape.
Computational challenges due to the non-local and non-equilibrium nature of
dislocations prohibit Peierls stress evaluation from state-of-the-art material
databases. We propose a data-driven framework that leverages neural networks
trained on force field simulations to understand crystal plasticity physics,
predicting Peierls stress from material parameters derived via density
functional theory computations, which are otherwise computationally intensive
for direct dislocation modeling. This physics-transfer approach successfully
screens the strength of metallic alloys from a limited number of single-point
calculations with chemical accuracy. Guided by these predictions, we fabricate
high-strength binary alloys previously unexplored, utilizing high-throughput
ion beam deposition techniques. The framework extends to problems facing the
accuracy-performance dilemma in general by harnessing the hierarchy of physics
of multiscale models in materials sciences.
|
[{'version': 'v1', 'created': 'Tue, 12 Mar 2024 11:05:05 GMT'}, {'version': 'v2', 'created': 'Sun, 26 Jan 2025 07:32:07 GMT'}]
|
2025-01-28
|
Matteo Masto, Vincent Favre-Nicolin, Steven Leake, Tobias Schülli,
Marie-Ingrid Richard, Ewen Bellec
|
Patching-based Deep Learning model for the Inpainting of Bragg Coherent
Diffraction patterns affected by detectors' gaps
| null | null | null |
cond-mat.mtrl-sci
|
We propose a deep learning algorithm for the inpainting of Bragg Coherent
Diffraction Imaging (BCDI) patterns affected by detector gaps. These regions of
missing intensity can compromise the accuracy of reconstruction algorithms,
inducing artifacts in the final result. It is thus desirable to restore the
intensity in these regions in order to ensure more reliable reconstructions.
The key aspect of our method lies in the choice of training the neural network
with cropped sections of both experimental diffraction data and simulated data
and subsequently patching the predictions generated by the model along the gap,
thus completing the full diffraction peak. This provides us with more
experimental training data and allows for a faster model training due to the
limited size, while the neural network can be applied to arbitrarily larger
BCDI datasets. Moreover, our method not only broadens the scope of application
but also ensures the preservation of data integrity and reliability in the face
of challenging experimental conditions.
|
[{'version': 'v1', 'created': 'Wed, 13 Mar 2024 15:03:13 GMT'}]
|
2024-03-14
|
Zhiqiang Zhao, Wanlin Guo, and Zhuhua Zhang
|
A general-purpose neural network potential for Ti-Al-Nb alloys towards
large-scale molecular dynamics with ab initio accuracy
| null | null | null |
cond-mat.mtrl-sci physics.comp-ph
|
High Nb-containing TiAl alloys exhibit exceptional high-temperature strength
and room-temperature ductility, making them widely used in hot-section
components of automotive and aerospace engines. However, the lack of accurate
interatomic interaction potentials for large-scale modeling severely hampers a
comprehensive understanding of the failure mechanism of Ti-Al-Nb alloys and the
development of strategies to enhance the mechanical properties. Here, we
develop a general-purpose machine-learned potential (MLP) for the Ti-Al-Nb
ternary system by combining the neural evolution potentials framework with an
active learning scheme. The developed MLP, trained on extensive
first-principles datasets, demonstrates remarkable accuracy in predicting
various lattice and defect properties, as well as high-temperature
characteristics such as thermal expansion and melting point for TiAl systems.
Notably, this potential can effectively describe the key effect of Nb doping on
stacking fault energies and formation energies. Of practical importance is that
our MLP enables large-scale molecular dynamics simulations involving tens of
millions of atoms with ab initio accuracy, achieving an outstanding balance
between computational speed and accuracy. These results pave the way for
studying micro-mechanical behaviors in TiAl lamellar structures and developing
high-performance TiAl alloys towards applications at elevated temperatures.
|
[{'version': 'v1', 'created': 'Thu, 14 Mar 2024 16:11:14 GMT'}]
|
2024-03-15
|
Vincent Blümer, Celal Soyarslan, Ton van den Boogaard
|
Generative reconstruction of 3D volume elements for Ti-6Al-4V
basketweave microstructure by optimization of CNN-based microstructural
descriptors
| null | null | null |
cond-mat.mtrl-sci
|
We present a methodology for the generative reconstruction of 3D Volume
Elements (VE) for numerical multiscale analysis of Ti-6Al-4V processed by
Additive Manufacturing (AM). The basketweave morphology, which is typically
dominant in AM-processed Ti-6Al-4V, is analyzed in conventional Electron
Backscatter Diffraction (EBSD) micrographs. Prior β-grain reconstruction is
performed to obtain the out-of-plane orientation of the observed grains,
leveraging the Burgers orientation relationship. Convolutional Neural Network
(CNN)-based microstructure descriptors are extracted from the 2D data and used for
cross-section-based optimization of pixel values on orthogonal planes in 3D,
using the Microstructure Characterization and Reconstruction (MCR)
implementation MCRpy [16]. In order to utilize MCRpy, which performs best for
binary systems, the basketweave microstructure, which consists of up to twelve
distinct grain orientations, is decomposed into several separate two-phase
systems. Our reconstructions capture key characteristics of the titanium
basketweave morphology and show qualitative resemblance to experimentally
obtained 3D data. The preservation of volume fraction during assembly of the
reconstruction remains an unaddressed challenge at this stage.
|
[{'version': 'v1', 'created': 'Thu, 14 Mar 2024 17:50:24 GMT'}]
|
2024-03-15
|
Ryo Murakami, Taisuke T. Sasaki, Hideki Yoshikawa, Yoshitaka
Matsushita, Keitaro Sodeyama, Tadakatsu Ohkubo, Hiroshi Shinotsuka, Kenji
Nagata
|
Rapid and Robust construction of an ML-ready peak feature table from
X-ray diffraction data using Bayesian peak-top fitting
| null | null | null |
cond-mat.mtrl-sci stat.AP
|
To advance the development of materials through data-driven scientific
methods, appropriate methods for building machine learning (ML)-ready feature
tables from measured and computed data must be established. In materials
development, X-ray diffraction (XRD) is an effective technique for analysing
crystal structures and other microstructural features that carry information
that can explain material properties. Therefore, the fully automated extraction
of peak features from XRD data without the bias of an analyst is a significant
challenge. This study aimed to establish an efficient and robust approach for
constructing peak feature tables that follow ML standards (ML-ready) from XRD
data. We tackle peak feature extraction in the situation where only the peak
function profile is known a priori, without knowledge of the measurement
material or crystal structure factor. We utilized Bayesian estimation to
extract peak features from XRD data and subsequently performed Bayesian
regression analysis with feature selection to predict the material property.
The proposed method focused only on the tops of peaks within localized regions
of interest (ROIs) and extracted peak features quickly and accurately. This
process facilitated the rapid extraction of major peak features from the XRD
data and the construction of an ML-ready feature table. We then applied
Bayesian linear regression to the maximum energy product $(BH)_{max}$, using
the extracted peak features as the explanatory variable. The outcomes yielded
reasonable and robust regression results. Thus, the findings of this study
indicated that the *004* peak height and area were important features for
predicting $(BH)_{max}$.
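The peak-top idea can be illustrated without the Bayesian machinery: within a local ROI, a parabolic (quadratic) fit through the three samples around a local maximum is a common, simple stand-in for refining peak position and height (illustrative sketch, not the paper's estimator):

```python
def peak_top(y, i):
    # Quadratic fit through the three samples (i-1, i, i+1) around a
    # local maximum; returns the refined (sub-sample) peak position
    # and the fitted peak height.
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    denom = y0 - 2 * y1 + y2          # 2 * curvature of the parabola
    offset = 0.5 * (y0 - y2) / denom  # vertex shift relative to i
    height = y1 - 0.25 * (y0 - y2) * offset
    return i + offset, height

# Asymmetric three-point peak: the true vertex lies right of sample 1
pos, height = peak_top([1.0, 4.0, 3.0], 1)  # → (1.25, 4.125)
```

Because only the three samples at the peak top are used, the fit stays fast and insensitive to the peak's tails, matching the ROI-local spirit of the approach above.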
|
[{'version': 'v1', 'created': 'Wed, 7 Feb 2024 01:24:39 GMT'}]
|
2024-03-18
|
Xiaoshan Luo, Zhenyu Wang, Pengyue Gao, Jian Lv, Yanchao Wang,
Changfeng Chen and Yanming Ma
|
Deep learning generative model for crystal structure prediction
|
npj Comput. Mater., 10, 254 (2024)
|
10.1038/s41524-024-01443-y
| null |
cond-mat.mtrl-sci physics.comp-ph
|
Recent advances in deep learning generative models (GMs) have created high
capabilities in accessing and assessing complex high-dimensional data, allowing
superior efficiency in navigating vast material configuration space in search
of viable structures. Coupling such capabilities with physically significant
data to construct trained models for materials discovery is crucial to moving
this emerging field forward. Here, we present a universal GM for crystal
structure prediction (CSP) via a conditional crystal diffusion variational
autoencoder (Cond-CDVAE) approach, which is tailored to allow user-defined
material and physical parameters such as composition and pressure. This model
is trained on an expansive dataset containing over 670,000 local minimum
structures, including a rich spectrum of high-pressure structures, along with
ambient-pressure structures in the Materials Project database. We demonstrate that
the Cond-CDVAE model can generate physically plausible structures with high
fidelity under diverse pressure conditions without necessitating local
optimization, accurately predicting 59.3% of the 3,547 unseen ambient-pressure
experimental structures within 800 structure samplings, with the accuracy rate
climbing to 83.2% for structures comprising fewer than 20 atoms per unit cell.
These results meet or exceed those achieved via conventional CSP methods based
on global optimization. The present findings showcase substantial potential of
GMs in the realm of CSP.
|
[{'version': 'v1', 'created': 'Sat, 16 Mar 2024 07:54:19 GMT'}, {'version': 'v2', 'created': 'Sat, 10 Aug 2024 07:02:27 GMT'}]
|
2024-11-13
|
An Chen, Zhilong Wang, Karl Luigi Loza Vidaurre, Yanqiang Han, Simin
Ye, Kehao Tao, Shiwei Wang, Jing Gao, and Jinjin Li
|
Knowledge-Reuse Transfer Learning Methods in Molecular and Material
Science
| null | null | null |
cond-mat.mtrl-sci cs.LG physics.chem-ph
|
Molecules and materials are the foundation for the development of modern
advanced industries such as energy storage systems and semiconductor devices.
However, traditional trial-and-error methods or theoretical calculations are
highly resource-intensive, and extremely long R&D (Research and Development)
periods cannot meet the urgent need for molecules/materials in industrial
development. Machine learning (ML) methods based on big data are expected to
break this dilemma. However, the difficulty in constructing large-scale
datasets of new molecules/materials due to the high cost of data acquisition
and annotation limits the development of machine learning. The application of
transfer learning lowers the data requirements for model training, which makes
transfer learning stand out in research addressing data quality issues. In
this review, we summarize recent advances in transfer learning related to
molecular and materials science. We focus on the application of transfer
learning methods for the discovery of advanced molecules/materials,
particularly, the construction of transfer learning frameworks for different
systems, and how transfer learning can enhance the performance of models. In
addition, the challenges of transfer learning are also discussed.
|
[{'version': 'v1', 'created': 'Sat, 2 Mar 2024 12:41:25 GMT'}]
|
2024-03-21
|
Yubo Qi, Weiyi Gong, Qimin Yan
|
Bridging deep learning force fields and electronic structures with a
physics-informed approach
| null | null | null |
cond-mat.mtrl-sci
|
This work presents a physics-informed neural network approach bridging
deep-learning force field and electronic structure simulations, illustrated
through twisted two-dimensional large-scale material systems. The deep
potential molecular dynamics model is adopted as the backbone, and electronic
structure simulation is integrated. Using Wannier functions as the basis, we
categorize Wannier Hamiltonian elements based on physical principles to
incorporate diverse information from a deep-learning force field model. This
information-sharing mechanism streamlines the architecture of our
multifunctional model, enhancing its efficiency and effectiveness. Utilizing
Wannier functions as the basis lays the groundwork for predicting more physical
quantities. This approach serves as a powerful tool to explore both the
structural and electronic properties of large-scale systems characterized by
low periodicities. By endowing an existing well-developed machine-learning
force field with electronic structure simulation capabilities, the study marks
a significant advancement in developing multimodal machine-learning-based
computational methods that can achieve multiple functionalities traditionally
exclusive to first-principles calculations.
|
[{'version': 'v1', 'created': 'Wed, 20 Mar 2024 15:33:46 GMT'}, {'version': 'v2', 'created': 'Mon, 1 Apr 2024 03:28:47 GMT'}]
|
2024-04-02
|
Orlando A. Mendible, Jonathan K. Whitmer, and Yamil J. Col\'on
|
Considerations in the use of ML interaction potentials for free energy
calculations
| null |
10.1063/5.0252043
| null |
physics.chem-ph cond-mat.mtrl-sci cs.LG
|
Machine learning force fields (MLFFs) promise to accurately describe the
potential energy surface of molecules at the ab initio level of theory with
improved computational efficiency. Within MLFFs, equivariant graph neural
networks (EQNNs) have shown great promise in accuracy and performance and are
the focus of this work. The capability of EQNNs to recover free energy surfaces
(FES) remains to be thoroughly investigated. In this work, we investigate the
impact of collective variables (CVs) distribution within the training data on
the accuracy of EQNNs predicting the FES of butane and alanine dipeptide (ADP).
A generalizable workflow is presented in which training configurations are
generated with classical molecular dynamics simulations, and energies and
forces are obtained with ab initio calculations. We evaluate how bond and angle
constraints in the training data influence the accuracy of EQNN force fields in
reproducing the FES of the molecules at both classical and ab initio levels of
theory. Results indicate that the model's accuracy is unaffected by the
distribution of sampled CVs during training, given that the training data
includes configurations from characteristic regions of the system's FES.
However, when the training data is obtained from classical simulations, the
EQNN struggles to extrapolate the free energy for configurations with high free
energy. In contrast, models trained with the same configurations on ab initio
data show improved extrapolation accuracy. The findings underscore the
difficulties in creating a comprehensive training dataset for EQNNs to predict
FESs and highlight the importance of prior knowledge of the system's FES.
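The FES recovery at the heart of this study follows from the Boltzmann relation F(s) = -kT ln P(s) along a collective variable. A minimal numpy sketch of that step, using an assumed toy periodic double-well and kT = 1 in place of the paper's butane/ADP trajectories:

```python
import numpy as np

# Histogram-based FES recovery, F(s) = -kT * ln P(s).  The periodic
# double-well "free energy" and kT below are illustrative assumptions,
# standing in for CV samples from a molecular dynamics trajectory.
kT = 1.0
grid = np.linspace(-np.pi, np.pi, 361)
F_true = 2.0 * np.cos(2.0 * grid)                 # toy dihedral FES

# Draw CV samples from the corresponding Boltzmann distribution.
p = np.exp(-F_true / kT)
p /= p.sum()
rng = np.random.default_rng(0)
samples = rng.choice(grid, size=200_000, p=p)

# Estimate P(s) by histogram, then F(s) up to an additive constant.
hist, edges = np.histogram(samples, bins=72, range=(-np.pi, np.pi), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
F_est = -kT * np.log(hist)
F_est -= F_est.min()

F_ref = 2.0 * np.cos(2.0 * centers)
F_ref -= F_ref.min()
max_err = float(np.max(np.abs(F_est - F_ref)))    # small if all basins are sampled
```

Consistent with the abstract's conclusion, the estimate degrades wherever sampling misses characteristic regions of the FES: an empty bin gives an infinite free energy.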
|
[{'version': 'v1', 'created': 'Wed, 20 Mar 2024 19:49:21 GMT'}, {'version': 'v2', 'created': 'Tue, 13 May 2025 13:22:54 GMT'}, {'version': 'v3', 'created': 'Wed, 14 May 2025 14:50:01 GMT'}]
|
2025-05-15
|
Brian H. Lee, James P. Larentzos, John K. Brennan, and Alejandro
Strachan
|
Graph neural network coarse-grain force field for the molecular crystal
RDX
| null | null | null |
cond-mat.mes-hall cond-mat.mtrl-sci
|
Condensed-phase molecular systems organize into a wide range of distinct
molecular configurations, including amorphous melts and glasses as well as
crystals that often exhibit polymorphism, all originating from their intricate
intra- and intermolecular forces. While accurate coarse-grain (CG) models for these
materials are critical to understand phenomena beyond the reach of all-atom
simulations, current models cannot capture the diversity of molecular
structures. We introduce a generally applicable approach to develop CG force
fields for molecular crystals, combining graph neural networks (GNNs) with data
from all-atom simulations, and apply it to the high-energy-density material
RDX. We address the challenge of expanding the training data with relevant
configurations via an iterative procedure that performs CG molecular dynamics
of processes of interest and reconstructs the atomistic configurations using a
pre-trained neural network decoder. The multi-site CG model uses a GNN
architecture constructed to satisfy translational invariance and rotational
covariance for forces. The resulting model captures both crystalline and
amorphous states for a wide range of temperatures and densities.
|
[{'version': 'v1', 'created': 'Fri, 22 Mar 2024 15:06:06 GMT'}]
|
2024-03-25
|
Zhendong Cao, Xiaoshan Luo, Jian Lv and Lei Wang
|
Space Group Informed Transformer for Crystalline Materials Generation
| null | null | null |
cond-mat.mtrl-sci cs.LG physics.comp-ph
|
We introduce CrystalFormer, a transformer-based autoregressive model
specifically designed for space group-controlled generation of crystalline
materials. The incorporation of space group symmetry significantly simplifies
the crystal space, which is crucial for data and compute efficient generative
modeling of crystalline materials. Leveraging the prominent discrete and
sequential nature of the Wyckoff positions, CrystalFormer learns to generate
crystals by directly predicting the species and locations of
symmetry-inequivalent atoms in the unit cell. We demonstrate the advantages of
CrystalFormer in standard tasks such as symmetric structure initialization and
element substitution compared to conventional methods implemented in popular
crystal structure prediction software. Moreover, we showcase the application of
CrystalFormer to property-guided materials design in a plug-and-play manner.
Our analysis shows that CrystalFormer ingests sensible solid-state chemistry
knowledge and heuristics by compressing the material dataset, thus enabling
systematic exploration of crystalline materials. The simplicity, generality,
and flexibility of CrystalFormer position it as a promising architecture to be
the foundational model of the entire crystalline materials space, heralding a
new era in materials modeling and discovery.
|
[{'version': 'v1', 'created': 'Sat, 23 Mar 2024 06:01:45 GMT'}, {'version': 'v2', 'created': 'Fri, 16 Aug 2024 02:57:35 GMT'}]
|
2024-08-19
|
Xiang Huang and Shenghong Ju
|
Tutorial: AI-assisted exploration and active design of polymers with
high intrinsic thermal conductivity
|
Journal of Applied Physics 135, 171101, 2024
|
10.1063/5.0201522
| null |
cond-mat.soft cond-mat.mtrl-sci physics.app-ph physics.chem-ph physics.comp-ph
|
Designing polymers with high intrinsic thermal conductivity (TC) is
critically important for the thermal management of organic electronics and
photonics. However, this is a challenging task owing to the diversity of the
chemical space and the barriers to advanced synthetic
experiments/characterization techniques for polymers. In this Tutorial, the
fundamentals and implementation of combining classical molecular dynamics
simulation and machine learning (ML) for the development of polymers with high
TC are comprehensively introduced. We begin by describing the core components
of a universal ML framework, involving polymer datasets, property calculators,
feature engineering and informatics algorithms. Then, the process of
constructing interpretable regression algorithms for TC prediction is
introduced, aiming to extract the underlying relationships between
microstructures and TCs for polymers. We also explore the design of
sequence-ordered polymers with high TC using lightweight and mainstream active
learning algorithms. Lastly, we conclude by addressing the current limitations
and suggesting potential avenues for future research on this topic.
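The active-learning component described in this Tutorial can be illustrated with a toy loop: rank unlabelled candidates by the disagreement of a bootstrap ensemble of cheap surrogates and label the most uncertain one. The 1-D "property" function, pool size, and polynomial surrogate below are all made-up assumptions, not the Tutorial's actual framework:

```python
import numpy as np

# Uncertainty-driven active learning: label the candidate where an
# ensemble of surrogate models disagrees most.  The hidden "property"
# f(x) stands in for an expensive thermal-conductivity calculation.
rng = np.random.default_rng(5)
f = lambda x: np.sin(3.0 * x) + 0.5 * x
pool = np.linspace(0.0, 3.0, 300)                  # candidate space
labelled = list(rng.choice(300, size=8, replace=False))

for step in range(10):
    x, y = pool[labelled], f(pool[labelled])
    # Bootstrap ensemble of cubic fits as a cheap uncertainty proxy.
    preds = []
    for _ in range(20):
        idx = rng.choice(len(x), size=len(x), replace=True)
        preds.append(np.polyval(np.polyfit(x[idx], y[idx], 3), pool))
    std = np.std(preds, axis=0)
    std[labelled] = -1.0                           # never re-pick known points
    labelled.append(int(np.argmax(std)))           # acquire most uncertain point

print(len(labelled))                               # 8 seeds + 10 acquisitions = 18
```

In a real TC campaign the polynomial ensemble would be replaced by the interpretable regressors the Tutorial constructs, and f(x) by a molecular dynamics property calculator.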
|
[{'version': 'v1', 'created': 'Sat, 23 Mar 2024 16:52:56 GMT'}]
|
2024-05-09
|
Yuqi Song, Rongzhi Dong, Lai Wei, Qin Li, Jianjun Hu
|
AlphaCrystal-II: Distance matrix based crystal structure prediction
using deep learning
| null | null | null |
cond-mat.mtrl-sci cs.LG
|
Computational prediction of stable crystal structures has a profound impact
on the large-scale discovery of novel functional materials. However, predicting
the crystal structure solely from a material's composition or formula is a
promising yet challenging task, as traditional ab initio crystal structure
prediction (CSP) methods rely on time-consuming global searches and
first-principles free energy calculations. Inspired by the recent success of
deep learning approaches in protein structure prediction, which utilize
pairwise amino acid interactions to describe 3D structures, we present
AlphaCrystal-II, a novel knowledge-based solution that exploits the abundant
inter-atomic interaction patterns found in existing known crystal structures.
AlphaCrystal-II predicts the atomic distance matrix of a target crystal
material and employs this matrix to reconstruct its 3D crystal structure. By
leveraging the wealth of inter-atomic relationships of known crystal
structures, our approach demonstrates remarkable effectiveness and reliability
in structure prediction through comprehensive experiments. This work highlights
the potential of data-driven methods in accelerating the discovery and design
of new materials with tailored properties.
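The reconstruction step such an approach relies on, going from a pairwise distance matrix back to 3D coordinates, can be sketched with classical multidimensional scaling. This is a generic stand-in on synthetic positions, not AlphaCrystal-II's actual decoder:

```python
import numpy as np

# Classical MDS: recover coordinates (up to a rigid transform) from an
# exact Euclidean distance matrix.  Atomic positions are synthetic.
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))                     # hypothetical atomic positions
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# Double-center the squared distances: B = -0.5 * J D^2 J.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J

# Coordinates from the top-3 eigenpairs of B.
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:3]
Y = V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Only distances can be compared directly, since the reconstruction is
# unique only up to rotation, reflection, and translation.
D_rec = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
print(np.allclose(D, D_rec, atol=1e-8))         # True for exact distances
```

A predicted (noisy) distance matrix would instead require an optimization-based refinement on top of such an initial embedding.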
|
[{'version': 'v1', 'created': 'Sun, 7 Apr 2024 05:17:43 GMT'}]
|
2024-04-09
|
Tomoya Shiota, Kenji Ishihara, Wataru Mizukami
|
Lowering the Exponential Wall: Accelerating High-Entropy Alloy Catalysts
Screening using Local Surface Energy Descriptors from Neural Network
Potentials
| null | null | null |
quant-ph cond-mat.mtrl-sci
|
Computational screening is indispensable for the efficient design of
high-entropy alloys (HEAs), which hold considerable potential for catalytic
applications. However, the chemical space of HEAs is exponentially vast with
respect to the number of constituent elements, making even machine
learning-based screening calculations time-intensive. To address this
challenge, we propose a rapid method for predicting HEA properties using data
from monometallic systems (or few-component alloys). Central to our approach is
the newly introduced local surface energy (LSE) descriptor, which captures
local surface reactivity at atomic resolution. We established a correlation
between LSE and adsorption energies using monometallic systems. Using this
correlation in a linear regression model, we successfully estimated molecular
adsorption energies on HEAs with significantly higher accuracy than a
conventional descriptor (i.e., generalized coordination numbers). Furthermore,
we developed high-precision models by employing both classical and quantum
machine learning. Our method enabled CO adsorption-energy calculations for 1000
quinary nanoparticles, comprising 201 atoms each, within a few days,
considerably faster than density functional theory, which would require
hundreds of years, or neural network potentials, which would have taken hundreds
of days. The proposed approach accelerates the exploration of the vast HEA
chemical space, facilitating the design of novel catalysts.
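The core of the screening shortcut is a linear map from the descriptor to adsorption energies, fitted on cheap single-metal data and applied to unseen sites. A sketch with assumed slope, intercept, and noise (not values from the paper):

```python
import numpy as np

# Linear model between a local-surface-energy-like descriptor and
# adsorption energies, fitted on "monometallic" data and applied to new
# sites.  Slope, intercept, and noise level are illustrative assumptions.
rng = np.random.default_rng(2)
lse_train = rng.uniform(0.5, 2.0, size=50)                  # descriptor values
E_train = -1.2 * lse_train + 0.3 + rng.normal(0.0, 0.02, size=50)

# Ordinary least squares: E_ads ~ a * LSE + b.
A = np.vstack([lse_train, np.ones_like(lse_train)]).T
(a, b), *_ = np.linalg.lstsq(A, E_train, rcond=None)

# Predict adsorption energies at unseen descriptor values
# (stand-ins for HEA surface sites).
E_pred = a * np.array([0.8, 1.5]) + b
```

The point of the descriptor is that `lse_train`-like values are cheap to compute per site, so the fitted line screens thousands of nanoparticles without new electronic-structure calculations.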
|
[{'version': 'v1', 'created': 'Fri, 12 Apr 2024 11:54:06 GMT'}, {'version': 'v2', 'created': 'Sun, 6 Oct 2024 10:28:27 GMT'}, {'version': 'v3', 'created': 'Mon, 27 Jan 2025 08:54:38 GMT'}]
|
2025-01-28
|
Zhuo Diao, Keiichi Ueda, Linfeng Hou, Fengxuan Li, Hayato Yamashita,
Masayuki Abe
|
AI-equipped scanning probe microscopy for autonomous site-specific
atomic-level characterization at room temperature
| null | null | null |
physics.comp-ph cond-mat.mtrl-sci
|
We present an advanced scanning probe microscopy system enhanced with
artificial intelligence (AI-SPM) designed for self-driving atomic-scale
measurements. This system expertly identifies and manipulates atomic positions
with high precision, autonomously performing tasks such as spectroscopic data
acquisition and atomic adjustment. An outstanding feature of AI-SPM is its
ability to detect and adapt to surface defects, targeting or avoiding them as
necessary. It is also engineered to address typical challenges such as
positional drift and tip-apex atomic variations due to thermal effects,
ensuring accurate, site-specific surface analyses. Our tests under the
demanding conditions of room temperature have demonstrated the robustness of
the system, successfully navigating thermal drift and tip fluctuations. During
these tests on the Si(111)-(7x7) surface, AI-SPM autonomously identified
defect-free regions and performed a large number of current-voltage
spectroscopy measurements at different adatom sites, while autonomously
compensating for thermal drift and monitoring probe health. These experiments
produce extensive data sets that are critical for reliable materials
characterization and demonstrate the potential of AI-SPM to significantly
improve data acquisition. The integration of AI into SPM technologies
represents a step toward more effective, precise and reliable atomic-level
surface analysis, revolutionizing materials characterization methods.
|
[{'version': 'v1', 'created': 'Wed, 17 Apr 2024 08:25:42 GMT'}]
|
2024-04-18
|
Shinnosuke Hattori and Qiang Zhu
|
Study of Entropy-Driven Polymorphic Stability for Aspirin Using Accurate
Neural Network Interatomic Potential
| null | null | null |
cond-mat.mtrl-sci
|
In this study, we present a systematic computational investigation to analyze
the long debated crystal stability of two well known aspirin polymorphs,
labeled as Form I and Form II. Specifically, we developed a strategy to collect
training configurations covering diverse interatomic interactions between
representative functional groups in the aspirin crystals. Utilizing a
state-of-the-art neural network interatomic potential (NNIP) model, we
developed an accurate machine learning potential to simulate aspirin crystal
dynamics under finite temperature conditions with $\sim$0.46 kJ/mol/molecule
accuracy. Employing the trained NNIP model, we performed thermodynamic
integration to assess the free energy difference between aspirin Forms I and
II, accounting for the anharmonic effects in a large supercell consisting of
512 molecules. For the first time, our results convincingly demonstrated that
Form I is more stable than Form II at 300 K, ranging from 0.74 to 1.83
kJ/mol/molecule, aligning with the experimental observations. Unlike the
majority of previous simulations based on (quasi)harmonic approximations in a
small supercell, which often found degenerate energies between aspirin Forms I
and II, our findings underscore the importance of anharmonic effects in
determining polymorphic stability ranking. Furthermore, we proposed the use of
rotational degrees of freedom of methyl and ester/phenyl groups in the aspirin
crystal, as characteristic motions to highlight rotational entropic
contribution that favors the stability of Form I. Beyond aspirin polymorphism,
we anticipate that such entropy-driven stabilization is broadly applicable to
many other organic systems, suggesting that our approach holds great promise
for stability studies in small-molecule drug design.
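The thermodynamic-integration estimate used here reduces, numerically, to quadrature of per-window ensemble averages, ΔF = ∫₀¹ ⟨dU/dλ⟩ dλ. A generic sketch with a made-up ⟨dU/dλ⟩ profile (the real averages come from NNIP molecular dynamics on the 512-molecule supercell):

```python
import numpy as np

# Thermodynamic integration: Delta_F = integral_0^1 <dU/dlambda> dlambda.
# The analytic profile below is a stand-in for per-window MD averages.
lam = np.linspace(0.0, 1.0, 11)
dU_dlam = 2.0 - 3.0 * lam**2

# Composite trapezoidal quadrature over the lambda windows.
dF = float(np.sum(0.5 * (dU_dlam[1:] + dU_dlam[:-1]) * np.diff(lam)))
print(round(dF, 3))                    # 0.995 (exact integral is 1.0)
```

The residual 0.005 is the quadrature bias of eleven windows; in practice the window count is chosen so that this bias is small against the statistical error of the MD averages.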
|
[{'version': 'v1', 'created': 'Wed, 17 Apr 2024 17:34:52 GMT'}, {'version': 'v2', 'created': 'Fri, 19 Apr 2024 16:12:58 GMT'}]
|
2024-04-22
|
Adva Baratz, Galit Cohen, Sivan Refaely-Abramson
|
Unsupervised learning approach to quantum wavepacket dynamics from
coupled temporal-spatial correlations
| null | null | null |
cond-mat.mtrl-sci
|
Understanding complex quantum dynamics in realistic materials requires
insight into the underlying correlations dominating the interactions between
the participating particles. Due to the wealth of information involved in these
processes, applying artificial intelligence methods is compelling. Yet,
unsupervised data-driven approaches typically focus on maximal variations of
the individual components, rather than considering the correlations between
them. Here we present an approach that recognizes correlation patterns to
explore convoluted dynamical processes. Our scheme uses singular value
decomposition (SVD) to extract dynamical features, unveiling the internal
temporal-spatial interrelations that generate the dynamical mechanisms. We
apply our approach to study light-induced wavepacket propagation in organic
crystals, of interest for applications in materials-based quantum computing and
quantum information science. We show how transformation from the input momentum
and time coordinates onto a new correlation-induced coordinate space allows
direct recognition of the relaxation and dephasing components dominating the
dynamics and demonstrate their dependence on the initial pulse shape.
Entanglement of the dynamical features is suggested as a pathway to reproduce
the information required for further explainability of these mechanisms. Our
method offers a route for elucidating complex dynamical processes using
unsupervised AI-based analysis in multi-component systems.
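The SVD step itself is compact: a momentum-resolved time trace stored as a matrix A[k, t] factors into spatial patterns and their temporal amplitudes. A sketch on a synthetic two-mode signal (an assumption standing in for the wavepacket data):

```python
import numpy as np

# A momentum-resolved time trace stored as A[k, t]; SVD separates
# spatial patterns (columns of U) from temporal amplitudes (rows of Vt).
k = np.linspace(0.0, 1.0, 64)[:, None]
t = np.linspace(0.0, 10.0, 200)[None, :]
A = (np.sin(2 * np.pi * k) * np.exp(-t / 4.0)            # decaying mode
     + 0.3 * np.cos(4 * np.pi * k) * np.cos(2 * np.pi * t / 5.0))  # oscillation

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# The signal was built from two separable modes, so two singular values
# carry essentially all of the variance.
frac = float((s[:2] ** 2).sum() / (s ** 2).sum())
print(frac > 0.999)                                # True: rank-2 signal
```

For real data the spectrum decays gradually rather than truncating, and the interesting physics sits in how relaxation and dephasing distribute across the leading singular triplets.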
|
[{'version': 'v1', 'created': 'Thu, 18 Apr 2024 08:20:30 GMT'}]
|
2024-04-19
|
Wonseok Lee, Yeonghun Kang, Taeun Bae, Jihan Kim
|
Harnessing Large Language Model to collect and analyze Metal-organic
framework property dataset
| null | null | null |
cond-mat.mtrl-sci
|
This research was focused on the efficient collection of experimental
Metal-Organic Framework (MOF) data from scientific literature to address the
challenges of accessing hard-to-find data and improving the quality of
information available for machine learning studies in materials science.
Utilizing a chain of advanced Large Language Models (LLMs), we developed a
systematic approach to extract and organize MOF data into a structured format.
Our methodology successfully compiled information from more than 40,000
research articles, creating a comprehensive and ready-to-use dataset. The
findings highlight the significant advantage of incorporating experimental data
over relying solely on simulated data for enhancing the accuracy of machine
learning predictions in the field of MOF research.
|
[{'version': 'v1', 'created': 'Sun, 31 Mar 2024 12:47:24 GMT'}]
|
2024-04-23
|
Bowen Hou, Jinyuan Wu, Diana Y. Qiu
|
Unsupervised Learning of Individual Kohn-Sham States: Interpretable
Representations and Consequences for Downstream Predictions of Many-Body
Effects
| null | null | null |
cond-mat.mtrl-sci physics.comp-ph
|
Representation learning for the electronic structure problem is a major
challenge of machine learning in computational condensed matter and materials
physics. Within quantum mechanical first principles approaches, Kohn-Sham
density functional theory (DFT) is the preeminent tool for understanding
electronic structure, and the high-dimensional wavefunctions calculated in this
approach serve as the building block for downstream calculations of correlated
many-body excitations and related physical observables. Here, we use
variational autoencoders (VAE) for the unsupervised learning of
high-dimensional DFT wavefunctions and show that these wavefunctions lie in a
low-dimensional manifold within the latent space. Our model autonomously
determines the optimal representation of the electronic structure, avoiding
limitations due to manual feature engineering and selection in prior work. To
demonstrate the utility of the latent space representation of the DFT
wavefunction, we use it for the supervised training of neural networks (NN) for
downstream prediction of the quasiparticle bandstructures within the GW
formalism, which includes many-electron correlations beyond DFT. The GW
prediction achieves a low error of 0.11 eV for a combined test set of metals
and semiconductors drawn from the Computational 2D Materials Database (C2DB),
suggesting that latent space representation captures key physical information
from the original data. Finally, we explore the interpretability of the VAE
representation and show that the successful representation learning and
downstream prediction by our model is derived from the smoothness of the VAE
latent space, which also enables the generation of wavefunctions on arbitrary
points in latent space. Our work provides a novel and general machine-learning
framework for investigating electronic structure and many-body physics.
|
[{'version': 'v1', 'created': 'Mon, 22 Apr 2024 21:50:50 GMT'}]
|
2024-04-24
|
Rajni Chahal, Michael D. Toomey, Logan T. Kearney, Ada Sedova, Joshua
T. Damron, Amit K. Naskar, Santanu Roy
|
Deep Learning Interatomic Potential Connects Molecular Structural
Ordering to Macroscale Properties of Polyacrylonitrile (PAN) Polymer
| null | null | null |
cond-mat.mtrl-sci
|
Polyacrylonitrile (PAN) is an important commercial polymer, bearing atactic
stereochemistry resulting from nonselective radical polymerization. As such, an
accurate, fundamental understanding of governing interactions among PAN
molecular units is indispensable to advance the design principles of final
products at reduced processability costs. While ab initio molecular dynamics
(AIMD) simulations can provide the necessary accuracy for treating key
interactions in polar polymers such as dipole-dipole interactions and hydrogen
bonding, and analyzing their influence on molecular orientation, their
implementation is limited to small molecules only. Herein, we show that the
neural network interatomic potentials (NNIP) that are trained on the
small-scale AIMD data (acquired for oligomers) can be efficiently employed to
examine the structures/properties at large scales (polymers). NNIP provides
critical insight into intra- and interchain hydrogen bonding and dipolar
correlations, and accurately predicts the amorphous bulk PAN structure
validated by modeling the experimental X-ray structure factor. Furthermore, the
NNIP-predicted PAN properties such as density and elastic modulus are in good
agreement with their experimental values. Overall, the trend in the elastic
modulus is found to correlate strongly with the PAN structural orientations
encoded in Hermans orientation factor. This study enables the ability to
predict the structure-property relations for PAN and analogs with sustainable
ab initio accuracy across scales.
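The Hermans orientation factor invoked above has a one-line definition, f = (3⟨cos²θ⟩ − 1)/2, where θ is the angle between a chain segment and the reference axis. A small sketch with synthetic segment vectors (not the PAN trajectory data):

```python
import numpy as np

# Hermans orientation factor: f = 1 for perfect alignment with the
# axis, 0 for an isotropic distribution, -1/2 for perpendicular order.
def hermans(vectors, axis=np.array([0.0, 0.0, 1.0])):
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    cos2 = (v @ axis) ** 2
    return 0.5 * (3.0 * cos2.mean() - 1.0)

aligned = np.tile([0.0, 0.0, 1.0], (100, 1))       # fully oriented segments
print(round(hermans(aligned), 3))                  # 1.0

rng = np.random.default_rng(3)
iso = rng.normal(size=(200_000, 3))                # isotropic segments
print(abs(hermans(iso)) < 0.01)                    # True: f near 0
```

In an MD analysis the `vectors` would be local chain-backbone direction vectors from the simulated bulk structure.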
|
[{'version': 'v1', 'created': 'Wed, 24 Apr 2024 20:21:54 GMT'}]
|
2024-04-26
|
Jiwei Yu, Zhangwei Wang, Aparna Saksena, Shaolou Wei, Ye Wei, Timoteo
Colnaghi, Andreas Marek, Markus Rampp, Min Song, Baptiste Gault, Yue Li
|
3D deep learning for enhanced atom probe tomography analysis of
nanoscale microstructures
| null | null | null |
cond-mat.mtrl-sci physics.data-an
|
Quantitative analysis of microstructural features on the nanoscale, including
precipitates, local chemical orderings (LCOs) or structural defects (e.g.
stacking faults) plays a pivotal role in understanding the mechanical and
physical responses of engineering materials. Atom probe tomography (APT), known
for its exceptional combination of chemical sensitivity and sub-nanometer
resolution, primarily identifies microstructures through compositional
segregations. However, this fails when there is no significant segregation, as
can be the case for LCOs and stacking faults. Here, we introduce a 3D deep
learning approach, AtomNet, designed to process APT point cloud data at the
single-atom level for nanoscale microstructure extraction, simultaneously
considering compositional and structural information. AtomNet is showcased in
segmenting L12-type nanoprecipitates from the matrix in an AlLiMg alloy,
irrespective of crystallographic orientations, which outperforms previous
methods. AtomNet also allows for 3D imaging of L10-type LCOs in an AuCu alloy,
a challenging task for conventional analysis due to their small size and subtle
compositional differences. Finally, we demonstrate the use of AtomNet for
revealing 2D stacking faults in a Co-based superalloy, without any defected
training data, expanding the capabilities of APT for automated exploration of
hidden microstructures. AtomNet pushes the boundaries of APT analysis, and
holds promise in establishing precise quantitative microstructure-property
relationships across a diverse range of metallic materials.
|
[{'version': 'v1', 'created': 'Thu, 25 Apr 2024 11:36:10 GMT'}]
|
2024-04-26
|
M. A. Maia, I. B. C. M. Rocha, D. Kova\v{c}evi\'c, F. P. van der Meer
|
Physically recurrent neural network for rate and path-dependent
heterogeneous materials in a finite strain framework
| null | null | null |
cond-mat.mtrl-sci cs.LG cs.NA math.NA
|
In this work, a hybrid physics-based data-driven surrogate model for the
microscale analysis of heterogeneous materials is investigated. The proposed
model benefits from the physics-based knowledge contained in the constitutive
models used in the full-order micromodel by embedding them in a neural network.
Following previous developments, this paper extends the applicability of the
physically recurrent neural network (PRNN) by introducing an architecture
suitable for rate-dependent materials in a finite strain framework. In this
model, the homogenized deformation gradient of the micromodel is encoded into a
set of deformation gradients serving as input to the embedded constitutive
models. These constitutive models compute stresses, which are combined in a
decoder to predict the homogenized stress, such that the internal variables of
the history-dependent constitutive models naturally provide physics-based
memory for the network. To demonstrate the capabilities of the surrogate model,
we consider a unidirectional composite micromodel with transversely isotropic
elastic fibers and elasto-viscoplastic matrix material. The extrapolation
properties of the surrogate model trained to replace such micromodel are tested
on loading scenarios unseen during training, ranging from different
strain-rates to cyclic loading and relaxation. Speed-ups of three orders of
magnitude with respect to the runtime of the original micromodel are obtained.
|
[{'version': 'v1', 'created': 'Fri, 5 Apr 2024 12:40:03 GMT'}]
|
2024-04-30
|
Adela Habib and Joshua Finkelstein and Anders M. N. Niklasson
|
Efficient Mixed-Precision Matrix Factorization of the Inverse Overlap
Matrix in Electronic Structure Calculations with AI-Hardware and GPUs
| null | null | null |
physics.comp-ph cond-mat.mtrl-sci math-ph math.MP
|
In recent years, a new kind of accelerated hardware has gained popularity in
the Artificial Intelligence (AI) and Machine Learning (ML) communities which
enables extremely high-performance tensor contractions in reduced precision for
deep neural network calculations. In this article, we exploit Nvidia Tensor
cores, a prototypical example of such AI/ML hardware, to develop a mixed
precision approach for computing a dense matrix factorization of the inverse
overlap matrix in electronic structure theory, $S^{-1}$. This factorization of
$S^{-1}$, written as $ZZ^T=S^{-1}$, is used to transform the general matrix
eigenvalue problem into a standard matrix eigenvalue problem. Here we present a
mixed precision iterative refinement algorithm where $Z$ is given recursively
using matrix-matrix multiplications and can be computed with high performance
on Tensor cores. To understand the performance and accuracy of Tensor cores,
comparisons are made to GPU-only implementations in single and double
precision. Additionally, we propose a non-parametric stopping criterion that is
robust in the face of lower precision floating point operations. The algorithm
is particularly useful when we have a good initial guess to $Z$, for example,
from previous time steps in quantum-mechanical molecular dynamics simulations
or from a previous iteration in a geometry optimization.
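One standard member of this iteration family, sketched here in plain double precision (the mixed-precision Tensor-core refinement is the paper's contribution and is omitted), is the Newton-Schulz recursion for Z ≈ S^{-1/2}, which gives ZZ^T = S^{-1} using only matrix-matrix multiplications; the overlap-like test matrix is synthetic:

```python
import numpy as np

# Newton-Schulz iteration for the inverse square root: with the scaled
# initial guess Z0 = I / sqrt(||S||_2), the recursion
#   Z_{k+1} = 0.5 * Z_k (3 I - S Z_k^2)
# converges to S^{-1/2} for SPD S using only matrix products, the
# operation that maps well onto Tensor cores.
def inverse_sqrt_factor(S, iters=20):
    n = S.shape[0]
    I = np.eye(n)
    Z = I / np.sqrt(np.linalg.norm(S, 2))      # scaling guarantees convergence
    for _ in range(iters):
        Z = 0.5 * Z @ (3.0 * I - S @ Z @ Z)
    return Z

rng = np.random.default_rng(4)
A = rng.normal(size=(6, 6))
S = A @ A.T + 6.0 * np.eye(6)                  # synthetic SPD "overlap" matrix
Z = inverse_sqrt_factor(S)
print(np.allclose(Z @ Z.T, np.linalg.inv(S), atol=1e-8))
```

Because each step is a short chain of GEMMs, the inner products can be demoted to half precision and the result corrected iteratively, which is where a warm start from a previous MD step pays off.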
|
[{'version': 'v1', 'created': 'Mon, 29 Apr 2024 23:53:16 GMT'}]
|
2024-05-01
|
Sungwoo Kang
|
How Graph Neural Network Interatomic Potentials Extrapolate: Role of the
Message-Passing Algorithm
|
J. Chem. Phys. 161, 244102 (2024)
|
10.1063/5.0234287
| null |
cond-mat.mtrl-sci
|
Graph neural network interatomic potentials (GNN-IPs) are gaining significant
attention due to their capability of learning from large datasets.
Specifically, universal interatomic potentials based on GNN, usually trained
with crystalline geometries, often exhibit remarkable extrapolative behavior
towards untrained domains, such as surfaces or amorphous configurations.
However, the origin of this extrapolation capability is not well understood.
This work provides a theoretical explanation of how GNN-IPs extrapolate to
untrained geometries. First, we demonstrate that GNN-IPs can capture non-local
electrostatic interactions through the message-passing algorithm, as evidenced
by tests on toy models and DFT data. We find that GNN-IP models, SevenNet and
MACE, accurately predict electrostatic forces in untrained domains, indicating
that they have learned the exact functional form of the Coulomb interaction.
Based on these results, we suggest that the ability to learn non-local
electrostatic interactions, coupled with the embedding nature of GNN-IPs,
explains their extrapolation ability. We find that the universal GNN-IP,
SevenNet-0, effectively infers non-local Coulomb interactions in untrained
domains but fails to extrapolate the non-local forces arising from the kinetic
term, which supports the suggested theory. Finally, we address the impact of
hyperparameters on the extrapolation performance of universal potentials, such
as SevenNet-0 and MACE-MP-0, and discuss the limitations of the extrapolation
capabilities.
|
[{'version': 'v1', 'created': 'Wed, 1 May 2024 02:55:15 GMT'}, {'version': 'v2', 'created': 'Tue, 13 Aug 2024 13:50:55 GMT'}, {'version': 'v3', 'created': 'Thu, 5 Dec 2024 06:49:06 GMT'}]
|
2025-01-08
|
Jihua Chen, Yue Yuan, Amir Koushyar Ziabari, Xuan Xu, Honghai Zhang,
Panagiotis Christakopoulos, Peter V. Bonnesen, Ilia N. Ivanov, Panchapakesan
Ganesh, Chen Wang, Karen Patino Jaimes, Guang Yang, Rajeev Kumar, Bobby G.
Sumpter, Rigoberto Advincula
|
AI for Manufacturing and Healthcare: a chemistry and engineering
perspective
| null | null | null |
cond-mat.mtrl-sci
|
Artificial Intelligence (AI) approaches are increasingly being applied to
more and more domains of Science, Engineering, Chemistry, and Industries to not
only improve efficiencies and enhance productivity, but also enable new
capabilities. The new opportunities range from automated molecule design and
screening, property prediction, and gaining insight into chemical reactions, to
computer-aided design, predictive maintenance of systems, robotics, and
autonomous vehicles. This review focuses on the new applications of AI in
manufacturing and healthcare. For the Manufacturing Industries, we focus on AI
and algorithms for (1) Battery, (2) Flow Chemistry, (3) Additive Manufacturing,
(4) Sensors, and (5) Machine Vision. For Healthcare applications, we focus on:
(1) Medical Vision (2) Diagnosis, (3) Protein Design, and (4) Drug Discovery.
In the end, related topics are discussed, including physics integrated machine
learning, model explainability, security, and governance during model
deployment.
|
[{'version': 'v1', 'created': 'Thu, 2 May 2024 17:50:05 GMT'}]
|
2024-05-03
|
Nakul Rampal, Kaiyu Wang, Matthew Burigana, Lingxiang Hou, Juri
Al-Johani, Anna Sackmann, Hanan S. Murayshid, Walaa Abdullah Al-Sumari, Arwa
M. Al-Abdulkarim, Nahla Eid Al-Hazmi, Majed O. Al-Awad, Christian Borgs,
Jennifer T. Chayes, Omar M. Yaghi
|
Single and Multi-Hop Question-Answering Datasets for Reticular Chemistry
with GPT-4-Turbo
| null | null | null |
cs.CL cond-mat.mtrl-sci
|
The rapid advancement in artificial intelligence and natural language
processing has led to the development of large-scale datasets aimed at
benchmarking the performance of machine learning models. Herein, we introduce
'RetChemQA,' a comprehensive benchmark dataset designed to evaluate the
capabilities of such models in the domain of reticular chemistry. This dataset
includes both single-hop and multi-hop question-answer pairs, encompassing
approximately 45,000 Q&As for each type. The questions have been extracted from
an extensive corpus of literature containing about 2,530 research papers from
publishers including NAS, ACS, RSC, Elsevier, and Nature Publishing Group,
among others. The dataset has been generated using OpenAI's GPT-4 Turbo, a
cutting-edge model known for its exceptional language understanding and
generation capabilities. In addition to the Q&A dataset, we also release a
dataset of synthesis conditions extracted from the corpus of literature used in
this study. The aim of RetChemQA is to provide a robust platform for the
development and evaluation of advanced machine learning algorithms,
particularly for the reticular chemistry community. The dataset is structured
to reflect the complexities and nuances of real-world scientific discourse,
thereby enabling nuanced performance assessments across a variety of tasks. The
dataset is available at the following link:
https://github.com/nakulrampal/RetChemQA
|
[{'version': 'v1', 'created': 'Fri, 3 May 2024 14:29:54 GMT'}]
|
2024-05-06
|
Luis Mart\'in Encinar, Daniele Lanzoni, Andrea Fantasia, Fabrizio
Rovaris, Roberto Bergamaschini, Francesco Montalenti
|
Quantitative analysis of the prediction performance of a Convolutional
Neural Network evaluating the surface elastic energy of a strained film
| null |
10.1016/j.commatsci.2024.113657
| null |
physics.comp-ph cond-mat.mtrl-sci
|
A Deep Learning approach is devised to estimate the elastic energy density
$\rho$ at the free surface of an undulated stressed film. About 190000
arbitrary surface profiles $h(x)$ are randomly generated by Perlin noise and
paired with the corresponding elastic energy density profiles $\rho(x)$,
computed by a semi-analytical Green's function approximation, suitable for
small-slope morphologies. The resulting dataset and smaller subsets of it are
used for the training of a Fully Convolutional Neural Network. The trained
models are shown to return quantitative predictions of $\rho$, not only in
terms of convergence of the loss function during training, but also in
validation and testing, with better results in the case of the larger dataset.
Extensive tests are performed to assess the generalization capability of the
Neural Network model when applied to profiles with localized features or
assigned geometries not included in the original dataset. Moreover, its
possible exploitation on domain sizes beyond the one used in the training is
also analyzed in-depth. The conditions providing a one-to-one reproduction of
the ground-truth $\rho(x)$ profiles computed by the Green's approximation are
highlighted along with critical cases. The accuracy and robustness of the
deep-learned $\rho(x)$ are further demonstrated in the time-integration of
surface evolution problems described by simple partial differential equations
of evaporation/condensation and surface diffusion.
|
[{'version': 'v1', 'created': 'Sun, 5 May 2024 20:34:16 GMT'}]
|
2025-03-04
|
Kamal Choudhary
|
AtomGPT: Atomistic Generative Pre-trained Transformer for Forward and
Inverse Materials Design
| null | null | null |
cond-mat.mtrl-sci
|
Large language models (LLMs) such as generative pretrained transformers
(GPTs) have shown potential for various commercial applications, but their
applicability for materials design remains underexplored. In this article, we
introduce AtomGPT, a model specifically developed for materials design based on
transformer architectures, to demonstrate the capability for both atomistic
property prediction and structure generation. We show that a combination of
chemical and structural text descriptions can efficiently predict material
properties with accuracy comparable to graph neural network models, including
formation energies, electronic bandgaps from two different methods and
superconducting transition temperatures. Furthermore, we demonstrate that
AtomGPT can generate atomic structures for tasks such as designing new
superconductors, with the predictions validated through density functional
theory calculations. This work paves the way for leveraging LLMs in forward and
inverse materials design, offering an efficient approach to the discovery and
optimization of materials.
|
[{'version': 'v1', 'created': 'Mon, 6 May 2024 17:54:54 GMT'}, {'version': 'v2', 'created': 'Sat, 29 Jun 2024 06:24:30 GMT'}]
|
2024-07-02
|
Han Yang, Chenxi Hu, Yichi Zhou, Xixian Liu, Yu Shi, Jielan Li,
Guanzhi Li, Zekun Chen, Shuizhou Chen, Claudio Zeni, Matthew Horton, Robert
Pinsler, Andrew Fowler, Daniel Z\"ugner, Tian Xie, Jake Smith, Lixin Sun,
Qian Wang, Lingyu Kong, Chang Liu, Hongxia Hao, Ziheng Lu
|
MatterSim: A Deep Learning Atomistic Model Across Elements, Temperatures
and Pressures
| null | null | null |
cond-mat.mtrl-sci
|
Accurate and fast prediction of materials properties is central to the
digital transformation of materials design. However, the vast design space and
diverse operating conditions pose significant challenges for accurately
modeling arbitrary material candidates and forecasting their properties. We
present MatterSim, a deep learning model actively learned from large-scale
first-principles computations, for efficient atomistic simulations at
first-principles level and accurate prediction of broad material properties
across the periodic table, spanning temperatures from 0 to 5000 K and pressures
up to 1000 GPa. Out-of-the-box, the model serves as a machine learning force
field, and shows remarkable capabilities not only in predicting ground-state
material structures and energetics, but also in simulating their behavior under
realistic temperatures and pressures, representing up to a ten-fold enhancement
in precision over the prior best-in-class. This enables MatterSim to
compute materials' lattice dynamics, mechanical and thermodynamic properties,
and beyond, to an accuracy comparable with first-principles methods.
Specifically, MatterSim predicts Gibbs free energies for a wide range of
inorganic solids with near-first-principles accuracy and achieves a 15 meV/atom
resolution for temperatures up to 1000 K compared with experiments. This opens
an opportunity to predict experimental phase diagrams of materials at minimal
computational cost. Moreover, MatterSim also serves as a platform for
continuous learning and customization by integrating domain-specific data. The
model can be fine-tuned for atomistic simulations at a desired level of theory
or for direct structure-to-property predictions, achieving high data efficiency
with a reduction in data requirements by up to 97%.
|
[{'version': 'v1', 'created': 'Wed, 8 May 2024 11:13:30 GMT'}, {'version': 'v2', 'created': 'Fri, 10 May 2024 16:49:52 GMT'}]
|
2024-05-13
|
Michael Vitz, Hamed Mohammadbagherpoor, Samarth Sandeep, Andrew
Vlasic, Richard Padbury, and Anh Pham
|
Hybrid Quantum Graph Neural Network for Molecular Property Prediction
| null | null | null |
quant-ph cond-mat.mtrl-sci cs.LG
|
To accelerate the process of materials design, materials science has
increasingly used data driven techniques to extract information from collected
data. Specifically, machine learning (ML) algorithms have demonstrated the
ability to predict various properties of materials with accuracy similar to
explicit quantum-mechanical calculations, but with significantly reduced run
time and computational resources. Within ML, graph neural networks have emerged
as an important class of algorithms, capable of accurately predicting a wide
range of physical, chemical, and electronic properties; their learning ability
derives from graph representations of material and molecular descriptors and
the aggregation of information embedded within the graph. In parallel with the
development of
state-of-the-art classical machine learning applications, the fusion of quantum
computing and machine learning has created a new paradigm in which classical
machine learning models can be augmented with quantum layers able to encode
high-dimensional data more efficiently. Leveraging the structure of existing
algorithms, we developed a novel gradient-free hybrid quantum-classical
convolutional graph neural network (HyQCGNN) to predict formation energies of
perovskite materials. The performance of our hybrid model is competitive with
results obtained from a purely classical convolutional graph neural network and
from other classical machine learning algorithms such as XGBoost. Consequently,
our study suggests a new pathway for exploring how quantum feature encoding and
parametric quantum circuits can improve complex ML algorithms like graph neural
networks.
|
[{'version': 'v1', 'created': 'Wed, 8 May 2024 16:43:25 GMT'}]
|
2024-05-09
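The graph-aggregation idea underlying GNN property predictors such as the one in the abstract above can be illustrated with a minimal, framework-free sketch. The toy graph and feature vectors below are invented for illustration and are not from the paper:

```python
# Minimal single message-passing step: each node averages its
# neighbours' feature vectors and combines the result with its own
# features. Toy adjacency and features are illustrative only.

def message_pass(features, adjacency):
    """One mean-aggregation step over an adjacency list."""
    out = {}
    for node, feat in features.items():
        nbrs = adjacency.get(node, [])
        if nbrs:
            agg = [sum(features[n][i] for n in nbrs) / len(nbrs)
                   for i in range(len(feat))]
        else:
            agg = [0.0] * len(feat)
        # combine self and neighbourhood information (simple average)
        out[node] = [0.5 * (s + a) for s, a in zip(feat, agg)]
    return out

adjacency = {0: [1, 2], 1: [0], 2: [0]}
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [2.0, 2.0]}
updated = message_pass(features, adjacency)
```

Real GNNs learn the combination weights and stack several such layers; this sketch only shows the aggregation step itself.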
|
Bowen Deng, Yunyeong Choi, Peichen Zhong, Janosh Riebesell, Shashwat
Anand, Zhuohan Li, KyuJung Jun, Kristin A. Persson, Gerbrand Ceder
|
Overcoming systematic softening in universal machine learning
interatomic potentials by fine-tuning
| null | null | null |
cond-mat.mtrl-sci cs.AI cs.LG
|
Machine learning interatomic potentials (MLIPs) have introduced a new
paradigm for atomic simulations. Recent advancements have seen the emergence of
universal MLIPs (uMLIPs) that are pre-trained on diverse materials datasets,
providing opportunities for both ready-to-use universal force fields and robust
foundations for downstream machine learning refinements. However, their
performance in extrapolating to out-of-distribution complex atomic environments
remains unclear. In this study, we highlight a consistent potential energy
surface (PES) softening effect in three uMLIPs: M3GNet, CHGNet, and MACE-MP-0,
which is characterized by energy and force under-prediction in a series of
atomic-modeling benchmarks including surfaces, defects, solid-solution
energetics, phonon vibration modes, ion migration barriers, and general
high-energy states.
We find that the PES softening behavior originates from a systematic
underprediction error of the PES curvature, which derives from the biased
sampling of near-equilibrium atomic arrangements in uMLIP pre-training
datasets. We demonstrate that the PES softening issue can be effectively
rectified by fine-tuning with a single additional data point. Our findings
suggest that a considerable fraction of uMLIP errors are highly systematic, and
can therefore be efficiently corrected. This result rationalizes the
data-efficient fine-tuning performance boost commonly observed with
foundational MLIPs. We argue for the importance of a comprehensive materials
dataset with improved PES sampling for next-generation foundational MLIPs.
|
[{'version': 'v1', 'created': 'Sat, 11 May 2024 22:30:47 GMT'}]
|
2024-05-14
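The abstract above notes that uMLIP "softening" is largely a systematic under-prediction that can be rectified with a single additional data point. A cartoon of that idea, fitting one multiplicative factor to a single reference force, is sketched below; real fine-tuning retrains network weights, and this scalar rescaling is only an illustration with made-up numbers:

```python
# Illustrative correction of a systematic under-prediction: fit one
# multiplicative factor from a single reference value and apply it
# everywhere. All numbers are hypothetical.

def fit_scale(predicted_force, reference_force):
    """One-point fit of a global correction factor."""
    return reference_force / predicted_force

def corrected(forces, scale):
    """Apply the fitted factor to a list of predicted forces."""
    return [scale * f for f in forces]

scale = fit_scale(predicted_force=0.8, reference_force=1.0)  # one data point
fixed = corrected([0.8, 1.6, -0.4], scale)
```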
|
Ashley Lenau, Dennis M. Dimiduk, and Stephen R. Niezgoda
|
Importance of hyper-parameter optimization during training of
physics-informed deep learning networks
| null | null | null |
cond-mat.mtrl-sci physics.data-an
|
Incorporating scientific knowledge into deep learning (DL) models for
materials-based simulations can constrain the network's predictions to be
within the boundaries of the material system. Altering loss functions or adding
physics-based regularization (PBR) terms to reflect material properties informs
a network about the physical constraints the simulation should obey. The
training and tuning process of a DL network greatly affects the quality of the
model, but how this process differs when using physics-based loss functions or
regularization terms is not commonly discussed. In this manuscript, several PBR
methods are implemented to enforce stress equilibrium on a network predicting
the stress fields of a high elastic contrast composite. Models with PBR
enforced the equilibrium constraint more accurately than a model without PBR,
and the stress equilibrium converged more quickly. More importantly, it was
observed that independently fine-tuning each implementation resulted in more
accurate models. More specifically, each loss formulation and dataset required
different learning rates and loss weights for the best performance. This result
has important implications on assessing the relative effectiveness of different
DL models and highlights important considerations when making a comparison
between DL methods.
|
[{'version': 'v1', 'created': 'Tue, 14 May 2024 13:21:00 GMT'}, {'version': 'v2', 'created': 'Tue, 21 May 2024 21:31:46 GMT'}]
|
2024-05-24
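The physics-based regularization (PBR) described above amounts to adding a weighted physics-residual penalty to the data-misfit loss. A minimal sketch, with a stand-in "equilibrium" residual and hypothetical numbers:

```python
# Sketch of a PBR loss: mean-squared data misfit plus a weighted
# mean-squared physics residual. The residual here is a placeholder
# for, e.g., the divergence of a predicted stress field.

def pbr_loss(pred, target, residual, weight):
    """Data term + weight * physics term (both mean-squared)."""
    data_term = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    physics_term = sum(r ** 2 for r in residual) / len(residual)
    return data_term + weight * physics_term

pred, target = [1.0, 2.0], [1.0, 1.0]
residual = [0.1, -0.1]          # illustrative equilibrium violation
loss = pbr_loss(pred, target, residual, weight=10.0)
```

The abstract's observation that each loss formulation needs its own learning rate and loss weight corresponds to tuning `weight` (and the optimizer settings) per formulation.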
|
Patxi Fernandez-Zelaia, Jason Mayeur, Jiahao Cheng, Yousub Lee, Kevin
Knipe, Kai Kadau
|
Self-supervised feature distillation and design of experiments for
efficient training of micromechanical deep learning surrogates
| null | null | null |
cs.CE cond-mat.mtrl-sci
|
Machine learning surrogate emulators are needed in engineering design and
optimization tasks to rapidly emulate computationally expensive physics-based
models. In micromechanics problems the local full-field response variables are
desired at microstructural length scales. While there has been a great deal of
work on establishing architectures for these tasks, there has been relatively
little work on establishing microstructural experimental design strategies.
This work demonstrates that intelligent selection of microstructural volume
elements for subsequent physics simulations enables the establishment of more
accurate surrogate models. There exist two key challenges towards establishing
a suitable framework: (1) microstructural feature quantification and (2)
establishment of a criteria which encourages construction of a diverse training
data set. Three feature extraction strategies are used as well as three design
criteria. A novel contrastive feature extraction approach is established for
automated self-supervised extraction of microstructural summary statistics.
Results indicate that for the problem considered, up to an 8\% improvement in
surrogate performance may be achieved using the proposed design and training
strategy. Trends indicate this approach may be even more beneficial when scaled
towards larger problems. These results demonstrate that the selection of an
efficient experimental design is an important consideration when establishing
machine learning based surrogate models.
|
[{'version': 'v1', 'created': 'Thu, 16 May 2024 14:31:30 GMT'}]
|
2024-05-17
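One common diversity criterion for the kind of experimental design discussed above is greedy max-min (farthest-point) selection in feature space. The sketch below uses 1-D toy feature values, not real microstructural summary statistics:

```python
# Greedy max-min selection: repeatedly add the candidate farthest
# (in feature space) from everything already chosen, encouraging a
# diverse training set. Toy 1-D features for illustration.

def maxmin_select(points, k):
    chosen = [0]                       # seed with the first candidate
    while len(chosen) < k:
        best, best_d = None, -1.0
        for i, p in enumerate(points):
            if i in chosen:
                continue
            d = min(abs(p - points[j]) for j in chosen)
            if d > best_d:
                best, best_d = i, d
        chosen.append(best)
    return chosen

idx = maxmin_select([0.0, 0.1, 0.9, 0.5, 1.0], k=3)
```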
|
Stephen T. Lam, Shubhojit Banerjee, Rajni Chahal
|
Uncertainty and Exploration of Deep Learning-based Atomistic Models for
Screening Molten Salt Properties and Compositions
| null | null | null |
cond-mat.mtrl-sci physics.chem-ph
|
Due to extreme chemical, thermal, and radiation environments, existing molten
salt property databases lack the necessary experimental thermal properties of
reactor-relevant salt compositions. Meanwhile, simulating these properties
directly is typically either computationally expensive or inaccurate. In recent
years, deep learning (DL)-based atomistic simulations have emerged as a method
for achieving both efficiency and accuracy. However, there remain significant
challenges in assessing model reliability in DL models when simulating
properties and screening new systems. In this work, structurally complex
LiF-NaF-ZrF$_4$ salt is studied. We show that neural network (NN) uncertainty
can be quantified using ensemble learning to provide a 95% confidence interval
(CI) for NN-based predictions. We show that DL models can successfully
extrapolate to new compositions, temperatures, and timescales, but fail for
significant changes in density, which is captured by ensemble-based uncertainty
predictions. This enables improved confidence in utilizing simulated data for
realistic reactor conditions, and guidelines for training deployable DL models.
|
[{'version': 'v1', 'created': 'Tue, 30 Apr 2024 21:20:55 GMT'}]
|
2024-05-20
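The ensemble-based 95% confidence interval mentioned in the abstract above is typically a Gaussian approximation over replica predictions (mean ± 1.96σ). A minimal sketch, with made-up member predictions:

```python
import statistics

# Ensemble uncertainty: train several NN replicas, then take the
# spread of their predictions as a 95% confidence interval
# (mean +/- 1.96 * sample std, the usual Gaussian approximation).

def ensemble_ci(predictions, z=1.96):
    mean = statistics.fmean(predictions)
    sigma = statistics.stdev(predictions)  # sample std across members
    return mean - z * sigma, mean, mean + z * sigma

members = [2.10, 2.05, 1.98, 2.12, 2.00]  # hypothetical predicted values
lo, mean, hi = ensemble_ci(members)
```

A wide interval on an extrapolated composition or density is exactly the failure signal the abstract describes.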
|
Zijian Du, Luozhijie Jin, Le Shu, Yan Cen, Yuanfeng Xu, Yongfeng Mei
and Hao Zhang
|
CTGNN: Crystal Transformer Graph Neural Network for Crystal Material
Property Prediction
| null | null | null |
cond-mat.mtrl-sci physics.comp-ph
|
The combination of deep learning algorithms and materials science has made
significant progress in predicting novel materials and understanding their
various behaviours. Here, we introduce a new model, the Crystal Transformer
Graph Neural Network (CTGNN), which combines the advantages of Transformer
models and graph neural networks to address the complexity of
structure-property relations in material data. Compared to state-of-the-art
models, CTGNN incorporates a graph network structure to capture local atomic
interactions and dual-Transformer structures to comprehensively model
intra-crystal and inter-atomic relationships. Benchmarks indicate that CTGNN
significantly outperforms existing models such as CGCNN and MEGNet in
predicting formation energy and bandgap properties. Our work highlights the
potential of CTGNN to enhance property prediction and accelerate the
discovery of new materials, particularly perovskite materials.
|
[{'version': 'v1', 'created': 'Sun, 19 May 2024 10:00:06 GMT'}]
|
2024-05-21
|
Chinedu Ekuma
|
Computational toolkit for predicting thickness of 2D materials using
machine learning and autogenerated dataset by large language model
| null | null | null |
cond-mat.mtrl-sci cond-mat.str-el
|
The thickness of 2D materials not only plays a crucial role in determining
the performance of nanoelectronic and optoelectronic devices but also
introduces complexities in predicting volume-dependent properties such as
energy storage capacity, due to the intrinsic vacuum within these materials.
Although a plethora of experimental techniques, including but not limited to
optical contrast, Raman spectroscopy, nonlinear optical spectroscopy,
near-field optical imaging, and hyperspectral imaging, facilitate the
measurement of 2D material thickness, comprehensive data for many materials
remains elusive. Over the last decade, the exponential proliferation of 2D
materials and their heterostructures has outstripped the capabilities of
conventional experimental and computational approaches. In this evolving
landscape, machine learning (ML) has emerged as an indispensable tool, offering
novel avenues to augment these traditional methodologies. Addressing this
critical gap, we introduce THICK2D - Thickness Hierarchy Inference and
Calculation Kit for 2D Materials. This Python-based computational framework
harnesses an autogenerated thickness database, developed using large language
models (LLMs), and advanced ML algorithms to facilitate the rapid and scalable
estimation of material thickness, relying solely on crystallographic data. To
demonstrate the utility and robustness of THICK2D, we successfully employed the
toolkit to predict the thickness of more than 8000 2D-based materials, sourced
from two extensive 2D material databases. THICK2D is disseminated as an
open-source utility, accessible on GitHub https://github.com/gmp007/THICK2D,
and archived on Zenodo at https://doi.org/10.5281/zenodo.11216648.
|
[{'version': 'v1', 'created': 'Fri, 24 May 2024 01:05:47 GMT'}]
|
2024-05-27
|
M. Sipil\"a, F. Mehryary, S. Pyysalo, F. Ginter and Milica Todorovi\'c
|
Question Answering models for information extraction from perovskite
materials science literature
| null | null | null |
cond-mat.mtrl-sci
|
Scientific text is a promising source of data in materials science, with
ongoing research into utilising textual data for materials discovery. In this
study, we developed and tested a novel approach to extract material-property
relationships from scientific publications using the Question Answering (QA)
method. QA performance was evaluated for information extraction of perovskite
bandgaps based on a human query. We observed considerable variation in results
with five different large language models fine-tuned for the QA task. The best
extraction accuracy was achieved with the QA-fine-tuned MatBERT, and F1-scores
improved on the current state of the art. This work demonstrates the QA
workflow and paves
the way towards further applications. The simplicity, versatility and accuracy
of the QA approach all point to its considerable potential for text-driven
discoveries in materials research.
|
[{'version': 'v1', 'created': 'Fri, 24 May 2024 07:24:21 GMT'}, {'version': 'v2', 'created': 'Fri, 13 Sep 2024 11:27:16 GMT'}]
|
2024-09-16
|
Avishek Singh and Nirmal Ganguli
|
Unsupervised Deep Neural Network Approach To Solve Bosonic Systems
| null | null | null |
cond-mat.mtrl-sci cond-mat.quant-gas
|
The simulation of quantum many-body systems poses a significant challenge in
physics due to the exponential scaling of Hilbert space with the number of
particles. Traditional methods often struggle with large system sizes and
frustrated lattices. In this research article, we present a novel algorithm
that leverages the power of deep neural networks combined with Markov Chain
Monte Carlo simulation to address these limitations. Our method introduces a
neural network architecture specifically designed to represent bosonic quantum
states on a 1D lattice chain. We successfully achieve the ground state of the
Bose-Hubbard model, demonstrating the superiority of the adaptive momentum
optimizer for convergence speed and stability. Notably, our approach offers
flexibility in simulating various lattice geometries and potentially larger
system sizes, making it a valuable tool for exploring complex quantum
phenomena. This work represents a substantial advancement in the field of
quantum simulation, opening new possibilities for investigating previously
challenging systems.
|
[{'version': 'v1', 'created': 'Fri, 24 May 2024 12:09:20 GMT'}]
|
2024-05-27
|
Avishek Singh and Nirmal Ganguli
|
Unsupervised Deep Neural Network Approach To Solve Fermionic Systems
| null | null | null |
cond-mat.mtrl-sci cond-mat.str-el
|
Solving the Schr\"{o}dinger equation for interacting many-body quantum
systems faces computational challenges due to exponential scaling with system
size. This complexity limits the study of important phenomena in materials
science and physics. We develop an Artificial Neural Network (ANN)-driven
algorithm to simulate fermionic systems on lattices. Our method uses Pauli
matrices to represent quantum states, incorporates Markov Chain Monte Carlo
sampling, and leverages an adaptive momentum optimizer. We demonstrate the
algorithm's accuracy by simulating the Heisenberg Hamiltonian on a
one-dimensional lattice, achieving results with an error on the order of
$10^{-4}$ compared to exact diagonalization. Furthermore, we successfully model
a magnetic phase transition in a two-dimensional lattice under an applied
magnetic field. Importantly, our approach avoids the sign problem common to
traditional Fermionic Monte Carlo methods, enabling the investigation of
frustrated systems. This work demonstrates the potential of ANN-based
algorithms for efficient simulation of complex quantum systems, opening avenues
for discoveries in condensed matter physics and materials science.
|
[{'version': 'v1', 'created': 'Fri, 24 May 2024 12:41:02 GMT'}]
|
2024-05-27
|
Haosheng Xu, Dongheng Qian, and Jing Wang
|
Predicting Many Crystal Properties via an Adaptive Transformer-based
Framework
| null | null | null |
cond-mat.mtrl-sci cond-mat.mes-hall cs.LG
|
Machine learning has revolutionized many fields, including materials science.
However, predicting properties of crystalline materials using machine learning
faces challenges in input encoding, output versatility, and interpretability.
We introduce CrystalBERT, an adaptable transformer-based framework integrating
space group, elemental, and unit cell information. This novel structure can
seamlessly combine diverse features and accurately predict various physical
properties, including topological properties, superconducting transition
temperatures, dielectric constants, and more. CrystalBERT provides insightful
interpretations of features influencing target properties. Our results indicate
that space group and elemental information are crucial for predicting
topological and superconducting properties, underscoring their intricate
nature. By incorporating these features, we achieve 91\% accuracy in
topological classification, surpassing prior studies and identifying previously
misclassified materials. This research demonstrates that integrating diverse
material information enhances the prediction of complex material properties,
paving the way for more accurate and interpretable machine learning models in
materials science.
|
[{'version': 'v1', 'created': 'Wed, 29 May 2024 09:56:00 GMT'}, {'version': 'v2', 'created': 'Fri, 13 Dec 2024 06:23:03 GMT'}]
|
2024-12-16
|
Harveen Kaur, Flaviano Della Pia, Ilyes Batatia, Xavier R. Advincula,
Benjamin X. Shi, Jinggang Lan, G\'abor Cs\'anyi, Angelos Michaelides, and
Venkat Kapil
|
Data-efficient fine-tuning of foundational models for first-principles
quality sublimation enthalpies
| null | null | null |
cond-mat.mtrl-sci physics.chem-ph
|
Calculating sublimation enthalpies of molecular crystal polymorphs is
relevant to a wide range of technological applications. However, predicting
these quantities at first-principles accuracy -- even with the aid of machine
learning potentials -- is a challenge that requires sub-kJ/mol accuracy in the
potential energy surface and finite-temperature sampling. We present an
accurate and data-efficient protocol based on fine-tuning of the foundational
MACE-MP-0 model and showcase its capabilities on sublimation enthalpies and
physical properties of ice polymorphs. Our approach requires only a few tens of
training structures to achieve sub-kJ/mol accuracy in the sublimation
enthalpies and sub-1% error in densities for polymorphs at finite temperature
and pressure. Exploiting this data efficiency, we explore simulations of
hexagonal ice at the random phase approximation level of theory at experimental
temperatures and pressures, calculating its physical properties, like pair
correlation function and density, with good agreement with experiments. Our
approach provides a way forward for predicting the stability of molecular
crystals at finite thermodynamic conditions with the accuracy of correlated
electronic structure theory.
|
[{'version': 'v1', 'created': 'Thu, 30 May 2024 16:18:29 GMT'}]
|
2024-05-31
|
Malte Grunert, Max Gro{\ss}mann, Erich Runge
|
Deep learning of spectra: Predicting the dielectric function of
semiconductors
|
Phys. Rev. Materials 8, L122201 (2024)
|
10.1103/PhysRevMaterials.8.L122201
| null |
cond-mat.mtrl-sci
|
Predicting spectra and related properties such as the dielectric function of
crystalline materials based on machine learning has huge, hitherto unexplored
technological potential. For this reason, we create an ab initio
database of 9915 dielectric tensors of semiconductors and insulators calculated
in the independent-particle approximation (IPA). In addition, we present the
OptiMate family of machine learning models, a series of graph attention neural
networks (GAT) trained to predict the dielectric function and refractive index.
OptiMate yields accurate prediction of spectra of semiconductors using only
their crystal structure. Smooth, artifact-free curves are obtained without
these properties being enforced by penalties.
|
[{'version': 'v1', 'created': 'Wed, 12 Jun 2024 13:21:29 GMT'}, {'version': 'v2', 'created': 'Fri, 20 Dec 2024 12:39:13 GMT'}]
|
2024-12-23
|
Huazhang Zhang, Hao-Cheng Thong, Louis Bastogne, Churen Gui, Xu He,
Philippe Ghosez
|
Finite-temperature properties of antiferroelectric perovskite $\rm
PbZrO_3$ from deep learning interatomic potential
| null | null | null |
cond-mat.mtrl-sci
|
The prototypical antiferroelectric perovskite $\rm PbZrO_3$ (PZO) has
garnered considerable attention in recent years due to its significance in
technological applications and fundamental research. Many unresolved issues in
PZO are associated with large length- and time-scales, as well as finite
temperatures, presenting significant challenges for first-principles density
functional theory studies. Here, we introduce a deep learning interatomic
potential of PZO, enabling investigation of finite-temperature properties
through large-scale atomistic simulations. Trained using an elaborately
designed dataset, the model successfully reproduces a large number of phases,
in particular, the recently discovered 80-atom antiferroelectric $Pnam$ phase
and ferrielectric $Ima2$ phase, providing precise predictions for their
structural and dynamical properties. Using this model, we investigated phase
transitions of multiple phases, including $Pbam$/$Pnam$, $Ima2$ and $R3c$,
which show high similarity to experimental observations. Our simulation
results also highlight the crucial role of free-energy in determining the
low-temperature phase of PZO, reconciling the apparent contradiction: $Pbam$ is
the most commonly observed phase in experiments, while theoretical calculations
predict other phases exhibiting even lower energy. Furthermore, in the
temperature range where the $Pbam$ phase is thermodynamically stable, typical
double polarization hysteresis loops for antiferroelectrics were obtained,
along with a detailed elucidation of the structural evolution during the
electric-field induced transitions between the non-polar $Pbam$ and polar $R3c$
phases.
|
[{'version': 'v1', 'created': 'Thu, 13 Jun 2024 11:32:16 GMT'}, {'version': 'v2', 'created': 'Wed, 31 Jul 2024 10:22:29 GMT'}, {'version': 'v3', 'created': 'Wed, 21 Aug 2024 11:51:17 GMT'}]
|
2024-08-22
|
Davi M F\'ebba, Kingsley Egbo, William A. Callahan, Andriy Zakutayev
|
From Text to Test: AI-Generated Control Software for Materials Science
Instruments
| null |
10.1039/D4DD00143E
| null |
cond-mat.mtrl-sci cs.AI
|
Large language models (LLMs) are transforming the landscape of chemistry and
materials science. Recent examples of LLM-accelerated experimental research
include virtual assistants for parsing synthesis recipes from the literature,
or using the extracted knowledge to guide synthesis and characterization.
Despite these advancements, their application is constrained to labs with
automated instruments and control software, leaving much of materials science
reliant on manual processes. Here, we demonstrate the rapid deployment of a
Python-based control module for a Keithley 2400 electrical source measure unit
using ChatGPT-4. Through iterative refinement, we achieved effective instrument
management with minimal human intervention. Additionally, a user-friendly
graphical user interface (GUI) was created, effectively linking all instrument
controls to interactive screen elements. Finally, we integrated this AI-crafted
instrument control software with a high-performance stochastic optimization
algorithm to facilitate rapid and automated extraction of electronic device
parameters related to semiconductor charge transport mechanisms from
current-voltage (IV) measurement data. This integration resulted in a
comprehensive open-source toolkit for semiconductor device characterization and
analysis using IV curve measurements. We demonstrate the application of these
tools by acquiring, analyzing, and parameterizing IV data from a
Pt/Cr$_2$O$_3$:Mg/$\beta$-Ga$_2$O$_3$ heterojunction diode, a novel stack for
high-power and high-temperature electronic devices. This approach underscores
the powerful synergy between LLMs and the development of instruments for
scientific inquiry, showcasing a path for further acceleration in materials
science.
|
[{'version': 'v1', 'created': 'Sun, 23 Jun 2024 21:32:57 GMT'}, {'version': 'v2', 'created': 'Tue, 25 Jun 2024 11:34:15 GMT'}]
|
2024-11-12
|
Nguyen Tuan Hung, Ryotaro Okabe, Abhijatmedhi Chotrattanapituk, Mingda
Li
|
Ensemble-Embedding Graph Neural Network for Direct Prediction of Optical
Spectra from Crystal Structure
| null | null | null |
cond-mat.mtrl-sci physics.app-ph
|
Optical properties in solids, such as refractive index and absorption, hold
vast applications ranging from solar panels to sensors, photodetectors, and
transparent displays. However, first-principles computation of optical
properties from crystal structures is a complex task due to the high
convergence criteria and computational cost. Recent progress in machine
learning shows promise in predicting material properties, yet predicting
optical properties from crystal structures remains challenging due to the lack
of efficient atomic embeddings. Here, we introduce GNNOpt, an equivariant
graph neural network architecture featuring automatic embedding optimization.
This enables high-quality optical predictions with a dataset of only 944
materials. GNNOpt predicts all optical properties based on the
Kramers-Kronig relations, including the absorption coefficient, complex
dielectric function, complex refractive index, and reflectance. We apply the
trained model to screen photovoltaic materials based on spectroscopic limited
maximum efficiency and search for quantum materials based on quantum weight.
First-principles calculations validate the efficacy of the GNNOpt model,
demonstrating excellent agreement in predicting the optical spectra of unseen
materials. The discovery of new quantum materials with high predicted quantum
weight, such as SiOs which hosts exotic quasiparticles, demonstrates GNNOpt's
potential in predicting optical properties across a broad range of materials
and applications.
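The optical quantities this abstract lists are linked by standard textbook relations; as an illustrative sketch (not the GNNOpt code), the complex refractive index, absorption coefficient, and normal-incidence reflectance can be recovered from the complex dielectric function:

```python
import numpy as np

def optical_constants(eps1, eps2, omega):
    """Standard optics relations linking the quantities mentioned above:
    complex dielectric function -> refractive index, absorption, reflectance."""
    eps_mod = np.sqrt(eps1**2 + eps2**2)            # |epsilon|
    n = np.sqrt((eps_mod + eps1) / 2.0)             # real refractive index
    k = np.sqrt((eps_mod - eps1) / 2.0)             # extinction coefficient
    c = 2.99792458e8                                # speed of light (m/s)
    alpha = 2.0 * omega * k / c                     # absorption coefficient (1/m)
    R = ((n - 1)**2 + k**2) / ((n + 1)**2 + k**2)   # normal-incidence reflectance
    return n, k, alpha, R

# Example: a lossless medium with eps = 4 gives n = 2, k = 0, R = 1/9.
n, k, alpha, R = optical_constants(4.0, 0.0, 1.0e15)
```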
|
[{'version': 'v1', 'created': 'Mon, 24 Jun 2024 14:02:29 GMT'}]
|
2024-06-25
|
Zechen Tang, Nianlong Zou, He Li, Yuxiang Wang, Zilong Yuan, Honggeng
Tao, Yang Li, Zezhou Chen, Boheng Zhao, Minghui Sun, Hong Jiang, Wenhui Duan,
Yong Xu
|
Improving density matrix electronic structure method by deep learning
| null | null | null |
physics.comp-ph cond-mat.mtrl-sci
|
The combination of deep learning and ab initio materials calculations is
emerging as a trending frontier of materials science research, with
deep-learning density functional theory (DFT) electronic structure being
particularly promising. In this work, we introduce a neural-network method for
modeling the DFT density matrix, a fundamental yet previously unexplored
quantity in deep-learning electronic structure. Utilizing an advanced neural
network framework that leverages the nearsightedness and equivariance
properties of the density matrix, the method demonstrates high accuracy and
excellent generalizability in multiple example studies, as well as capability
to precisely predict charge density and reproduce other electronic structure
properties. Given the pivotal role of the density matrix in DFT as well as
other computational methods, the current research introduces a novel approach
to the deep-learning study of electronic structure properties, opening up new
opportunities for deep-learning enhanced computational materials study.
|
[{'version': 'v1', 'created': 'Tue, 25 Jun 2024 13:55:40 GMT'}]
|
2024-06-26
|
Michael Moran, Vladimir V. Gusev, Michael W. Gaultois, Dmytro Antypov,
Matthew J. Rosseinsky
|
Establishing Deep InfoMax as an effective self-supervised learning
methodology in materials informatics
| null | null | null |
cs.LG cond-mat.mtrl-sci
|
The scarcity of property labels remains a key challenge in materials
informatics, whereas materials data without property labels are abundant in
comparison. By pretraining supervised property prediction models on
self-supervised tasks that depend only on the "intrinsic information" available
in any Crystallographic Information File (CIF), there is potential to leverage
the large amount of crystal data without property labels to improve property
prediction results on small datasets. We apply Deep InfoMax as a
self-supervised machine learning framework for materials informatics that
explicitly maximises the mutual information between a point set (or graph)
representation of a crystal and a vector representation suitable for downstream
learning. This allows the pretraining of supervised models on large materials
datasets without the need for property labels and without requiring the model
to reconstruct the crystal from a representation vector. We investigate the
benefits of Deep InfoMax pretraining implemented on the Site-Net architecture
to improve the performance of downstream property prediction models with small
amounts (<10^3) of data, a situation relevant to experimentally measured
materials property databases. Using a property label masking methodology, where
we perform self-supervised learning on larger supervised datasets and then
train supervised models on a small subset of the labels, we isolate Deep
InfoMax pretraining from the effects of distributional shift. We demonstrate
performance improvements in the contexts of representation learning and
transfer learning on the tasks of band gap and formation energy prediction.
Having established the effectiveness of Deep InfoMax pretraining in a
controlled environment, our findings provide a foundation for extending the
approach to address practical challenges in materials informatics.
|
[{'version': 'v1', 'created': 'Sun, 30 Jun 2024 11:33:49 GMT'}]
|
2024-07-02
|
Somnath Bharech, Yangyiwei Yang, Michael Selzer, Britta Nestler,
Bai-Xiang Xu
|
ML-extendable framework for multiphysics-multiscale simulation workflow
and data management using Kadi4Mat
| null | null | null |
cond-mat.mtrl-sci
|
As material modeling and simulation has become vital for modern materials
science, research data with distinctive physical principles and extensive
volume are generally required for full elucidation of the material behavior
across all relevant scales. Effective workflow and data management, with
corresponding metadata descriptions, helps leverage the full potential of
data-driven analyses for computer-aided material design. In this work, we
propose a research workflow and data management (RWDM) framework to manage
complex workflows and resulting research (meta)data, while following FAIR
principles. Multiphysics multiscale simulations for additive manufacturing
investigations are treated as showcase and implemented on Kadi4Mat: an open
source research data infrastructure. The input and output data of the
simulations, together with the associated setups and scripts realizing the
simulation workflow, are curated in corresponding standardized Kadi4Mat records
with extendibility for further research and data-driven analyses. These records
are interlinked to indicate information flow and form an ontology-based
knowledge graph. An automation scheme for high-throughput simulation and
post-processing, integrated with the proposed RWDM framework, is also
presented.
|
[{'version': 'v1', 'created': 'Tue, 2 Jul 2024 11:13:41 GMT'}]
|
2024-07-03
|
Seifallah Elfetni and Reza Darvishi Kamachali
|
PINNs-MPF: A Physics-Informed Neural Network Framework for
Multi-Phase-Field Simulation of Interface Dynamics
| null | null | null |
cond-mat.mtrl-sci physics.comp-ph
|
We present an application of Physics-Informed Neural Networks (PINNs) to
Multi-Phase-Field (MPF) simulations of microstructure evolution. We show that a
combination of optimization techniques extended and adapted from the PINN
literature, together with specific techniques inspired by the MPF method, is
required. The numerical resolution is realized
through a multi-variable time-series problem by using fully discrete
resolution. Within each interval, space, time, and phases are treated
separately, constituting discrete subdomains. An extended multi-networking
concept is implemented to subdivide the simulation domain into multiple
batches, with each batch associated with an independent Neural Network trained
to predict the solution. To ensure efficient interaction across different
phases and in the spatio-temporal-phasic subdomain, a master NN coordinates the
multiple networks and the transfer of learning in different directions. A set
of systematic simulations of increasing complexity was performed, benchmarking
various critical aspects of MPF simulations, including different geometries,
types of interface dynamics, and the evolution of an interfacial triple
junction. A comprehensive approach
is adopted to specifically focus the attention on the interfacial regions
through an automatic and dynamic meshing process, significantly simplifying the
tuning of hyper-parameters and serving as a fundamental key for addressing MPF
problems using Machine Learning. The pyramidal training approach is proposed to
the PINN community as a dual-impact method: it facilitates the initialization
of training and allows an extended transfer of learning. The proposed PINNs-MPF
framework successfully reproduces benchmark tests with high fidelity and Mean
Squared Error loss values ranging from 10$^{-4}$ to 10$^{-6}$ compared to
ground truth solutions.
|
[{'version': 'v1', 'created': 'Tue, 2 Jul 2024 12:55:01 GMT'}, {'version': 'v2', 'created': 'Fri, 30 Aug 2024 18:07:34 GMT'}]
|
2024-09-04
|
Ji Wei Yoon, Bangjian Zhou, J Senthilnath
|
SG-NNP: Species-separated Gaussian Neural Network Potential with Linear
Elemental Scaling and Optimized Dimensions for Multi-component Materials
| null | null | null |
cond-mat.mtrl-sci
|
Accurate simulations of materials at long-time and large-length scales have
increasingly been enabled by Machine-learned Interatomic Potentials (MLIPs).
There has been increasing interest in improving the robustness of such models.
To this end, we engineer a novel set of Gaussian-type descriptors that scale
linearly with the number of atoms and reduce informational degeneracy for
multi-component atomic environments, and we apply them in Species-separated
Gaussian Neural Network Potentials (SG-NNPs). The robustness of our method was
tested by analyzing the impact of various design choices and hyperparameters on
Molybdenum (Mo) SG-NNP performance during training and inference/simulation.
With fewer dimensions, SG-NNPs are shown to yield more accurate atomic force
and total energy predictions than other traditional and ML descriptor-based
interatomic potentials on a diverse set of materials: Ni, Cu, Li, Mo, Si, Ge,
NiMo, Li3N, and NbMoTaW. These results indicate that the proposed method
improves the performance of atomic descriptors for complex environments with
multiple species.
|
[{'version': 'v1', 'created': 'Tue, 9 Jul 2024 07:46:34 GMT'}]
|
2024-07-10
|
Zhilong Song, Shuaihua Lu, Minggang Ju, Qionghua Zhou and Jinlan Wang
|
Is Large Language Model All You Need to Predict the Synthesizability and
Precursors of Crystal Structures?
| null | null | null |
cond-mat.mtrl-sci
|
Assessing the synthesizability of crystal structures is pivotal for advancing
the practical application of theoretical material structures designed by
machine learning or high-throughput screening. However, a significant gap
exists between the actual synthesizability and thermodynamic or kinetic
stability, which is commonly used for screening theoretical structures for
experiments. To address this, we develop the Crystal Synthesis Large Language
Models (CSLLM) framework, which includes three LLMs for predicting the
synthesizability, synthesis methods, and precursors. We create a comprehensive
synthesizability dataset including 140,120 crystal structures and develop an
efficient text representation method for crystal structures to fine-tune the
LLMs. The Synthesizability LLM achieves a remarkable 98.6% accuracy,
significantly outperforming traditional synthesizability screening based on
thermodynamic and kinetic stability by 106.1% and 44.5%, respectively. The
Methods LLM achieves a classification accuracy of 91.02%, and the Precursors
LLM has an 80.2% success rate in predicting synthesis precursors. Furthermore,
we develop a user-friendly graphical interface that enables automatic
predictions of synthesizability and precursors from uploaded crystal structure
files. Through these contributions, CSLLM bridges the gap between theoretical
material design and experimental synthesis, paving the way for the rapid
discovery of novel and synthesizable functional materials.
|
[{'version': 'v1', 'created': 'Tue, 9 Jul 2024 16:35:12 GMT'}]
|
2024-07-10
|
Joseph Musielewicz, Janice Lan, Matt Uyttendaele, and John R. Kitchin
|
Improved Uncertainty Estimation of Graph Neural Network Potentials Using
Engineered Latent Space Distances
| null | null | null |
cs.LG cond-mat.mtrl-sci
|
Graph neural networks (GNNs) have been shown to be astonishingly capable
models for molecular property prediction, particularly as surrogates for
expensive density functional theory calculations of relaxed energy for novel
material discovery. However, one limitation of GNNs in this context is the lack
of useful uncertainty prediction methods, as this is critical to the material
discovery pipeline. In this work, we show that uncertainty quantification for
relaxed energy calculations is more complex than uncertainty quantification for
other kinds of molecular property prediction, due to the effect that structure
optimizations have on the error distribution. We propose that distribution-free
techniques are more useful tools for assessing calibration, recalibrating, and
developing uncertainty prediction methods for GNNs performing relaxed energy
calculations. We also develop a relaxed energy task for evaluating uncertainty
methods for equivariant GNNs, based on distribution-free recalibration and
using the Open Catalyst Project dataset. We benchmark a set of popular
uncertainty prediction methods on this task, and show that latent distance
methods, with our novel improvements, are the most well-calibrated and
economical approach for relaxed energy calculations. Finally, we demonstrate
that our latent space distance method produces results which align with our
expectations on a clustering example, and on specific equation of state and
adsorbate coverage examples from outside the training dataset.
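As a minimal illustrative sketch of the general idea behind latent-distance uncertainty (using synthetic latent vectors, not the paper's method or data), a test point far from the training set in latent space is assigned a higher uncertainty score:

```python
import numpy as np

# Synthetic stand-in for training-set latent vectors from a GNN encoder.
rng = np.random.default_rng(0)
train_latent = rng.normal(0.0, 1.0, size=(500, 8))

def latent_distance_uncertainty(z, train, k=10):
    """Mean distance to the k nearest training latents as an uncertainty proxy."""
    d = np.linalg.norm(train - z, axis=1)
    return np.sort(d)[:k].mean()

# An in-distribution point scores low; a far-away (OOD-like) point scores high.
in_dist = latent_distance_uncertainty(np.zeros(8), train_latent)
out_dist = latent_distance_uncertainty(np.full(8, 5.0), train_latent)
```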
|
[{'version': 'v1', 'created': 'Mon, 15 Jul 2024 15:59:39 GMT'}, {'version': 'v2', 'created': 'Mon, 26 Aug 2024 17:31:16 GMT'}]
|
2024-08-27
|
Erwin Cazares and Brian E. Schuster
|
Deep Learning for Quantitative Dynamic Fragmentation Analysis
| null | null | null |
cond-mat.mtrl-sci
|
We have developed an image-based convolutional neural network (CNN) that is
applicable for quantitative time-resolved measurements of the fragmentation
behavior of opaque brittle materials using ultra-high speed optical imaging.
This model extends previous work on the U-net architecture, where we trained
binary, 3-class, and 5-class models using supervised learning on experimentally measured dynamic
fracture experiments on various opaque structural ceramic materials that were
adhered on transparent polymer (polycarbonate or acrylic) backing materials.
Full details of the experimental investigations are outside the scope of this
manuscript but briefly, several different ceramics were loaded using spatially
and time-varying mechanical loads to induce inelastic deformation and fracture
processes that were recorded at frequencies as high as 5 MHz using high speed
optical imaging. These experiments provided a rich and diverse dataset that
includes many of the common fracture modes found in static and dynamic fracture
including cone cracking, median cracking, comminution, and combined complex
failure modes that involve effectively simultaneous activation and propagation
of multiple fragmentation modes. While the training data presented here were
obtained from dynamic fragmentation experiments, this study is also applicable
to static loading of these materials, as crack speeds in these materials are on
the order of 1-10 km/s regardless of the loading rate. We believe the
methodologies presented here
will be useful in quantifying the failure processes in structural materials for
protection applications and can be used for direct validation of engineering
models used in design.
|
[{'version': 'v1', 'created': 'Wed, 17 Jul 2024 19:35:57 GMT'}]
|
2024-07-19
|
Zilong Yuan, Zechen Tang, Honggeng Tao, Xiaoxun Gong, Zezhou Chen,
Yuxiang Wang, He Li, Yang Li, Zhiming Xu, Minghui Sun, Boheng Zhao, Chong
Wang, Wenhui Duan, Yong Xu
|
Deep learning density functional theory Hamiltonian in real space
| null | null | null |
physics.comp-ph cond-mat.mtrl-sci
|
Deep learning electronic structures from ab initio calculations holds great
potential to revolutionize computational materials studies. While existing
methods proved success in deep-learning density functional theory (DFT)
Hamiltonian matrices, they are limited to DFT programs using localized
atomic-like bases and heavily depend on the form of the bases. Here, we propose
the DeepH-r method for deep-learning DFT Hamiltonians in real space,
facilitating the prediction of DFT Hamiltonian in a basis-independent manner.
An equivariant neural network architecture for modeling the real-space DFT
potential is developed, targeting a more fundamental quantity in DFT. The
real-space potential exhibits simplified principles of equivariance and
enhanced nearsightedness, further boosting the performance of deep learning.
When applied to evaluate the Hamiltonian matrix, this method achieves
significantly improved accuracy, as exemplified in multiple case studies. Given the
abundance of data in the real-space potential, this work may pave a novel
pathway for establishing a ``large materials model" with increased accuracy.
|
[{'version': 'v1', 'created': 'Fri, 19 Jul 2024 15:07:22 GMT'}]
|
2024-07-22
|
Nihang Fu, Sadman Sadeed Omee, Jianjun Hu
|
Physical Encoding Improves OOD Performance in Deep Learning Materials
Property Prediction
| null | null | null |
cond-mat.mtrl-sci
|
Deep learning (DL) models have been widely used in materials property
prediction with great success, especially for properties with large datasets.
However, the out-of-distribution (OOD) performance of such models is
questionable, especially when the training set is not large enough. Here we
showed that using physical encoding rather than the widely used one-hot
encoding can significantly improve the OOD performance by increasing the
models' generalization performance, which is especially true for models trained
with small datasets. Our benchmark results of both composition- and
structure-based deep learning models over six datasets including formation
energy, band gap, refractive index, and elastic properties predictions
demonstrated the importance of physical encoding to OOD generalization for
models trained on small datasets.
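The contrast between one-hot and physical encoding can be sketched as follows; the element property values are standard textbook numbers, and the composition-averaging helper is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

elements = ["H", "O", "Fe"]

# One-hot: each element is an orthogonal, information-free index.
one_hot = {el: np.eye(len(elements))[i] for i, el in enumerate(elements)}

# Physical encoding: atomic number, Pauling electronegativity, atomic mass.
physical = {
    "H":  np.array([1,  2.20,  1.008]),
    "O":  np.array([8,  3.44, 15.999]),
    "Fe": np.array([26, 1.83, 55.845]),
}

def composition_feature(formula_counts, encoding):
    """Composition-averaged feature vector, as composition-based models use."""
    total = sum(formula_counts.values())
    return sum(n * encoding[el] for el, n in formula_counts.items()) / total

# Fe2O3: averaged atomic number (2*26 + 3*8)/5 = 15.2, and so on per column.
feat = composition_feature({"Fe": 2, "O": 3}, physical)
```

Physical features place chemically similar elements near each other in feature space, which is the property the abstract credits for better OOD generalization.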
|
[{'version': 'v1', 'created': 'Sun, 21 Jul 2024 16:40:28 GMT'}]
|
2024-07-23
|
Alexander Gorfer and David Heuser and Rainer Abart and Christoph
Dellago
|
Thermodynamics of alkali feldspar solid solutions with varying Al-Si
order: atomistic simulations using a neural network potential
| null | null | null |
cond-mat.mtrl-sci physics.comp-ph physics.geo-ph
|
The thermodynamic mixing properties of alkali feldspar solid solutions
between the Na and K end members were computed through atomistic simulations
using a neural network potential. We performed combined molecular dynamics and
Monte Carlo simulations in the semi-grand canonical ensemble at 800 {\deg}C and
considered three quenched disorder states in the Al-Si-O framework ranging from
fully ordered to fully disordered. The excess Gibbs energy of mixing, excess
enthalpy of mixing and excess entropy of mixing are in good agreement with
literature data. In particular, the notion that increasing disorder in the
Al-Si-O framework correlates with increasing ideality of Na-K mixing is
successfully predicted. Finally, a recently proposed short range ordering of Na
and K in the alkali sublattice is observed, which may be considered as a
precursor to exsolution lamellae, a characteristic phenomenon in alkali
feldspar of intermediate composition leading to perthite formation during
cooling.
|
[{'version': 'v1', 'created': 'Wed, 24 Jul 2024 17:34:03 GMT'}]
|
2024-07-25
|
Suchona Akter, Yong Li, Minbum Kim, Md Omar Faruque, Zhonghua Peng,
Praveen K. Thallapally, and Mohammad R. Momeni
|
Fine-tuning Microporosity of Crystalline Vanadomolybdate Frameworks for
Selective Adsorptive Separation of Kr from Xe
|
Langmuir 2024 40 (47), 24934-24944
|
10.1021/acs.langmuir.4c02910
| null |
cond-mat.mtrl-sci
|
Selective adsorptive capture and separation of chemically inert Kr and Xe
noble gases with very low ppmv concentrations in air and industrial off-gases
constitute an important technological challenge. Here, using a synergistic
combination of experiment and theory, microporous crystalline vanadomolybdates
(MoVOx) are studied in detail as highly selective Kr sorbents.
By varying the Mo/V ratios, we show for the first time that their
one-dimensional pores can be fine-tuned for the size-selective adsorption of Kr
over the larger Xe with selectivities reaching >100. Using extensive electronic
structure calculations and grand canonical Monte-Carlo simulations, the
competition between Kr uptake with CO2 and N2 was also investigated. As most
materials reported so far are selective toward the larger, more polarizable Xe
than Kr, this work constitutes an important step toward robust Kr-selective
sorbent materials. This work highlights the potential use of porous crystalline
transition metal oxides as energy-efficient and selective noble gas capture
sorbents for industrial applications.
|
[{'version': 'v1', 'created': 'Sat, 27 Jul 2024 12:54:17 GMT'}]
|
2025-05-08
|
Zihan Wang, Anindya Bhaduri, Hongyi Xu, Liping Wang
|
An Uncertainty-aware Deep Learning Framework-based Robust Design
Optimization of Metamaterial Units
| null | null | null |
eess.SP cond-mat.mtrl-sci cs.LG
|
Mechanical metamaterials represent an innovative class of artificial
structures, distinguished by their extraordinary mechanical characteristics,
which are beyond the scope of traditional natural materials. The use of deep
generative models has become increasingly popular in the design of metamaterial
units. The effectiveness of using deep generative models lies in their capacity
to compress complex input data into a simplified, lower-dimensional latent
space, while also enabling the creation of novel optimal designs through
sampling within this space. However, the design process does not take into
account the effect of model uncertainty due to data sparsity or the effect of
input data uncertainty due to inherent randomness in the data. This might lead
to the generation of undesirable structures with high sensitivity to the
uncertainties in the system. To address this issue, a novel uncertainty-aware
deep learning framework-based robust design approach is proposed for the design
of metamaterial units with optimal target properties. The proposed approach
utilizes the probabilistic nature of the deep learning framework and quantifies
both aleatoric and epistemic uncertainties associated with surrogate-based
design optimization. We demonstrate that the proposed design approach is
capable of designing high-performance metamaterial units with high reliability.
To showcase the effectiveness of the proposed design approach, a
single-objective design optimization problem and a multi-objective design
optimization problem are presented. The optimal robust designs obtained are
validated by comparing them to the designs obtained from the topology
optimization method as well as the designs obtained from a deterministic deep
learning framework-based design optimization where none of the uncertainties in
the system are explicitly considered.
|
[{'version': 'v1', 'created': 'Fri, 19 Jul 2024 22:21:27 GMT'}]
|
2024-07-31
|
Christian Venturella, Jiachen Li, Christopher Hillenbrand, Ximena
Leyva Peralta, Jessica Liu, Tianyu Zhu
|
Unified Deep Learning Framework for Many-Body Quantum Chemistry via
Green's Functions
| null | null | null |
physics.chem-ph cond-mat.mtrl-sci physics.comp-ph
|
Quantum many-body methods provide a systematic route to computing electronic
properties of molecules and materials, but high computational costs restrict
their use in large-scale applications. Due to the complexity of many-electron
wavefunctions, machine learning models capable of capturing fundamental
many-body physics remain limited. Here, we present a deep learning framework
targeting the many-body Green's function, which unifies predictions of
electronic properties in ground and excited states, while offering deep
physical insights into electron correlation effects. By learning the $GW$ or
coupled-cluster self-energy from mean-field features, our graph neural network
achieves competitive performance in predicting one- and two-particle
excitations and quantities derivable from one-particle density matrix. We
demonstrate its high data efficiency and good transferability across chemical
species, system sizes, molecular conformations, and correlation strengths in
bond breaking, through multiple molecular and nanomaterial benchmarks. This
work opens up new opportunities for utilizing machine learning to solve
many-electron problems.
|
[{'version': 'v1', 'created': 'Mon, 29 Jul 2024 19:20:52 GMT'}]
|
2024-07-31
|
Isaiah A. Moses, Wesley F. Reinhart
|
Transfer Learning for Multi-material Classification of Transition Metal
Dichalcogenides with Atomic Force Microscopy
| null | null | null |
cond-mat.mtrl-sci physics.comp-ph
|
Deep learning models are widely used for the data-driven design of materials
based on atomic force microscopy (AFM) and other scanning probe microscopies.
These tools enhance efficiency in the inverse design and characterization of
materials. However, the limited and imbalanced experimental materials data
typically available are a major challenge. Also important is the need to
interpret trained models, which are typically complex enough to be
uninterpretable by humans. Here, we present a systematic evaluation of transfer
learning strategies to accommodate low-data scenarios in materials synthesis
and a model latent feature analysis to draw connections to the
human-interpretable characteristics of the samples. Our models show accurate
predictions in five classes of transition metal dichalcogenides (TMDs)
(MoS$_2$, WS$_2$, WSe$_2$, MoSe$_2$, and Mo-WSe$_2$) with up to 89$\%$ accuracy
on held-out test samples. Analysis of the latent features reveals a correlation
with physical characteristics such as grain density, DoG blob, and local
variation. The transfer learning optimization modality and the exploration of
the correlation between the latent and physical features provide important
frameworks that can be applied to other classes of materials beyond TMDs to
enhance the models' performance and explainability which can accelerate the
inverse design of materials for technological applications.
|
[{'version': 'v1', 'created': 'Tue, 30 Jul 2024 17:06:42 GMT'}, {'version': 'v2', 'created': 'Tue, 10 Dec 2024 22:27:58 GMT'}]
|
2024-12-12
|
Shunya Minami, Yoshihiro Hayashi, Stephen Wu, Kenji Fukumizu, Hiroki
Sugisawa, Masashi Ishii, Isao Kuwajima, Kazuya Shiratori, Ryo Yoshida
|
Scaling Law of Sim2Real Transfer Learning in Expanding Computational
Materials Databases for Real-World Predictions
| null | null | null |
cond-mat.mtrl-sci cs.LG
|
To address the challenge of limited experimental materials data, extensive
physical property databases are being developed based on high-throughput
computational experiments, such as molecular dynamics simulations. Previous
studies have shown that fine-tuning a predictor pretrained on a computational
database to a real system can result in models with outstanding generalization
capabilities compared to learning from scratch. This study demonstrates the
scaling law of simulation-to-real (Sim2Real) transfer learning for several
machine learning tasks in materials science. Case studies of three prediction
tasks for polymers and inorganic materials reveal that the prediction error on
real systems decreases according to a power-law as the size of the
computational data increases. Observing the scaling behavior offers various
insights for database development, such as determining the sample size
necessary to achieve a desired performance, identifying equivalent sample sizes
for physical and computational experiments, and guiding the design of data
production protocols for downstream real-world tasks.
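The power-law scaling described above can be sketched with hypothetical numbers (not the paper's data): fitting error versus computational dataset size in log-log space yields the exponent, which can then be used to estimate the sample size needed for a target error.

```python
import numpy as np

# Hypothetical Sim2Real data: prediction error on a real-world task vs.
# size N of the computational (simulation) training database.
N = np.array([1e2, 1e3, 1e4, 1e5])
err = np.array([0.80, 0.40, 0.20, 0.10])  # error halves per decade -> power law

# Fit err ~ a * N**(-b) by linear regression in log-log space.
slope, log_a = np.polyfit(np.log(N), np.log(err), 1)
a, b = np.exp(log_a), -slope

# Extrapolate: sample size needed to reach a target error of 0.05.
target = 0.05
N_needed = (a / target) ** (1.0 / b)
```

With error halving per decade of data, the fitted exponent is b = log10(2) and the extrapolation lands at N = 1e6, continuing the pattern of the synthetic points.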
|
[{'version': 'v1', 'created': 'Wed, 7 Aug 2024 18:47:58 GMT'}]
|
2024-08-09
|
Ali Riza Durmaz, Akhil Thomas, Lokesh Mishra, Rachana Niranjan Murthy,
Thomas Straub
|
MaterioMiner -- An ontology-based text mining dataset for extraction of
process-structure-property entities
| null | null | null |
cs.CL cond-mat.mtrl-sci
|
While large language models learn sound statistical representations of the
language and information therein, ontologies are symbolic knowledge
representations that can complement the former ideally. Research at this
critical intersection relies on datasets that intertwine ontologies and text
corpora to enable training and comprehensive benchmarking of neurosymbolic
models. We present the MaterioMiner dataset and the linked materials mechanics
ontology where ontological concepts from the mechanics of materials domain are
associated with textual entities within the literature corpus. Another
distinctive feature of the dataset is its exceptionally fine-grained annotation.
Specifically, 179 distinct classes are manually annotated by three raters
within four publications, amounting to a total of 2191 entities that were
annotated and curated. Conceptual work is presented for the symbolic
representation of causal composition-process-microstructure-property
relationships. We explore the annotation consistency between the three raters
and perform fine-tuning of pre-trained models to showcase the feasibility of
named-entity recognition model training. Reusing the dataset can foster
training and benchmarking of materials language models, automated ontology
construction, and knowledge graph generation from textual data.
|
[{'version': 'v1', 'created': 'Mon, 5 Aug 2024 21:42:59 GMT'}]
|
2024-08-12
|
A. K. Shargh, C. D. Stiles, J. A. El-Awady
|
Deep Learning Accelerated Phase Prediction of Refractory Multi-Principal
Element Alloys
| null | null | null |
cond-mat.mtrl-sci
|
The tunability of the mechanical properties of refractory
multi-principal-element alloys (RMPEAs) makes them attractive for numerous
high-temperature applications. It is well established that the phase stability
of RMPEAs controls their mechanical properties. In this study, we develop a
deep learning framework, trained on a CALPHAD-derived database, that predicts
RMPEA phases (up to eight phases) within the elemental space of Ti, Fe, Al, V,
Ni, Nb, and Zr with an accuracy of approximately 90%. We further investigate
the causes of the poor out-of-domain performance of the deep learning models in
predicting phases of RMPEAs with new elemental sets and propose a strategy to
mitigate this shortfall.
|
[{'version': 'v1', 'created': 'Mon, 12 Aug 2024 15:42:52 GMT'}]
|
2024-08-13
|
Yan Chen, Xueru Wang, Xiaobin Deng, Yilun Liu, Xi Chen, Yunwei Zhang,
Lei Wang, Hang Xiao
|
MatterGPT: A Generative Transformer for Multi-Property Inverse Design of
Solid-State Materials
| null | null | null |
cond-mat.mtrl-sci physics.comp-ph
|
Inverse design of solid-state materials with desired properties represents a
formidable challenge in materials science. Although recent generative models
have demonstrated potential, their adoption has been hindered by limitations
such as inefficiency, architectural constraints and restricted open-source
availability. The representation of crystal structures using the SLICES
(Simplified Line-Input Crystal-Encoding System) notation as a string of
characters enables the use of state-of-the-art natural language processing
models, such as Transformers, for crystal design. Drawing inspiration from the
success of GPT models in generating coherent text, we trained a generative
Transformer on the next-token prediction task to generate solid-state materials
with targeted properties. We demonstrate MatterGPT's capability to generate de
novo crystal structures with targeted single properties, including both
lattice-insensitive (formation energy) and lattice-sensitive (band gap)
properties. Furthermore, we extend MatterGPT to simultaneously target multiple
properties, addressing the complex challenge of multi-objective inverse design
of crystals. Our approach showcases high validity, uniqueness, and novelty in
generated structures, as well as the ability to generate materials with
properties beyond the training data distribution. This work represents a
significant step forward in computational materials discovery, offering a
powerful and open tool for designing materials with tailored properties for
various applications in energy, electronics, and beyond.
|
[{'version': 'v1', 'created': 'Wed, 14 Aug 2024 15:12:05 GMT'}]
|
2024-08-15
|
Qinyang Li, Nicholas Miklaucic, Jianjun Hu
|
Out-of-distribution materials property prediction using adversarial
learning based fine-tuning
| null | null | null |
cond-mat.mtrl-sci cs.LG
|
The accurate prediction of material properties is crucial in a wide range of
scientific and engineering disciplines. Machine learning (ML) has advanced the
state of the art in this field, enabling scientists to discover novel materials
and design materials with specific desired properties. However, one major
challenge that persists in material property prediction is the generalization
of models to out-of-distribution (OOD) samples, i.e., samples that differ
significantly from those encountered during training. In this paper, we explore
the application of advancements in OOD learning approaches to enhance the
robustness and reliability of material property prediction models. We propose
and apply the Crystal Adversarial Learning (CAL) algorithm for OOD materials
property prediction, which generates synthetic data during training to bias the
training towards those samples with high prediction uncertainty. We further
propose an adversarial learning based targeted fine-tuning approach to adapt the
model to a particular OOD dataset, as an alternative to traditional
fine-tuning. Our experiments demonstrate the effectiveness of our CAL algorithm
in ML with limited samples, a situation that commonly occurs in
materials science. Our work represents a promising direction toward better OOD
learning and materials property prediction.
|
[{'version': 'v1', 'created': 'Sat, 17 Aug 2024 21:22:21 GMT'}]
|
2024-08-20
|
Salvatore Romano, Pablo Montero de Hijes, Matthias Meier, Georg
Kresse, Cesare Franchini, Christoph Dellago
|
Structure and dynamics of the magnetite(001)/water interface from
molecular dynamics simulations based on a neural network potential
| null | null | null |
physics.comp-ph cond-mat.mtrl-sci physics.chem-ph
|
The magnetite/water interface is commonly found in nature and plays a crucial
role in various technological applications. However, our understanding of its
structural and dynamical properties at the molecular scale still remains
limited. In this study, we develop an efficient Behler-Parrinello neural
network potential (NNP) for the magnetite/water system, paying particular
attention to the accurate generation of reference data with density functional
theory. Using this NNP, we performed extensive molecular dynamics simulations
of the magnetite (001) surface across a wide range of water coverages, from the
single molecule to bulk water. Our simulations revealed several new ground
states of low coverage water on the Subsurface Cation Vacancy (SCV) model and
yielded a density profile of water at the surface that exhibits marked
layering. By calculating mean square displacements, we obtained quantitative
information on the diffusion of water molecules on the SCV for different
coverages, revealing significant anisotropy. Additionally, our simulations
provided qualitative insights into the dissociation mechanisms of water
molecules at the surface.
|
[{'version': 'v1', 'created': 'Wed, 21 Aug 2024 11:33:24 GMT'}, {'version': 'v2', 'created': 'Fri, 6 Sep 2024 09:22:33 GMT'}]
|
2024-09-09
|
Xiangxiang Shen, Zheng Wan, Lingfeng Wen, Licheng Sun, Ou Yang Ming
Jie, JiJUn Cheng, Xuan Tang, Xian Wei
|
PDDFormer: Pairwise Distance Distribution Graph Transformer for Crystal
Material Property Prediction
| null | null | null |
cond-mat.mtrl-sci cs.AI
|
The crystal structure can be simplified as a periodic point set repeating
across the entire three-dimensional space along an underlying lattice.
Traditionally, methods for representing crystals rely on descriptors like
lattice parameters, symmetry, and space groups to characterize the structure.
However, in reality, atoms in a material always vibrate above absolute zero,
causing continuous fluctuations in their positions. This dynamic behavior
disrupts the underlying periodicity of the lattice, making crystal graphs based
on static lattice parameters and conventional descriptors discontinuous under
even slight perturbations. To address this, chemists proposed the Pairwise Distance
Distribution (PDD) method, which has been used to distinguish all periodic
structures in the world's largest real materials collection, the Cambridge
Structural Database. However, achieving the completeness of PDD requires
defining a large number of neighboring atoms, resulting in high computational
costs. Moreover, it does not account for atomic information, making it
challenging to directly apply PDD to crystal material property prediction
tasks. To address these challenges, we propose the atom-Weighted Pairwise
Distance Distribution (WPDD) and Unit cell Pairwise Distance Distribution
(UPDD) for the first time, incorporating them into the construction of
multi-edge crystal graphs. Based on this, we further developed WPDDFormer and
UPDDFormer, graph transformer architectures constructed using WPDD and UPDD
crystal graphs. We demonstrate that this method maintains the continuity and
completeness of crystal graphs even under slight perturbations in atomic
positions.
|
[{'version': 'v1', 'created': 'Fri, 23 Aug 2024 11:05:48 GMT'}, {'version': 'v2', 'created': 'Mon, 26 Aug 2024 02:42:23 GMT'}, {'version': 'v3', 'created': 'Sun, 22 Sep 2024 13:35:30 GMT'}, {'version': 'v4', 'created': 'Sun, 24 Nov 2024 08:10:52 GMT'}]
|
2024-11-26
|
Saurabh Tiwari, Prathamesh Satpute, Supriyo Ghosh
|
Time series forecasting of multiphase microstructure evolution using
deep learning
|
Computational Materials Science 247, 113518, 2025
|
10.1016/j.commatsci.2024.113518
| null |
cond-mat.mtrl-sci
|
Microstructure evolution, which plays a critical role in determining
materials properties, is commonly simulated by the high-fidelity but
computationally expensive phase-field method. To address this, we approximate
microstructure evolution as a time series forecasting problem within the domain
of deep learning. Our approach involves implementing a cost-effective surrogate
model that accurately predicts the spatiotemporal evolution of microstructures,
taking an example of spinodal decomposition in binary and ternary mixtures. Our
surrogate model combines a convolutional autoencoder, which yields a
reduced-dimensional representation of these microstructures, with convolutional recurrent neural
networks to forecast their temporal evolution. We use different variants of
recurrent neural networks to compare their efficacy in developing surrogate
models for phase-field predictions. On average, our deep learning framework
demonstrates excellent accuracy and speedup relative to the "ground truth"
phase-field simulations. We use quantitative measures to demonstrate how
surrogate model predictions can effectively replace the phase-field timesteps
without compromising accuracy in predicting the long-term evolution trajectory.
Additionally, by emulating a transfer learning approach, our framework performs
satisfactorily in predicting new microstructures resulting from alloy
composition and physics unknown to the model. Therefore, our approach offers a
useful data-driven alternative and accelerator to the materials microstructure
simulation workflow.
|
[{'version': 'v1', 'created': 'Thu, 22 Aug 2024 06:14:06 GMT'}, {'version': 'v2', 'created': 'Thu, 21 Nov 2024 11:32:58 GMT'}]
|
2024-11-22
|
Harikrishnan Vijayakumaran, Jonathan B. Russ, Glaucio H. Paulino,
Miguel A. Bessa
|
Consistent machine learning for topology optimization with
microstructure-dependent neural network material models
| null | null | null |
cond-mat.mtrl-sci cs.LG cs.NA math.NA
|
Additive manufacturing methods together with topology optimization have
enabled the creation of multiscale structures with controlled spatially-varying
material microstructure. However, topology optimization or inverse design of
such structures in the presence of nonlinearities remains a challenge due to
the expense of computational homogenization methods and the complexity of
differentiably parameterizing the microstructural response. A solution to this
challenge lies in machine learning techniques that offer efficient,
differentiable mappings between the material response and its microstructural
descriptors. This work presents a framework for designing multiscale
heterogeneous structures with spatially varying microstructures by merging a
homogenization-based topology optimization strategy with a consistent machine
learning approach grounded in hyperelasticity theory. We leverage neural
architectures that adhere to critical physical principles such as
polyconvexity, objectivity, material symmetry, and thermodynamic consistency to
supply the framework with a reliable constitutive model that is dependent on
material microstructural descriptors. Our findings highlight the potential of
integrating consistent machine learning models with density-based topology
optimization for enhancing design optimization of heterogeneous hyperelastic
structures under finite deformations.
|
[{'version': 'v1', 'created': 'Sun, 25 Aug 2024 14:17:43 GMT'}, {'version': 'v2', 'created': 'Tue, 27 Aug 2024 14:24:52 GMT'}]
|
2024-08-28
|
Fanjie Xu, Wentao Guo, Feng Wang, Lin Yao, Hongshuai Wang, Fujie Tang,
Zhifeng Gao, Linfeng Zhang, Weinan E, Zhong-Qun Tian, Jun Cheng
|
Towards a Unified Benchmark and Framework for Deep Learning-Based
Prediction of Nuclear Magnetic Resonance Chemical Shifts
| null | null | null |
physics.comp-ph cond-mat.dis-nn cond-mat.mtrl-sci physics.chem-ph
|
The study of structure-spectrum relationships is essential for spectral
interpretation, impacting structural elucidation and material design.
Predicting spectra from molecular structures is challenging due to their
complex relationships. Herein, we introduce NMRNet, a deep learning framework
using the SE(3) Transformer for atomic environment modeling, following a
pre-training and fine-tuning paradigm. To support the evaluation of NMR
chemical shift prediction models, we have established a comprehensive benchmark
based on previous research and databases, covering diverse chemical systems.
Applying NMRNet to these benchmark datasets, we achieve state-of-the-art
performance in both liquid-state and solid-state NMR datasets, demonstrating
its robustness and practical utility in real-world scenarios. This marks the
first integration of solid and liquid state NMR within a unified model
architecture, highlighting the need for domain-specific handling of different
atomic environments. Our work sets a new standard for NMR prediction, advancing
deep learning applications in analytical and structural chemistry.
|
[{'version': 'v1', 'created': 'Wed, 28 Aug 2024 10:11:00 GMT'}]
|
2024-08-29
|
Xiuying Zhang, Linqiang Xu, Jing Lu, Zhaofu Zhang, and Lei Shen
|
Physics-integrated Neural Network for Quantum Transport Prediction of
Field-effect Transistor
| null | null | null |
cond-mat.dis-nn cond-mat.mtrl-sci physics.comp-ph
|
Quantum-mechanics-based transport simulation is of importance for the design
of ultra-short channel field-effect transistors (FETs) owing to its capability of
revealing the underlying physical mechanisms, but it faces the primary challenge of
high computational intensity. Traditional machine learning is expected to
accelerate the optimization of FET design, yet its application in this field is
limited by the lack of both high-fidelity datasets and the integration of
physical knowledge. Here, we introduced a physics-integrated neural network
framework to predict the transport curves of sub-5-nm gate-all-around (GAA)
FETs using an in-house developed high-fidelity database. The transport curves
in the database are collected from literature and our first-principles
calculations. Beyond silicon, we included indium arsenide, indium phosphide,
and selenium nanowires with different structural phases as the FET channel
materials. Then, we built a physical-knowledge-integrated hyper vector neural
network (PHVNN), in which five new physical features were added into the inputs
for prediction transport characteristics, achieving a sufficiently low mean
absolute error of 0.39. In particular, ~98% of the current prediction residuals
are within one order of magnitude. Using PHVNN, we efficiently screened out the
symmetric p-type GAA FETs that possess the same figures of merit with the
n-type ones, which are crucial for the fabrication of homogeneous CMOS
circuits. Finally, our automatic differentiation analysis provides
interpretable insights into the PHVNN, which highlights the important
contributions of our new input parameters and improves the reliability of
PHVNN. Our approach provides an effective method for rapidly screening
appropriate GAA FETs with the prospect of accelerating the design process of
next-generation electronic devices.
|
[{'version': 'v1', 'created': 'Fri, 30 Aug 2024 05:38:12 GMT'}]
|
2024-09-02
|
Alexander New, Nam Q. Le, Michael J. Pekala, Christopher D. Stiles
|
Self-supervised learning for crystal property prediction via denoising
| null | null | null |
cs.LG cond-mat.mtrl-sci
|
Accurate prediction of the properties of crystalline materials is crucial for
targeted discovery, and this prediction is increasingly done with data-driven
models. However, for many properties of interest, the number of materials for
which a specific property has been determined is much smaller than the number
of known materials. To overcome this disparity, we propose a novel
self-supervised learning (SSL) strategy for material property prediction. Our
approach, crystal denoising self-supervised learning (CDSSL), pretrains
predictive models (e.g., graph networks) with a pretext task based on
recovering valid material structures when given perturbed versions of these
structures. We demonstrate that CDSSL models out-perform models trained without
SSL, across material types, properties, and dataset sizes.
|
[{'version': 'v1', 'created': 'Fri, 30 Aug 2024 12:53:40 GMT'}]
|
2024-09-02
|
Tsz Wai Ko and Shyue Ping Ong
|
Data-Efficient Construction of High-Fidelity Graph Deep Learning
Interatomic Potentials
| null | null | null |
physics.comp-ph cond-mat.mtrl-sci physics.chem-ph
|
Machine learning potentials (MLPs) have become an indispensable tool in
large-scale atomistic simulations because of their ability to reproduce ab
initio potential energy surfaces (PESs) very accurately at a fraction of
computational cost. For computational efficiency, the training data for most
MLPs today are computed using relatively cheap density functional theory (DFT)
methods such as the Perdew-Burke-Ernzerhof (PBE) generalized gradient
approximation (GGA) functional. Meta-GGAs such as the recently developed
strongly constrained and appropriately normed (SCAN) functional have been shown
to yield significantly improved descriptions of atomic interactions for
diversely bonded systems, but their higher computational cost remains an
impediment to their use in MLP development. In this work, we outline a
data-efficient multi-fidelity approach to constructing Materials 3-body Graph
Network (M3GNet) interatomic potentials that integrate different levels of
theory within a single model. Using silicon and water as examples, we show that
a multi-fidelity M3GNet model trained on a combined dataset of low-fidelity GGA
calculations with 10% of high-fidelity SCAN calculations can achieve accuracies
comparable to a single-fidelity M3GNet model trained on a dataset comprising 8x
the number of SCAN calculations. This work paves the way for the development of
high-fidelity MLPs in a cost-effective manner by leveraging existing
low-fidelity datasets.
|
[{'version': 'v1', 'created': 'Mon, 2 Sep 2024 05:57:32 GMT'}]
|
2024-09-04
|
Zirui Zhao, Xiaoke Wang, Si Wu, Pengfei Zhou, Qian Zhao, Guanping Xu,
Kaitong Sun, Hai-Feng Li
|
Deep learning-driven evaluation and prediction of ion-doped NASICON
materials for enhanced solid-state battery performance
|
AAPPS Bulletin, 2024, 34(1): 26
|
10.1007/s43673-024-00131-9
| null |
cond-mat.mtrl-sci
|
We developed a convolutional neural network (CNN) model capable of predicting
the performance of various ion-doped NASICON compounds by leveraging extensive
datasets from prior experimental investigations. The model demonstrated high
accuracy and efficiency in predicting ionic conductivity and electrochemical
properties. Key findings include the successful synthesis and validation of
three NASICON materials predicted by the model, with experimental results
closely matching the model predictions. This research not only enhances the
understanding of ion-doping effects in NASICON materials but also establishes a
robust framework for material design and practical applications. It bridges the
gap between theoretical predictions and experimental validations.
|
[{'version': 'v1', 'created': 'Mon, 2 Sep 2024 02:20:44 GMT'}, {'version': 'v2', 'created': 'Mon, 9 Sep 2024 02:46:15 GMT'}]
|
2025-01-13
|
Koki Ueno, Satoru Ohuchi, Kazuhide Ichikawa, Kei Amii, Kensuke
Wakasugi
|
SpinMultiNet: Neural Network Potential Incorporating Spin Degrees of
Freedom with Multi-Task Learning
| null | null | null |
cond-mat.mtrl-sci cs.LG
|
Neural Network Potentials (NNPs) have attracted significant attention as a
method for accelerating density functional theory (DFT) calculations. However,
conventional NNP models typically do not incorporate spin degrees of freedom,
limiting their applicability to systems where spin states critically influence
material properties, such as transition metal oxides. This study introduces
SpinMultiNet, a novel NNP model that integrates spin degrees of freedom through
multi-task learning. SpinMultiNet achieves accurate predictions without relying
on correct spin values obtained from DFT calculations. Instead, it utilizes
initial spin estimates as input and leverages multi-task learning to optimize
the spin latent representation while maintaining both $E(3)$ and time-reversal
equivariance. Validation on a dataset of transition metal oxides demonstrates
the high predictive accuracy of SpinMultiNet. The model successfully reproduces
the energy ordering of stable spin configurations originating from
superexchange interactions and accurately captures the rhombohedral distortion
of the rocksalt structure. These results pave the way for new possibilities in
materials simulations that consider spin degrees of freedom, promising future
applications in large-scale simulations of various material systems, including
magnetic materials.
|
[{'version': 'v1', 'created': 'Thu, 5 Sep 2024 05:13:28 GMT'}, {'version': 'v2', 'created': 'Sun, 8 Sep 2024 23:58:44 GMT'}]
|
2024-09-10
|
Wei Lu and Rachel K. Luu and Markus J. Buehler
|
Fine-tuning large language models for domain adaptation: Exploration of
training strategies, scaling, model merging and synergistic capabilities
| null | null | null |
cs.CL cond-mat.mtrl-sci cs.AI
|
The advancement of Large Language Models (LLMs) for domain applications in
fields such as materials science and engineering depends on the development of
fine-tuning strategies that adapt models for specialized, technical
capabilities. In this work, we explore the effects of Continued Pretraining
(CPT), Supervised Fine-Tuning (SFT), and various preference-based optimization
approaches, including Direct Preference Optimization (DPO) and Odds Ratio
Preference Optimization (ORPO), on fine-tuned LLM performance. Our analysis
shows how these strategies influence model outcomes and reveals that the
merging of multiple fine-tuned models can lead to the emergence of capabilities
that surpass the individual contributions of the parent models. We find that
model merging yields new functionalities that neither parent model could
achieve alone, leading to improved performance in domain-specific assessments.
Experiments with different model architectures are presented, including Llama
3.1 8B and Mistral 7B models, where similar behaviors are observed. Exploring
whether the results hold also for much smaller models, we use a tiny LLM with
1.7 billion parameters and show that very small LLMs do not necessarily feature
emergent capabilities under model merging, suggesting that model scaling may be
a key component. In open-ended yet consistent chat conversations between a
human and AI models, our assessment reveals detailed insights into how
different model variants perform and show that the smallest model achieves a
high intelligence score across key criteria including reasoning depth,
creativity, clarity, and quantitative precision. Other experiments include the
development of image generation prompts based on disparate biological material
design concepts, to create new microstructures, architectural concepts, and
urban design based on biological materials-inspired construction principles.
|
[{'version': 'v1', 'created': 'Thu, 5 Sep 2024 11:49:53 GMT'}]
|
2024-09-06
|
Abdelwahab Kawafi, Lars K\"urten, Levke Ortlieb, Yushi Yang, Abraham
Mauleon Amieva, James E. Hallett and C.Patrick Royall
|
Colloidoscope: Detecting Dense Colloids in 3d with Deep Learning
| null | null | null |
cond-mat.soft cond-mat.mtrl-sci cond-mat.stat-mech
|
Colloidoscope is a deep learning pipeline employing a 3D residual Unet
architecture, designed to enhance the tracking of dense colloidal suspensions
through confocal microscopy. This methodology uses a simulated training dataset
that reflects a wide array of real-world imaging conditions, specifically
targeting high colloid volume fraction and low-contrast scenarios where
traditional detection methods struggle. Central to our approach is the use of
experimental signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and
point-spread-functions (PSFs) to accurately quantify and simulate the
experimental data. Our findings reveal that Colloidoscope achieves superior
recall in particle detection (finds more particles) compared to conventional
heuristic methods. Simultaneously, high precision is maintained (a high fraction
of true positives). The model demonstrates a notable robustness to
photobleached samples, thereby extending the imaging time and the number of frames
that may be acquired. Furthermore, Colloidoscope maintains small-scale
resolution sufficient to classify local structural motifs. Evaluated across
both simulated and experimental datasets, Colloidoscope brings the advancements
in computer vision offered by deep learning to particle tracking at high volume
fractions. We offer a promising tool for researchers in the soft matter
community; the pretrained model is deployed and available at
https://github.com/wahabk/colloidoscope.
|
[{'version': 'v1', 'created': 'Fri, 6 Sep 2024 20:21:33 GMT'}]
|
2024-09-10
|
Ayush Jain, Rishi Gurnani, Arunkumar Rajan, H. Jerry Qi, Rampi
Ramprasad
|
A Physics-Enforced Neural Network to Predict Polymer Melt Viscosity
| null |
10.1038/s41524-025-01532-6
| null |
cs.CE cond-mat.mtrl-sci
|
Achieving superior polymeric components through additive manufacturing (AM)
relies on precise control of rheology. One key rheological property
particularly relevant to AM is melt viscosity ($\eta$). Melt viscosity is
influenced by polymer chemistry, molecular weight ($M_w$), polydispersity,
induced shear rate ($\dot\gamma$), and processing temperature ($T$). The
relationship of $\eta$ with $M_w$, $\dot\gamma$, and $T$ may be captured by
parameterized equations. Several physical experiments are required to fit the
parameters, so predicting $\eta$ of a new polymer material in unexplored
physical domains is a laborious process. Here, we develop a Physics-Enforced
Neural Network (PENN) model that predicts the empirical parameters and encodes
the parametrized equations to calculate $\eta$ as a function of polymer
chemistry, $M_w$, polydispersity, $\dot\gamma$, and $T$. We benchmark our PENN
against physics-unaware Artificial Neural Network (ANN) and Gaussian Process
Regression (GPR) models. Finally, we demonstrate that the PENN offers superior
predictions of $\eta$ when extrapolating to unseen values of $M_w$, $\dot\gamma$,
and $T$ for sparsely seen polymers.
|
[{'version': 'v1', 'created': 'Sun, 8 Sep 2024 22:52:24 GMT'}]
|
2025-04-25
|
Nicholas Beaver, Aniruddha Dive, Marina Wong, Keita Shimanuki, Ananya
Patil, Anthony Ferrell, Mohsen B. Kivy
|
Rapid Assessment of Stable Crystal Structures in Single Phase High
Entropy Alloys Via Graph Neural Network Based Surrogate Modelling
| null | null | null |
cond-mat.mtrl-sci cond-mat.dis-nn
|
In an effort to develop a rapid, reliable, and cost-effective method for
predicting the structure of single-phase high entropy alloys, a Graph Neural
Network (ALIGNN-FF) based approach was introduced. This method was successfully
tested on 132 different high entropy alloys, and the results were analyzed and
compared with density functional theory and valence electron concentration
calculations. Additionally, the effects of various factors, including lattice
parameters and the number of supercells with unique atomic configurations, on
the prediction accuracy were investigated. The ALIGNN-FF based approach was
subsequently used to predict the structure of a novel cobalt-free 3d high
entropy alloy, and the result was experimentally verified.
|
[{'version': 'v1', 'created': 'Wed, 11 Sep 2024 23:34:48 GMT'}]
|
2024-09-13
|
Jun Li, Wenqi Fang, Shangjian Jin, Tengdong Zhang, Yanling Wu, Xiaodan
Xu, Yong Liu and Dao-Xin Yao
|
A deep learning approach to search for superconductors from electronic
bands
| null | null | null |
cond-mat.supr-con cond-mat.mtrl-sci
|
Energy band theory is a foundational framework in condensed matter physics.
In this work, we employ a deep learning method, BNAS, to find a direct
correlation between electronic band structure and superconducting transition
temperature. Our findings suggest that electronic band structures can act as
primary indicators of superconductivity. To avoid overfitting, we utilize a
relatively simple deep learning neural network model, which, despite its
simplicity, demonstrates predictive capabilities for superconducting
properties. By leveraging the attention mechanism within deep learning, we are
able to identify specific regions of the electronic band structure most
correlated with superconductivity. This novel approach provides new insights
into the mechanisms driving superconductivity from an alternative perspective.
Moreover, we predict several potential superconductors that may serve as
candidates for future experimental synthesis.
|
[{'version': 'v1', 'created': 'Thu, 12 Sep 2024 03:02:59 GMT'}]
|
2024-09-13
|
Xiao-Qi Han, Zhenfeng Ouyang, Peng-Jie Guo, Hao Sun, Ze-Feng Gao and
Zhong-Yi Lu
|
InvDesFlow: An AI-driven materials inverse design workflow to explore
possible high-temperature superconductors
|
Chin. Phys. Lett. 2025,42(4): 047301
|
10.1088/0256-307X/42/4/047301
| null |
cond-mat.supr-con cond-mat.mtrl-sci cs.AI physics.comp-ph
|
The discovery of new superconducting materials, particularly those exhibiting
high critical temperature ($T_c$), has been a vibrant area of study within the
field of condensed matter physics. Conventional approaches primarily rely on
physical intuition to search for potential superconductors within the existing
databases. However, the known materials only scratch the surface of the
extensive array of possibilities within the realm of materials. Here, we
develop InvDesFlow, an AI search engine that integrates deep model pre-training
and fine-tuning techniques, diffusion models, and physics-based approaches
(e.g., first-principles electronic structure calculation) for the discovery of
high-$T_c$ superconductors. Utilizing InvDesFlow, we have obtained 74
dynamically stable materials with critical temperatures predicted by the AI
model to be $T_c \geq$ 15 K based on a very small set of samples. Notably,
these materials are not contained in any existing dataset. Furthermore, we
analyze trends in our dataset and individual materials including B$_4$CN$_3$
(at 5 GPa) and B$_5$CN$_2$ (at ambient pressure) whose $T_c$s are 24.08 K and
15.93 K, respectively. We demonstrate that AI techniques can discover a set of
new high-$T_c$ superconductors and outline their potential for accelerating the
discovery of materials with targeted properties.
|
[{'version': 'v1', 'created': 'Thu, 12 Sep 2024 14:16:56 GMT'}, {'version': 'v2', 'created': 'Mon, 2 Dec 2024 14:29:14 GMT'}, {'version': 'v3', 'created': 'Tue, 13 May 2025 08:22:00 GMT'}]
|
2025-05-14
|
Israrul H. Hashmi, Himanshu, Rahul Karmakar and Tarak K Patra
|
Extrapolative ML Models for Copolymers
| null | null | null |
cond-mat.soft cond-mat.mtrl-sci cs.LG
|
Machine learning models have been progressively used for predicting materials
properties. These models can be built using pre-existing data and are useful
for rapidly screening the physicochemical space of a material, which is
astronomically large. However, ML models are inherently interpolative, and
their efficacy for searching candidates outside a material's known property
range is unresolved. Moreover, the performance of an ML model is intricately
connected to its learning strategy and the volume of training data. Here, we
determine the relationship between the extrapolation ability of an ML model,
the size and range of its training dataset, and its learning approach. We focus
on a canonical problem of predicting the properties of a copolymer as a
function of the sequence of its monomers. Tree search algorithms, which learn
the similarity between polymer structures, are found to be inefficient for
extrapolation. Conversely, the extrapolation capability of neural networks and
XGBoost models, which attempt to learn the underlying functional correlation
between the structure and property of polymers, show strong correlations with
the volume and range of training data. These findings have important
implications on ML-based new material development.
|
[{'version': 'v1', 'created': 'Sun, 15 Sep 2024 11:02:01 GMT'}]
|
2024-09-17
|
Shaswat Mohanty, Yifan Wang, Wei Cai
|
Generalizability of Graph Neural Network Force Fields for Predicting
Solid-State Properties
| null | null | null |
cs.LG cond-mat.mtrl-sci cs.NA math.NA
|
Machine-learned force fields (MLFFs) promise to offer a computationally
efficient alternative to ab initio simulations for complex molecular systems.
However, ensuring their generalizability beyond training data is crucial for
their wide application in studying solid materials. This work investigates the
ability of a graph neural network (GNN)-based MLFF, trained on Lennard-Jones
Argon, to describe solid-state phenomena not explicitly included during
training. We assess the MLFF's performance in predicting phonon density of
states (PDOS) for a perfect face-centered cubic (FCC) crystal structure at both
zero and finite temperatures. Additionally, we evaluate vacancy migration rates
and energy barriers in an imperfect crystal using direct molecular dynamics
(MD) simulations and the string method. Notably, vacancy configurations were
absent from the training data. Our results demonstrate the MLFF's capability to
capture essential solid-state properties with good agreement to reference data,
even for unseen configurations. We further discuss data engineering strategies
to enhance the generalizability of MLFFs. The proposed set of benchmark tests
and workflow for evaluating MLFF performance in describing perfect and
imperfect crystals pave the way for reliable application of MLFFs in studying
complex solid-state materials.
|
[{'version': 'v1', 'created': 'Mon, 16 Sep 2024 02:14:26 GMT'}, {'version': 'v2', 'created': 'Sat, 21 Dec 2024 16:21:51 GMT'}]
|
2024-12-24
|
Amir Omranpour and J\"org Behler
|
A High-Dimensional Neural Network Potential for Co$_3$O$_4$
| null | null | null |
cond-mat.mtrl-sci
|
The Co$_3$O$_4$ spinel is an important material in oxidation catalysis. Its
properties under catalytic conditions, i.e., at finite temperatures, can be
studied by molecular dynamics simulations, which critically depend on an
accurate description of the atomic interactions. Due to the high complexity of
Co$_3$O$_4$, which is related to the presence of multiple oxidation states of
the cobalt ions, to date \textit{ab initio} methods have been essentially the
only way to reliably capture the underlying potential energy surface, while
more efficient atomistic potentials are very challenging to construct.
Consequently, the accessible length and time scales of computer simulations of
systems containing Co$_3$O$_4$ are still severely limited. Rapid advances in
the development of modern machine learning potentials (MLPs) trained on
electronic structure data now make it possible to bridge this gap. In this
work, we employ a high-dimensional neural network potential (HDNNP) to
construct an MLP for bulk Co$_3$O$_4$ spinel based on density functional theory
calculations. After a careful validation of the potential, we compute various
structural, vibrational, and dynamical properties of the Co$_3$O$_4$ spinel
with a particular focus on its temperature-dependent behavior, including the
thermal expansion coefficient.
|
[{'version': 'v1', 'created': 'Tue, 17 Sep 2024 10:02:27 GMT'}]
|
2024-09-18
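The thermal expansion coefficient mentioned in the record above is typically extracted from the temperature dependence of the lattice parameter sampled in MD. A minimal sketch of that post-processing step, assuming equally valid (T, a) samples and using central finite differences; the numbers below are made up for illustration:

```python
def thermal_expansion(temps, lattice):
    """Linear thermal expansion coefficient alpha(T) = (1/a) * da/dT,
    estimated by central finite differences on (T, a) samples.

    temps, lattice: equal-length sequences, temps strictly increasing.
    Returns alpha evaluated at the interior temperatures temps[1:-1].
    """
    alphas = []
    for i in range(1, len(temps) - 1):
        # Central difference for da/dT at temps[i].
        dadT = (lattice[i + 1] - lattice[i - 1]) / (temps[i + 1] - temps[i - 1])
        alphas.append(dadT / lattice[i])
    return alphas


# Illustrative (made-up) lattice constants in Angstrom vs temperature in K:
T = [100, 200, 300, 400]
a = [8.080, 8.085, 8.091, 8.098]
alpha = thermal_expansion(T, a)
```

In practice each lattice constant would be a long-time average from an NPT simulation driven by the fitted potential.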
|
Luke P. J. Gilligan, Matteo Cobelli, Hasan M. Sayeed, Taylor D. Sparks
and Stefano Sanvito
|
Sampling Latent Material-Property Information From LLM-Derived Embedding
Representations
| null | null | null |
cs.CL cond-mat.mtrl-sci
|
Vector embeddings derived from large language models (LLMs) show promise in
capturing latent information from the literature. Interestingly, these can be
integrated into material embeddings, potentially useful for data-driven
predictions of materials properties. We investigate the extent to which
LLM-derived vectors capture the desired information and their potential to
provide insights into material properties without additional training. Our
findings indicate that, although LLMs can be used to generate representations
reflecting certain property information, extracting the embeddings requires
identifying the optimal contextual clues and appropriate comparators. Despite
this restriction, it appears that LLMs still have the potential to be useful in
generating meaningful materials-science representations.
|
[{'version': 'v1', 'created': 'Wed, 18 Sep 2024 13:22:04 GMT'}]
|
2024-09-19
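The "appropriate comparators" step in the record above amounts to comparing LLM-derived embedding vectors, most commonly by cosine similarity. A minimal sketch of that comparison, with toy 3-d vectors standing in for real LLM embeddings (the names and values below are hypothetical):

```python
import math


def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def rank_by_similarity(query, candidates):
    """Rank candidate embeddings (name -> vector) against a query vector,
    most similar first -- the comparator step the abstract alludes to."""
    return sorted(candidates,
                  key=lambda name: cosine_similarity(query, candidates[name]),
                  reverse=True)


# Toy "embeddings" standing in for LLM output on prompted material descriptions:
query = [0.9, 0.1, 0.0]
candidates = {"metal": [1.0, 0.0, 0.1], "insulator": [0.0, 1.0, 0.2]}
```

The paper's point is that which contextual clue generated the query vector, and which candidates serve as comparators, strongly affect whether such rankings reflect real material properties.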
|
Jaime A. Berkovich and Markus J. Buehler
|
LifeGPT: Topology-Agnostic Generative Pretrained Transformer Model for
Cellular Automata
| null | null | null |
cs.AI cond-mat.mtrl-sci cond-mat.stat-mech math.DS
|
Conway's Game of Life (Life), a well-known algorithm within the broader class
of cellular automata (CA), exhibits complex emergent dynamics, with extreme
sensitivity to initial conditions. Modeling and predicting such intricate
behavior without explicit knowledge of the system's underlying topology
presents a significant challenge, motivating the development of algorithms that
can generalize across various grid configurations and boundary conditions. We
develop a decoder-only generative pretrained transformer (GPT) model to solve
this problem, showing that our model can simulate Life on a toroidal grid with
no prior knowledge of the size of the grid or its periodic boundary conditions
(LifeGPT). LifeGPT is topology-agnostic with respect to its training data, and
our results show that a GPT model is capable of capturing the deterministic
rules of a Turing-complete system with near-perfect accuracy, given
sufficiently diverse training data. We also introduce the idea of an
`autoregressive autoregressor' to recursively implement Life using LifeGPT. Our
results pave the way toward true universal computation within a large
language model framework, synthesizing mathematical analysis with natural
language processing, and probing AI systems for situational awareness about the
evolution of such algorithms without ever having to compute them. Similar GPTs
could potentially solve inverse problems in multicellular self-assembly by
extracting CA-compatible rulesets from real-world biological systems to create
new predictive models, which would have significant consequences for the fields
of bioinspired materials, tissue engineering, and architected materials design.
|
[{'version': 'v1', 'created': 'Tue, 3 Sep 2024 11:43:16 GMT'}, {'version': 'v2', 'created': 'Thu, 17 Oct 2024 16:55:02 GMT'}]
|
2024-10-18
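The deterministic rule that LifeGPT learns to emulate in the record above has a short reference implementation. A minimal sketch of one synchronous Life update on a toroidal (periodic) grid, against which a sequence model's next-state predictions could be checked; this is standard Game of Life code, not code from the paper:

```python
def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid.

    grid: list of lists of 0/1 cells. Wrap-around indexing implements the
    periodic boundary conditions of the toroidal topology.
    """
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the 8 neighbours with wrap-around (toroidal) indexing.
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # Birth on exactly 3 live neighbours; survival on 2 or 3.
            nxt[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return nxt


# A "blinker" oscillates with period 2:
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0]]
```

Serializing grids like these as token sequences and asking a GPT to predict the next state is, in essence, the training task the abstract describes.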
|
Teng Long, Yixuan Zhang, Hongbin Zhang
|
Generative deep learning for the inverse design of materials
| null | null | null |
cond-mat.mtrl-sci physics.comp-ph
|
In addition to the forward inference of materials properties using machine
learning, generative deep learning techniques applied in materials science
enable the inverse design of materials, i.e., traversing the
composition-processing-(micro-)structure-property relationships in reverse.
In this review, we focus on the (micro-)structure-property mapping, i.e.,
crystal structure-intrinsic property and microstructure-extrinsic property, and
summarize comprehensively how generative deep learning can be performed. Three
key elements, i.e., the construction of latent spaces for both the crystal
structures and microstructures, generative learning approaches, and property
constraints, are discussed in detail. A perspective is given outlining the
challenges of the existing methods in terms of computational resource
consumption, data compatibility, and yield of generation.
|
[{'version': 'v1', 'created': 'Fri, 27 Sep 2024 20:10:19 GMT'}]
|
2024-10-01
|