authors
stringlengths 11
2.41k
| title
stringlengths 38
184
| journal-ref
stringclasses 115
values | doi
stringlengths 17
34
⌀ | report-no
stringclasses 3
values | categories
stringlengths 17
83
| abstract
stringlengths 124
1.92k
| versions
stringlengths 62
689
| update_date
stringdate 2007-09-13 00:00:00
2025-05-15 00:00:00
|
|---|---|---|---|---|---|---|---|---|
Franz Damma{\ss}, Karl A. Kalina, Markus K\"astner
|
When invariants matter: The role of I1 and I2 in neural network models
of incompressible hyperelasticity
| null | null | null |
cond-mat.mtrl-sci cond-mat.soft
|
For the formulation of machine learning-based material models, the usage of
invariants of deformation tensors is attractive, since this can a priori
guarantee objectivity and material symmetry. In this work, we consider
incompressible, isotropic hyperelasticity, where two invariants I1 and I2 are
required for depicting a deformation state. First, we aim at enhancing the
understanding of the invariants. We provide an explicit representation of the
set of invariants that are admissible, i.e. for which (I1, I2) a deformation
state does indeed exist. Furthermore, we prove that uniaxial and equi-biaxial
deformation correspond to the boundary of the set of admissible invariants.
Second, we study how the experimentally-observed behaviour of different
materials can be captured by means of neural network models of incompressible
hyperelasticity, depending on whether both I1 and I2 or solely one of the
invariants, i.e. either only I1 or only I2, are taken into account. To this
end, we investigate three different experimental data sets from the literature.
In particular, we demonstrate that considering only one invariant, either I1 or
I2, can allow for good agreement with experiments in case of small
deformations. In contrast, it is necessary to consider both invariants for
precise models at large strains, for instance when rubbery polymers are
deformed. Moreover, we show that multiaxial experiments are strictly required
for the parameterisation of models considering I2. Otherwise, if only data from
uniaxial deformation is available, significantly overly stiff responses could
be predicted for general deformation states. On the contrary, I1-only models
can make qualitatively correct predictions for multiaxial loadings even if
parameterised only from uniaxial data, whereas I2-only models are completely
incapable in even qualitatively capturing experimental stress data at large
deformations.
|
[{'version': 'v1', 'created': 'Wed, 26 Mar 2025 14:43:11 GMT'}]
|
2025-03-27
|
Takuya Shibayama, Hideaki Imamura, Katsuhiko Nishimra, Kohei
Shinohara, Chikashi Shinagawa, So Takamoto, Ju Li
|
Efficient Crystal Structure Prediction Using Genetic Algorithm and
Universal Neural Network Potential
| null | null | null |
cond-mat.mtrl-sci physics.comp-ph
|
Crystal structure prediction (CSP) is crucial for identifying stable crystal
structures in given systems and is a prerequisite for computational atomistic
simulations. Recent advances in neural network potentials (NNPs) have reduced
the computational cost of CSP. However, searching for stable crystal structures
across the entire composition space in multicomponent systems remains a
significant challenge. Here, we propose a novel genetic algorithm (GA) -based
CSP method using a universal NNP. Our GA-based methods are designed to
efficiently expand convex hull volumes while preserving the diversity of
crystal structures. This approach draws inspiration from the similarity between
convex hull updates and Pareto front evolution in multi-objective optimization.
Our evaluation shows that the present method outperforms the symmetry-aware
random structure generation, achieving a larger convex hull with fewer trials.
We demonstrated that our approach, combined with the developed universal NNP
(PFP), can accurately reproduce and explore phase diagrams obtained through DFT
calculations; this indicates the validity of PFP across a wide range of crystal
structures and element combinations. This study, which integrates a universal
NNP with a GA-based CSP method, highlights the promise of these methods in
materials discovery.
|
[{'version': 'v1', 'created': 'Thu, 27 Mar 2025 06:38:59 GMT'}]
|
2025-03-28
|
Somayeh Hosseinhashemi, Philipp Rieder, Orkun Furat, Benedikt
Prifling, Changlin Wu, Christoph Thon, Volker Schmidt, Carsten Schilde
|
Statistical learning of structure-property relationships for transport
in porous media, using hybrid AI modeling
| null | null | null |
cond-mat.mtrl-sci cs.LG
|
The 3D microstructure of porous media, such as electrodes in lithium-ion
batteries or fiber-based materials, significantly impacts the resulting
macroscopic properties, including effective diffusivity or permeability.
Consequently, quantitative structure-property relationships, which link
structural descriptors of 3D microstructures such as porosity or geodesic
tortuosity to effective transport properties, are crucial for further
optimizing the performance of porous media. To overcome the limitations of 3D
imaging, parametric stochastic 3D microstructure modeling is a powerful tool to
generate many virtual but realistic structures at the cost of computer
simulations. The present paper uses 90,000 virtually generated 3D
microstructures of porous media derived from literature by systematically
varying parameters of stochastic 3D microstructure models. Previously, this
data set has been used to establish quantitative microstructure-property
relationships. The present paper extends these findings by applying a hybrid AI
framework to this data set. More precisely, symbolic regression, powered by
deep neural networks, genetic algorithms, and graph attention networks, is used
to derive precise and robust analytical equations. These equations model the
relationships between structural descriptors and effective transport properties
without requiring manual specification of the underlying functional
relationship. By integrating AI with traditional computational methods, the
hybrid AI framework not only generates predictive equations but also enhances
conventional modeling approaches by capturing relationships influenced by
specific microstructural features traditionally underrepresented. Thus, this
paper significantly advances the predictive modeling capabilities in materials
science, offering vital insights for designing and optimizing new materials
with tailored transport properties.
|
[{'version': 'v1', 'created': 'Thu, 27 Mar 2025 14:46:40 GMT'}]
|
2025-03-31
|
Izumi Takahara, Teruyasu Mizoguchi and Bang Liu
|
Accelerated Inorganic Materials Design with Generative AI Agents
| null | null | null |
cond-mat.mtrl-sci
|
Designing inorganic crystalline materials with tailored properties is
critical to technological innovation, yet current generative computational
methods often struggle to efficiently explore desired targets with sufficient
interpretability. Here, we present MatAgent, a generative approach for
inorganic materials discovery that harnesses the powerful reasoning
capabilities of large language models (LLMs). By combining a diffusion-based
generative model for crystal structure estimation with a predictive model for
property evaluation, MatAgent uses iterative, feedback-driven guidance to steer
material exploration precisely toward user-defined targets. Integrated with
external cognitive tools-including short-term memory, long-term memory, the
periodic table, and a comprehensive materials knowledge base-MatAgent emulates
human expert reasoning to vastly expand the accessible compositional space. Our
results demonstrate that MatAgent robustly directs exploration toward desired
properties while consistently achieving high compositional validity,
uniqueness, and material novelty. This framework thus provides a highly
interpretable, practical, and versatile AI-driven solution to accelerate the
discovery and design of next-generation inorganic materials.
|
[{'version': 'v1', 'created': 'Tue, 1 Apr 2025 12:51:07 GMT'}]
|
2025-04-02
|
Zhendong Cao, Lei Wang
|
CrystalFormer-RL: Reinforcement Fine-Tuning for Materials Design
| null | null | null |
cond-mat.mtrl-sci cs.LG physics.comp-ph
|
Reinforcement fine-tuning has instrumental enhanced the instruction-following
and reasoning abilities of large language models. In this work, we explore the
applications of reinforcement fine-tuning to the autoregressive
transformer-based materials generative model CrystalFormer (arXiv:2403.15734)
using discriminative machine learning models such as interatomic potentials and
property prediction models. By optimizing reward signals-such as energy above
the convex hull and material property figures of merit-reinforcement
fine-tuning infuses knowledge from discriminative models into generative
models. The resulting model, CrystalFormer-RL, shows enhanced stability in
generated crystals and successfully discovers crystals with desirable yet
conflicting material properties, such as substantial dielectric constant and
band gap simultaneously. Notably, we observe that reinforcement fine-tuning
enables not only the property-guided novel material design ability of
generative pre-trained model but also unlocks property-driven material
retrieval from the unsupervised pre-training dataset. Leveraging rewards from
discriminative models to fine-tune materials generative models opens an
exciting gateway to the synergies of the machine learning ecosystem for
materials.
|
[{'version': 'v1', 'created': 'Thu, 3 Apr 2025 07:59:30 GMT'}]
|
2025-04-04
|
Yuta Yahagi, Kiichi Obuchi, Fumihiko Kosaka, Kota Matsui
|
Transfer learning from first-principles calculations to experiments with
chemistry-informed domain transformation
| null | null | null |
physics.chem-ph cond-mat.mtrl-sci cs.LG physics.comp-ph
|
Simulation-to-Real (Sim2Real) transfer learning, the machine learning
technique that efficiently solves a real-world task by leveraging knowledge
from computational data, has received increasing attention in materials science
as a promising solution to the scarcity of experimental data. We proposed an
efficient transfer learning scheme from first-principles calculations to
experiments based on the chemistry-informed domain transformation, that
integrates the heterogeneous source and target domains by harnessing the
underlying physics and chemistry. The proposed method maps the computational
data from the simulation space (source domain) into the space of experimental
data (target domain). During this process, these qualitatively different
domains are efficiently integrated by a couple of prior knowledge of chemistry,
(1) the statistical ensemble, and (2) the relationship between source and
target quantities. As a proof-of-concept, we predict the catalyst activity for
the reverse water-gas shift reaction by using the abundant first-principles
data in addition to the experimental data. Through the demonstration, we
confirmed that the transfer learning model exhibits positive transfer in
accuracy and data efficiency. In particular, a significantly high accuracy was
achieved despite using a few (less than ten) target data in domain
transformation, whose accuracy is one order of magnitude smaller than that of a
full scratch model trained with over 100 target data. This result indicates
that the proposed method leverages the high prediction performance with few
target data, which helps to save the number of trials in real laboratories.
|
[{'version': 'v1', 'created': 'Thu, 20 Mar 2025 15:14:23 GMT'}, {'version': 'v2', 'created': 'Mon, 7 Apr 2025 07:29:31 GMT'}]
|
2025-04-08
|
Lingyu Kong, Nima Shoghi, Guoxiang Hu, Pan Li, Victor Fung
|
MatterTune: An Integrated, User-Friendly Platform for Fine-Tuning
Atomistic Foundation Models to Accelerate Materials Simulation and Discovery
| null | null | null |
cond-mat.mtrl-sci cs.AI cs.LG
|
Geometric machine learning models such as graph neural networks have achieved
remarkable success in recent years in chemical and materials science research
for applications such as high-throughput virtual screening and atomistic
simulations. The success of these models can be attributed to their ability to
effectively learn latent representations of atomic structures directly from the
training data. Conversely, this also results in high data requirements for
these models, hindering their application to problems which are data sparse
which are common in this domain. To address this limitation, there is a growing
development in the area of pre-trained machine learning models which have
learned general, fundamental, geometric relationships in atomistic data, and
which can then be fine-tuned to much smaller application-specific datasets. In
particular, models which are pre-trained on diverse, large-scale atomistic
datasets have shown impressive generalizability and flexibility to downstream
applications, and are increasingly referred to as atomistic foundation models.
To leverage the untapped potential of these foundation models, we introduce
MatterTune, a modular and extensible framework that provides advanced
fine-tuning capabilities and seamless integration of atomistic foundation
models into downstream materials informatics and simulation workflows, thereby
lowering the barriers to adoption and facilitating diverse applications in
materials science. In its current state, MatterTune supports a number of
state-of-the-art foundation models such as ORB, MatterSim, JMP, and
EquformerV2, and hosts a wide range of features including a modular and
flexible design, distributed and customizable fine-tuning, broad support for
downstream informatics tasks, and more.
|
[{'version': 'v1', 'created': 'Mon, 14 Apr 2025 19:12:43 GMT'}]
|
2025-04-16
|
Yahao Dai, Henry Chan, Aikaterini Vriza, Fredrick Kim, Yunfei Wang,
Wei Liu, Naisong Shan, Jing Xu, Max Weires, Yukun Wu, Zhiqiang Cao, C.
Suzanne Miller, Ralu Divan, Xiaodan Gu, Chenhui Zhu, Sihong Wang, Jie Xu
|
Adaptive AI decision interface for autonomous electronic material
discovery
| null | null | null |
cond-mat.mtrl-sci cs.AI
|
AI-powered autonomous experimentation (AI/AE) can accelerate materials
discovery but its effectiveness for electronic materials is hindered by data
scarcity from lengthy and complex design-fabricate-test-analyze cycles. Unlike
experienced human scientists, even advanced AI algorithms in AI/AE lack the
adaptability to make informative real-time decisions with limited datasets.
Here, we address this challenge by developing and implementing an AI decision
interface on our AI/AE system. The central element of the interface is an AI
advisor that performs real-time progress monitoring, data analysis, and
interactive human-AI collaboration for actively adapting to experiments in
different stages and types. We applied this platform to an emerging type of
electronic materials-mixed ion-electron conducting polymers (MIECPs) -- to
engineer and study the relationships between multiscale morphology and
properties. Using organic electrochemical transistors (OECT) as the testing-bed
device for evaluating the mixed-conducting figure-of-merit -- the product of
charge-carrier mobility and the volumetric capacitance ({\mu}C*), our adaptive
AI/AE platform achieved a 150% increase in {\mu}C* compared to the commonly
used spin-coating method, reaching 1,275 F cm-1 V-1 s-1 in just 64 autonomous
experimental trials. A study of 10 statistically selected samples identifies
two key structural factors for achieving higher volumetric capacitance: larger
crystalline lamellar spacing and higher specific surface area, while also
uncovering a new polymer polymorph in this material.
|
[{'version': 'v1', 'created': 'Thu, 17 Apr 2025 21:26:48 GMT'}]
|
2025-04-21
|
Theo Jaffrelot Inizan, Sherry Yang, Aaron Kaplan, Yen-hsu Lin, Jian
Yin, Saber Mirzaei, Mona Abdelgaid, Ali H. Alawadhi, KwangHwan Cho, Zhiling
Zheng, Ekin Dogus Cubuk, Christian Borgs, Jennifer T. Chayes, Kristin A.
Persson, Omar M. Yaghi
|
System of Agentic AI for the Discovery of Metal-Organic Frameworks
| null | null | null |
cond-mat.mtrl-sci cs.AI cs.CL cs.MA
|
Generative models and machine learning promise accelerated material discovery
in MOFs for CO2 capture and water harvesting but face significant challenges
navigating vast chemical spaces while ensuring synthetizability. Here, we
present MOFGen, a system of Agentic AI comprising interconnected agents: a
large language model that proposes novel MOF compositions, a diffusion model
that generates crystal structures, quantum mechanical agents that optimize and
filter candidates, and synthetic-feasibility agents guided by expert rules and
machine learning. Trained on all experimentally reported MOFs and computational
databases, MOFGen generated hundreds of thousands of novel MOF structures and
synthesizable organic linkers. Our methodology was validated through
high-throughput experiments and the successful synthesis of five "AI-dreamt"
MOFs, representing a major step toward automated synthesizable material
discovery.
|
[{'version': 'v1', 'created': 'Fri, 18 Apr 2025 23:54:25 GMT'}]
|
2025-04-22
|
Ahmed Sobhi Saleh, Kristof Croes, Hajdin Ceric, Ingrid De Wolf, Houman
Zahedmanesh
|
Novel Concept-Oriented Synthetic Data approach for Training Generative
AI-Driven Crystal Grain Analysis Using Diffusion Model
|
Computational Materials Science, Vol 251 (2025)
|
10.1016/j.commatsci.2025.113723
| null |
cs.LG cond-mat.mtrl-sci
|
The traditional techniques for extracting polycrystalline grain structures
from microscopy images, such as transmission electron microscopy (TEM) and
scanning electron microscopy (SEM), are labour-intensive, subjective, and
time-consuming, limiting their scalability for high-throughput analysis. In
this study, we present an automated methodology integrating edge detection with
generative diffusion models to effectively identify grains, eliminate noise,
and connect broken segments in alignment with predicted grain boundaries. Due
to the limited availability of adequate images preventing the training of deep
machine learning models, a new seven-stage methodology is employed to generate
synthetic TEM images for training. This concept-oriented synthetic data
approach can be extended to any field of interest where the scarcity of data is
a challenge. The presented model was applied to various metals with average
grain sizes down to the nanoscale, producing grain morphologies from
low-resolution TEM images that are comparable to those obtained from advanced
and demanding experimental techniques with an average accuracy of 97.23%.
|
[{'version': 'v1', 'created': 'Mon, 21 Apr 2025 00:46:28 GMT'}]
|
2025-04-22
|
Clement Vincely, Amin Reiners-Sakic, Vedant Dave, Erwin
Povoden-Karadeniz, Elmar Rueckert, Ronald Schnitzer, David Holec
|
Amending CALPHAD databases using a neural network for predicting mixing
enthalpy of liquids
| null | null | null |
physics.chem-ph cond-mat.dis-nn cond-mat.mtrl-sci
|
In order to establish the thermodynamic stability of a system, knowledge of
its Gibbs free energy is essential. Most often, the Gibbs free energy is
predicted within the CALPHAD framework using models employing thermodynamic
properties, such as the mixing enthalpy, heat capacity, and activity
coefficients. Here, we present a deep-learning approach capable of predicting
the mixing enthalpy of liquid phases of binary systems that were not present in
the training dataset. Therefore, our model allows for a system-informed
enhancement of the thermodynamic description to unknown binary systems based on
information present in the available thermodynamic assessment. Thereby,
significant experimental efforts in assessing new systems can be spared. We use
an open database for steels containing 91 binary systems to generate our
initial training (and validation) and amend it with several direct experimental
reports. The model is thoroughly tested using different strategies, including a
test of its predictive capabilities. The model shows excellent predictive
capabilities outside of the training dataset as soon as some data containing
species of the predicted system is included in the training dataset. The
estimated uncertainty of the model is below 1 kJ/mol for the predicted mixing
enthalpy. Subsequently, we used our model to predict the enthalpy of mixing of
all binary systems not present in the original database and extracted the
Redlich-Kister parameters, which can be readily reintegrated into the
thermodynamic database file.
|
[{'version': 'v1', 'created': 'Fri, 25 Apr 2025 14:08:28 GMT'}, {'version': 'v2', 'created': 'Mon, 28 Apr 2025 05:44:02 GMT'}]
|
2025-04-29
|
Zuhong Lin, Daoyuan Ren, Kai Ran, Sun Jing, Xiaotiang Huang, Haiyang
He, Pengxu Pan, Xiaohang Zhang, Ying Fang, Tianying Wang, Minli Wu, Zhanglin
Li, Xiaochuan Zhang, Haipu Li, Jingjing Yao
|
Reshaping MOFs Text Mining with a Dynamic Multi-Agent Framework of Large
Language Agents
| null | null | null |
cs.AI cond-mat.mtrl-sci
|
The mining of synthesis conditions for metal-organic frameworks (MOFs) is a
significant focus in materials science. However, identifying the precise
synthesis conditions for specific MOFs within the vast array of possibilities
presents a considerable challenge. Large Language Models (LLMs) offer a
promising solution to this problem. We leveraged the capabilities of LLMs,
specifically gpt-4o-mini, as core agents to integrate various MOF-related
agents, including synthesis, attribute, and chemical information agents. This
integration culminated in the development of MOFh6, an LLM tool designed to
streamline the MOF synthesis process. MOFh6 allows users to query in multiple
formats, such as submitting scientific literature, or inquiring about specific
MOF codes or structural properties. The tool analyzes these queries to provide
optimal synthesis conditions and generates model files for density functional
theory pre modeling. We believe MOFh6 will enhance efficiency in the MOF
synthesis of all researchers.
|
[{'version': 'v1', 'created': 'Sat, 26 Apr 2025 09:55:04 GMT'}]
|
2025-04-29
|
Yomn Alkabakibi, Congwei Xie, Artem R. Oganov
|
Graph Neural Network Prediction of Nonlinear Optical Properties
| null | null | null |
cond-mat.mtrl-sci cs.LG physics.optics
|
Nonlinear optical (NLO) materials for generating lasers via second harmonic
generation (SHG) are highly sought in today's technology. However, discovering
novel materials with considerable SHG is challenging due to the time-consuming
and costly nature of both experimental methods and first-principles
calculations. In this study, we present a deep learning approach using the
Atomistic Line Graph Neural Network (ALIGNN) to predict NLO properties.
Sourcing data from the Novel Opto-Electronic Materials Discovery (NOEMD)
database and using the Kurtz-Perry (KP) coefficient as the key target, we
developed a robust model capable of accurately estimating nonlinear optical
responses. Our results demonstrate that the model achieves 82.5% accuracy at a
tolerated absolute error up to 1 pm/V and relative error not exceeding 0.5.
This work highlights the potential of deep learning in accelerating the
discovery and design of advanced optical materials with desired properties.
|
[{'version': 'v1', 'created': 'Mon, 28 Apr 2025 17:03:22 GMT'}]
|
2025-04-29
|
Ivan Pinto-Huguet, Marc Botifoll, Xuli Chen, Martin Borstad Eriksen,
Jing Yu, Giovanni Isella, Andreu Cabot, Gonzalo Merino, Jordi Arbiol
|
Enhancing atomic-resolution in electron microscopy: A frequency-domain
deep learning denoiser
| null | null | null |
cond-mat.mtrl-sci
|
Atomic resolution electron microscopy, particularly high-angle annular
dark-field scanning transmission electron microscopy, has become an essential
tool for many scientific fields, when direct visualization of atomic
arrangements and defects are needed, as they dictate the material's functional
and mechanical behavior. However, achieving this precision is often hindered by
noise, arising from electron microscopy acquisition limitations, particularly
when imaging beam-sensitive materials or light atoms. In this work, we present
a deep learning-based denoising approach that operates in the frequency domain
using a convolutional neural network U-Net trained on simulated data. To
generate the training dataset, we simulate FFT patterns for various materials,
crystallographic orientations, and imaging conditions, introducing noise and
drift artifacts to accurately mimic experimental scenarios. The model is
trained to identify relevant frequency components, which are then used to
enhance experimental images by applying element-wise multiplication in the
frequency domain. The model enhances experimental images by identifying and
amplifying relevant frequency components, significantly improving
signal-to-noise ratio while preserving structural integrity. Applied to both Ge
quantum wells and WS2 monolayers, the method facilitates more accurate strain
quantitative analyses, critical for assessing functional device performance
(e.g. quantum properties in SiGe quantum wells), and enables the clear
identification of light atoms in beam sensitive materials. Our results
demonstrate the potential of automated frequency-based deep learning denoising
as a useful tool for atomic-resolution nano-materials analysis.
|
[{'version': 'v1', 'created': 'Sat, 3 May 2025 11:31:53 GMT'}]
|
2025-05-06
|
Yoel Zimmermann, Adib Bazgir, Alexander Al-Feghali, Mehrad Ansari, L.
Catherine Brinson, Yuan Chiang, Defne Circi, Min-Hsueh Chiu, Nathan Daelman,
Matthew L. Evans, Abhijeet S. Gangan, Janine George, Hassan Harb, Ghazal
Khalighinejad, Sartaaj Takrim Khan, Sascha Klawohn, Magdalena Lederbauer,
Soroush Mahjoubi, Bernadette Mohr, Seyed Mohamad Moosavi, Aakash Naik, Aleyna
Beste Ozhan, Dieter Plessers, Aritra Roy, Fabian Sch\"oppach, Philippe
Schwaller, Carla Terboven, Katharina Ueltzen, Shang Zhu, Jan Janssen, Calvin
Li, Ian Foster, and Ben Blaiszik
|
34 Examples of LLM Applications in Materials Science and Chemistry:
Towards Automation, Assistants, Agents, and Accelerated Scientific Discovery
| null | null | null |
cs.LG cond-mat.mtrl-sci
|
Large Language Models (LLMs) are reshaping many aspects of materials science
and chemistry research, enabling advances in molecular property prediction,
materials design, scientific automation, knowledge extraction, and more. Recent
developments demonstrate that the latest class of models are able to integrate
structured and unstructured data, assist in hypothesis generation, and
streamline research workflows. To explore the frontier of LLM capabilities
across the research lifecycle, we review applications of LLMs through 34 total
projects developed during the second annual Large Language Model Hackathon for
Applications in Materials Science and Chemistry, a global hybrid event. These
projects spanned seven key research areas: (1) molecular and material property
prediction, (2) molecular and material design, (3) automation and novel
interfaces, (4) scientific communication and education, (5) research data
management and automation, (6) hypothesis generation and evaluation, and (7)
knowledge extraction and reasoning from the scientific literature.
Collectively, these applications illustrate how LLMs serve as versatile
predictive models, platforms for rapid prototyping of domain-specific tools,
and much more. In particular, improvements in both open source and proprietary
LLM performance through the addition of reasoning, additional training data,
and new techniques have expanded effectiveness, particularly in low-data
environments and interdisciplinary research. As LLMs continue to improve, their
integration into scientific workflows presents both new opportunities and new
challenges, requiring ongoing exploration, continued refinement, and further
research to address reliability, interpretability, and reproducibility.
|
[{'version': 'v1', 'created': 'Mon, 5 May 2025 22:08:37 GMT'}]
|
2025-05-07
|
Ting-Wei Hsu, Zhenyao Fang, Arun Bansil, and Qimin Yan
|
Accurate Prediction of Sequential Tensor Properties Using Equivariant
Graph Neural Network
| null | null | null |
cond-mat.mtrl-sci physics.optics
|
Optical spectra serve as a powerful tool for probing the interactions between
materials and light, unveiling complex electronic structures such as flat bands
and nontrivial topological features. These insights are crucial for the
development and optimization of photonic devices, including solar cells,
light-emitting diodes, and photodetectors, where understanding the electronic
structure directly impacts device performance. Moreover, in anisotropic bulk
materials, the optical responses are direction-dependent, and predicting those
response tensors still remains computationally demanding due to its inherent
complexity and the constraint from crystal symmetry. To address this challenge,
we introduce the sequential tensorial properties equivariant neural network
(StepENN), a graph neural network architecture that maps crystal structures
directly to their full optical tensors across different photon frequencies. By
encoding the isotropic sequential scalar components and anisotropic sequential
tensor components into l=0 and l=2 spherical tensor components, StepENN ensures
symmetry-aware sequential tensor predictions that are consistent with the
inherent symmetry constraints of crystal systems. Trained on a dataset of
frequency-dependent permittivity tensors for 1,432 bulk semiconductors computed
from first-principles methods, our model achieves a mean absolute error (MAE)
of 24.216 millifarads per meter (mF/m) on the predicted tensorial spectra with
85.7% of its predictions exhibiting less than 10% relative error, demonstrating
its potential for deriving other spectrum-related properties, such as optical
conductivity. This framework opens new avenues for the data-driven design of
materials with engineered anisotropic optical responses, accelerating material
advances in optoelectronic applications.
|
[{'version': 'v1', 'created': 'Thu, 8 May 2025 00:10:41 GMT'}]
|
2025-05-09
|
Pungponhavoan Tep and Marc Bernacki
|
High-fidelity Grain Growth Modeling: Leveraging Deep Learning for Fast
Computations
| null | null | null |
cond-mat.mtrl-sci cs.AI
|
Grain growth simulation is crucial for predicting metallic material
microstructure evolution during annealing and resulting final mechanical
properties, but traditional partial differential equation-based methods are
computationally expensive, creating bottlenecks in materials design and
manufacturing. In this work, we introduce a machine learning framework that
combines a Convolutional Long Short-Term Memory networks with an Autoencoder to
efficiently predict grain growth evolution. Our approach captures both spatial
and temporal aspects of grain evolution while encoding high-dimensional grain
structure data into a compact latent space for pattern learning, enhanced by a
novel composite loss function combining Mean Squared Error, Structural
Similarity Index Measurement, and Boundary Preservation to maintain structural
integrity of grain boundary topology of the prediction. Results demonstrated
that our machine learning approach accelerates grain growth prediction by up to
\SI{89}{\times} faster, reducing computation time from \SI{10}{\minute} to
approximately \SI{10}{\second} while maintaining high-fidelity predictions. The
best model (S-30-30) achieving a structural similarity score of
\SI{86.71}{\percent} and mean grain size error of just \SI{0.07}{\percent}. All
models accurately captured grain boundary topology, morphology, and size
distributions. This approach enables rapid microstructural prediction for
applications where conventional simulations are prohibitively time-consuming,
potentially accelerating innovation in materials science and manufacturing.
|
[{'version': 'v1', 'created': 'Thu, 8 May 2025 15:43:40 GMT'}]
|
2025-05-09
|
Jamie Holber and Krishna Garikipati
|
Equivariant graph neural network surrogates for predicting the
properties of relaxed atomic configurations
| null | null | null |
cond-mat.mtrl-sci physics.comp-ph
|
Density functional theory (DFT) calculations determine the relaxed atomic
positions and lattice parameters that minimize the formation energy of a
structure. We present an equivariant graph neural network (EGNN) model to
predict the outcome of DFT calculations for structures of interest. Cluster
expansions are a well established approach for representing the formation
energies. However, traditional cluster expansions are limited in their ability
to handle variations from a fixed lattice, including interstitial atoms,
amorphous materials, and materials with multiple structures. EGNNs offer a more
flexible framework that inherently respects the symmetry of the system without
being reliant on a particular lattice. In this work, we present the
mathematical framework and the results of training for lithium cobalt oxide
(LCO) at various compositions of lithium and arrangements of the lithium atoms.
Our results demonstrate that the EGNN can accurately predict quantities outside
the training set including the largest atomic displacements, the strain tensor
and energy, and the formation energy providing greater insight into the system
being studied without the need for more DFT calculations.
|
[{'version': 'v1', 'created': 'Mon, 12 May 2025 23:22:35 GMT'}]
|
2025-05-14
|
Purun-hanul Kim, Jeong Min Choi, Seungwu Han, Youngho Kang
|
Neural Network-Driven Molecular Insights into Alkaline Wet Etching of
GaN: Toward Atomistic Precision in Nanostructure Fabrication
| null | null | null |
cond-mat.mtrl-sci
|
We present large-scale molecular dynamics (MD) simulations based on a
machine-learning interatomic potential to investigate the wet etching behavior
of various GaN facets in alkaline solution-a process critical to the
fabrication of nitride-based semiconductor devices. A Behler-Parrinello-type
neural network potential (NNP) was developed by training on extensive DFT
datasets and iteratively refined to capture chemical reactions between GaN and
KOH. To simulate the wet etching of GaN, we perform NNP-MD simulations using
the temperature-accelerated dynamics approach, which accurately reproduces the
experimentally observed structural modification of a GaN nanorod during
alkaline etching. The etching simulations reveal surface-specific morphological
evolutions: pyramidal etch pits emerge on the $-c$ plane, while truncated
pyramidal pits form on the $+c$ surface. The non-polar m and a surfaces exhibit
lateral etch progression, maintaining planar morphologies. Analysis of MD
trajectories identifies key surface reactions governing the etching mechanisms.
To gain deeper insights into the etching kinetics, we conduct enhanced-sampling
MD simulations and construct free-energy profiles for Ga dissolution, a process
that critically influences the overall etching rate. The $-c$, $a$, and $m$
planes exhibit moderate activation barriers, indicating the feasibility of
alkaline wet etching. In contrast, the $+c$ surface displays a significantly
higher barrier, illustrating its strong resistance to alkaline etching.
Additionally, we show that Ga-O-Ga bridges can form on etched surfaces,
potentially serving as carrier traps. By providing a detailed atomistic
understanding of GaN wet etching, this work offers valuable guidance for
surface engineering in GaN-based device fabrication.
|
[{'version': 'v1', 'created': 'Tue, 13 May 2025 02:01:07 GMT'}]
|
2025-05-14
|
Xinyu You, Xiang Liu, Chuan-Shen Hu, Kelin Xia and Tze Chien Sum
|
Quotient Complex Transformer (QCformer) for Perovskite Data Analysis
| null | null | null |
cs.LG cond-mat.mtrl-sci
|
The discovery of novel functional materials is crucial in addressing the
challenges of sustainable energy generation and climate change. Hybrid
organic-inorganic perovskites (HOIPs) have gained attention for their
exceptional optoelectronic properties in photovoltaics. Recently, geometric
deep learning, particularly graph neural networks (GNNs), has shown strong
potential in predicting material properties and guiding material design.
However, traditional GNNs often struggle to capture the periodic structures and
higher-order interactions prevalent in such systems. To address these
limitations, we propose a novel representation based on quotient complexes
(QCs) and introduce the Quotient Complex Transformer (QCformer) for material
property prediction. A material structure is modeled as a quotient complex,
which encodes both pairwise and many-body interactions via simplices of varying
dimensions and captures material periodicity through a quotient operation. Our
model leverages higher-order features defined on simplices and processes them
using a simplex-based Transformer module. We pretrain QCformer on benchmark
datasets such as the Materials Project and JARVIS, and fine-tune it on HOIP
datasets. The results show that QCformer outperforms state-of-the-art models in
HOIP property prediction, demonstrating its effectiveness. The quotient complex
representation and QCformer model together contribute a powerful new tool for
predictive modeling of perovskite materials.
|
[{'version': 'v1', 'created': 'Wed, 14 May 2025 06:13:14 GMT'}]
|
2025-05-15
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.