
Model Card for CrystaLLM-pi_COD-XRD

Model Details

Model Description

CrystaLLM-pi_COD-XRD is a conditional generative model designed for the recovery of crystal structures from X-ray Diffraction (XRD) data. It is a fine-tuned version of the CrystaLLM-pi framework, based on a GPT-2 decoder-only architecture. This variant employs the Residual Attention (Slider) mechanism to condition the generation of Crystallographic Information Files (CIFs) on high-dimensional experimental data.

The model generates crystal structures conditioned on an XRD input vector encoding the 20 most intense peaks:

  1. Peak Positions ($2\theta$)
  2. Peak Intensities
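The fixed-length conditioning vector described above can be sketched as follows. The 20-peak limit and the -100 padding value come from this card; the exact ordering and normalization used by CrystaLLM-pi are assumptions here, shown only to illustrate the input format.

```python
# Illustrative packing of an XRD pattern into a fixed-length vector.
# Sorting by intensity and normalizing to the strongest peak are
# assumptions, not the confirmed CrystaLLM-pi preprocessing.

N_PEAKS = 20
PAD = -100.0  # padding value for missing peaks, per the model card

def make_xrd_vector(two_theta, intensity):
    """Pack up to 20 (position, intensity) pairs into one vector.

    Peaks are sorted strongest-first, intensities are normalized to
    the strongest peak, and empty slots are padded with -100.
    """
    pairs = sorted(zip(two_theta, intensity), key=lambda p: -p[1])[:N_PEAKS]
    i_max = max(i for _, i in pairs)
    pos = [t for t, _ in pairs]
    inten = [i / i_max for _, i in pairs]
    pad = [PAD] * (N_PEAKS - len(pairs))
    return pos + pad + inten + pad

# Three peaks (e.g. from a measured pattern); the rest are padded.
vec = make_xrd_vector([26.6, 33.9, 37.8], [1000.0, 450.0, 300.0])
```

The resulting vector has length 40 (20 positions followed by 20 normalized intensities), with -100 marking absent peaks.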
  • Developed by: Bone et al. (University College London)
  • Model type: Autoregressive Transformer with Residual Attention Conditioning
  • Language(s): CIF (Crystallographic Information File) syntax
  • License: MIT
  • Finetuned from model: c-bone/CrystaLLM-pi_base


Uses

Direct Use

The model is intended for structure solution and recovery from powder XRD data. Researchers can input a list of peak positions and intensities derived from experimental diffraction patterns to generate candidate crystal structures that match the experimental signature.

Out-of-Scope Use

  • Disordered Systems: The model was trained on ordered approximations of structures from the Crystallography Open Database (COD). It does not natively handle partial occupancies or significant disorder.
  • Large Unit Cells: the model's context window restricts generation to structures of roughly 20 atoms per unit cell.
  • Organic/MOFs: The training data was filtered to exclude hydrocarbon-containing compounds (inorganic only).

Bias, Risks, and Limitations

  • Experimental Noise: While robust to some experimental deviations, the model's performance relies on the quality of the input peak extraction.
  • Missing Data: The "Slider" mechanism is designed to handle missing peaks (padded with -100), but significant data loss will degrade recovery rates.
  • Polymorphs: In cases of strong structural similarity or ambiguous diffraction patterns, the model may be biased towards the polymorph most represented in the training distribution.

How to Get Started with the Model

For instructions on how to load and run generation with this model, please refer to the _load_and_generate.py script in the CrystaLLM-pi GitHub Repository. This script handles the necessary tokenization and normalization of XRD vectors.

Training Details

Training Data

The model underwent a two-stage fine-tuning process:

  1. MatterGen XRD: Theoretical XRD patterns generated from the MatterGen dataset.
  2. COD XRD: Experimental XRD data from the Crystallography Open Database (COD), filtered for inorganic structures and processed to remove partial occupancies.

Training Procedure

  • Architecture: GPT-2 with Residual Attention (Slider) layers. (~47.7M parameters)
  • Mechanism: The Slider mechanism computes a parallel attention score for the conditioning vector and dynamically weights it against the base self-attention. This allows for "softer" conditioning and robust handling of heterogeneous or missing data points in the diffraction pattern.
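The residual-attention idea described above can be sketched with a toy single-head example. The gating form, shapes, and the scalar blend weight are illustrative assumptions, not the actual CrystaLLM-pi implementation, where the weighting is learned.

```python
# Toy sketch of residual ("Slider") conditioning: a parallel attention
# pass over the conditioning tokens is blended with base self-attention.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    # Standard scaled dot-product attention.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def slider_block(x, cond, gate):
    """x: (T, d) token states; cond: (C, d) conditioning tokens;
    gate: blend weight in [0, 1] (learned per layer in practice)."""
    self_out = attend(x, x, x)        # base self-attention
    cond_out = attend(x, cond, cond)  # parallel attention on the condition
    return (1 - gate) * self_out + gate * cond_out

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
cond = rng.standard_normal((2, 8))
y = slider_block(x, cond, gate=0.3)
```

With `gate=0` the block reduces to plain self-attention, which is what allows the "softer" conditioning: degenerate or heavily padded conditioning vectors can be down-weighted rather than forced into the generation.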

Evaluation

Metrics

The model is evaluated based on:

  1. Match Rate: The percentage of ground truth structures successfully recovered (within structural similarity tolerances).
  2. RMS-d: Root Mean Square distance between the ground truth and generated structures.
  3. Lattice Parameter MAE: Mean Absolute Error of the predicted unit cell dimensions.
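Metric (3) is straightforward to compute; a minimal sketch, using an approximate rutile TiO2 cell as a made-up example:

```python
# Mean absolute error over the six lattice parameters
# (a, b, c in Angstroms; alpha, beta, gamma in degrees).

def lattice_mae(true_cell, pred_cell):
    return sum(abs(t - p) for t, p in zip(true_cell, pred_cell)) / len(true_cell)

true_cell = (4.59, 4.59, 2.96, 90.0, 90.0, 90.0)  # rutile TiO2 (approx.)
pred_cell = (4.62, 4.57, 2.98, 90.0, 90.0, 90.0)  # hypothetical prediction
err = lattice_mae(true_cell, pred_cell)
```

Note that averaging lengths and angles together mixes units; reporting them separately is common, and the convention used in the paper's evaluation may differ.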

Results

The model achieves competitive performance on the MP-20 and experimental COD benchmarks, effectively recovering structures from experimental XRD data with reduced computational cost compared to traditional solution methods.

Citation

@misc{bone2025discoveryrecoverycrystallinematerials,
      title={Discovery and recovery of crystalline materials with property-conditioned transformers}, 
      author={Cyprien Bone and Matthew Walker and Kuangdai Leng and Luis M. Antunes and Ricardo Grau-Crespo and Amil Aligayev and Javier Dominguez and Keith T. Butler},
      year={2025},
      eprint={2511.21299},
      archivePrefix={arXiv},
      primaryClass={cond-mat.mtrl-sci},
      url={https://arxiv.org/abs/2511.21299}, 
}