# GMD-26: Benchmarking Compositional Generalisation for Machine Learning Force Fields
GMD-26 is a systematic benchmark dataset for evaluating the compositional generalisation capabilities of Machine Learning Force Fields (MLFFs). Unlike standard benchmarks that train and test on the same molecular systems, GMD-26 explicitly separates training and test molecules, probing whether models have learned transferable physical principles or merely interpolate within their training distribution.
Paper: Benchmarking Compositional Generalisation for Learning Inter-atomic Potentials (under review; see the citation below)
## Dataset Overview
GMD-26 covers 118 molecules across ten fragment groups based on substituted linear alkyl carbon chains, yielding over 296,000 labelled geometries. Each molecule has two DFT-labelled MD trajectories:
- Primary trajectory (~2,013 snapshots): used for the training and OOD test splits
- Secondary trajectory (500 snapshots): used for the in-distribution (ID) test splits (the first 100 snapshots for hyperparameter tuning, the remaining 400 for ID evaluation)
Energies and forces are labelled at the PBE/def2-TZVP + D3BJ level of theory using ORCA, making this dataset directly compatible with standard MLFF training pipelines.
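As a concrete illustration of the split protocol above, the snippet below reads one secondary trajectory with ASE and carves it into the tuning and ID-evaluation subsets. This is only a sketch: the file name `ethanol_secondary.traj` is a hypothetical placeholder, so substitute the actual trajectory paths from the downloaded repository.

```python
# Minimal sketch, assuming a labelled ASE .traj file per molecule.
# "ethanol_secondary.traj" is a hypothetical file name.
from ase.io.trajectory import Trajectory

frames = list(Trajectory("ethanol_secondary.traj"))  # 500 snapshots per secondary trajectory

tuning_frames = frames[:100]    # first 100: hyperparameter tuning
id_test_frames = frames[100:]   # remaining 400: in-distribution evaluation

# Each frame carries the DFT labels (energy in eV, forces in eV/Å).
tuning_energies = [atoms.get_potential_energy() for atoms in tuning_frames]
tuning_forces = [atoms.get_forces() for atoms in tuning_frames]
```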
## Molecular Families
| Functional Group | Example Molecules |
|---|---|
| Alkanes | Ethane – Tridecane |
| Primary Alcohols | Ethanol – Pentadecan-1-ol |
| Aldehydes | Ethanal – Pentadecanal |
| Carboxylic Acids | Ethanoic acid – Pentadecanoic acid |
| Primary Amines | Ethanamine – Octan-1-amine |
| Primary Amides | Butanamide – Octanamide |
| Diamines | Ethane-1,2-diamine – Decane-1,10-diamine |
| Dicarboxylic Acids | Ethanedioic acid – Hexadecanedioic acid |
| Amino Acids | 2-Aminoethanoic acid – 10-Aminodecanoic acid |
| Complex Multifunctional | Various diols, triols, tetraols, diones, dialdehydes |
## Benchmark Tasks
GMD-26 defines four tasks that each require a distinct form of compositional generalisation. In all tasks, training and OOD test molecules are disjoint.
### Task 1: Fragment Chain Extension
Tests whether a model can extrapolate to longer carbon chains than seen during training.
- Base: Train on alkanes C2–C6; test on alkanes C7–C13
- Augmented: Train on short/long alcohols and medium carboxylic acids; test on the complementary length ranges of each family
### Task 2: Fragment Composition
Tests whether a model can generalise to a novel functional group that is a chemical composition of functional groups seen individually during training.
- Base: Train on alcohols and aldehydes; test on carboxylic acids (–COOH ≈ –OH + –CHO)
- Augmented: Training additionally includes amines and amides, providing an explicit demonstration of functional group composition
### Task 3: Fragment Duplication
Tests whether a model can generalise from a molecule with one occurrence of a functional group to the same molecule with two occurrences.
- Train on monocarboxylic acids (C5–C10); test on the corresponding dicarboxylic acids of identical chain lengths
### Task 4: Fragment Combination
Tests whether a model can generalise to asymmetrically functionalised molecules when trained exclusively on symmetrically functionalised analogues.
- Train on diamines and dicarboxylic acids (C2–C9); test on the corresponding amino acids (one amine + one carboxylic acid)
## Task Splits at a Glance
| Task | Training Set | OOD Test Set |
|---|---|---|
| 1 Base | C2–C6 alkanes | C7–C13 alkanes |
| 1 Augmented | C2–C3 & C9–C15 alcohols; C4–C8 carboxylic acids | C4–C8 alcohols; C2–C3 & C9–C15 carboxylic acids |
| 2 Base | C4–C10 alcohols, aldehydes; C7–C11 complex carbonyls/alcohols | C4–C10 carboxylic acids |
| 2 Augmented | As above + C4–C10 amines and amides | C4–C10 carboxylic acids |
| 3 | C5–C10 monocarboxylic acids | C5–C10 dicarboxylic acids |
| 4 | C2–C9 diamines; C2–C9 dicarboxylic acids | C2–C9 amino acids |
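The pre-computed split files shipped with the dataset are the canonical definition of these splits. Purely for illustration, the sketch below shows how a split such as Task 1 Base could be expressed programmatically by filtering molecules on carbon count; the SMILES dictionary and helper function are assumptions for the example, not part of the dataset.

```python
# Illustrative only (not the official split files): select alkanes by carbon
# count from SMILES, mirroring the Task 1 Base split.
from rdkit import Chem

alkanes = {
    "ethane": "CC", "propane": "CCC", "butane": "CCCC",
    "heptane": "CCCCCCC", "tridecane": "CCCCCCCCCCCCC",
}

def carbon_count(smiles: str) -> int:
    """Number of carbon atoms in the molecule described by a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    return sum(1 for atom in mol.GetAtoms() if atom.GetSymbol() == "C")

train = [name for name, smi in alkanes.items() if 2 <= carbon_count(smi) <= 6]
ood_test = [name for name, smi in alkanes.items() if 7 <= carbon_count(smi) <= 13]
print(train, ood_test)  # ['ethane', 'propane', 'butane'] ['heptane', 'tridecane']
```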
## Generation Pipeline
Each trajectory was generated via a four-stage pipeline:
- Conformer generation: RDKit generates an initial 3D geometry from a SMILES string (a minimal sketch of this stage appears after this list)
- Trajectory sampling: FlashMD (PET-MAD / OMat-PES checkpoint) propagates molecular dynamics in vacuum for 2×10⁵ steps with a 16 fs timestep and a Langevin thermostat at 300 K, recording every 100 steps (~2,000 snapshots)
- DFT relabelling: each snapshot is relabelled at PBE/def2-TZVP + D3BJ using ORCA (`!EnGrad PBE D3BJ def2-TZVP RI TightSCF`); snapshots for which SCF convergence fails are excluded
- Packaging: outputs are stored in extended XYZ format (compatible with ASE) alongside pre-computed task splits
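For orientation, here is a minimal sketch of the first stage (conformer generation with RDKit). The ethanol SMILES is only an example; the later stages (FlashMD propagation and ORCA relabelling) require their own tooling and are not shown.

```python
# Minimal sketch of the conformer-generation stage, assuming only a SMILES
# string as input. "CCO" (ethanol) is an illustrative example.
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "CCO"
mol = Chem.AddHs(Chem.MolFromSmiles(smiles))   # explicit hydrogens for 3D embedding
AllChem.EmbedMolecule(mol, randomSeed=0)        # generate an initial 3D geometry
AllChem.MMFFOptimizeMolecule(mol)               # quick force-field relaxation

conf = mol.GetConformer()
for atom in mol.GetAtoms():
    pos = conf.GetAtomPosition(atom.GetIdx())
    print(atom.GetSymbol(), pos.x, pos.y, pos.z)
```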
## Data Format
Data is stored in ASE trajectory format (.traj), readable natively with the Atomic Simulation Environment. Each frame contains:
| Field | Description |
|---|---|
| `positions` | Atomic coordinates (Å) |
| `numbers` | Atomic numbers |
| `energy` | Total PBE/def2-TZVP + D3BJ energy (eV) |
| `forces` | Per-atom force vectors (eV/Å) |
Pre-computed split files for all four tasks are included so that reproducing the evaluation protocol does not require re-running any upstream generation stages.
## Loading the Dataset
```python
# Using huggingface_hub (raw files)
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="AmirMasoud/GMD-26",
    repo_type="dataset",
    local_dir="./GMD-26",
)

# Using ASE to read individual trajectory files
from ase.io.trajectory import Trajectory

traj = Trajectory("path/to/molecule.traj")
energies = [atoms.get_potential_energy() for atoms in traj]
forces = [atoms.get_forces() for atoms in traj]
```
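A possible follow-up, assuming the repository stores one `.traj` file per trajectory somewhere under the downloaded directory (adjust the glob pattern to the actual layout):

```python
# Sketch: walk the downloaded snapshot and collect labels for every trajectory.
from pathlib import Path
from ase.io.trajectory import Trajectory

dataset = {}
for traj_path in Path("./GMD-26").rglob("*.traj"):
    frames = list(Trajectory(str(traj_path)))
    dataset[traj_path.stem] = {
        "energies": [atoms.get_potential_energy() for atoms in frames],  # eV
        "forces": [atoms.get_forces() for atoms in frames],              # eV/Å
    }
```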
## GOAL Training Framework
Alongside the dataset we release GOAL (available upon paper acceptance), a PyTorch Lightning-based framework that:
- Exposes GMD-26 as a benchmark module with a unified data-loading interface
- Supports from-scratch training of arbitrary MLFF architectures via a single `forward(graph) -> dict` method (a hedged sketch of this interface follows the list)
- Wraps foundation models (MACE-MP-0 small/medium/large, UMA-small) through thin adapters for fine-tuning under identical conditions
- Supports DDP, FSDP, and FSDP2 for multi-GPU and multi-node training (tested on NVIDIA GH200)
- Configures all experiments through Hydra for full reproducibility
- Exposes trained models as ASE calculators for downstream MD simulation
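Since GOAL is only available upon acceptance, its exact API is not public; the sketch below is a hedged guess at what a model exposing the `forward(graph) -> dict` contract could look like. The class name, graph fields, and output keys are all hypothetical.

```python
# Hedged sketch only: not GOAL's actual API. A toy PyTorch model that returns
# an energy and forces (as the negative gradient of the energy) in a dict.
import torch

class MyMLFF(torch.nn.Module):
    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        self.embed = torch.nn.Embedding(100, hidden_dim)   # per-element embedding
        self.readout = torch.nn.Linear(hidden_dim, 1)      # per-atom energy head

    def forward(self, graph) -> dict:
        positions = graph["positions"].requires_grad_(True)   # (N, 3), hypothetical field
        h = self.embed(graph["numbers"])                      # (N, hidden), hypothetical field
        # Toy position dependence: scale atomic terms by distance to the centroid.
        dist = (positions - positions.mean(dim=0)).norm(dim=-1, keepdim=True)
        energy = (self.readout(h) * dist).sum()
        forces = -torch.autograd.grad(energy, positions, create_graph=True)[0]
        return {"energy": energy, "forces": forces}
```

The idea is that the foundation-model adapters expose the same contract, so from-scratch and fine-tuned models can be trained and evaluated under identical conditions.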
## Extending the Benchmark
New molecular families can be added by providing SMILES strings and a YAML task specification:
```yaml
task: chain_extension
fragment: alcohol
train_lengths: [2, 3, 4, 5, 6]
ood_lengths: [7, 8, 9, 10, 11, 12, 13]
n_snapshots_per_trajectory: 1000
```
The four task templates are defined over abstract fragment sets and are reusable without modification.
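A minimal way to consume such a specification in Python might look like the following; the file name is hypothetical and the loader is only a sketch, not GOAL's actual interface.

```python
# Illustrative loader for the task specification above (requires PyYAML).
import yaml

with open("alcohol_chain_extension.yaml") as f:  # hypothetical file name
    spec = yaml.safe_load(f)

assert spec["task"] == "chain_extension"
train_lengths = set(spec["train_lengths"])
ood_lengths = set(spec["ood_lengths"])
assert train_lengths.isdisjoint(ood_lengths)  # training and OOD molecules must not overlap
print(spec["fragment"], sorted(train_lengths), sorted(ood_lengths))
```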
## Citation
If you use GMD-26 in your research, please cite:
```bibtex
@inproceedings{nourollah2026gmd26,
  title     = {Benchmarking Compositional Generalisation for Learning Inter-atomic Potentials},
  author    = {Nourollah, Amir Masoud and Khalid, Irtaza and Leoni, Stefano and Schockaert, Steven},
  booktitle = {The Fourteenth International Conference on Learning Representations (ICLR 2026)},
  year      = {2026},
  url       = {https://openreview.net/forum?id=WxTlAbRUE6}
}
```
Note: The paper is currently under review. If accepted, the citation will be updated with the final publication details.
## Licence
This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence.