# MaterialsSaddles

A high-throughput library of converged transition states for solid-state and surface chemistry. Hub URL: https://huggingface.co/datasets/AnonymouScientist/MaterialsSaddles

34,135,597 fully converged transition states computed by massively parallel saddle searches on top of public materials and catalysis datasets, using the SaddleMill package and Meta's `uma-s-1p2` machine-learning interatomic potential.
Each entry in a file is a single structure. Three consecutive entries form
one transition-state event: reactant minimum, transition state (first-order
saddle), product minimum. Endpoints are converged to 0.02 eV/Å (max|F|),
saddles to 0.05 eV/Å. Each saddle row also stores its eigenmode —
an (N, 3) per-atom displacement field giving the direction along which the
saddle is unstable.
## Quick stats

| | |
|---|---|
| Transition states (total) | 34,135,597 |
| Files (`.aselmdb`) | 416 |
| Rows per TS | 3 (reactant, saddle, product), in order |
| Saddle search method | Dimer (lemat / oc20 / oc22), NEB-CI (mp20bat) |
| Calculator | fairchem `uma-s-1p2` |
| Endpoint convergence (max\|F\|) | 0.02 eV/Å |
| Saddle convergence (max\|F\|) | 0.05 eV/Å |
| License | CC-BY-4.0 |
## Breakdown by source dataset

| Subset | Source | Method | # transition states | Files |
|---|---|---|---|---|
| `lemat/` | LeMat-Bulk | Dimer | 31,346,419 | 256 |
| `oc20/` | Open Catalyst 2020 (OC20) | Dimer | 2,587,101 | 96 |
| `oc22/` | Open Catalyst 2022 (OC22) | Dimer | 167,335 | 32 |
| `mp20bat/` | Materials Project battery structures | NEB-CI | 34,742 | 32 |
## Try it: minimal example notebook
A self-contained Jupyter notebook (example.ipynb) demonstrates loading the dataset, converting ASE-LMDB rows to ASE Atoms objects (including a non-obvious atoms.info round-trip), walking the (R, S, P) triplet layout, visualizing a reaction, loading the train/val/test split manifests, and reproducing two small panels of Fig. 1 of the accompanying paper. It auto-installs its dependencies and downloads roughly 1 GB of sample shards (one each from lemat, oc20, oc22, and mp20bat); end-to-end runtime is roughly five minutes on a typical laptop connection.
To run locally:

```bash
hf download AnonymouScientist/MaterialsSaddles example.ipynb \
  --repo-type dataset --local-dir .
jupyter notebook example.ipynb
```
You can also open the notebook directly in the Hugging Face web UI; it renders inline.
## Directory structure

```
.
├── README.md                      (this file)
├── DATASHEET.md                   (Datasheet for Datasets, Gebru et al. 2018)
├── lemat/
│   ├── lemat_dimer_000.aselmdb
│   ├── ...
│   └── lemat_dimer_255.aselmdb    (256 files)
├── mp20bat/
│   ├── mp20bat_neb_000.aselmdb
│   ├── ...
│   └── mp20bat_neb_031.aselmdb    (32 files)
├── oc20/
│   ├── oc20_dimer_000.aselmdb
│   ├── ...
│   └── oc20_dimer_095.aselmdb     (96 files)
└── oc22/
    ├── oc22_dimer_000.aselmdb
    ├── ...
    └── oc22_dimer_031.aselmdb     (32 files)
```
The split into multiple .aselmdb shards is purely for file-size convenience
(each shard is at most a few GB). Shards within a subset are interchangeable
and can be processed in any order.
## What is in each row?
Each `.aselmdb` is an ASE-LMDB database whose rows are stored in triplets:

```
row 0   reactant   (Dimer: side = -1 ; NEB: image_type = 'endpoint')
row 1   saddle     (Dimer: side =  0 ; NEB: image_type = 'climbing')   ← TS
row 2   product    (Dimer: side =  1 ; NEB: image_type = 'endpoint')
row 3   reactant
...
```
The rich per-row metadata lives in row.data['info']. After
row.toatoms(), atoms.info is empty — you have to copy
row.data['info'] over yourself (see Loading).
Each row also exposes a small set of searchable scalar key/value pairs
via row.key_value_pairs — at minimum task_name and ms_id, plus
src_index, side, and status where applicable. These are convenient
for db.select(...)-style filtering, but note that ASE's aselmdb backend
performs linear scans: queries are O(N) per shard, not indexed.
The keys in row.data['info'] vary by source dataset and saddle-search
method. Only task_name and ms_id are guaranteed on every row — for
anything else, the table below documents which subsets typically have it.
When in doubt, inspect row.data['info'].keys() for a few rows of your
target subset.
| key (in `info` dict) | who has it | what it is |
|---|---|---|
| `side` | dimer rows | -1 / 0 / 1 (reactant / saddle / product) |
| `image_type` | NEB rows | `'endpoint'` (reactant/product) or `'climbing'` (saddle) |
| `image_idx`, `subband_idx`, `nimages` | NEB rows | NEB band geometry |
| `image_converged`, `band_converged`, `band_converged_CI` | NEB rows | per-image / band-level convergence flags |
| `effective_fmax` | NEB rows | per-image max-force (NEB-modified) at convergence |
| `converged` | dimer endpoint rows | endpoint converged flag |
| `eigenmode` | saddle rows | (N, 3) float64 — the eigenmode at the saddle, one 3D displacement vector per atom |
| `curvature` | dimer saddles | eigenvalue along that eigenmode (eV/Å²) |
| `barrier`, `dE` | NEB saddles | reactant→TS and reactant→product energy differences (eV) |
| `is_reaction`, `n_formed_bonds`, `n_broken_bonds`, `formed_bonds`, `broken_bonds` | dimer rows | bond changes detected via ASE neighbor lists |
| `is_ads_reaction`, `n_ads_*`, `ads_*_bonds` | dimer rows | adsorbate-restricted bond changes |
| `parent_ts_index` | dimer endpoints | upstream SaddleMill identifier of the parent saddle (an internal pointer; the saddle in the same triplet is the immediately adjacent row in this `.aselmdb` file) |
| `src_index` | all rows | SaddleMill internal ID (file-local) |
| `ms_id` | all rows | global, contiguous row index across the entire dataset (0-indexed, 0..102,406,790). Unique across all 416 shards and all four subsets. Triplets occupy three consecutive ms_ids: 3k (reactant), 3k+1 (saddle), 3k+2 (product). Used by the `splits/` manifests below. |
| `task_name` | all rows | UMA task head used during the saddle search. Varies by subset: `"omat"` for lemat and mp20bat, `"oc20"` for oc20, `"oc22"` for oc22. |
| `status` | all rows | SaddleMill run status (always a `converged*` value here) |
| `orig_info` | all rows | nested dict; carries the source-dataset identifiers (see below) |
row.data may carry additional internal bookkeeping fields (e.g.
traj_path) that are artifacts of the upstream production pipeline and have
no scientific or downstream value. Read from row.data['info']; ignore
anything else.
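Because ms_ids are contiguous and triplet-aligned, the triplet index and role of any row follow from simple arithmetic. A minimal helper (the function name is ours, not part of the dataset tooling):

```python
def triplet_of(ms_id: int) -> tuple[int, str]:
    """Map a global ms_id to its triplet index and role,
    using the 3k / 3k+1 / 3k+2 layout documented above."""
    k, r = divmod(ms_id, 3)
    return k, ("reactant", "saddle", "product")[r]

# The last row of the dataset is the product of the last triplet:
# triplet_of(102_406_790) == (34_135_596, "product")
```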
## Where to find the source-dataset identifiers
The location depends on whether the input went through one or two stages of the SaddleMill pipeline before this release:
| Subset | Path to source IDs in the row's `info` dict | Example fields |
|---|---|---|
| lemat | `info['orig_info']['orig_info']` | `immutable_id` (e.g. `'agm005964602'`), `chemical_formula_*`, `functional`, `entalpic_fingerprint` |
| oc20 | `info['orig_info']['orig_info']` | `source_file` (e.g. `'random1176828.extxyz.xz'`) |
| oc22 | `info['orig_info']['orig_info']` | `sid`, `id`, `nads`, `natoms` |
| mp20bat | `info['orig_info']` | `discharge_id` / `charge_id` (Materials Project IDs, e.g. `'mp-1006112'`), `working_ion`, `removed_ion_idxs` |
For Dimer subsets info['orig_info'] itself is a SaddleMill-internal dict
(attempt_id, reaction_type, etc.) and the upstream identifiers live one
level deeper. For the NEB subset there's only one level of nesting.
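The one-vs-two-level nesting can be wrapped in a small accessor. A sketch (the function name and the toy dicts below are illustrative, not part of the dataset schema):

```python
def source_ids(info: dict, subset: str) -> dict:
    """Return the upstream source-identifier dict from a row's info dict.
    Dimer subsets (lemat / oc20 / oc22) nest the IDs two levels deep;
    the NEB subset (mp20bat) nests them one level deep."""
    inner = info["orig_info"]
    return inner if subset == "mp20bat" else inner["orig_info"]

# Toy illustration of the two shapes (values are placeholders):
dimer_info = {"orig_info": {"attempt_id": 7,
                            "orig_info": {"immutable_id": "agm..."}}}
neb_info = {"orig_info": {"discharge_id": "mp-...", "working_ion": "Li"}}
```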
## Intended uses
This dataset was built with three downstream uses in mind:
- Training generative models for transition-state prediction. Each triplet gives reactant + product (conditioning) and saddle (target). The eigenmode and bond-change annotations make it easy to filter for chemically meaningful events.
- Generating DFT labels to fight MLIP barrier softening. ML interatomic potentials systematically under-predict activation barriers. Computing single-point energies/forces on these saddles + endpoints with DFT yields targeted training data that pushes MLIPs toward correct barrier heights without requiring full DFT saddle searches.
- Warm-starting DFT saddle searches. The ML-relaxed saddles are usually close enough to the DFT minimum that running a Dimer/NEB at DFT level converges in a small number of force evaluations.
## Loading the data

### Requirements

```bash
pip install "ase>=3.26.0" ase_db_backends
```
`ase_db_backends` registers the `aselmdb` backend so `ase.db.connect(path, type="aselmdb")` works directly. No `fairchem-core` install is required to read the data. If you happen to have `fairchem-core` in your environment already, an `import fairchem.core.datasets` before `connect` works as a fallback registration on older stacks.
### ⚠ The `atoms.info` reconstruction trap
ASE's aselmdb backend does not round-trip atoms.info. Calling
row.toatoms() returns an Atoms object whose .info is empty — the full
original info dict (every metadata key documented above, including nested
orig_info and the eigenmode ndarray) lives in row.data["info"]. Always
use the canonical reader helper:
```python
def row_to_atoms(row):
    atoms = row.toatoms()
    atoms.info.update(row.data["info"])  # restore the original info dict
    return atoms
```
### Minimal example

```python
from ase.db import connect

db = connect("lemat/lemat_dimer_000.aselmdb", type="aselmdb")
for row in db.select(limit=10):
    atoms = row_to_atoms(row)
    print(atoms.get_chemical_formula(),
          "side=", atoms.info.get("side"),
          "image_type=", atoms.info.get("image_type"))
```
### Walking the rows in triplets

```python
from ase.db import connect

db = connect("lemat/lemat_dimer_000.aselmdb", type="aselmdb")
print(len(db), "rows ->", len(db) // 3, "transition states")

batch = []
for row in db.select():
    batch.append(row_to_atoms(row))
    if len(batch) == 3:
        reactant, saddle, product = batch
        print(saddle.get_chemical_formula(),
              "eigenmode", saddle.info["eigenmode"].shape if "eigenmode" in saddle.info else None,
              "curvature", saddle.info.get("curvature"),
              "barrier", saddle.info.get("barrier"))
        batch = []
```
The same loop works without modification on every subset
(lemat, oc20, oc22, mp20bat).
## Train / val / test splits
A precomputed stratified 90 / 5 / 5 train / val / test split is published
alongside the data under splits/. The split is:
- Stratified by subset — each of `lemat`, `oc20`, `oc22`, `mp20bat` is independently split 90 / 5 / 5, so the global split preserves subset proportions exactly.
- Triplet-level — the three rows of a transition-state event (reactant, saddle, product) always land in the same split.
- Deterministic — produced with NumPy seed 42 from the global `ms_id` enumeration. Re-running the build script reproduces the same assignment.
### Layout

```
splits/
├── lemat/     train.parquet  val.parquet  test.parquet
├── oc20/      train.parquet  val.parquet  test.parquet
├── oc22/      train.parquet  val.parquet  test.parquet
└── mp20bat/   train.parquet  val.parquet  test.parquet
```
Each parquet file has a single ms_id (uint32) column listing all ms_ids
that belong to that (subset, split) bucket, sorted ascending. Every triplet
contributes its three consecutive ms_ids (3k, 3k+1, 3k+2).
### Counts

| subset | total | train | val | test |
|---|---|---|---|---|
| lemat | 31,346,419 | 28,211,777 | 1,567,321 | 1,567,321 |
| oc20 | 2,587,101 | 2,328,391 | 129,355 | 129,355 |
| oc22 | 167,335 | 150,602 | 8,367 | 8,366 |
| mp20bat | 34,742 | 31,268 | 1,737 | 1,737 |
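As a quick sanity check, each subset's train/val/test counts sum exactly to its total, and the totals sum to the dataset-wide TS count:

```python
counts = {  # subset: (total, train, val, test), from the table above
    "lemat":   (31_346_419, 28_211_777, 1_567_321, 1_567_321),
    "oc20":    (2_587_101,  2_328_391,    129_355,   129_355),
    "oc22":    (167_335,      150_602,      8_367,     8_366),
    "mp20bat": (34_742,        31_268,      1_737,     1_737),
}
for subset, (total, train, val, test) in counts.items():
    assert train + val + test == total, subset
```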
### Using the splits

Train on a single subset:

```python
import polars as pl

train_ms_ids = pl.read_parquet("splits/lemat/train.parquet")["ms_id"].to_numpy()
```
Train on the union of all subsets:

```python
import polars as pl

train_ms_ids = pl.concat([
    pl.read_parquet(f"splits/{s}/train.parquet")
    for s in ("lemat", "oc20", "oc22", "mp20bat")
])["ms_id"].to_numpy()
```
Route rows from an `aselmdb` shard at iteration time. Because each parquet column is sorted, `np.searchsorted` is the fastest membership test:

```python
import numpy as np
from ase.db import connect  # ase_db_backends (see Requirements) registers the aselmdb backend

train_ms_ids = ...  # loaded as above; sorted uint32 array

db = connect("lemat/lemat_dimer_000.aselmdb", type="aselmdb")
for row in db.select():
    ms_id = row.data["info"]["ms_id"]
    idx = np.searchsorted(train_ms_ids, ms_id)
    if idx < len(train_ms_ids) and train_ms_ids[idx] == ms_id:
        # row is in the training split — feed it to your trainer
        ...
```
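If you instead collect all of a shard's ms_ids up front, `np.isin` with `assume_unique=True` produces the whole membership mask in one call (toy arrays shown; both inputs are unique by construction in this dataset):

```python
import numpy as np

train_ms_ids = np.array([0, 1, 2, 9, 10, 11], dtype=np.uint32)  # sorted split column (toy)
shard_ms_ids = np.arange(12, dtype=np.uint32)                   # one shard's rows (toy)

mask = np.isin(shard_ms_ids, train_ms_ids, assume_unique=True)
# rows where mask is True belong to the training split
```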
The same parquet files can be loaded with `datasets`:

```python
from datasets import load_dataset

ds = load_dataset(
    "AnonymouScientist/MaterialsSaddles",
    data_files={"train": "splits/lemat/train.parquet",
                "val": "splits/lemat/val.parquet",
                "test": "splits/lemat/test.parquet"},
)
```
### Reproducing the split
The 12 parquet files are produced deterministically: shuffle each subset's
triplet indices with NumPy seed 42, take the first 90% as train, next 5% as
val, last 5% as test, then expand each triplet t to its three consecutive
ms_ids {3t, 3t+1, 3t+2}. The per-subset triplet counts in the table
above are sufficient to regenerate every file under splits/ byte-identically.
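That recipe can be sketched as follows. The exact RNG construction (`np.random.default_rng(42)` vs. the legacy global-seed API) and the rounding of the 90/5/5 cut points are our assumptions; verify against the published parquets before relying on byte-identity:

```python
import numpy as np

def split_subset(n_triplets: int, seed: int = 42):
    """Sketch of the stated recipe: shuffle triplet indices, cut 90/5/5,
    then expand each triplet t to ms_ids {3t, 3t+1, 3t+2}."""
    rng = np.random.default_rng(seed)      # assumption: default_rng, not np.random.seed
    order = rng.permutation(n_triplets)
    n_train = round(0.90 * n_triplets)     # assumption: rounded cut points
    n_val = round(0.05 * n_triplets)
    parts = np.split(order, [n_train, n_train + n_val])
    expand = lambda t: np.sort(np.concatenate([3 * t, 3 * t + 1, 3 * t + 2]))
    return tuple(expand(p) for p in parts)  # (train, val, test) ms_id arrays
```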
## How the data was produced
We took fully relaxed structures from four public datasets (LeMat-Bulk, OC20,
OC22, and Materials Project battery structures) and ran high-throughput
saddle searches against each one using the
SaddleMill package, with Meta's
uma-s-1p2 universal interatomic potential
(fairchem-core) as the
calculator.
| Subset | Method | SaddleMill entrypoint |
|---|---|---|
| lemat | Dimer | `SaddleMill.dimeropt` |
| oc20 | Dimer | `SaddleMill.dimeropt` |
| oc22 | Dimer | `SaddleMill.dimeropt` |
| mp20bat | NEB-CI | `SaddleMill.nebopt` (climbing image) |
Initialization protocol (per-subset displacement modes such as vacancy,
hop_insert, kickout_*, ring, adsorbate_atom, diffusion, rotation,
…), eigenmode refinement, and post-search filtering are documented in the
companion paper.
After saddle convergence, every TS was validated by DoubleMinimization — displacing along the eigenmode in both directions and relaxing — and only triplets where the resulting endpoints actually correspond to two distinct basins (i.e. a real reaction occurred) are kept here. Anything that errored, hit a step limit, desorbed, or failed the reaction check is excluded.
### Convergence rate of the saddle searches
The fraction of attempted saddle searches that converged on a real reaction (and thus appear in this release):
| Subset | Search attempts | Released TS | Convergence rate |
|---|---|---|---|
| lemat | TODO | 31,346,419 | TODO |
| oc20 | TODO | 2,587,101 | TODO |
| oc22 | TODO | 167,335 | TODO |
| mp20bat | TODO | 34,742 | TODO |
Numbers to be filled in.
## Known limitations

- MLIP, not DFT. All saddles and endpoints in this release were converged with the `uma-s-1p2` MLIP rather than DFT. ML interatomic potentials systematically under-predict activation barriers, so the geometries here should be treated as approximate transition states. For DFT-level accuracy, run a single-point or short DFT saddle/NEB starting from these structures.
- `atoms.info` is not auto-restored by `row.toatoms()`. See the trap callout above. Always use the `row_to_atoms` helper.
- `row.key_value_pairs` queries are linear scans. ASE's aselmdb backend has no secondary indices; `db.select(side=0)` reads every row. For large filters, iterate the rows yourself in shard order.
- Multi-shard cursors are user code. Each shard is independent; if you need to iterate the entire subset (or use the `splits/` manifests), open shards in sequence and route rows by `row.data["info"]["ms_id"]`.
- Schema varies by source. Only `task_name` and `ms_id` are guaranteed on every row. NEB-derived rows (e.g. `mp20bat`) have a different `info` schema than dimer rows (e.g. no `side`, but `image_type` / `image_idx` / `barrier` instead). Inspect `row.data["info"].keys()` if you need to discover what's actually there for a given subset.
## Citation
If you use this dataset, please cite:
```bibtex
@article{TODO_OUR_PAPER,
  title   = {{TODO: paper title}},
  author  = {{TODO: authors}},
  journal = {{TODO: venue}},
  year    = {{TODO: year}},
  note    = {{TODO: link / arXiv ID}}
}
```
…and the upstream sources you actually used:
```bibtex
@article{chanussot2021oc20,
  title   = {Open Catalyst 2020 (OC20) Dataset and Community Challenges},
  author  = {Chanussot, Lowik and Das, Abhishek and Goyal, Siddharth and others},
  journal = {ACS Catalysis},
  volume  = {11},
  pages   = {6059--6072},
  year    = {2021},
  doi     = {10.1021/acscatal.0c04525}
}

@article{tran2023oc22,
  title   = {The Open Catalyst 2022 (OC22) Dataset and Challenges for Oxide Electrocatalysts},
  author  = {Tran, Richard and Lan, Janice and Shuaibi, Muhammed and others},
  journal = {ACS Catalysis},
  volume  = {13},
  pages   = {3066--3084},
  year    = {2023},
  doi     = {10.1021/acscatal.2c05426}
}

@article{jain2013mp,
  title   = {Commentary: The {Materials Project}: A materials genome approach to accelerating materials innovation},
  author  = {Jain, Anubhav and Ong, Shyue Ping and Hautier, Geoffroy and others},
  journal = {APL Materials},
  volume  = {1},
  number  = {1},
  pages   = {011002},
  year    = {2013},
  doi     = {10.1063/1.4812323}
}

@misc{lemat-bulk,
  title  = {{LeMat-Bulk}: A unified, deduplicated dataset of bulk crystal structures},
  author = {{Entalpic} and {Hugging Face}},
  year   = {2024},
  note   = {\url{https://huggingface.co/datasets/LeMaterial/LeMat-Bulk}}
}

@misc{uma2025,
  title  = {{UMA}: A Family of Universal Models for Atoms},
  author = {{Meta FAIR Chemistry}},
  year   = {2025},
  note   = {\url{https://github.com/facebookresearch/fairchem} -- model {\tt uma-s-1p2}}
}

@article{ase,
  title   = {The atomic simulation environment---a {Python} library for working with atoms},
  author  = {Larsen, Ask Hjorth and Mortensen, Jens J{\o}rgen and Blomqvist, Jakob and others},
  journal = {Journal of Physics: Condensed Matter},
  volume  = {29},
  pages   = {273002},
  year    = {2017},
  doi     = {10.1088/1361-648X/aa680e}
}
```
Several of the entries above are placeholders or trimmed; please verify the canonical version against the publisher before submission.
## License
This dataset is released under Creative Commons Attribution 4.0 International (CC-BY-4.0).
The upstream datasets retain their own licenses; consult them before any redistribution that combines this dataset with theirs.
## Changelog
- v1 — initial public release.
## Contact
Issues / questions: open a discussion on the Hugging Face Hub page.