|
|
--- |
|
|
license: mit |
|
|
task_categories: |
|
|
- other |
|
|
language: |
|
|
- en |
|
|
tags: |
|
|
- materials-science |
|
|
- dft |
|
|
- machine-learning |
|
|
- high-entropy-materials |
|
|
- mlip |
|
|
- computational-chemistry |
|
|
size_categories: |
|
|
- 1M<n<10M
|
|
--- |
|
|
|
|
|
# CHAOS-MatForge: Combinatorial High-throughput Analysis and Optimization for Synthesis Database |
|
|
|
|
|
This dataset is a contribution from the **Entropy for Energy (S4E) Laboratory** at **Johns Hopkins University**, led by Prof. Corey Oses. |
|
|
|
|
|
We release a large-scale dataset of millions of high-quality VASP calculations for training machine learning interatomic potentials (MLIPs). This database provides comprehensive coverage of both **ordered** and **disordered** high-entropy material systems.
|
|
|
|
|
For more information about the S4E Laboratory, visit [https://s4e.ai/](https://s4e.ai/). |
|
|
|
|
|
## Dataset Overview and Statistics |
|
|
|
|
|
The CHAOS-MatForge database contains **millions** of atomic structures with DFT-calculated energies, forces, and structural information. The dataset is designed to support the development and fine-tuning of state-of-the-art MLIP models. |
|
|
|
|
|
### Ordered Structures (Coming Soon) |
|
|
|
|
|
Ordered crystal structures for comparison and validation will be available in future releases. |
|
|
|
|
|
### Disordered Structures |
|
|
|
|
|
High-entropy systems with configurational disorder, generated through AFLOW-POCC (Partially Occupied Crystalline Configurations) methodology. Different snapshots are extracted to represent the configurational space. |
|
|
|
|
|
The disordered structures currently include two main material categories: |
|
|
1. **High-Entropy Alloys (HEA)**: Body-Centered Cubic (BCC) structure |
|
|
2. **High-Entropy Oxides (HEO)**: Rocksalt structure |
|
|
|
|
|
#### Structure Counts |
|
|
Structure counts include all ionic steps from the relaxation trajectories: |
|
|
|
|
|
| Category            | Train   | Validation | Test   |
|---------------------|---------|------------|--------|
| alloys              | 467017  | 59345      | 57753  |
| oxides              | 2135365 | 264284     | 250559 |
| high-entropy-alloys | 168633  | 49675      | 21980  |
| high-entropy-oxides | 127086  | 9703       | 13890  |
|
|
|
|
|
#### Formula Counts |
|
|
Number of unique chemical compositions: |
|
|
|
|
|
| Category            | Train | Validation | Test |
|---------------------|-------|------------|------|
| alloys              | 6824  | 805        | 885  |
| oxides              | 7995  | 944        | 982  |
| high-entropy-alloys | 88    | 24         | 12   |
| high-entropy-oxides | 169   | 15         | 16   |
|
|
|
|
|
#### System Counts |
|
|
Number of distinct snapshots (configurational instances): |
|
|
|
|
|
| Category            | Train | Validation | Test |
|---------------------|-------|------------|------|
| alloys              | 13681 | 1692       | 1665 |
| oxides              | 38294 | 4814       | 4512 |
| high-entropy-alloys | 167   | 47         | 22   |
| high-entropy-oxides | 169   | 15         | 16   |
|
|
|
|
|
**Note**: A system is defined as a combination of formula and space group. High-entropy systems are approximated using the POCC formalism, and each contains multiple sets of structures as ordered representatives.
|
|
|
|
|
Each split is fully deterministic: structures are assigned to train, validation, or test based on the SHA-256 hash of their formula.
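The exact bucketing convention is internal to the dataset pipeline, but the idea can be sketched in a few lines of Python. The split fractions and the hash-to-float mapping below are illustrative assumptions, not the dataset's actual convention:

``` python
import hashlib

def assign_split(formula: str, val_frac: float = 0.1, test_frac: float = 0.1) -> str:
    """Deterministically map a formula to a split via its SHA-256 hash.

    The fraction boundaries and hash-to-float mapping are illustrative
    assumptions, not the exact convention used to build this dataset.
    """
    digest = hashlib.sha256(formula.encode("utf-8")).hexdigest()
    # Map the first 8 hex digits to a float in [0, 1).
    bucket = int(digest[:8], 16) / 0x100000000
    if bucket < test_frac:
        return "test"
    if bucket < test_frac + val_frac:
        return "val"
    return "train"

# The same formula always lands in the same split, on any machine.
print(assign_split("CoCrFeMnNi"))
```

Because the assignment depends only on the formula, all snapshots and ionic steps of a composition end up in the same split, avoiding train/test leakage between near-identical structures.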
|
|
|
|
|
--- |
|
|
|
|
|
## Data Format |
|
|
|
|
|
- **File Format**: `.tar.zst` (zstandard-compressed tar archives)
- **Database Format**: `.aselmdb` (ASE database format)
- **Each Structure Contains**:
  - Atomic positions (3D coordinates)
  - Cell parameters (lattice vectors)
  - Energy (DFT-calculated, in eV)
  - Forces (on each atom, in eV/Å)
  - Chemical composition and metadata
|
|
|
|
|
--- |
|
|
|
|
|
## Tutorials |
|
|
|
|
|
**Note**: The helper scripts required for the tutorials (e.g., `evaluate_uma.py`, `relax_uma.py`, `plot_energy.py`) can be found in the [GitHub repository](https://github.com/entropy4energy/aflow-data-release). |
|
|
|
|
|
### Fine-tuning Meta's UMA Model |
|
|
|
|
|
<details><summary>Open to show fine-tuning instructions</summary> |
|
|
|
|
|
This requires the new version of fairchem, which does not support eSEN-OAM (as of 2025-07-16).
|
|
|
|
|
#### Set Up the Python Environment |
|
|
Create and activate a virtual environment using [uv](https://github.com/astral-sh/uv).
|
|
|
|
|
**Note**: All of the helper scripts are based on `uv`, which allows multiple versions of Python to be used and manages all dependencies automatically.
|
|
|
|
|
If you do not use `uv`, you will need to manage the environments yourself using `pip`. We do not recommend this, as the different models require different versions of Python and torch, which is difficult to maintain.
|
|
|
|
|
It is important to run the Python scripts directly (e.g. `./evaluate_uma.py`), instead of invoking them with `python`, so that `uv` can manage the dependencies automatically.
|
|
|
|
|
``` sh |
|
|
curl -LsSf https://astral.sh/uv/install.sh | sh |
|
|
``` |
|
|
|
|
|
#### Prepare the Data |
|
|
First, download the UMA model and save it to `./models/uma-s-p1.pt`.
|
|
|
|
|
Select the data you want to use for fine-tuning. Move all of the training sets into a directory named `train`, and all of the validation sets into a directory named `val`.
|
|
|
|
|
For example, if we want to pick up the high-entropy alloys and oxides: |
|
|
|
|
|
``` sh |
|
|
mkdir train |
|
|
tar xaf high-entropy-alloys-train.tar.zst |
|
|
tar xaf high-entropy-oxides-train.tar.zst |
|
|
mv high-entropy-alloys-train/* train/ |
|
|
mv high-entropy-oxides-train/* train/ |
|
|
|
|
|
mkdir val |
|
|
tar xaf high-entropy-alloys-val.tar.zst |
|
|
tar xaf high-entropy-oxides-val.tar.zst |
|
|
mv high-entropy-alloys-val/* val/ |
|
|
mv high-entropy-oxides-val/* val/ |
|
|
``` |
|
|
|
|
|
We also need to get the fine-tuning scripts from fairchem: |
|
|
|
|
|
``` sh |
|
|
git clone https://github.com/facebookresearch/fairchem |
|
|
mv fairchem/{src,configs} . |
|
|
``` |
|
|
|
|
|
#### (Optional) Run Evaluations Before Fine-tuning |
|
|
|
|
|
##### Force/Energy Errors |
|
|
It is helpful to see the performance of the base model first before fine-tuning. |
|
|
|
|
|
``` sh |
|
|
./evaluate_uma.py test/ |
|
|
``` |
|
|
|
|
|
This produces a jsonl file which can be plotted with another helper script: |
|
|
``` sh |
|
|
./plot_energy.py test_uma_ef.jsonl |
|
|
``` |
|
|
##### Relaxations |
|
|
|
|
|
``` sh |
|
|
./relax_uma.py test_prototypes/ |
|
|
``` |
|
|
This produces a new folder of `.aselmdb` files named `test_prototypes_uma_relaxed`, which can be plotted:
|
|
|
|
|
``` sh |
|
|
./plot_energy.py test_prototypes_uma_relaxed.jsonl |
|
|
``` |
|
|
|
|
|
#### Run Fine-tuning |
|
|
First, generate the configuration using fairchem's helper script: |
|
|
|
|
|
``` sh |
|
|
uv run src/fairchem/core/scripts/create_uma_finetune_dataset.py --train-dir train/ --val-dir val/ --output-dir ./finetune_output --uma-task=omat --regression-task ef |
|
|
``` |
|
|
|
|
|
(`ef` means energy + force; force is required to run relaxations. `efs` can also be used to include stress.)
|
|
|
|
|
The configuration file is at `finetune_output/uma_sm_finetune_template.yaml`. Edit it to increase the number of steps between evaluations, so that they do not slow down training excessively. Find the corresponding keys and change them to the following:
|
|
|
|
|
``` yaml |
|
|
evaluate_every_n_steps: 5000 |
|
|
checkpoint_every_n_steps: 5000 |
|
|
``` |
|
|
|
|
|
If you have more than one GPU, you should select a GPU now (run `nvidia-smi` to see which GPU is free). |
|
|
`fairchem`'s fine-tuning scripts only work on a single GPU, but this should be relatively fast.
|
|
|
|
|
``` sh |
|
|
export CUDA_VISIBLE_DEVICES=0 |
|
|
``` |
|
|
|
|
|
Now, you can run the fine-tuning. This will take a while. It is recommended to use `screen`, `tmux`, or a similar tool to avoid interruptions. |
|
|
|
|
|
``` sh
|
|
uv run fairchem -c finetune_output/uma_sm_finetune_template.yaml job.run_dir=./finetune_out |
|
|
``` |
|
|
|
|
|
After fine-tuning, you should find a checkpoint inside the `finetune_out` directory.
|
|
|
|
|
### Evaluation of the Fine-tuned Model |
|
|
|
|
|
Now that you have a fine-tuned model, you can evaluate it on the test set. |
|
|
|
|
|
Several metrics can be used for this. Common ones included in this repository are: |
|
|
- Mean absolute error of force and energy predictions (vs DFT) |
|
|
- Geometry error of relaxed structures (root-mean-squared displacement, RMSD). |
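Both metrics reduce to short formulas. Here is a minimal pure-Python sketch to make the definitions concrete (the repository's helper scripts compute these for you; the RMSD version below assumes the two structures are already aligned, with no rotation or translation fitting):

``` python
import math

def mae(pred, ref):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def rmsd(coords_a, coords_b):
    """Root-mean-squared displacement between two sets of 3D coordinates.

    Assumes the structures are already aligned (no rotation/translation fit).
    """
    sq = sum(
        (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
        for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b)
    )
    return math.sqrt(sq / len(coords_a))

print(mae([1.0, 2.0], [1.5, 2.5]))     # 0.5
print(rmsd([(0, 0, 0)], [(3, 4, 0)]))  # 5.0
```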
|
|
|
|
|
#### Force/Energy Errors |
|
|
|
|
|
Run the same evaluation as before, this time passing the fine-tuned checkpoint:
|
|
|
|
|
``` sh |
|
|
./evaluate_uma.py --model finetune_out/checkpoint.pt test/ |
|
|
``` |
|
|
|
|
|
This produces a jsonl file which can be plotted with another helper script: |
|
|
``` sh |
|
|
./plot_energy.py test_uma_ef.jsonl
|
|
``` |
|
|
#### Relaxations |
|
|
|
|
|
``` sh |
|
|
./relax_uma.py --model finetune_out/checkpoint.pt test_prototypes/ |
|
|
``` |
|
|
|
|
|
This produces a new folder of `.aselmdb` files named `test_prototypes_uma_relaxed`, which can be plotted. |
|
|
|
|
|
Note that the energies here should be compared against the final relaxed structures, which are provided separately in `*-test-final.tar.zst`.
|
|
|
|
|
``` sh |
|
|
./plot_energy.py test_final/ test_prototypes_uma_relaxed/ |
|
|
``` |
|
|
|
|
|
|
|
|
|
|
|
</details> |
|
|
|
|
|
--- |
|
|
|
|
|
## Citation |
|
|
|
|
|
If you use this dataset in your research, please cite: |
|
|
|
|
|
```bibtex |
|
|
@dataset{chaos_matforge_2024, |
|
|
title={CHAOS-MatForge: Combinatorial High-throughput Analysis and Optimization for Synthesis Database}, |
|
|
author={Han, Guangshuai and Tseng, Shao-Yu and Li, Tianhao and Oses, Corey}, |
|
|
year={2024}, |
|
|
publisher={Hugging Face}, |
|
|
url={https://huggingface.co/datasets/entropy4energy/S4E-MatForge} |
|
|
} |
|
|
``` |
|
|
|