---
license: mit
task_categories:
- other
language:
- en
tags:
- materials-science
- dft
- machine-learning
- high-entropy-materials
- mlip
- computational-chemistry
size_categories:
- 100K<n<1M
---

### Fine-tuning

This requires the new version of fairchem, which does not support eSEN-OAM (as of 2025-07-16).

#### Set Up the Python Environment

Create and activate a virtual environment using [uv](https://github.com/astral-sh/uv).

**Note**: All of the helper scripts are based on `uv`, which allows multiple versions of Python to be used and manages all the dependencies automatically. If you do not use `uv`, you will need to manage the environments yourself using `pip`. We do not recommend this, as the different models require different versions of Python and torch, which is difficult to maintain. It is important to run the Python scripts directly, instead of invoking them with `python`, so that `uv` can manage the dependencies automatically.

```sh
curl -LsSf https://astral.sh/uv/install.sh | sh
```

#### Prepare the Data

First, download the UMA model and save it to `./models/uma-s-p1.pt`.

Select the data you want to use for the fine-tuning. Move all the training sets into a directory named `train`, and all the validation sets into a directory named `val`. For example, to pick the high-entropy alloys and oxides:

```sh
mkdir train
tar xaf high-entropy-alloys-train.tar.zst
tar xaf high-entropy-oxides-train.tar.zst
mv high-entropy-alloys-train/* train/
mv high-entropy-oxides-train/* train/

mkdir val
tar xaf high-entropy-alloys-val.tar.zst
tar xaf high-entropy-oxides-val.tar.zst
mv high-entropy-alloys-val/* val/
mv high-entropy-oxides-val/* val/
```

We also need the fine-tuning scripts from fairchem:

```sh
git clone https://github.com/facebookresearch/fairchem
mv fairchem/{src,configs} .
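
# Optional sanity check before moving on (the paths below are the ones
# used in this walkthrough; adjust them if your layout differs):
for p in models/uma-s-p1.pt train val src configs; do
  [ -e "$p" ] && echo "ok: $p" || echo "missing: $p"
done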
```

#### (Optional) Run Evaluations Before Fine-tuning

##### Force/Energy Errors

It is helpful to see the performance of the base model before fine-tuning:

```sh
./evaluate_uma.py test/
```

This produces a jsonl file which can be plotted with another helper script:

```sh
./plot_energy.py test_uma_ef.jsonl
```

##### Relaxations

```sh
./relax_uma.py test_prototypes/
```

This produces a new folder of `.aselmdb` files named `test_prototypes_uma_relaxed`, which can be plotted:

```sh
./plot_energy.py test_prototypes_uma_relaxed.jsonl
```

#### Run Fine-tuning

First, generate the configuration using fairchem's helper script:

```sh
uv run src/fairchem/core/scripts/create_uma_finetune_dataset.py --train-dir train/ --val-dir val/ --output-dir ./finetune_output --uma-task=omat --regression-task ef
```

(`ef` means energy+force; force is required to run relaxations. `efs` can also be used to include stress.)

The configuration file is at `finetune_output/uma_sm_finetune_template.yaml`. Edit it to increase the step count between evals, so as to not slow down the training excessively. Find the corresponding keys and change them to the following:

```yaml
evaluate_every_n_steps: 5000
checkpoint_every_n_steps: 5000
```

If you have more than one GPU, select one now (run `nvidia-smi` to see which GPU is free). fairchem's fine-tuning scripts only work on a single GPU, but this should be relatively fast.

```sh
export CUDA_VISIBLE_DEVICES=0
```

Now you can run the fine-tuning. This will take a while, so it is recommended to use `screen`, `tmux`, or a similar tool to avoid interruptions.

```sh
uv run fairchem -c finetune_output/uma_sm_finetune_template.yaml job.run_dir=./finetune_out
```

After the fine-tuning you should get a checkpoint inside the `finetune_out` directory.

### Evaluation of the Fine-tuned Model

Now that you have a fine-tuned model, you can evaluate it on the test set. Several metrics can be used for this.
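The evaluation commands below pass the fine-tuned checkpoint via `--model`; the exact filename under `finetune_out` depends on the run. A small sketch for locating the most recently written checkpoint (the directory layout here is an assumption — check your own `job.run_dir`):

```sh
# find the newest .pt file under finetune_out
# (run-directory layout is an assumption; adjust to your run)
ckpt=$(find finetune_out -name '*.pt' 2>/dev/null | xargs -r ls -t 2>/dev/null | head -n 1)
echo "${ckpt:-no checkpoint found}"
```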
Common ones included in this repository are:

- Mean absolute error of force and energy predictions (vs DFT)
- Geometry error of relaxed structures (root-mean-squared displacement, RMSD)

#### Force/Energy Errors

Evaluate the fine-tuned checkpoint on the test set:

```sh
./evaluate_uma.py --model finetune_out/checkpoint.pt test/
```

This produces a jsonl file which can be plotted with another helper script:

```sh
./plot_energy.py test/ test_uma_relaxed/
```

#### Relaxations

```sh
./relax_uma.py --model finetune_out/checkpoint.pt test_prototypes/
```

This produces a new folder of `.aselmdb` files named `test_prototypes_uma_relaxed`, which can be plotted. Note that the energies here should be compared against the final relaxed structures, which are provided separately in `*-test-final.tar.zst`.

```sh
./plot_energy.py test_final/ test_prototypes_uma_relaxed/
```

---

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{chaos_matforge_2024,
  title={CHAOS-MatForge: Combinatorial High-throughput Analysis and Optimization for Synthesis Database},
  author={Han, Guangshuai and Tseng, Shao-Yu and Li, Tianhao and Oses, Corey},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/entropy4energy/S4E-MatForge}
}
```