---
license: mit
tags:
- Machine Learning Interatomic Potential
---

# Model Card for mace-universal / mace-mp

MACE-MP is a pretrained general-purpose foundational interatomic potential published with the [preprint arXiv:2401.00096](https://arxiv.org/abs/2401.00096).

This repository archives the pretrained checkpoints for manual loading with `MACECalculator` or for further fine-tuning. The easiest way to use the models is now to follow the **[documentation for foundation models](https://mace-docs.readthedocs.io/en/latest/examples/foundation_models.html)**.

All models are trained on MPTrj data, [Materials Project](https://materialsproject.org) relaxation trajectories compiled by the [CHGNet](https://arxiv.org/abs/2302.14231) authors, covering 89 elements and 1.6M configurations. The checkpoint was used for materials stability prediction on [Matbench Discovery](https://matbench-discovery.materialsproject.org/) and in the associated [preprint](https://arxiv.org/abs/2308.14920).
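
If you just want an ASE calculator backed by a pretrained MPTrj checkpoint, recent MACE releases ship a `mace_mp` convenience constructor that downloads and caches a model for you. A minimal sketch (the available size names and download behavior depend on your installed MACE version, so check the linked docs):

```python
from mace.calculators import mace_mp

# Downloads and caches a pretrained MACE-MP model ("small", "medium", or "large")
calc = mace_mp(model="medium", device="cpu", default_dtype="float64")
```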

[MACE](https://github.com/ACEsuit/mace) (Multiple Atomic Cluster Expansion) is a machine learning interatomic potential (MLIP) with higher-order equivariant message passing. For more information about the MACE formalism, please see the authors' [paper](https://arxiv.org/abs/2206.07697).
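
The central idea is that the message to atom $i$ is expanded in a hierarchy of body orders rather than summed over single neighbors only. Schematically (notation loosely follows the MACE paper; $\sigma_j^{(t)}$ denotes the state of atom $j$ at layer $t$, and the $u_\nu$ are learnable functions):

```latex
m_i^{(t)} = \sum_{j} u_1\left(\sigma_i^{(t)}; \sigma_j^{(t)}\right)
          + \sum_{j_1, j_2} u_2\left(\sigma_i^{(t)}; \sigma_{j_1}^{(t)}, \sigma_{j_2}^{(t)}\right)
          + \dots
          + \sum_{j_1, \dots, j_\nu} u_\nu\left(\sigma_i^{(t)}; \sigma_{j_1}^{(t)}, \dots, \sigma_{j_\nu}^{(t)}\right)
```

Truncating at $\nu = 1$ recovers conventional two-body message passing; MACE retains the higher-order terms while staying efficient by constructing them from products of the atomic basis.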

# Usage

1. (optional) Install PyTorch and [ASE](https://wiki.fysik.dtu.dk/ase/) at your preferred versions
2. Install [MACE](https://github.com/ACEsuit/mace) from GitHub (not from PyPI)

```shell
pip install git+https://github.com/ACEsuit/mace.git
```

3. Use `MACECalculator`

```python
from ase import units
from ase.build import bulk
from ase.md.npt import NPT
from mace.calculators import MACECalculator

# Rocksalt NaCl conventional cell (a ~ 5.64 Angstrom)
atoms = bulk("NaCl", crystalstructure="rocksalt", a=5.64, cubic=True)

calculator = MACECalculator(
    model_paths="/path/to/pretrained.model",  # checkpoint downloaded from this repository
    device="cuda",  # or "cpu"
    default_dtype="float32",  # or "float64" for higher precision
)

atoms.calc = calculator

# Example NPT molecular dynamics run (parameter values are illustrative)
dyn = NPT(
    atoms=atoms,
    timestep=2 * units.fs,
    temperature_K=300,
    externalstress=0.0,  # ambient pressure
    ttime=25 * units.fs,  # thermostat coupling constant
    pfactor=(75 * units.fs) ** 2 * units.GPa,  # barostat coupling constant
)

dyn.run(1000)
```
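
Once the calculator is attached, the standard ASE interface works as usual, e.g. for a single-point evaluation:

```python
energy = atoms.get_potential_energy()  # potential energy in eV
forces = atoms.get_forces()            # forces in eV/Angstrom
```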

# Citing

If you use the pretrained models in this repository, please cite all of the following:

```
@article{batatia2023foundation,
  title={A foundation model for atomistic materials chemistry},
  author={Batatia, Ilyes and Benner, Philipp and Chiang, Yuan and Elena, Alin M and Kov{\'a}cs, D{\'a}vid P and Riebesell, Janosh and Advincula, Xavier R and Asta, Mark and Baldwin, William J and Bernstein, Noam and others},
  journal={arXiv preprint arXiv:2401.00096},
  year={2023}
}

@inproceedings{Batatia2022mace,
  title={{MACE}: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields},
  author={Ilyes Batatia and David Peter Kovacs and Gregor N. C. Simm and Christoph Ortner and Gabor Csanyi},
  booktitle={Advances in Neural Information Processing Systems},
  editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
  year={2022},
  url={https://openreview.net/forum?id=YPpSngE-ZU}
}

@article{riebesell2023matbench,
  title={Matbench Discovery -- An evaluation framework for machine learning crystal stability prediction},
  author={Riebesell, Janosh and Goodall, Rhys EA and Jain, Anubhav and Benner, Philipp and Persson, Kristin A and Lee, Alpha A},
  journal={arXiv preprint arXiv:2308.14920},
  year={2023}
}

@misc{yuan_chiang_2023,
  author={Chiang, Yuan and Benner, Philipp},
  title={mace-universal (Revision e5ebd9b)},
  year={2023},
  url={https://huggingface.co/cyrusyc/mace-universal},
  doi={10.57967/hf/1202},
  publisher={Hugging Face}
}

@article{deng2023chgnet,
  title={CHGNet as a pretrained universal neural network potential for charge-informed atomistic modelling},
  author={Deng, Bowen and Zhong, Peichen and Jun, KyuJung and Riebesell, Janosh and Han, Kevin and Bartel, Christopher J and Ceder, Gerbrand},
  journal={Nature Machine Intelligence},
  pages={1--11},
  year={2023},
  publisher={Nature Publishing Group UK London}
}
```

# Training Guide

## Training Data

For now, please download the MPTrj data from [figshare](https://figshare.com/articles/dataset/Materials_Project_Trjectory_MPtrj_Dataset/23713842). We may upload it to Hugging Face Datasets in the future.
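
The dataset ships as one large JSON file. A minimal loading sketch follows; the nested `mp_id -> frame_id -> properties` layout and the filename reflect the published MPTrj release, but verify both against the file you actually download:

```python
import json

# Load the full MPTrj dump (several GB; needs a correspondingly large amount of RAM)
with open("MPtrj_2022.9_full.json") as f:  # filename as distributed on figshare
    data = json.load(f)

print(f"{len(data)} Materials Project entries")

# Each entry maps frame identifiers to per-frame properties
mp_id, frames = next(iter(data.items()))
frame_id, frame = next(iter(frames.items()))
print(mp_id, frame_id, sorted(frame.keys()))
```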

## Fine-tuning

We provide an example multi-GPU training script, [2023-08-14-mace-universal.sbatch](https://huggingface.co/cyrusyc/mace-universal/blob/main/2023-08-14-mace-universal.sbatch), which uses 40 A100 GPUs on NERSC Perlmutter. Please see the MACE `multi-gpu` branch for more detailed instructions.
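
To fine-tune a checkpoint on your own data instead, recent MACE releases expose a `--foundation_model` option in the `mace_run_train` entry point. A rough, illustrative sketch; flag names and defaults change between releases, so consult the MACE documentation for your installed version:

```shell
mace_run_train \
    --name="mace_mp_finetuned" \
    --foundation_model="/path/to/pretrained.model" \
    --train_file="train.xyz" \
    --valid_fraction=0.05 \
    --energy_weight=1.0 \
    --forces_weight=10.0 \
    --max_num_epochs=50 \
    --device=cuda
```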