CP-Catalysis Environment & Pretrained Models

Pre-packed conda environment and pretrained model checkpoints for CP-MLIP — a multi-model active learning framework for electrochemical catalysis with architecture-level Constant Potential (CP) support across 8 interatomic potential models.

This project includes architecture-level modifications to multiple model packages (CP-MACE's ScaleShiftMACE, GPUMD's CUDA source for NEP electron injection and potential output, etc.) and custom CP wrappers that patch into MatterSim and fairchem-core at runtime. These modifications cannot be reproduced with a standard pip install alone; you must use the pre-packed conda environment, which includes all modified binaries and patched packages.

Supported Models

All models use joint fine-tuning with a combined loss (energy + 100x forces + potential), matching the CP-MACE training strategy:

| Model | Type | Pretrained Base | CP Training | Outputs |
|---|---|---|---|---|
| MACE | Graph NN | MACE-MP medium | Joint (E+F+mu) | E, F, mu |
| SchNet | Message passing | From scratch | Joint (E+F+mu) | E, F, mu |
| DimeNet++ | Directional MP | From scratch | Joint (E+F+mu) | E, F, mu |
| PaiNN | Equivariant MP | From scratch | Joint (E+F+mu) | E, F, mu |
| MatterSim | M3GNet | mattersim-v1.0.0-1M | Joint (E+F+mu) | E, F, mu |
| EquiformerV2/ESEN | Transformer | esen-sm-conserving-all-oc25 | Joint (E+F+mu) | E, F, mu |
| UMA | eSCN-MoE | uma-s-1p1 | Joint (E+F+mu) | E, F, mu |
| NEP89 | Neuroevolution | nep89_20250409 | Joint (E+F+mu, CUDA) | E, F, mu |

E = energy, F = forces, mu = electrode potential.
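The combined objective above can be sketched as a weighted sum of MSE terms. This is a minimal sketch: the 100x force weight is stated in this document, but the unit energy/potential weights, the dict keys, and the function name are illustrative assumptions, not the actual training code.

```python
import torch


def joint_cp_loss(pred, ref, w_e=1.0, w_f=100.0, w_mu=1.0):
    """Weighted MSE over energy, forces, and electrode potential (mu).

    The 100x force weight matches the weighting stated above; the unit
    energy/potential weights and the dict keys are illustrative assumptions.
    """
    loss_e = torch.mean((pred["energy"] - ref["energy"]) ** 2)
    loss_f = torch.mean((pred["forces"] - ref["forces"]) ** 2)
    loss_mu = torch.mean((pred["mu"] - ref["mu"]) ** 2)
    return w_e * loss_e + w_f * loss_f + w_mu * loss_mu
```

The same three-term structure applies across all eight models, which is what makes a shared fine-tuning loop possible.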

Files

Conda Environment

| File | Size | Description |
|---|---|---|
| cp-al-multimodel.tar.gz | ~7.2 GB | Complete conda environment (Python 3.11 + CUDA 12.8) with all CP-modified packages |
| environment.yml | - | Conda env spec (reference only) |
| requirements.txt | - | pip freeze (reference only) |

Pretrained Model Checkpoints

The five pretrained foundation models used for CP fine-tuning (the fifth, NEP89, is listed in its own section below):

| File | Size | Model | CP Modification |
|---|---|---|---|
| pretrained/mace-mp-medium.model | 76 MB | MACE-MP medium | FermiMACE: electron concat to node_attrs, potential output layer |
| pretrained/mattersim-v1.0.0-1M.pth | 18 MB | MatterSim M3GNet | CPMatterSimM3GNet: electron to atom_feat, all graph_conv |
| pretrained/esen-sm-conserving-all-oc25.pt | 49 MB | EquiformerV2/ESEN (OC25 catalysis) | CPFairChemBackbone: electron to csd_embedding, all eSCNMD blocks |
| pretrained/uma-s-1p1.pt | 1.1 GB | UMA small (universal) | CPFairChemBackbone: electron to csd_embedding, all eSCNMD blocks |
| pretrained/esen-oc25-iso_atom_elem_refs.yaml | - | ESEN atom reference energies | Required by the ESEN calculator |
| pretrained/uma-iso_atom_elem_refs.yaml | - | UMA atom reference energies | Required by the UMA calculator |
| pretrained/uma-form_elem_refs.yaml | - | UMA formation energy refs | Required by the UMA calculator |

NEP89 Foundation Model

| File | Size | Description |
|---|---|---|
| nep89/nep89_20250409.txt | 15 MB | NEP89 model weights (89-element universal potential) |
| nep89/nep89_20250409.restart | 30 MB | NEP89 SNES parameters (required for fine-tuning) |

What's in the Conda Environment

All dependencies pre-installed with architecture-level CP modifications:

| Component | Version | CP Modification |
|---|---|---|
| PyTorch | 2.8.0+cu128 | - |
| torch-geometric | 2.7.0 | - |
| e3nn | 0.5.3 | - |
| CP-MACE (local) | - | FermiMACE variant: electron_input=True + potential_output=True in ScaleShiftMACE; joint training (E+F+mu) |
| MatterSim | 1.2.1 | CPMatterSimM3GNet wrapper: electron to atom_feat, joint fine-tune of all params (E+F+mu) |
| fairchem-core | 2.18.0 | CPFairChemBackbone wrapper: electron to csd_embedding, joint fine-tune of full HydraModel (E+F+mu) |
| GPUMD nep | latest + CP | use_electron + w1_pot potential output head; snes.cu fine-tune weight mapping (dim+1, zero-init) |
| GPUMD gpumd | latest + CP | Inference: electron from model.xyz to atom.electron_count to kernel q[dim-1] |
| PyNEP | 0.0.1 | CPU NEP inference |
| Calorine | 3.3 | GPU NEP inference (GPUNEP) |

GPUMD CUDA Modifications (in compiled nep + gpumd binaries)

Training (nep binary):

| File | Modification |
|---|---|
| main_nep/structure.cuh/.cu | electron_count and potential fields; XYZ parsing for the electron= keyword |
| main_nep/parameters.cuh/.cu | use_electron config keyword; dim += 1 for the extra NN input |
| main_nep/dataset.cuh/.cu | electron_count_ref_gpu/cpu arrays; broadcast per atom |
| main_nep/nep.cu | apply_ann_electron kernel with w1_pot potential output head |
| main_nep/nep.cuh | w1_pot[NUM_ELEMENTS], b1_pot in the ANN struct |
| main_nep/snes.cu | Fine-tune weight loading: maps NEP89 weights to dim+1, zero-inits the electron column and w1_pot |
| main_nep/fitness.cu | Writes the nep4_electron/nep4_zbl_electron tag in nep.txt |
| utilities/nep_utilities.cuh | MAX_DIM raised from 103 to 110 |
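Given the parameters.cu change above, CP training is enabled from the nep.in input file. The sketch below is a hedged illustration: the element list and numeric values are placeholders, the non-CP keywords follow standard GPUMD NEP input, and the exact `use_electron` syntax is an assumption based on the "use_electron config keyword" description.

```
type 2 Cu O
version 4
cutoff 8 4
use_electron 1
generation 100000
```

With `use_electron` set, the descriptor dimension grows by one (dim += 1) and the w1_pot head emits the potential output alongside energies and forces.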

Inference (gpumd binary):

| File | Modification |
|---|---|
| force/nep.cu | Recognizes nep4_electron/nep4_zbl_electron tags, reads w1_pot weights, adjusts dim+1 |
| force/nep.cuh | Adds w1_pot/b1_pot to the inference ANN struct |
| force/force.cu | Electron model type in allowed potential dispatch + temperature compute |
| model/atom.cuh | float electron_count field in the Atom class |
| model/read_xyz.cu | Parses electron= from the model.xyz comment line |
| main_gpumd/run.cu | Overrides force.temperature with atom.electron_count for model_type==4 |
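Per the read_xyz.cu entry above, gpumd reads the electron count from the comment (second) line of model.xyz. A hedged sketch of such a header for a two-atom cell follows; the lattice, species, positions, and key ordering are illustrative, and only the `electron=` keyword itself comes from this document.

```
2
electron=0.25 Lattice="10 0 0 0 10 0 0 0 10" Properties=species:S:1:pos:R:3
Cu 0.0 0.0 0.0
Cu 1.8 1.8 1.8
```

The parsed value lands in atom.electron_count and is forwarded to the NEP kernel as the extra descriptor slot q[dim-1].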

ASE Calculator CP Integration

All CP calculators override check_state() to detect atoms.info["electron"] changes, forcing recalculation when electron count changes (ASE's default check_state only tracks positions/numbers/cell).

Installation

```shell
# 1. Download the environment
wget https://huggingface.co/zhilong777/CP-catalysis-env/resolve/main/cp-al-multimodel.tar.gz

# 2. Download all pretrained models
HF_BASE=https://huggingface.co/zhilong777/CP-catalysis-env/resolve/main
mkdir -p pretrained nep89

# MACE-MP (for FermiMACE fine-tuning)
wget $HF_BASE/pretrained/mace-mp-medium.model -P pretrained/

# MatterSim (for CPMatterSimM3GNet fine-tuning)
wget $HF_BASE/pretrained/mattersim-v1.0.0-1M.pth -P pretrained/

# EquiformerV2/ESEN (for CPFairChemBackbone fine-tuning)
wget $HF_BASE/pretrained/esen-sm-conserving-all-oc25.pt -P pretrained/
wget $HF_BASE/pretrained/esen-oc25-iso_atom_elem_refs.yaml -P pretrained/

# UMA (for CPFairChemBackbone fine-tuning)
wget $HF_BASE/pretrained/uma-s-1p1.pt -P pretrained/
wget $HF_BASE/pretrained/uma-iso_atom_elem_refs.yaml -P pretrained/
wget $HF_BASE/pretrained/uma-form_elem_refs.yaml -P pretrained/

# NEP89 (for GPUMD nep fine-tuning with use_electron)
wget $HF_BASE/nep89/nep89_20250409.txt -P nep89/
wget $HF_BASE/nep89/nep89_20250409.restart -P nep89/

# 3. Extract and activate the environment
mkdir -p ~/envs/cp-al-multimodel
tar -xzf cp-al-multimodel.tar.gz -C ~/envs/cp-al-multimodel
source ~/envs/cp-al-multimodel/bin/activate
conda-unpack

# 4. Clone and install the framework
git clone https://github.com/AI4QC/CP-catalysis.git
cd CP-catalysis
pip install -e .

# 5. Place the pretrained model files (nep89/ was downloaded in the parent directory)
mv ../nep89 environments/nep89

# 6. Verify (requires an NVIDIA GPU)
python -m pytest test/test_e2e_si.py -v
python -m pytest test/test_cp_finetune.py -v
```

Hardware Requirements

  • NVIDIA GPU with compute capability >= 7.5 (RTX 20xx or newer)
  • CUDA driver >= 12.0