---
license: mit
pipeline_tag: graph-ml
---
# EquiformerV3: Scaling Efficient, Expressive, and General SE(3)-Equivariant Graph Attention Transformers
This repository contains the checkpoints for EquiformerV3, the third generation of the $SE(3)$-equivariant graph attention Transformer. EquiformerV3 is designed to advance efficiency, expressivity, and generality in 3D atomistic modeling.
Building on EquiformerV2, this version introduces software optimizations that yield a $1.75\times$ speedup, architectural improvements such as equivariant merged layer normalization and smooth-cutoff attention, and SwiGLU-$S^2$ activations that incorporate many-body interactions while preserving strict equivariance. EquiformerV3 achieves state-of-the-art results on benchmarks including OC20, OMat24, and Matbench Discovery.
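For context, a minimal sketch of the standard SwiGLU gate is shown below. Note this is the generic gated activation only; how EquiformerV3 applies it to signals on the sphere ($S^2$) while keeping equivariance is specific to the paper, so the class name and dimensions here are illustrative assumptions, not the official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """Generic SwiGLU gate: SiLU(x W1) * (x W2).

    Illustrative only; the SwiGLU-S^2 variant in EquiformerV3 applies a
    gated activation to spherical signals, which is not reproduced here.
    """

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden)  # gate branch
        self.w2 = nn.Linear(dim, hidden)  # value branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Element-wise product of the SiLU-gated branch and the linear branch
        return F.silu(self.w1(x)) * self.w2(x)

x = torch.randn(4, 16)          # batch of 4 feature vectors
y = SwiGLU(16, 32)(x)           # projects to the hidden width
```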
Please refer to the official GitHub repository for detailed instructions on environment setup and usage.
## Checkpoints
### MPtrj
| Model | Training data | Checkpoint |
|---|---|---|
| EquiformerV3 | MPtrj | mptrj_gradient.pt |
### OMat24 → MPtrj and sAlex
Training consists of (1) direct pre-training on OMat24, (2) gradient fine-tuning on OMat24 initialized from (1), and (3) gradient fine-tuning on MPtrj and sAlex initialized from (2).
| Model | Training data | Config | Checkpoint |
|---|---|---|---|
| EquiformerV3 (direct pre-training) | OMat24 | omat24_direct.yml | omat24_direct.pt |
| EquiformerV3 (gradient fine-tuning) | OMat24 | omat24_gradient.yml | omat24_gradient.pt |
| EquiformerV3 (gradient fine-tuning) | MPtrj and sAlex | mptrj-salex_gradient.yml | omat24-mptrj-salex_gradient.pt |
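To inspect one of the `.pt` checkpoint files above, a plain `torch.load` is usually sufficient. The helper and the key names below are assumptions for illustration (the actual checkpoint layout is defined by the training code in the official repository); the dummy file stands in for a real checkpoint such as `mptrj_gradient.pt`.

```python
import os
import tempfile
import torch

def load_checkpoint(path: str) -> dict:
    # map_location="cpu" lets you inspect weights without a GPU
    return torch.load(path, map_location="cpu")

# Demonstrate the round trip with a tiny dummy checkpoint; replace the
# path with a real file such as mptrj_gradient.pt in practice.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "dummy.pt")
    # "state_dict" is a common convention, not a documented guarantee
    torch.save({"state_dict": {"w": torch.zeros(3)}}, path)
    ckpt = load_checkpoint(path)
    print(sorted(ckpt.keys()))
```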
## Citation
If you find this work helpful, please consider citing:
```bibtex
@article{equiformer_v3,
    title={EquiformerV3: Scaling Efficient, Expressive, and General SE(3)-Equivariant Graph Attention Transformers},
    author={Yi-Lun Liao and Alexander J. Hoffman and Sabrina C. Shen and Alexandre Duval and Sam Walton Norwood and Tess Smidt},
    journal={arXiv preprint arXiv:2604.09130},
    year={2026}
}
```