---
license: mit
pipeline_tag: graph-ml
---
# EquiformerV3: Scaling Efficient, Expressive, and General SE(3)-Equivariant Graph Attention Transformers

[Code](https://github.com/atomicarchitects/equiformer_v3) | [Paper](https://arxiv.org/abs/2604.09130)
This repository contains the checkpoints for **EquiformerV3**, the third generation of the SE(3)-equivariant graph attention Transformer, designed to advance efficiency, expressivity, and generality in 3D atomistic modeling.
Building on [EquiformerV2](https://arxiv.org/abs/2306.12059), this version introduces (1) software optimizations,
(2) simple and effective architectural modifications such as equivariant merged layer normalization and attention with a smooth cutoff, and
(3) SwiGLU-S^2 activations, which incorporate many-body interactions while preserving strict equivariance.
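The smooth-cutoff and SwiGLU ideas can be illustrated with scalar toy versions. This is a sketch only: the actual EquiformerV3 layers operate on equivariant irreps features, and the exact envelope and gating forms used in the paper may differ.

```python
import math

def cosine_cutoff(r: float, r_cut: float) -> float:
    # Smooth envelope: 1 at r = 0, decays continuously to 0 at r = r_cut.
    # Scaling attention weights by such an envelope makes a neighbor's
    # contribution vanish smoothly as it crosses the cutoff radius.
    if r >= r_cut:
        return 0.0
    return 0.5 * (math.cos(math.pi * r / r_cut) + 1.0)

def silu(x: float) -> float:
    return x / (1.0 + math.exp(-x))

def swiglu(gate: float, value: float) -> float:
    # SwiGLU gating on scalars: a SiLU-activated gate multiplies a value path.
    return silu(gate) * value

print(cosine_cutoff(3.0, 6.0))  # 0.5, halfway between full weight and zero
print(swiglu(0.0, 5.0))         # 0.0, a zero gate fully suppresses the value
```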
EquiformerV3 achieves state-of-the-art results on benchmarks including OC20, OMat24, and Matbench Discovery.
Please refer to the [official GitHub repository](https://github.com/atomicarchitects/equiformer_v3) for detailed instructions on environment setup and usage.
## Checkpoints
### MPtrj
### OMat24 → MPtrj and sAlex
Training proceeds in three stages: (1) direct pre-training on OMat24; (2) gradient fine-tuning on OMat24, initialized from (1); and (3) gradient fine-tuning on MPtrj and sAlex, initialized from (2).
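The staged schedule above can be sketched as a simple chain, where each stage is initialized from the previous stage's checkpoint. Stage names and checkpoint paths below are hypothetical, not actual files from this repository.

```python
# Hypothetical three-stage schedule; names and paths are illustrative only.
stages = [
    ("pretrain_omat24_direct", None),
    ("finetune_omat24_gradient", "pretrain_omat24_direct.pt"),
    ("finetune_mptrj_salex_gradient", "finetune_omat24_gradient.pt"),
]

for name, init_ckpt in stages:
    src = init_ckpt or "random initialization"
    print(f"stage {name}: initialized from {src}")
```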
## Citation
If you find this work helpful, please consider citing:
```bibtex
@article{equiformer_v3,
  title={EquiformerV3: Scaling Efficient, Expressive, and General SE(3)-Equivariant Graph Attention Transformers},
  author={Yi-Lun Liao and Alexander J. Hoffman and Sabrina C. Shen and Alexandre Duval and Sam Walton Norwood and Tess Smidt},
  journal={arXiv preprint arXiv:2604.09130},
  year={2026}
}
```