Graph Machine Learning
yilunliao committed on
Commit 4862007 · verified · 1 Parent(s): 8fb22e6

Upload folder using huggingface_hub

README.md CHANGED
@@ -1,3 +1,156 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ ---
+
+ <h1 align="center" style="font-size: 24px;">EquiformerV3:<br>Scaling Efficient, Expressive, and General SE(3)-Equivariant Graph Attention Transformers</h1>
+
+ <!--
+ # **[Code](https://github.com/atomicarchitects/equiformer_v3)** | **[Paper]()**
+ -->
+
+ <a href="https://github.com/atomicarchitects/equiformer_v3" style="color: #1a73e8; font-weight: bold; font-size: 20px;">Code</a> |
+ <a href="https://github.com/atomicarchitects/equiformer_v3/blob/main/experimental/docs/equiformer_v3_paper.pdf" style="color: #1a73e8; font-weight: bold; font-size: 20px;">Paper</a>
+
+ This repository contains the checkpoints for the work "EquiformerV3: Scaling Efficient, Expressive, and General SE(3)-Equivariant Graph Attention Transformers".
+ Please refer to the [code](https://github.com/atomicarchitects/equiformer_v3) for a detailed description of usage.
+
+ <p align="center">
+ <img width="50%" height="50%" src="https://cdn-uploads.huggingface.co/production/uploads/64948a4a8d5ff0dd776655fe/03TPndezDyUw4FcfTBk4n.png" />
+ </p>
+
+
+ ## Content ##
+ <!--
+ 0. [OC20](#oc20)
+ -->
+ 0. [MPtrj](#mptrj)
+ 0. [OMat24 → MPtrj and sAlex](#oam)
+
+
+ <!--
+ <h2 id="oc20">OC20</h2>
+
+ <table>
+ <tr style="background-color: #f0f0f0;">
+ <td><strong>Model</strong></td>
+ <td><strong>Training data</strong></td>
+ <td><strong>Config</strong></td>
+ <td><strong>Checkpoint</strong></td>
+ </tr>
+ <tr>
+ <td>EquiformerV3 (91M)</td>
+ <td>OC20 S2EF-2M</td>
+ <td><a href="https://github.com/atomicarchitects/equiformer_v3/blob/main/experimental/configs/oc20/2M/equiformer_v3/experiments/base_N%408-L%406-C%40128-attn-hidden%4064-ffn%40512-envelope-num-rbf%40128_merge-layer-norm_gates2-gridmlp_use-gate-force-head_wd%401e-3-grad-clip%40100_lin-ref-e%404.yml">base.yml</a></td>
+ <td></td>
+ </tr>
+ </table>
+ -->
+
+
+ <h2 id="mptrj">MPtrj</h2>
+ <!--
+ Training consists of (1) direct pre-training and (2) gradient fine-tuning initialized from (1).
+ <table>
+ <tr style="background-color: #f0f0f0;">
+ <td><strong>Model</strong></td>
+ <td><strong>Training data</strong></td>
+ <td><strong>Config</strong></td>
+ <td><strong>Checkpoint</strong></td>
+ </tr>
+ <tr>
+ <td>EquiformerV3 (direct pre-training)</td>
+ <td>MPtrj</td>
+ <td>
+ <a href="https://github.com/atomicarchitects/equiformer_v3/blob/main/experimental/configs/omat24/mptrj/experiments/direct/equiformer_v3_N%407_L%404_attn-hidden%4032_rbf%4010_max-neighbors%40300_attn-grid%4014-8_ffn-grid%4014_use-gate-force-head_merge-layer-norm_epochs%4070-bs%40512-wd%401e-3-beta2%400.95_dens-p%400.5-std%400.025-r%400.5-w%4010-strict-max-r%400.75-no-stress.yml">
+ direct.yml
+ </a>
+ </td>
+ <td></td>
+ </tr>
+ <tr>
+ <td>EquiformerV3 (gradient fine-tuning)</td>
+ <td>MPtrj</td>
+ <td>
+ <a href="https://github.com/atomicarchitects/equiformer_v3/blob/main/experimental/configs/omat24/mptrj/experiments/gradient/equiformer_v3_grad-finetune_N%407_L%404_attn-hidden%4032_rbf%4010_max-neighbors%40300_attn-grid%4014-8_ffn-grid%4014_pt-reg-dens-no-stress-strict-max-r%400.75-ft-no-reg_lr%400-5e-5-epochs%4010-bs%4064x8-wd%401e-3-beta2%400.95.yml">
+ gradient.yml
+ </a>
+ </td>
+ <td></td>
+ </tr>
+ </table>
+ -->
+ <table>
+ <tr style="background-color: #f0f0f0;">
+ <td><strong>Model</strong></td>
+ <td><strong>Training data</strong></td>
+ <td><strong>Checkpoint</strong></td>
+ </tr>
+ <tr>
+ <td>EquiformerV3</td>
+ <td>MPtrj</td>
+ <td>
+ <a href="https://huggingface.co/yilunliao/equiformer_v3/blob/main/checkpoint/mptrj_gradient.pt">
+ mptrj_gradient.pt
+ </a>
+ </td>
+ </tr>
+ </table>
+
+
+ <h2 id="oam">OMat24 → MPtrj and sAlex</h2>
+ Training consists of (1) direct pre-training on OMat24,
+ (2) gradient fine-tuning on OMat24 initialized from (1), and
+ (3) gradient fine-tuning on MPtrj and sAlex initialized from (2).
+ <table>
+ <tr style="background-color: #f0f0f0;">
+ <td><strong>Model</strong></td>
+ <td><strong>Training data</strong></td>
+ <td><strong>Config</strong></td>
+ <td><strong>Checkpoint</strong></td>
+ </tr>
+ <tr>
+ <td>EquiformerV3 (direct pre-training)</td>
+ <td>OMat24</td>
+ <td>
+ <a href="https://github.com/atomicarchitects/equiformer_v3/blob/main/experimental/configs/omat24/omat24/experiments/direct/equiformer_v3_N%407_L%404_attn-hidden%4032_rbf%4064_max-neighbors%40300_attn-grid%4014-8_ffn-grid%4014_use-gate-force-head_merge-layer-norm_epochs%404-bs%40512-wd%401e-3-beta2%400.98-eps%401e-6_dens-p%400.5-std%400.025-r%400.5-0.75-w%401-no-stress-max-f%402.5_no-amp.yml">
+ omat24_direct.yml
+ </a>
+ </td>
+ <td>
+ <a href="https://huggingface.co/yilunliao/equiformer_v3/blob/main/checkpoint/omat24_direct.pt">
+ omat24_direct.pt
+ </a>
+ </td>
+ </tr>
+ <tr>
+ <td>EquiformerV3 (gradient fine-tuning)</td>
+ <td>OMat24</td>
+ <td>
+ <a href="https://github.com/atomicarchitects/equiformer_v3/blob/main/experimental/configs/omat24/omat24/experiments/gradient/equiformer_v3_grad-finetune_N%407_L%404_attn-hidden%4032_rbf%4064_max-neighbors%40300_attn-grid%4014-8_ffn-grid%4014_merge-layer-norm_lr%400-1e-4-epochs%402-bs%40512-wd%401e-3-beta2%400.98-eps%401e-6_pt-reg-dens-ft-no-reg.yml">
+ omat24_gradient.yml
+ </a>
+ </td>
+ <td>
+ <a href="https://huggingface.co/yilunliao/equiformer_v3/blob/main/checkpoint/omat24_gradient.pt">
+ omat24_gradient.pt
+ </a>
+ </td>
+ </tr>
+ <tr>
+ <td>EquiformerV3 (gradient fine-tuning)</td>
+ <td>MPtrj and sAlex</td>
+ <td>
+ <a href="https://github.com/atomicarchitects/equiformer_v3/blob/main/experimental/configs/omat24/salex_mptrj/experiments/gradient/equiformer_v3_grad-finetune_N%407_L%404_attn-hidden%4032_rbf%4064_max-neighbors%40300_attn-grid%4014-8_ffn-grid%4014_attn-eps%401e-8_lr%400-5e-5-warmup%400.1-epochs%402-mptrj-salex-ratio%408-bs%40256-wd%401e-3-beta2%400.98-eps%401e-6_pt-reg-dens-ft-no-reg-lr%401e-4.yml">
+ mptrj-salex_gradient.yml
+ </a>
+ </td>
+ <td>
+ <a href="https://huggingface.co/yilunliao/equiformer_v3/blob/main/checkpoint/omat24-mptrj-salex_gradient.pt">
+ omat24-mptrj-salex_gradient.pt
+ </a>
+ </td>
+ </tr>
+ </table>
checkpoint/mptrj_gradient.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:59c6c23573a3b05b347662f473209d1bb1ccb5b85f6624a8f87072e7c397addf
+ size 485354773
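Each `.pt` entry in this commit is a Git LFS pointer file, not the binary checkpoint itself; the pointer records only the spec version, a SHA-256 object id, and the file size in bytes. A minimal sketch of parsing such a pointer with the standard library (the example text is the `mptrj_gradient.pt` pointer shown above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into version, hash algorithm, oid, and size."""
    # Each line is "<key> <value>"; split on the first space only.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)  # e.g. "sha256:59c6..."
    return {
        "version": fields["version"],
        "algo": algo,
        "oid": digest,
        "size": int(fields["size"]),  # size of the real binary, in bytes
    }

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:59c6c23573a3b05b347662f473209d1bb1ccb5b85f6624a8f87072e7c397addf
size 485354773
"""
info = parse_lfs_pointer(pointer)
print(info["algo"], info["size"])  # sha256 485354773
```

The reported `size` (about 485 MB here) is what `git lfs pull` or a Hub download will actually fetch; the oid lets you verify the downloaded file's SHA-256 digest.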
checkpoint/omat24-mptrj-salex_gradient.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:429ccded98163122e7ba588d78e2441653f37f3e091e106c432807fe373c8f98
+ size 486243861
checkpoint/omat24_direct.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd874ab2698a391a03a5083b8729c286c676d76ca584437ff5e550696609a96d
+ size 559867247
checkpoint/omat24_gradient.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c4b9139c1a14094ed724c3addd584ce07afaf0032e1ec77490d2c60ed8b8a2b
+ size 486243797
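A minimal sketch of fetching one of these checkpoints with `huggingface_hub` and loading it onto CPU with PyTorch. This is not the repo's official loading code (see the linked GitHub repository for how to construct the matching EquiformerV3 model); the filenames are exactly the paths added in this commit.

```python
# Hedged sketch: download a checkpoint from this repo and load it with torch.
# Assumes `huggingface_hub` and `torch` are installed.
from huggingface_hub import hf_hub_download

# Checkpoint paths exactly as they appear in this commit.
CHECKPOINTS = {
    "mptrj": "checkpoint/mptrj_gradient.pt",
    "omat24_direct": "checkpoint/omat24_direct.pt",
    "omat24_gradient": "checkpoint/omat24_gradient.pt",
    "omat24_mptrj_salex": "checkpoint/omat24-mptrj-salex_gradient.pt",
}

def download_checkpoint(name: str = "mptrj") -> str:
    """Download the named checkpoint file from the Hub and return its local path."""
    return hf_hub_download(repo_id="yilunliao/equiformer_v3",
                           filename=CHECKPOINTS[name])

if __name__ == "__main__":
    import torch
    # Note: these files are ~0.5 GB each; the download is cached locally.
    state = torch.load(download_checkpoint("mptrj"), map_location="cpu")
    print(type(state))
```

Loading on CPU first (`map_location="cpu"`) avoids GPU out-of-memory surprises; move tensors to a device after inspecting the loaded object.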