gabrielbianchin committed · Commit 0923247 · verified · 1 Parent(s): 4d9c542

Create README.md

Files changed (1): README.md (+19, -0)
---
title: "ESM2 Quantized Models"
---

## ESM2 Quantized

ESM2 Quantized is an adapted version of the ESM2 architectures. It uses local attention instead of global attention, allowing models with longer input sizes: ESM2 Quantized models have a context size of 2,050, double that of standard ESM2. The models were trained with int4 quantization. Several ESM2 Quantized models are available:

| Model | Number of layers |
|-------|------------------|
| [gabrielbianchin/esm2_t36_long_int4](https://huggingface.co/gabrielbianchin/esm2_t36_long_int4) | 36 |
| [gabrielbianchin/esm2_t33_long_int4](https://huggingface.co/gabrielbianchin/esm2_t33_long_int4) | 33 |
| [gabrielbianchin/esm2_t30_long_int4](https://huggingface.co/gabrielbianchin/esm2_t30_long_int4) | 30 |
| [gabrielbianchin/esm2_t12_long_int4](https://huggingface.co/gabrielbianchin/esm2_t12_long_int4) | 12 |
| [gabrielbianchin/esm2_t6_long_int4](https://huggingface.co/gabrielbianchin/esm2_t6_long_int4) | 6 |
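
The local-attention idea can be illustrated with a minimal sketch. This is a generic sliding-window mask for illustration only; the function name and window size here are hypothetical, and the exact attention pattern used by ESM2 Quantized is described in the paper:

```python
def local_attention_mask(seq_len, window):
    """Boolean mask: token i may attend to token j only if |i - j| <= window.

    Global attention allows every (i, j) pair, costing O(seq_len^2);
    a fixed window keeps the cost linear in seq_len, which is what
    makes longer contexts (e.g. 2,050 tokens) affordable.
    """
    return [[abs(i - j) <= window for j in range(seq_len)]
            for i in range(seq_len)]

mask = local_attention_mask(seq_len=6, window=2)
# Row 0 attends to positions 0..2; row 3 attends to positions 1..5.
```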

For detailed information, please refer to the paper.
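
As a rough illustration of the int4 idea, here is a generic symmetric quantize/dequantize round trip; this is not the exact scheme used for these checkpoints (see the paper for that), just a sketch of how float weights map to 4-bit integers in [-8, 7]:

```python
def quantize_int4(weights):
    # Symmetric per-tensor int4 quantization: one float scale,
    # integer codes clipped to the signed 4-bit range [-8, 7].
    scale = max(abs(w) for w in weights) / 7.0
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_int4(codes, scale):
    # Reconstruct approximate float weights from the integer codes.
    return [c * scale for c in codes]

codes, scale = quantize_int4([0.5, -1.2, 3.1, 0.0])
approx = dequantize_int4(codes, scale)
# Each reconstructed weight lies within scale / 2 of the original.
```

Storing 4-bit codes plus one scale instead of 32-bit floats is what shrinks the memory footprint of a quantized checkpoint.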