Update README.md
To load the tokenizer from ESM, you need to install transformers from this version and branch:

!git clone -b add_esm-proper --single-branch https://github.com/liujas000/transformers.git
!pip -q install ./transformers
This model is a fine-tuned version of [facebook/esm-1b](https://huggingface.co/facebook/esm-1b) on the AAV2 dataset with ~230k sequences (Bryant et al., 2020).

The WT sequence (aa561-588): D E E E I R T T N P V A T E Q Y G S V S T N L Q R G N R

Maximum length: 50
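As a quick sanity check (my own illustration, not part of the original card), the aa561-588 region spans 28 residues, which matches the WT sequence listed above and fits well under the stated maximum length:

```python
# Sanity check (illustrative): the aa561-588 region should contain
# 588 - 561 + 1 = 28 residues, matching the listed WT sequence.
wt = "DEEEIRTTNPVATEQYGSVSTNLQRGNR"

region_length = 588 - 561 + 1
assert region_length == 28
assert len(wt) == region_length

# The stated maximum length (50) comfortably covers this region.
max_length = 50
assert len(wt) <= max_length
print(len(wt))  # 28
```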

It achieves the following results on the evaluation set.

Note: these are the results from the last epoch; I think the pushed model is loaded from the best checkpoint (lowest val_loss), but I'm not entirely sure. :/

- Loss: 0.2250
- Accuracy: 0.9620
- F1: 0.9632
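For reference, the accuracy and (binary) F1 figures above can be computed from predictions as sketched below. This is a generic illustration with made-up toy labels, not the actual evaluation code or AAV2 data:

```python
# Generic sketch of the reported metrics (accuracy, binary F1).
# The toy labels below are made up for illustration only.
def accuracy(y_true, y_pred):
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def f1_binary(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))   # 0.75
print(f1_binary(y_true, y_pred))  # 0.75
```

In practice these would typically come from `sklearn.metrics.accuracy_score` and `f1_score` inside the trainer's `compute_metrics` hook.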