Update README.md
# DistilProtBert

Distilled version of the [ProtBert-UniRef100](https://huggingface.co/Rostlab/prot_bert) model.

In addition to the cross-entropy and cosine teacher-student distillation losses, DistilProtBert was pretrained with a masked language modeling (MLM) objective, and it works only with capital-letter amino acids. Code is available in the [DistilProtBert](https://github.com/yarongef/DistilProtBert) Git repository.
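Because the model accepts only capital-letter amino acids, input sequences should be uppercased and space-separated before tokenization. A minimal preprocessing sketch (the `preprocess` helper name is illustrative; mapping the rare residues U, Z, O, B to X follows the ProtBert convention):

```python
import re

def preprocess(sequence: str) -> str:
    # DistilProtBert, like ProtBert, expects uppercase amino acids
    # separated by single spaces; rare residues (U, Z, O, B) are
    # conventionally mapped to X.
    sequence = re.sub(r"[UZOB]", "X", sequence.upper())
    return " ".join(sequence)

print(preprocess("mktaylv"))  # -> "M K T A Y L V"
```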
# Model details

| **Model** | **# of parameters** | **# of hidden layers** | **Pretraining dataset** | **# of proteins** | **Pretraining hardware** |
|:---------:|:-------------------:|:----------------------:|:-----------------------:|:-----------------:|:------------------------:|