Update README.md
README.md CHANGED
@@ -16,10 +16,10 @@ In addition to cross entropy and cosine teacher-student losses, DistilProtBert w
 
 DistilProtBert was pretrained on millions of protein sequences.
 
-
+Differences between DistilProtBert and ProtBert:
 1. Size of the model:
-
-
+   - 230M parameters (420M parameters in ProtBert)
+   - 15 hidden layers (30 hidden layers in ProtBert)
 2. Size of the pretraining dataset: ~43M proteins (ProtBert was pretrained on 216M proteins)
 3. Hardware used for pretraining: five V100 32GB Nvidia GPUs (ProtBert was pretrained on 512 16GB TPUs)
 
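
To make the size figures in the added list concrete, here is a minimal sketch (not part of this commit) that reads the hidden-layer and parameter counts directly off the published checkpoints. The Hugging Face hub IDs `yarongef/DistilProtBert` and `Rostlab/prot_bert` are assumptions; check the model cards for the exact names.

```python
# Sketch: compare model sizes of DistilProtBert and ProtBert.
# Hub IDs below are assumptions -- verify against the model cards.
from transformers import AutoConfig, AutoModel

for hub_id in ("yarongef/DistilProtBert", "Rostlab/prot_bert"):
    config = AutoConfig.from_pretrained(hub_id)
    model = AutoModel.from_pretrained(hub_id)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{hub_id}: {config.num_hidden_layers} hidden layers, "
          f"{n_params / 1e6:.0f}M parameters")
```

Per the list above, this should report roughly 15 layers / ~230M parameters for DistilProtBert and 30 layers / ~420M for ProtBert.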
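
The hunk header also references the cross entropy and cosine teacher-student losses used during distillation. As a point of reference only, below is a minimal sketch of how such a combined objective is typically written (following DistilBERT, where the soft-target cross entropy is implemented as a temperature-scaled KL divergence); the temperature, input shapes, and equal weighting are assumptions, not DistilProtBert's actual training code.

```python
# Sketch of a DistilBERT-style distillation objective (assumed, not the repo's code).
import torch
import torch.nn.functional as F

def distillation_losses(student_logits, teacher_logits,
                        student_hidden, teacher_hidden, temperature=2.0):
    # Assumed shapes: logits are (tokens, vocab); hidden states are
    # (tokens, dim), already flattened over batch and sequence length.

    # Cross entropy against the teacher's softened distribution,
    # written as KL divergence as in DistilBERT.
    loss_ce = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Cosine loss pulling the student's hidden states toward the teacher's.
    target = student_hidden.new_ones(student_hidden.size(0))
    loss_cos = F.cosine_embedding_loss(student_hidden, teacher_hidden, target)
    return loss_ce, loss_cos
```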