yarongef committed
Commit da94312 · 1 Parent(s): 62bf279

Update README.md

Files changed (1)
README.md +3 -3
README.md CHANGED
@@ -16,10 +16,10 @@ In addition to cross entropy and cosine teacher-student losses, DistilProtBert w
 
 DistilProtBert was pretrained on millions of proteins sequences.
 
-Few important differences between DistilProtBert model and the original ProtBert version are:
+Differences between DistilProtBert model and ProtBert:
 1. Size of the model:
-- 230M parameters (ProtBert has 420M parameters)
-- 15 hidden layers (ProtBert has 30 hidden layers)
+- 230M parameters (420M parameters in ProtBert)
+- 15 hidden layers (30 hidden layers in ProtBert)
 2. Size of the pretraining dataset: ~43M proteins (ProtBert was pretrained on 216M proteins)
 3. Hardware used for pretraining: five v100 32GB Nvidia GPUs (ProtBert was pretrained on 512 16GB TPUs)
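As a sanity check on the size figures in the diff above, here is a minimal sketch that loads the model and counts its parameters. It assumes the model is published on the Hugging Face Hub as `yarongef/DistilProtBert` (inferred from this repository's owner and name, not stated in the diff) and that the `transformers` and `torch` packages are installed.

```python
# Minimal sketch: check the README's size claims (~230M parameters,
# 15 hidden layers) against the published checkpoint.
# Assumption: the Hub repo id is "yarongef/DistilProtBert".
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("yarongef/DistilProtBert")

# Sum the element counts of all parameter tensors.
n_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {n_params / 1e6:.0f}M")  # expected: roughly 230M

# The layer count lives in the model config.
print(f"Hidden layers: {model.config.num_hidden_layers}")  # expected: 15
```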