---
datasets:
- c4
language:
- en
metrics:
- accuracy
pipeline_tag: fill-mask
---

A small version of `DeBERTa` trained on the clean version of the Google C4 dataset. For details about the model size and architecture, see `config.json`.

The model has been trained for **100K** steps with a batch size of **2048** and a sequence length of **512**, for a total of **104B** tokens.
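The total-token figure follows directly from the training setup, as a quick sanity check shows:

```python
# Total tokens seen during training:
# 100K steps x 2048 sequences per batch x 512 tokens per sequence.
steps = 100_000
batch_size = 2048
seq_len = 512

total_tokens = steps * batch_size * seq_len
print(f"{total_tokens:,} tokens (~{total_tokens / 1e9:.0f}B)")  # 104,857,600,000 tokens (~105B)
```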

The vocabulary and tokenizer are the same as those of `microsoft/deberta-base`.