---
language: ti
widget:
- text: "ዓቕሚ ደቂኣንስትዮ [MASK] ብግብሪ ተራእዩ"
---

# BERT Base for Tigrinya Language

We pre-train a BERT base-uncased model for Tigrinya on a dataset of 40 million tokens, training for 40 epochs.

This repo contains the original pre-trained Flax model, which was trained on a TPU v3-8, along with its corresponding PyTorch version.
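
For convenience, here is a minimal usage sketch with the Hugging Face `transformers` library. The repository ID `user/tibert-base` is a placeholder assumption, not the actual path of this repo; substitute the real Hub path before running.

```python
from transformers import pipeline

# Placeholder model ID (an assumption) — replace with this repository's actual Hub path.
model_id = "user/tibert-base"

# Load the PyTorch checkpoint into a fill-mask pipeline.
fill_mask = pipeline("fill-mask", model=model_id)

# The example sentence from the widget metadata above.
for prediction in fill_mask("ዓቕሚ ደቂኣንስትዮ [MASK] ብግብሪ ተራእዩ"):
    print(prediction["token_str"], prediction["score"])
```

The original Flax weights can likewise be loaded with `FlaxBertForMaskedLM.from_pretrained(model_id)` if you prefer to stay in the Flax/JAX stack.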

## Hyperparameters

The hyperparameters of the pre-trained BASE model are as follows:

| Model Size | L  | AH | HS  | FFN  | P    | Seq |
|------------|----|----|-----|------|------|-----|
| BASE       | 12 | 12 | 768 | 3072 | 110M | 512 |

(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.)
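
As a cross-check, the table above maps onto a `transformers` `BertConfig` as sketched below. This is illustrative only; in particular, `vocab_size` is the library default rather than a value stated in this card.

```python
from transformers import BertConfig

config = BertConfig(
    num_hidden_layers=12,         # L: number of layers
    num_attention_heads=12,       # AH: number of attention heads
    hidden_size=768,              # HS: hidden size
    intermediate_size=3072,       # FFN: feedforward network dimension
    max_position_embeddings=512,  # Seq: maximum sequence length
)  # vocab_size defaults to 30522 here — an assumption, not from this card
```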

## Citation

If you use this model in your product or research, please cite as follows:

```bibtex
@inproceedings{Fitsum2021TiPLMs,
  author    = {Fitsum Gaim and Wonsuk Yang and Jong C. Park},
  title     = {Monolingual Pre-trained Language Models for Tigrinya},
  year      = {2021},
  booktitle = {WiNLP 2021 at EMNLP 2021}
}
```