---
license: apache-2.0
language:
- bua
datasets:
- allenai/MADLAD-400
- oscar-corpus/OSCAR-2109
library_name: transformers
pipeline_tag: text-generation
tags:
- goldfish
- arxiv:2408.10441
---

# bua_cyrl_full

Goldfish is a suite of monolingual language models trained for 350 languages.
This model is the <b>Buriat</b> (Cyrillic script) model trained on 39MB of data (all our data in the language), after accounting for an estimated byte premium of 1.70; content-matched text in Buriat takes on average 1.70x as many UTF-8 bytes to encode as English.
The Goldfish models are trained primarily for comparability across languages and for low-resource languages; Goldfish performance for high-resource languages is not designed to be comparable with modern large language models (LLMs).
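
As a rough arithmetic check of the byte-premium scaling above (figures are taken from this card; the 1.70 premium is rounded, so the result only approximately matches the 39.105MB reported under model details):

```python
# Back-of-the-envelope byte-premium scaling; values are copied from this model card.
raw_mb = 66.54        # raw Buriat training text, in MB
byte_premium = 1.70   # content-matched Buriat text takes ~1.70x the UTF-8 bytes of English
scaled_mb = raw_mb / byte_premium
print(f"{scaled_mb:.2f} MB English-equivalent")  # ~39.1 MB
```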

Note: bua_cyrl is a [macrolanguage](https://iso639-3.sil.org/code_tables/639/data) code. The individual language code bxr_cyrl (Russia Buriat) is also included in Goldfish, although with less data.

All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://www.arxiv.org/abs/2408.10441).

Training code and sample usage: https://github.com/tylerachang/goldfish

Sample usage is also available in this Google Colab: [link](https://colab.research.google.com/drive/1rHFpnQsyXJ32ONwCosWZ7frjOYjbGCXG?usp=sharing)

## Model details:

To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json.
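
As an unofficial illustration, that file can be read with the Python standard library; the raw URL below is derived from the GitHub link above, and the per-model key is an assumption, so check the file itself for the exact schema:

```python
# Hypothetical sketch: fetch and inspect model_details.json.
# The "bua_cyrl_full" key is assumed; inspect the file for its actual layout.
import json
import urllib.request

URL = "https://raw.githubusercontent.com/tylerachang/goldfish/main/model_details.json"
with urllib.request.urlopen(URL) as response:
    details = json.load(response)

# Print this model's entry if the file is keyed by model name (an assumption).
print(details.get("bua_cyrl_full") if isinstance(details, dict) else details)
```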

All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences.
For best results, make sure that [CLS] is prepended to your input sequence (see the sample usage linked above)!
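
For example, here is a minimal generation sketch with Hugging Face `transformers` (this is not the official sample usage; the repo ID below and the Buriat prompt are assumptions, and [CLS] is prepended manually in case the tokenizer does not add it automatically):

```python
# Minimal sketch, not the official Goldfish sample usage (see the repo/Colab linked above).
# Assumed: the Hugging Face repo ID, the Buriat prompt, and that [CLS] must be added by hand.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "goldfish-models/bua_cyrl_full"  # assumed repo ID; adjust if it differs
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Prepend [CLS] (the BOS token) ourselves and disable automatic special tokens
# so that it is not added twice.
prompt = tokenizer.cls_token + "Буряад хэлэн"  # placeholder Buriat text
inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```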

Details for this model specifically:

* Architecture: gpt2
* Parameters: 124770816
* Maximum sequence length: 512 tokens
* Training text data (raw): 66.54MB
* Training text data (byte premium scaled): 39.105MB
* Training tokens: 8951808 (x10 epochs)
* Vocabulary size: 50000
* Compute cost: 4.5674892361728e+16 FLOPs or ~4.3 NVIDIA A6000 GPU hours

Training datasets (percentages prior to deduplication):
* 89.97116%: [Languages of Russia](http://web-corpora.net/wsgi3/minorlangs/download)
* 8.41374%: [MADLAD-400 (CommonCrawl)](https://huggingface.co/datasets/allenai/MADLAD-400)
* 1.21634%: [Wikipedia 2023/08](https://dumps.wikimedia.org/)
* 0.38769%: [Wortschatz Leipzig Data](https://wortschatz.uni-leipzig.de/en/download)
* 0.00722%: [Tatoeba](https://tatoeba.org/en/)
* 0.00386%: [OSCAR 2021/09](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)

## Citation

If you use this model, please cite:

```
@article{chang-etal-2024-goldfish,
  title={Goldfish: Monolingual Language Models for 350 Languages},
  author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
  journal={Preprint},
  year={2024},
  url={https://www.arxiv.org/abs/2408.10441},
}
```