---
license: apache-2.0
task_categories:
- text-classification
language:
- multilingual
tags:
- language-identification
- unigram
- tokenizer
- tinyaya
pretty_name: TinyAya LID Experiment Logs
---

# TinyAya LID — Models, Eval Data & Training Artifacts

Artifacts for the **Contrastive UniLID** project: language identification using LLM tokenizer vocabularies (TinyAya 261k BPE→Unigram), trained on GlotLID-C, evaluated on CommonLID.

Source code: [github.com/divyanshsinghvi/tinyAyaLid](https://github.com/divyanshsinghvi/tinyAyaLid)

> **Note**: The GlotLID-C training corpus is **not included** here — it can be re-downloaded from [`cis-lmu/glotlid-corpus`](https://huggingface.co/datasets/cis-lmu/glotlid-corpus). This repo ships only the eval data, models, training weights, and LLM cache.

---

## Structure

```
.
├── models/                          # Trained .unilid model files + eval JSONs
│   ├── tinyaya_v3_200k/             # Best TinyAya model — 200k samples/lang
│   ├── tinyaya_v3_100k/             # TinyAya, 100k samples/lang
│   ├── tinyaya_soft_full/           # TinyAya, full GlotLID-C corpus
│   ├── mistral_v3_200k/             # Mistral-Nemo 131k tokenizer comparison
│   ├── scratch_v3_200k/             # Scratch 100k vocab comparison
│   ├── commonlid_20pct/             # Trained on 20% CommonLID split (TinyAya)
│   ├── commonlid_50pct/             # Trained on 50% CommonLID split (TinyAya)
│   ├── commonlid_20pct_mistral/     # 20% CommonLID split (Mistral)
│   ├── commonlid_50pct_mistral/     # 50% CommonLID split (Mistral)
│   ├── commonlid_20pct_scratch/     # 20% CommonLID split (Scratch)
│   └── commonlid_50pct_scratch/     # 50% CommonLID split (Scratch)
│
├── data/
│   ├── commonlid/                   # CommonLID evaluation corpus (fastText format)
│   │   ├── commonlid_full.txt       # Full test set (373k samples, 109 tags)
│   │   ├── commonlid_train.txt      # Train split
│   │   ├── commonlid_test.txt       # Test split
│   │   ├── commonlid_50pct_test.txt # 50% split
│   │   ├── commonlid_80pct_test.txt # 80% split
│   │   ├── commonlid_50perlang.txt  # 50 samples/lang subsample
│   │   ├── commonlid_150perlang.txt # 150 samples/lang subsample
│   │   ├── commonlid_200perlang.txt # 200 samples/lang subsample
│   │   ├── commonlid_20pct_by_lang/ # Per-language files (20% split)
│   │   └── commonlid_50pct_by_lang/ # Per-language files (50% split)
│   │
│   └── misc/                        # Small training experiment files
│       ├── train_quick.txt
│       ├── train_quick_test.txt
│       ├── train_1k.txt
│       ├── train_1k_test.txt
│       └── train_test.txt
│
├── training_weights/                # Per-language unigram log-prob dists from soft EM (compressed)
│   └── *.tar.gz                     # One tarball per experiment config
│
└── cache/                           # Cached LLM API responses (two-stage eval)
    └── cache.tar.gz
```
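
To fetch these artifacts programmatically, `huggingface_hub` can pull the repo (or a filtered subset), and the training-weight tarballs can then be unpacked with `tarfile`. A minimal sketch, assuming a placeholder repo id (substitute this dataset's actual id):

```python
import tarfile
from pathlib import Path

from huggingface_hub import snapshot_download

# Grab the model files, eval data, and training-weight tarballs.
# repo_id is a placeholder; substitute this dataset's actual id.
local_dir = snapshot_download(
    repo_id="<user>/<this-dataset>",
    repo_type="dataset",
    allow_patterns=["models/**", "data/**", "training_weights/*.tar.gz"],
)

# Each experiment config ships its soft-EM weights as one tarball; unpack in place.
for tgz in Path(local_dir, "training_weights").glob("*.tar.gz"):
    with tarfile.open(tgz) as tf:
        tf.extractall(tgz.parent)
```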

## Data Formats

- **fastText format** (`__label__<lang_Script> <text>`): all CommonLID files
- **Plain text** (one sentence per line): misc training files
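
For reference, a minimal parser for the fastText label format (a generic sketch, not code from the source repo):

```python
def parse_fasttext_line(line: str) -> tuple[str, str]:
    """Split '__label__<lang_Script> <text>' into (tag, text)."""
    label, _, text = line.rstrip("\n").partition(" ")
    assert label.startswith("__label__"), f"not a fastText line: {line!r}"
    return label[len("__label__"):], text

# parse_fasttext_line("__label__eng_Latn hello world") -> ("eng_Latn", "hello world")
```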

## Languages

- **CommonLID eval**: 109 language tags (373,230 samples in `commonlid_full.txt`)
- **Alias mapping** (CommonLID code → the model's individual-language code; see the sketch below):
  `ara→arb, aze→azj, bik→bcl, est→ekk, lav→lvs, mlg→plt, msa→zsm, orm→gaz, swa→swh, tgl→fil, uzb→uzn, zho→cmn`
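
The same mapping as a Python dict, with a small helper for `lang_Script` tags (the helper is illustrative, not part of the source repo):

```python
# CommonLID code -> individual-language code used by the models.
ALIASES = {
    "ara": "arb", "aze": "azj", "bik": "bcl", "est": "ekk",
    "lav": "lvs", "mlg": "plt", "msa": "zsm", "orm": "gaz",
    "swa": "swh", "tgl": "fil", "uzb": "uzn", "zho": "cmn",
}

def normalize_tag(tag: str) -> str:
    """Map a CommonLID tag like 'ara_Arab' to the model's code ('arb_Arab')."""
    lang, sep, script = tag.partition("_")
    return ALIASES.get(lang, lang) + sep + script

# normalize_tag("zho_Hans") -> "cmn_Hans"
```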

## Reproducing Training

To retrain a model, download GlotLID-C separately:

```python
from datasets import load_dataset

ds = load_dataset("cis-lmu/glotlid-corpus")
```

Then run `train.py` from the source repo using the desired tokenizer.
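
To match the `tinyaya_v3_200k` setup, cap the corpus at 200k sentences per language before training. A rough sketch; the `"train"` split and `"lang"` column names are assumptions about the GlotLID-C schema, so adjust them to the actual fields:

```python
from collections import defaultdict

# ds comes from the load_dataset snippet above.
# Keep at most 200k sentences per language (as in tinyaya_v3_200k).
# Run single-process (num_proc=1) so the shared counter stays consistent.
seen = defaultdict(int)

def under_cap(example):
    seen[example["lang"]] += 1
    return seen[example["lang"]] <= 200_000

ds_200k = ds["train"].filter(under_cap)
```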

## Contributors

Divyansh Singhvi and Megha Agarwal, mentored by Julia Kreutzer.