# Yoruba-English Code-Switching Language Identification (LID)
This model is a fine-tuned version of AfroXLM-R-Large designed to identify language boundaries in Yoruba-English code-switched text at the token level.
## Model Description

The model classifies each token in a sentence into one of two categories:

- YORUBA: Tokens belonging to the Yoruba language.
- ENGLISH: Tokens belonging to the English language.
By leveraging the AfroXLM-R-Large backbone, which was pre-trained with a focus on African languages, this model is robust to the morphological complexity of Yoruba and the fluid transitions of code-switched text.
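If you need the label inventory programmatically, it is exposed through the model config's `id2label` mapping. A minimal sketch (the integer ordering shown in the comment is an assumption):

```python
from transformers import AutoConfig

# Fetch only the configuration to inspect the token-classification label set.
config = AutoConfig.from_pretrained("Professor/yoruba-en-ner-model")
print(config.id2label)  # e.g. {0: "ENGLISH", 1: "YORUBA"} (ordering assumed)
```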
## Performance (Test Set)
The model achieved near-perfect performance on the held-out test set. Peak generalization was reached at epoch 1; although training continued for 5 epochs for observation, the deployed weights are taken from the first epoch to maximize generalization and avoid over-memorization of training samples.
| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| Overall | 0.991 | 0.991 | 0.991 | 80,085 |
| English | 0.995 | 0.994 | 0.994 | 63,016 |
| Yoruba | 0.976 | 0.979 | 0.978 | 17,069 |
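These per-class figures can be reproduced with scikit-learn once predictions are aligned to gold labels. A minimal sketch, assuming `y_true` and `y_pred` are flat lists of string labels:

```python
from sklearn.metrics import classification_report

# Flat lists of gold and predicted token labels, e.g. from the test set.
y_true = ["YORUBA", "YORUBA", "ENGLISH", "ENGLISH", "YORUBA"]
y_pred = ["YORUBA", "ENGLISH", "ENGLISH", "ENGLISH", "YORUBA"]

# Prints per-class precision, recall, F1, and support to three decimals.
print(classification_report(y_true, y_pred, digits=3))
```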
## Intended Uses & Limitations

### Intended Use
- Research in Code-Switching (CS) patterns.
- Preprocessing for Machine Translation or Speech Synthesis (TTS) involving Yoruba-English bilingual speakers (see the routing sketch after this list).
- Computational linguistics studies on the matrix language frame in Nigerian English.
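For MT or TTS preprocessing, the token-level predictions are typically collapsed into contiguous monolingual spans before routing. A minimal sketch using the Hugging Face pipeline; the `language_spans` helper is illustrative, not part of this repo:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges subword pieces into whole words.
lid = pipeline(
    "token-classification",
    model="Professor/yoruba-en-ner-model",
    aggregation_strategy="simple",
)

def language_spans(text):
    """Collapse consecutive same-language words into (language, span) tuples."""
    spans = []
    for ent in lid(text):
        label, word = ent["entity_group"], ent["word"]
        if spans and spans[-1][0] == label:
            spans[-1] = (label, spans[-1][1] + " " + word)
        else:
            spans.append((label, word))
    return spans

print(language_spans("Egungun eleru helps to cleanse the village by carrying ebo"))
# e.g. [("YORUBA", "Egungun eleru"), ("ENGLISH", "helps to ... carrying"), ("YORUBA", "ebo")]
```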
### Limitations

- Tonal Markers: Performance may vary slightly if Yoruba text lacks standard diacritics (tonal marks); see the probe sketch after this list.
- Domain Sensitivity: The model is optimized for general conversational and science-related prompts; performance on archaic or highly legalistic Yoruba may vary.
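To probe the tonal-marker limitation on your own data, you can strip combining marks before inference. A rough sketch; note that this also removes the dot-below of ẹ/ọ/ṣ, so it only approximates undiacritized text:

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    """Remove combining marks (e.g. Yoruba tonal diacritics) from text."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_diacritics("Ẹ káàbọ̀"))  # -> "E kaabo" (approximately)
```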
## Training Procedure

### Hyperparameters
- Base Model: AfroXLM-R Large (550M parameters)
- Batch Size: 128 (Global)
- Learning Rate: 3e-05 (with Cosine Decay)
- Precision: BF16 (Brain Floating Point)
- Optimizer: AdamW (Fused)
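These settings map onto `TrainingArguments` roughly as follows. A sketch, not the exact training script; the per-device batch size and accumulation steps are assumptions chosen to multiply out to the global batch of 128:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="yoruba-en-lid",
    per_device_train_batch_size=32,  # assumption: 32 x 4 accumulation = 128 global
    gradient_accumulation_steps=4,
    learning_rate=3e-5,
    lr_scheduler_type="cosine",      # cosine decay
    num_train_epochs=5,
    bf16=True,                       # Brain Floating Point precision
    optim="adamw_torch_fused",       # fused AdamW
)
```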
### Training Narrative
The model converges remarkably quickly thanks to the pre-existing linguistic knowledge in the AfroXLM-R base: validation loss is lowest at epoch 1.0 ($0.0240$). Although training loss continues to drop, validation loss trends slightly upward thereafter, indicating that the model captures the underlying linguistic boundaries almost immediately.
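If you retrain, this epoch-1 behavior suggests evaluating and checkpointing per epoch and letting the `Trainer` restore the best checkpoint automatically. A sketch of the relevant flags on recent `transformers` versions, extending the arguments above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="yoruba-en-lid",
    eval_strategy="epoch",             # evaluate after each epoch
    save_strategy="epoch",             # keep a checkpoint per epoch
    load_best_model_at_end=True,       # restore the lowest-validation-loss epoch
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
```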
## How to Use
```python
from transformers import pipeline

# Load the token-level language identification pipeline.
lid_model = pipeline("token-classification", model="Professor/yoruba-en-ner-model")

text = "Egungun eleru helps to cleanse the village by carrying ebo"
results = lid_model(text)

# Each entry holds a (sub)token and its predicted language label.
for entity in results:
    print(f"Token: {entity['word']}, Language: {entity['entity']}")
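Note that with the default settings the pipeline returns one entry per subword piece. Passing `aggregation_strategy="simple"` when building the pipeline merges pieces into whole words; the label key then becomes `entity_group` instead of `entity`.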
## Citation
If you use this model in your research, please cite the Masakhane AfroXLM-R paper and this fine-tuned version.