# What OPUS-MT Gets Right (and Wrong) for Arabic Translation
After testing Helsinki-NLP/opus-mt-ar-en with nine inputs — from formal Arabic to Egyptian and Sudanese dialects — the results show a clear divide: Modern Standard Arabic (MSA) translates well, while dialectal Arabic triggers silent truncation and dropped content.
## The Setup
Arabic is a morphologically rich language with significant dialectal variation. While MSA is the written standard, spoken Arabic varies widely across regions (Egyptian, Levantine, Gulf, Sudanese, Maghrebi). Most translation models are trained on MSA data, leaving a gap for real-world applications.
I tested the model via HuggingFace Serverless Inference API with:
- 4 English→Arabic inputs (formal, technical, colloquial, code-switching)
- 5 Arabic→English inputs (MSA, technical, politeness, Egyptian dialect, Sudanese dialect)
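For anyone wanting to reproduce this, here is a minimal sketch of the kind of request involved, assuming a plain `requests` client against the Serverless Inference endpoint (`HF_TOKEN` is a placeholder environment variable; this is illustrative, not the exact harness used for the benchmark):

```python
import os
import requests

# Serverless Inference endpoint for the model under test.
API_URL = "https://api-inference.huggingface.co/models/Helsinki-NLP/opus-mt-ar-en"
HEADERS = {"Authorization": f"Bearer {os.environ['HF_TOKEN']}"}

def translate(text: str) -> str:
    """Send one input and return the translated string."""
    resp = requests.post(API_URL, headers=HEADERS, json={"inputs": text})
    resp.raise_for_status()
    # Translation tasks return a list of dicts: [{"translation_text": "..."}]
    return resp.json()[0]["translation_text"]
```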
## Results at a Glance
| Direction | Test Type | Latency | Quality |
|---|---|---|---|
| EN→AR | Technical | 13.48s | ✅ Accurate |
| AR→EN | MSA | 3.63s | ✅ Accurate |
| AR→EN | Egyptian dialect | 0.42s | ❌ Truncated |
| AR→EN | Sudanese dialect | 0.58s | ❌ Missed half |
## What Worked
MSA and technical content translated accurately. The model handles ML/API terminology well, preserving "JSON" and "API" in Arabic context:
Input: "The API endpoint returns a JSON response with status code 200."
Output: "نقطة نهاية API ترجع استجابة JSON مع رمز الوضع 200."
Latency for MSA ranged from 3.6s to 13.5s, with technical sentences taking longest.
## What Failed
Dialectal inputs caused truncation. The model silently dropped content:
- Egyptian: "إزيك؟ كله تمام؟ كنت عايز أسألك عن حاجة." (roughly: "How are you? Everything good? I wanted to ask you about something.") → "I was gonna ask you something." (dropped the entire greeting)
- Sudanese: "يا زول، كيف حالك؟ تعال نتغدا سوا النهاردة." (roughly: "Hey man, how are you? Come, let's have lunch together today.") → "Hey, Zol, how are you?" (dropped half the sentence, and left "زول" — Sudanese for "man/person" — transliterated rather than translated)
This is particularly problematic for production systems — the truncation is silent, not signaled as an error.
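One cheap guardrail is a length-ratio check before accepting an output. The sketch below is an illustrative heuristic, not part of the benchmark; the threshold is an assumption that needs calibrating per language pair. For Arabic→English specifically, unvowelled Arabic is typically shorter in characters than its English translation, so an output well below the source length often means dropped clauses:

```python
def looks_truncated(source: str, translation: str, min_ratio: float = 0.8) -> bool:
    """Flag translations that are suspiciously short relative to the source.

    Illustrative heuristic only: the 0.8 character-ratio default is an
    assumption, not a tuned value -- calibrate it on your own parallel data.
    """
    if not source.strip():
        return False
    return len(translation) / len(source) < min_ratio
```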
## Implications for Arabic NLP
For teams building Arabic translation pipelines:
- Dialect detection first — route dialectal inputs to specialized models
- Consider NLLB-200 — Meta's multilingual model explicitly covers several Arabic varieties, including Egyptian Arabic (arz_Arab)
- Hybrid approach — use OPUS-MT for MSA and fall back to a dialect-capable model otherwise (see the sketch below)
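A minimal sketch of that hybrid routing, assuming local `transformers` pipelines. The dialect detector is a hypothetical stub — swap in a real Arabic dialect-ID classifier — and the NLLB source code `arz_Arab` (Egyptian Arabic) is hard-coded here only for illustration:

```python
from transformers import pipeline

# OPUS-MT handles MSA; NLLB-200 serves as the dialect-capable fallback.
opus = pipeline("translation", model="Helsinki-NLP/opus-mt-ar-en")
nllb = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="arz_Arab",  # Egyptian Arabic; choose per detected dialect
    tgt_lang="eng_Latn",
)

def detect_dialect(text: str) -> str:
    """Hypothetical stub -- replace with a real dialect-ID model."""
    return "msa"

def translate_ar_en(text: str) -> str:
    """Route MSA to OPUS-MT; send everything else to NLLB-200."""
    if detect_dialect(text) == "msa":
        return opus(text)[0]["translation_text"]
    return nllb(text)[0]["translation_text"]
```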
## Full Results
- Dataset: O96a/opus-mt-arabic-benchmark-2026-03-28
- Discussion: Helsinki-NLP/opus-mt-ar-en/discussions/10
Has anyone benchmarked NLLB-200 on Arabic dialects? Would love to compare notes.