Ethio-ASR: Multilingual ASR models trained on WAXAL
Hi WAXAL team,
It is our pleasure to share that we have used the WAXAL corpus to train Ethio-ASR, a suite of multilingual CTC-based ASR models covering five Ethiopian languages (Amharic, Tigrinya, Oromo, Sidaama, and Wolaytta).
We found the dataset to be of high quality and well-suited for training competitive ASR systems. Our best model achieves a WER of 22.92% on the Amharic test set, outperforming larger baselines including all OmniASR variants. Native speakers of the target languages have further verified the quality of the transcriptions our models produce.
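For readers unfamiliar with the metric: WER (word error rate) is the word-level edit distance between the reference and hypothesis transcripts, normalized by the reference length. A minimal sketch of the computation (not the evaluation code used in the paper):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / len(ref)


# one substitution out of three reference words -> WER = 1/3
print(wer("the cat sat", "the cat sit"))
```

In practice, libraries such as `jiwer` or the Hugging Face `evaluate` package are typically used for this.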
All details are in our pre-print and the models are publicly available with permissive licenses:
- Pre-print: https://arxiv.org/pdf/2603.23654
- Models: https://huggingface.co/collections/badrex/ethio-asr
Thank you very much for creating and openly releasing this valuable resource 🤗
Best,
Badr
Hi Badr,
We are very happy to hear that you were able to make use of our dataset! Congrats on these very exciting results, and we look forward to seeing what other innovations your work spurs.
Best,
Perry & Team