Ethio-ASR: Multilingual ASR models trained on WAXAL

#18 opened by badrex

Hi WAXAL team,

It is our pleasure to share with you that we have used the WAXAL corpus to train Ethio-ASR, a suite of multilingual CTC-based ASR models for five Ethiopian languages (Amharic, Tigrinya, Oromo, Sidaama, and Wolaytta).
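For readers unfamiliar with the architecture: a CTC model emits one label per audio frame (including a special blank token), and the simplest decoding collapses repeated labels and then drops blanks. A toy sketch of that greedy decoding step; the vocabulary and frame-level predictions below are made up purely for illustration:

```python
from itertools import groupby

# Toy vocabulary with a CTC blank token; not the Ethio-ASR vocabulary.
BLANK = "_"
vocab = {0: BLANK, 1: "a", 2: "b", 3: "c"}

# Hypothetical per-frame argmax label ids from an acoustic model.
frame_ids = [1, 1, 0, 2, 2, 2, 0, 0, 3]

# Greedy CTC decoding: 1) merge consecutive duplicates, 2) drop blanks.
collapsed = [label for label, _ in groupby(frame_ids)]
decoded = "".join(vocab[i] for i in collapsed if vocab[i] != BLANK)
print(decoded)  # -> "abc"
```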

We found the dataset to be of high quality and well-suited for training competitive ASR systems. Our best model achieves a WER of 22.92% on the Amharic test set, outperforming larger baselines including all OmniASR variants. Native speakers of the target languages have further confirmed the quality of the transcriptions our models produce.
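For context, WER counts the word-level substitutions, deletions, and insertions needed to turn a hypothesis into the reference, divided by the number of reference words. A minimal sketch of that computation with the `jiwer` library; the sentence pair below is an invented placeholder, not WAXAL data:

```python
# pip install jiwer
import jiwer

# Invented romanized placeholder strings for illustration only.
reference = "selam new indet nachihu"
hypothesis = "selam nw indet nachihu"

# WER = (substitutions + deletions + insertions) / reference word count
error_rate = jiwer.wer(reference, hypothesis)
print(f"WER: {error_rate:.2%}")  # 1 substituted word out of 4 -> 25.00%
```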

All details are in our preprint, and the models are publicly available under permissive licenses.
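For anyone who wants to try them: CTC checkpoints on the Hub can usually be run through the `transformers` ASR pipeline. A minimal sketch under that assumption; the model id and audio path below are placeholders, not the actual repo names:

```python
# pip install transformers torch (ffmpeg is needed for audio decoding)
from transformers import pipeline

# Placeholder model id; substitute the actual Ethio-ASR checkpoint.
asr = pipeline("automatic-speech-recognition", model="badrex/ethio-asr-amharic")

# Transcribe a local audio file (path is illustrative).
result = asr("amharic_sample.wav")
print(result["text"])
```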

Thank you very much for creating and openly releasing this valuable resource 🤗

Best,

Badr

Google org

Hi Badr,

We are very happy to hear that you were able to make use of our dataset! Congrats on these very exciting results; we look forward to seeing what other innovations your work spurs.

Best,

Perry & Team
