configs:
- config_name: tts_test_ewe
data_files:
- split: test
path: tts_test/HF_test-ewe*
- config_name: tts_test_kin
data_files:
- split: test
path: tts_test/HF_test-kin*
- config_name: tts_test_Asante-twi
data_files:
- split: test
path: tts_test/HF_test-Asante-twi*
- config_name: tts_test_yor
data_files:
- split: test
path: tts_test/HF_test-yor*
- config_name: tts_test_wol
data_files:
- split: test
path: tts_test/HF_test-wol*
- config_name: tts_test_hau
data_files:
- split: test
path: tts_test/HF_test-hau*
- config_name: tts_test_lin
data_files:
- split: test
path: tts_test/HF_test-lin*
- config_name: tts_test_xho
data_files:
- split: test
path: tts_test/HF_test-xho*
- config_name: tts_test_tsn
data_files:
- split: test
path: tts_test/HF_test-tsn*
- config_name: tts_test_afr
data_files:
- split: test
path: tts_test/HF_test-afr*
- config_name: tts_test_sot
data_files:
- split: test
path: tts_test/HF_test-sot*
- config_name: tts_test_Akuapim-twi
data_files:
- split: test
path: tts_test/HF_test-Akuapim-twi*
- config_name: slid_61_test
data_files:
- split: test
path: slid_test/HF_merged_slid_test_61*
- config_name: asr_test_Akuapim-twi
data_files:
- split: test
path: asr_test/HF_test-Akuapim-twi*
- config_name: asr_test_Asante-twi
data_files:
- split: test
path: asr_test/HF_test-Asante-twi*
- config_name: asr_test_afr
data_files:
- split: test
path: asr_test/HF_test-afr*
- config_name: asr_test_amh
data_files:
- split: test
path: asr_test/HF_test-amh*
- config_name: asr_test_bas
data_files:
- split: test
path: asr_test/HF_test-bas*
- config_name: asr_test_bem
data_files:
- split: test
path: asr_test/HF_test-bem*
- config_name: asr_test_dav
data_files:
- split: test
path: asr_test/HF_test-dav*
- config_name: asr_test_dyu
data_files:
- split: test
path: asr_test/HF_test-dyu*
- config_name: asr_test_fat
data_files:
- split: test
path: asr_test/HF_test-fat*
- config_name: asr_test_fon
data_files:
- split: test
path: asr_test/HF_test-fon*
- config_name: asr_test_fuc
data_files:
- split: test
path: asr_test/HF_test-fuc*
- config_name: asr_test_fuf
data_files:
- split: test
path: asr_test/HF_test-fuf*
- config_name: asr_test_gaa
data_files:
- split: test
path: asr_test/HF_test-gaa*
- config_name: asr_test_hau
data_files:
- split: test
path: asr_test/HF_test-hau*
- config_name: asr_test_ibo
data_files:
- split: test
path: asr_test/HF_test-ibo*
- config_name: asr_test_kab
data_files:
- split: test
path: asr_test/HF_test-kab*
- config_name: asr_test_kin
data_files:
- split: test
path: asr_test/HF_test-kin*
- config_name: asr_test_kln
data_files:
- split: test
path: asr_test/HF_test-kln*
- config_name: asr_test_loz
data_files:
- split: test
path: asr_test/HF_test-loz*
- config_name: asr_test_lug
data_files:
- split: test
path: asr_test/HF_test-lug*
- config_name: asr_test_luo
data_files:
- split: test
path: asr_test/HF_test-luo*
- config_name: asr_test_mlq
data_files:
- split: test
path: asr_test/HF_test-mlq*
- config_name: asr_test_nbl
data_files:
- split: test
path: asr_test/HF_test-nbl*
- config_name: asr_test_nso
data_files:
- split: test
path: asr_test/HF_test-nso*
- config_name: asr_test_nya
data_files:
- split: test
path: asr_test/HF_test-nya*
- config_name: asr_test_sot
data_files:
- split: test
path: asr_test/HF_test-sot*
- config_name: asr_test_srr
data_files:
- split: test
path: asr_test/HF_test-srr*
- config_name: asr_test_ssw
data_files:
- split: test
path: asr_test/HF_test-ssw*
- config_name: asr_test_sus
data_files:
- split: test
path: asr_test/HF_test-sus*
- config_name: asr_test_swa
data_files:
- split: test
path: asr_test/HF_test-sw*
- config_name: asr_test_tig
data_files:
- split: test
path: asr_test/HF_test-tig*
- config_name: asr_test_tir
data_files:
- split: test
path: asr_test/HF_test-tir*
- config_name: asr_test_toi
data_files:
- split: test
path: asr_test/HF_test-toi*
- config_name: asr_test_tsn
data_files:
- split: test
path: asr_test/HF_test-tsn*
- config_name: asr_test_tso
data_files:
- split: test
path: asr_test/HF_test-tso*
- config_name: asr_test_twi
data_files:
- split: test
path: asr_test/HF_test-twi*
- config_name: asr_test_ven
data_files:
- split: test
path: asr_test/HF_test-ven*
- config_name: asr_test_wol
data_files:
- split: test
path: asr_test/HF_test-wol*
- config_name: asr_test_xho
data_files:
- split: test
path: asr_test/HF_test-xho*
- config_name: asr_test_yor
data_files:
- split: test
path: asr_test/HF_test-yor*
- config_name: asr_test_zgh
data_files:
- split: test
path: asr_test/HF_test-zgh*
- config_name: asr_test_zul
data_files:
- split: test
path: asr_test/HF_test-zul*
language:
- afr
- amh
- bas
- bem
- dyu
- ee
- fat
- fon
- fra
- fuc
- fuf
- gaa
- hau
- ibo
- kab
- kin
- kln
- kon
- lin
- loz
- lug
- luo
- mlq
- nbl
- nso
- nya
- orm
- por
- sna
- som
- sot
- ssw
- swa
- tir
- tig
- toi
- tsn
- tso
- twi
- ven
- wol
- xho
- yor
- zul
- zgh
- aka
- mos
- umb
- din
- sag
- mlg
license: cc-by-4.0
tags:
- automatic-speech-recognition
- text-to-speech
- spoken-language-identification
- speech
- audio
- african-languages
- multilingual
- low-resource
- benchmark
- simbabench
- simba
task_categories:
- automatic-speech-recognition
- text-to-speech
- audio-classification
models:
- UBC-NLP/Simba-S
- UBC-NLP/Simba-M
- UBC-NLP/Simba-H
- UBC-NLP/Simba-W
- UBC-NLP/Simba-X
- UBC-NLP/Simba-TTS-twi-asanti
- UBC-NLP/Simba-TTS-lin
- UBC-NLP/Simba-TTS-sot
- UBC-NLP/Simba-TTS-tsn
- UBC-NLP/Simba-TTS-xho
- UBC-NLP/Simba-TTS-twi-akuapem
- UBC-NLP/Simba-TTS-afr
datasets:
- UBC-NLP/SimbaBench
# SimbaBench Data Release & Benchmarking
To evaluate your model on SimbaBench across its supported tasks (ASR, TTS, and SLID), load the configuration for the task and language you want to benchmark.
Each task is organized by configuration name (e.g., `asr_test_afr`, `tts_test_wol`, `slid_61_test`); loading a configuration returns the standardized evaluation split for that benchmark.
Example:

```python
from datasets import load_dataset

data = load_dataset("UBC-NLP/SimbaBench_dataset", "asr_test_afr")
print(data)
```

```
DatasetDict({
    test: Dataset({
        features: ['split', 'benchmark_id', 'audio', 'text', 'duration_s', 'lang_iso3', 'lang_name'],
        num_rows: 1000
    })
})
```

Each example carries the decoded audio alongside its transcript and language metadata:

```python
data['test'][0]
```

```
{'split': 'test',
 'benchmark_id': 'afr_Lwazi_afr_test_idx3889',
 'audio': {'path': None,
  'array': array([ 4.27246094e-04,  7.62939453e-04,  6.71386719e-04, ...,
         -3.05175781e-04, -2.13623047e-04, -6.10351562e-05]),
  'sampling_rate': 16000},
 'text': 'watter, verontwaardiging sou daar, in ons binneste gewees het?',
 'duration_s': 5.119999885559082,
 'lang_iso3': 'afr',
 'lang_name': 'Afrikaans'}
```
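Once a config is loaded, ASR evaluation reduces to comparing your model's transcripts against the `text` field. Below is a minimal, dependency-free word error rate (WER) sketch; the hypothesis transcripts are assumed to come from your own model, and in practice a dedicated scoring library such as `jiwer` serves the same purpose.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1] / max(len(ref), 1)

# Toy check: one substituted word out of four reference words.
print(wer("watter verontwaardiging sou daar",
          "water verontwaardiging sou daar"))  # 0.25
```

For a full run you would average (or micro-average by reference length) over every row of the loaded test split.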
## 📌 ASR Evaluation Configurations
| Config Name | Language | ISO | # Samples | # Hours |
|---|---|---|---|---|
| asr_test_Akuapim-twi | Akuapim-twi | Akuapim-twi | 1,000 | 1.35 |
| asr_test_Asante-twi | Asante-twi | Asante-twi | 1,000 | 0.97 |
| asr_test_afr | Afrikaans | afr | 1,000 | 0.87 |
| asr_test_amh | Amharic | amh | 581 | 1.12 |
| asr_test_bas | Basaa | bas | 582 | 0.76 |
| asr_test_bem | Bemba | bem | 1,000 | 2.15 |
| asr_test_dav | Taita | dav | 878 | 1.17 |
| asr_test_dyu | Dyula | dyu | 59 | 0.10 |
| asr_test_fat | Fanti | fat | 1,000 | 1.38 |
| asr_test_fon | Fon | fon | 1,000 | 0.66 |
| asr_test_fuc | Pulaar | fuc | 100 | 0.10 |
| asr_test_fuf | Pular | fuf | 129 | 0.03 |
| asr_test_gaa | Ga | gaa | 1,000 | 1.52 |
| asr_test_hau | Hausa | hau | 681 | 0.89 |
| asr_test_ibo | Igbo | ibo | 5 | 0.01 |
| asr_test_kab | Kabyle | kab | 1,000 | 1.05 |
| asr_test_kin | Kinyarwanda | kin | 1,000 | 1.50 |
| asr_test_kln | Kalenjin | kln | 1,000 | 1.50 |
| asr_test_loz | Lozi | loz | 399 | 0.91 |
| asr_test_lug | Ganda | lug | 1,000 | 1.65 |
| asr_test_luo | Luo (Kenya and Tanzania) | luo | 1,000 | 1.31 |
| asr_test_mlq | Western Maninkakan | mlq | 182 | 0.04 |
| asr_test_nbl | South Ndebele | nbl | 1,000 | 1.12 |
| asr_test_nso | Northern Sotho | nso | 1,000 | 0.88 |
| asr_test_nya | Nyanja | nya | 428 | 1.31 |
| asr_test_sot | Southern Sotho | sot | 1,000 | 0.82 |
| asr_test_srr | Serer | srr | 899 | 2.84 |
| asr_test_ssw | Swati | ssw | 1,000 | 0.93 |
| asr_test_sus | Susu | sus | 210 | 0.05 |
| asr_test_swa | Swahili | swa | 1,000 | 1.23 |
| asr_test_tig | Tigre | tig | 185 | 0.33 |
| asr_test_tir | Tigrinya | tir | 7 | 0.01 |
| asr_test_toi | Tonga (Zambia) | toi | 463 | 1.47 |
| asr_test_tsn | Tswana | tsn | 1,000 | 0.82 |
| asr_test_tso | Tsonga | tso | 1,000 | 0.99 |
| asr_test_twi | Twi | twi | 12 | 0.02 |
| asr_test_ven | Venda | ven | 1,000 | 0.92 |
| asr_test_wol | Wolof | wol | 1,000 | 1.19 |
| asr_test_xho | Xhosa | xho | 1,000 | 0.92 |
| asr_test_yor | Yoruba | yor | 359 | 0.42 |
| asr_test_zgh | Standard Moroccan Tamazight | zgh | 197 | 0.22 |
| asr_test_zul | Zulu | zul | 1,000 | 1.10 |
## 📌 TTS Evaluation Configurations
| Config Name | Language | ISO | # Samples | # Hours |
|---|---|---|---|---|
| tts_test_ewe | Ewe | ewe | 66 | 0.29 |
| tts_test_kin | Kinyarwanda | kin | 1,053 | 1.30 |
| tts_test_Asante-twi | Asante-twi | Asante-twi | 64 | 0.18 |
| tts_test_yor | Yoruba | yor | 40 | 0.13 |
| tts_test_wol | Wolof | wol | 4,001 | 4.12 |
| tts_test_hau | Hausa | hau | 124 | 0.24 |
| tts_test_lin | Lingala | lin | 63 | 0.28 |
| tts_test_xho | Xhosa | xho | 242 | 0.31 |
| tts_test_tsn | Tswana | tsn | 238 | 0.36 |
| tts_test_afr | Afrikaans | afr | 293 | 0.34 |
| tts_test_sot | Southern Sotho | sot | 210 | 0.33 |
| tts_test_Akuapim-twi | Akuapim-twi | Akuapim-twi | 83 | 0.22 |
## 📌 SLID Evaluation
| Config Name | Language Scope | # Samples | # Hours |
|---|---|---|---|
| slid_61_test | 61 Languages | 21,817 | 34.36 |
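Since `slid_61_test` is a classification benchmark over `lang_iso3` labels, accuracy is the natural metric; with 61 languages of very different sizes, a per-language breakdown is also worth reporting. A minimal sketch, with the model predictions assumed:

```python
from collections import Counter

def slid_accuracy(gold, pred):
    """Overall and per-language accuracy for language-ID predictions.

    `gold` and `pred` are parallel lists of ISO codes (cf. the
    `lang_iso3` field of the loaded split)."""
    hits, totals = Counter(), Counter()
    for g, p in zip(gold, pred):
        totals[g] += 1
        hits[g] += (g == p)  # bool counts as 0/1
    per_lang = {g: hits[g] / totals[g] for g in totals}
    overall = sum(hits.values()) / sum(totals.values())
    return overall, per_lang

# Toy check: 2 of 3 predictions correct, Afrikaans recalled 1 of 2 times.
overall, per_lang = slid_accuracy(["afr", "afr", "yor"],
                                  ["afr", "yor", "yor"])
print(overall)          # 0.666... (2 of 3 correct)
print(per_lang["afr"])  # 0.5
```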
## Citation
If you use the Simba models or the SimbaBench benchmark in a scientific publication, or if you find these resources useful, please cite our paper:
```bibtex
@inproceedings{elmadany-etal-2025-voice,
    title = "Voice of a Continent: Mapping {A}frica{'}s Speech Technology Frontier",
    author = "Elmadany, AbdelRahim A. and
      Kwon, Sang Yun and
      Toyin, Hawau Olamide and
      Alcoba Inciarte, Alcides and
      Aldarmaki, Hanan and
      Abdul-Mageed, Muhammad",
    editor = "Christodoulopoulos, Christos and
      Chakraborty, Tanmoy and
      Rose, Carolyn and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.559/",
    doi = "10.18653/v1/2025.emnlp-main.559",
    pages = "11039--11061",
    ISBN = "979-8-89176-332-6",
}
```