Commit c1e0101 (verified) by elmadany, parent e895d34: Update README.md
  - split: test
    path: asr_test/HF_test-zul*
---
<div align="center">

<img src="https://africa.dlnlp.ai/simba/images/VoC_logo.png" alt="VoC Logo">

[![EMNLP 2025 Paper](https://img.shields.io/badge/EMNLP_2025-Paper-B31B1B?style=for-the-badge&logo=arxiv&logoColor=B31B1B&labelColor=FFCDD2)](https://aclanthology.org/2025.emnlp-main.559/)
[![Official Website](https://img.shields.io/badge/Official-Website-2EA44F?style=for-the-badge&logo=googlechrome&logoColor=2EA44F&labelColor=C8E6C9)](https://africa.dlnlp.ai/simba/)
[![SimbaBench](https://img.shields.io/badge/SimbaBench-Benchmark-8A2BE2?style=for-the-badge&logo=googlecharts&logoColor=8A2BE2&labelColor=E1BEE7)](https://huggingface.co/spaces/UBC-NLP/SimbaBench)
[![GitHub Repository](https://img.shields.io/badge/GitHub-Repository-181717?style=for-the-badge&logo=github&logoColor=181717&labelColor=E0E0E0)](https://github.com/UBC-NLP/simba)
[![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-FFD21E?style=for-the-badge&logoColor=black&labelColor=FFF9C4)](https://huggingface.co/collections/UBC-NLP/simba-speech-series)
[![Hugging Face Dataset](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Dataset-FFD21E?style=for-the-badge&logo=huggingface&logoColor=black&labelColor=FFF9C4)](https://huggingface.co/datasets/UBC-NLP/SimbaBench_dataset)

</div>

## SimbaBench Data Release & Benchmarking
## Evaluating Your Model on SimbaBench

To evaluate your model on **SimbaBench** across all supported tasks (ASR, TTS, and SLID), load the configuration for the task and language you wish to benchmark.

Each task is organized by configuration name (e.g., `asr_test_afr`, `tts_test_wol`, `slid_61_test`). Loading a configuration provides the standardized evaluation split for that benchmark.

Example:
```python
from datasets import load_dataset

data = load_dataset("UBC-NLP/SimbaBench_dataset", "asr_test_afr")
```
```
DatasetDict({
    test: Dataset({
        features: ['split', 'benchmark_id', 'audio', 'text', 'duration_s', 'lang_iso3', 'lang_name'],
        num_rows: 1000
    })
})
```
```python
data['test'][0]
```
```
{'split': 'test',
 'benchmark_id': 'afr_Lwazi_afr_test_idx3889',
 'audio': {'path': None,
  'array': array([ 4.27246094e-04,  7.62939453e-04,  6.71386719e-04, ...,
         -3.05175781e-04, -2.13623047e-04, -6.10351562e-05]),
  'sampling_rate': 16000},
 'text': 'watter, verontwaardiging sou daar, in ons binneste gewees het?',
 'duration_s': 5.119999885559082,
 'lang_iso3': 'afr',
 'lang_name': 'Afrikaans'}
```
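Each example's `audio` field holds the decoded waveform together with its sampling rate, so the `duration_s` field can be recomputed directly from the array length. A minimal sketch (the synthetic array below stands in for a real dataset row):

```python
import numpy as np

def clip_duration_s(example: dict) -> float:
    """Duration in seconds: number of audio samples divided by the sampling rate."""
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"]

# Synthetic stand-in for a dataset row: 2 s of silence at 16 kHz.
example = {"audio": {"array": np.zeros(32000), "sampling_rate": 16000}}
print(clip_duration_s(example))  # 2.0
```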
## 📌 ASR Evaluation Configurations

| Config Name | Language | ISO | # Samples | # Hours |
|-------------|----------|-----|-----------|---------|
| asr_test_Akuapim-twi | Akuapim-twi | Akuapim-twi | 1,000 | 1.35 |
| asr_test_Asante-twi | Asante-twi | Asante-twi | 1,000 | 0.97 |
| asr_test_afr | Afrikaans | afr | 1,000 | 0.87 |
| asr_test_amh | Amharic | amh | 581 | 1.12 |
| asr_test_bas | Basaa | bas | 582 | 0.76 |
| asr_test_bem | Bemba | bem | 1,000 | 2.15 |
| asr_test_dav | Taita | dav | 878 | 1.17 |
| asr_test_dyu | Dyula | dyu | 59 | 0.10 |
| asr_test_fat | Fanti | fat | 1,000 | 1.38 |
| asr_test_fon | Fon | fon | 1,000 | 0.66 |
| asr_test_fuc | Pulaar | fuc | 100 | 0.10 |
| asr_test_fuf | Pular | fuf | 129 | 0.03 |
| asr_test_gaa | Ga | gaa | 1,000 | 1.52 |
| asr_test_hau | Hausa | hau | 681 | 0.89 |
| asr_test_ibo | Igbo | ibo | 5 | 0.01 |
| asr_test_kab | Kabyle | kab | 1,000 | 1.05 |
| asr_test_kin | Kinyarwanda | kin | 1,000 | 1.50 |
| asr_test_kln | Kalenjin | kln | 1,000 | 1.50 |
| asr_test_loz | Lozi | loz | 399 | 0.91 |
| asr_test_lug | Ganda | lug | 1,000 | 1.65 |
| asr_test_luo | Luo (Kenya and Tanzania) | luo | 1,000 | 1.31 |
| asr_test_mlq | Western Maninkakan | mlq | 182 | 0.04 |
| asr_test_nbl | South Ndebele | nbl | 1,000 | 1.12 |
| asr_test_nso | Northern Sotho | nso | 1,000 | 0.88 |
| asr_test_nya | Nyanja | nya | 428 | 1.31 |
| asr_test_sot | Southern Sotho | sot | 1,000 | 0.82 |
| asr_test_srr | Serer | srr | 899 | 2.84 |
| asr_test_ssw | Swati | ssw | 1,000 | 0.93 |
| asr_test_sus | Susu | sus | 210 | 0.05 |
| asr_test_swa | Swahili | swa | 1,000 | 1.23 |
| asr_test_tig | Tigre | tig | 185 | 0.33 |
| asr_test_tir | Tigrinya | tir | 7 | 0.01 |
| asr_test_toi | Tonga (Zambia) | toi | 463 | 1.47 |
| asr_test_tsn | Tswana | tsn | 1,000 | 0.82 |
| asr_test_tso | Tsonga | tso | 1,000 | 0.99 |
| asr_test_twi | Twi | twi | 12 | 0.02 |
| asr_test_ven | Venda | ven | 1,000 | 0.92 |
| asr_test_wol | Wolof | wol | 1,000 | 1.19 |
| asr_test_xho | Xhosa | xho | 1,000 | 0.92 |
| asr_test_yor | Yoruba | yor | 359 | 0.42 |
| asr_test_zgh | Standard Moroccan Tamazight | zgh | 197 | 0.22 |
| asr_test_zul | Zulu | zul | 1,000 | 1.10 |
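ASR configurations are conventionally scored with word error rate (WER) against the `text` transcription field. Below is a minimal pure-Python sketch of WER as word-level edit distance divided by reference length; real evaluations typically use a library such as `jiwer`, and text normalization (casing, punctuation) is left out here:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word over six reference words ≈ 0.167.
print(wer("watter verontwaardiging sou daar gewees het",
          "watter verontwaardiging sou daar gewees"))
```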
---

## 📌 TTS Evaluation Configurations

| Config Name | Language | ISO | # Samples | # Hours |
|----------------------|----------------|-------------|-----------|---------|
| tts_test_ewe | Ewe | ewe | 66 | 0.29 |
| tts_test_kin | Kinyarwanda | kin | 1,053 | 1.30 |
| tts_test_Asante-twi | Asante-twi | Asante-twi | 64 | 0.18 |
| tts_test_yor | Yoruba | yor | 40 | 0.13 |
| tts_test_wol | Wolof | wol | 4,001 | 4.12 |
| tts_test_hau | Hausa | hau | 124 | 0.24 |
| tts_test_lin | Lingala | lin | 63 | 0.28 |
| tts_test_xho | Xhosa | xho | 242 | 0.31 |
| tts_test_tsn | Tswana | tsn | 238 | 0.36 |
| tts_test_afr | Afrikaans | afr | 293 | 0.34 |
| tts_test_sot | Southern Sotho | sot | 210 | 0.33 |
| tts_test_Akuapim-twi | Akuapim-twi | Akuapim-twi | 83 | 0.22 |
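For TTS configurations, each example pairs input `text` with reference audio, and model output is typically written out as 16 kHz mono WAV before scoring. A standard-library-only sketch of that last step (the sine tone below is a stand-in for synthesized speech):

```python
import math
import struct
import wave

def write_wav(path: str, samples: list, rate: int = 16000) -> None:
    """Write float samples in [-1, 1] as 16-bit mono PCM WAV."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)   # mono
        f.setsampwidth(2)   # 16-bit
        f.setframerate(rate)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

# 0.1 s of a 440 Hz tone as a stand-in for synthesized speech.
tone = [0.5 * math.sin(2 * math.pi * 440 * t / 16000) for t in range(1600)]
write_wav("tts_sample.wav", tone)
```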
---

## 📌 SLID Evaluation

| Config Name | Language Scope | # Samples | # Hours |
|--------------|----------------|-----------|---------|
| slid_61_test | 61 Languages | 21,817 | 34.36 |
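SLID is scored as plain classification accuracy of predicted language codes against the gold `lang_iso3` labels. A minimal sketch (the predictions here are illustrative, not from a real model):

```python
def slid_accuracy(gold, pred):
    """Fraction of utterances whose predicted ISO 639-3 code matches the gold label."""
    if len(gold) != len(pred):
        raise ValueError("gold and pred must be the same length")
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

gold = ["afr", "zul", "wol", "hau"]  # from the dataset's lang_iso3 column
pred = ["afr", "zul", "wol", "swa"]  # hypothetical model predictions
print(slid_accuracy(gold, pred))  # 0.75
```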
---

## 🔎 Example: Loading a Benchmark Configuration

Below is an example of loading a specific ASR evaluation configuration with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset_id = "UBC-NLP/SimbaBench_dataset"
data = load_dataset(dataset_id, "asr_test_afr")
data["test"][0]
```

To summarize every configuration (language, sample count, and hours), iterate over all config names:

```python
from datasets import get_dataset_config_names, load_dataset

dataset_id = "UBC-NLP/SimbaBench_dataset"
configs = get_dataset_config_names(dataset_id)
print(f"{'Config Name':<35} | {'Language':<25} | {'ISO':<5} | {'# Samples':<12} | {'# Hours':<10}")
print("-" * 100)

total_all_samples = 0
total_all_hours = 0

for config in configs:
    try:
        # Load the test split for this config
        ds = load_dataset(dataset_id, config, split='test')

        # Count samples and sum durations
        num_samples = len(ds)
        total_seconds = sum(ds['duration_s'])
        num_hours = total_seconds / 3600

        # Language name and ISO code from the first sample
        lang_name = ds[0]['lang_name'] if num_samples > 0 else 'N/A'
        lang_iso = ds[0]['lang_iso3'] if num_samples > 0 else 'N/A'

        # Accumulate grand totals
        total_all_samples += num_samples
        total_all_hours += num_hours

        print(f"{config:<35} | {lang_name:<25} | {lang_iso:<5} | {num_samples:<12,} | {num_hours:<10.2f}")

    except Exception as e:
        print(f"{config:<35} | Error loading config: {e}")

print("-" * 100)
print(f"{'TOTAL':<35} | {'':<25} | {'':<5} | {total_all_samples:<12,} | {total_all_hours:<10.2f}")
```
## Citation

If you use the Simba models or the SimbaBench benchmark in your scientific publication, or if you find the resources on this website useful, please cite our paper.

```bibtex
@inproceedings{elmadany-etal-2025-voice,
    title = "Voice of a Continent: Mapping {A}frica{'}s Speech Technology Frontier",
    author = "Elmadany, AbdelRahim A. and
      Kwon, Sang Yun and
      Toyin, Hawau Olamide and
      Alcoba Inciarte, Alcides and
      Aldarmaki, Hanan and
      Abdul-Mageed, Muhammad",
    editor = "Christodoulopoulos, Christos and
      Chakraborty, Tanmoy and
      Rose, Carolyn and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.559/",
    doi = "10.18653/v1/2025.emnlp-main.559",
    pages = "11039--11061",
    ISBN = "979-8-89176-332-6",
}
```