[](https://africa.dlnlp.ai/simba/)
[](https://huggingface.co/spaces/UBC-NLP/SimbaBench)
[](https://github.com/UBC-NLP/simba)
[](https://huggingface.co/collections/UBC-NLP/simba-speech-series)
[](https://huggingface.co/datasets/UBC-NLP/SimbaBench_dataset)

</div>

## SimbaBench Data Release & Benchmarking

To evaluate your model on **SimbaBench** across all supported tasks (ASR, TTS, and SLID), simply load the corresponding configuration for the task and language you wish to benchmark.

Each task is organized by configuration name (e.g., `asr_test_afr`, `tts_test_wol`, `slid_61_test`). Loading a configuration provides the standardized evaluation split for that specific benchmark.

---
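
As a minimal sketch, the Afrikaans ASR benchmark can be loaded with the Hugging Face `datasets` library. The repo id `UBC-NLP/SimbaBench_dataset` is taken from the dataset badge above; the helper names here are illustrative, not part of the released API:

```python
def benchmark_config_name(task: str, iso3: str) -> str:
    """Build a per-language config name, e.g. ('asr', 'afr') -> 'asr_test_afr'."""
    return f"{task}_test_{iso3}"


def load_benchmark_split(task: str, iso3: str,
                         dataset_id: str = "UBC-NLP/SimbaBench_dataset"):
    """Download the standardized evaluation split for one SimbaBench configuration."""
    # Imported lazily so benchmark_config_name stays usable without `datasets`.
    from datasets import load_dataset

    return load_dataset(dataset_id, benchmark_config_name(task, iso3), split="test")
```

For example, `load_benchmark_split("asr", "afr")[0]` returns the first Afrikaans ASR evaluation sample. Note that `slid_61_test` does not follow the `<task>_test_<iso3>` pattern and should be passed to `load_dataset` by its literal name.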
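To see what each configuration contributes to the benchmark, one can tabulate sample counts and audio hours per config using the per-row fields `duration_s`, `lang_name`, and `lang_iso3`. A sketch, assuming those field names match the released schema (the function names are illustrative):

```python
def summarize_split(ds):
    """Return (num_samples, num_hours, lang_name, lang_iso) for one split.

    Works on any sequence of rows that carry the `duration_s`, `lang_name`,
    and `lang_iso3` fields.
    """
    num_samples = len(ds)
    num_hours = sum(row["duration_s"] for row in ds) / 3600
    lang_name = ds[0]["lang_name"] if num_samples else "N/A"
    lang_iso = ds[0]["lang_iso3"] if num_samples else "N/A"
    return num_samples, num_hours, lang_name, lang_iso


def print_benchmark_table(dataset_id="UBC-NLP/SimbaBench_dataset"):
    """Enumerate every configuration and print per-config and total statistics."""
    # Imported lazily so summarize_split stays usable without `datasets`.
    from datasets import get_dataset_config_names, load_dataset

    print(f"{'Config Name':<35} | {'Language':<25} | {'ISO':<5} | "
          f"{'# Samples':<12} | {'# Hours':<10}")
    print("-" * 100)
    total_samples, total_hours = 0, 0.0
    for config in get_dataset_config_names(dataset_id):
        try:
            ds = load_dataset(dataset_id, config, split="test")
            n, hours, lang_name, lang_iso = summarize_split(ds)
            total_samples += n
            total_hours += hours
            print(f"{config:<35} | {lang_name:<25} | {lang_iso:<5} | "
                  f"{n:<12,} | {hours:<10.2f}")
        except Exception as e:  # a config may lack a test split or fail to download
            print(f"{config:<35} | Error loading config: {e}")
    print("-" * 100)
    print(f"{'TOTAL':<35} | {'':<25} | {'':<5} | "
          f"{total_samples:<12,} | {total_hours:<10.2f}")
```

`print_benchmark_table()` downloads every evaluation split, so expect it to take a while on the first run; subsequent runs hit the local `datasets` cache.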

## Citation

If you use the Simba models or the SimbaBench benchmark in a scientific publication, or if you find these resources useful, please cite our paper.