---
license: cc-by-4.0
pretty_name: Sukuma Voices
task_categories:
  - automatic-speech-recognition
  - text-to-speech
language:
  - suk
tags:
  - sukuma
  - tts
  - speech-synthesis
  - low-resource
  - evaluation
  - african-languages
  - bantu
  - tanzania
  - speech-corpus
  - speech-evaluation
  - tts-evaluation
size_categories:
  - 1K<n<10K
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: text
      dtype: string
    - name: gender
      dtype: string
    - name: voice
      dtype: string
    - name: filename
      dtype: string
    - name: record_id
      dtype: string
  splits:
    - name: train
      num_examples: 3257
    - name: test
      num_examples: 362
    - name: test_indistribution_synthesis
      num_examples: 362
    - name: test_outdistribution_synthesis
      num_examples: 362
  download_size: 6725563759
  dataset_size: 6935373055
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
      - split: test_indistribution_synthesis
        path: data/test_indistribution_synthesis-*
      - split: test_outdistribution_synthesis
        path: data/test_outdistribution_synthesis-*
---

# Sukuma Voices Dataset 🎙️


The first publicly available speech corpus for Sukuma (Kisukuma), a Bantu language spoken by approximately 10 million people in northern Tanzania. This dataset supports speech-to-text, text-to-speech, and speech evaluation tasks.


## Dataset Description

Sukuma Voices addresses the critical gap in speech technology resources for one of Africa's most severely under-resourced languages. The dataset includes both human recordings and TTS-synthesized audio for comprehensive model evaluation.

### Dataset Summary

| Metric | Value |
|---|---|
| Total Samples | 4,343 |
| Total Duration | 19.56 hours |
| Average Duration | 10.25 ± 4.15 seconds |
| Duration Range | 1.40–30.36 seconds |
| Total Words | 140,325 |
| Unique Vocabulary | 21,366 |
| Average Words/Sample | 20.4 |
| Speaking Rate | 121.6 WPM |
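Aggregate statistics like those in the table can be recomputed from raw (text, duration) pairs. A minimal sketch over toy data (the sentences and durations below are illustrative, not drawn from the corpus):

```python
# Toy (text, duration_seconds) pairs standing in for real corpus rows
samples = [
    ("umunhu agabhalelaga chiza", 2.1),
    ("abhanhu bhakwe", 1.4),
]

total_words = sum(len(text.split()) for text, _ in samples)
total_secs = sum(dur for _, dur in samples)
vocab = {word for text, _ in samples for word in text.split()}
wpm = total_words / (total_secs / 60)

print(f"{total_words} words, {len(vocab)} unique, {total_secs:.1f} s, {wpm:.1f} WPM")
```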

### Supported Tasks

- **Automatic Speech Recognition (ASR)**: converting Sukuma speech to text
- **Text-to-Speech (TTS)**: synthesizing natural-sounding Sukuma speech
- **Speech Evaluation**: comparing human vs. synthesized speech quality
- **Cross-lingual Speech Processing**: research between Swahili and Sukuma

### Languages

- **Sukuma** (ISO 639-3: `suk`), a Bantu language of the Niger-Congo family

## Dataset Structure

### Splits Overview

| Split | Samples | Description |
|---|---|---|
| `train` | 3,257 | Human recordings for training |
| `test` | 362 | Human recordings for evaluation |
| `test_indistribution_synthesis` | 362 | TTS-generated audio (in-distribution) |
| `test_outdistribution_synthesis` | 362 | TTS-generated audio (out-of-distribution) |

### Split Details

| Split | Purpose | Audio Source |
|---|---|---|
| `train` | Model training | Human recordings |
| `test` | ASR/TTS evaluation on natural speech | Human recordings |
| `test_indistribution_synthesis` | Evaluate TTS quality on seen text patterns | TTS-generated |
| `test_outdistribution_synthesis` | Evaluate TTS generalization | TTS-generated |

### Features

All splits contain the following features:

| Feature | Type | Description |
|---|---|---|
| `audio` | Audio | Speech samples (16 kHz for ASR, 24 kHz for TTS) |
| `text` | string | Text content in Sukuma |
| `gender` | string | Speaker gender |
| `voice` | string | Voice identifier |
| `filename` | string | Original filename |
| `record_id` | string | Unique record identifier |

### Example Instance

```python
{
    "audio": {
        "array": [...],
        "sampling_rate": 16000
    },
    "text": "Umunhu ngwunuyo agabhalelaga chiza abhanhu bhakwe.",
    "gender": "female",
    "voice": "speaker_01",
    "filename": "sukuma_001.wav",
    "record_id": "suk_00001"
}
```
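The duration of an instance follows directly from the array length and sampling rate. A sketch using a synthetic sample (the zero-filled array stands in for real audio):

```python
# Synthetic sample: 32,000 zero-valued samples at 16 kHz, i.e. 2 seconds of silence
sample = {"audio": {"array": [0.0] * 32000, "sampling_rate": 16000}}

duration = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(f"duration: {duration:.2f} s")  # duration: 2.00 s
```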

### Example Sentences

| Language | Text |
|---|---|
| Sukuma | Umunhu ngwunuyo agabhalelaga chiza abhanhu bhakwe, kunguyo ya kikalile kakwe akagubhatogwa na gubhambilija abho bhali mumakoye. |
| English | This person raises his people well, because of his good behavior: loving people and helping his colleagues who are in trouble in their lives. |

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load all splits
dataset = load_dataset("sartifyllc/SUKUMA_VOICE")

# Access specific splits
train_data = dataset["train"]
test_data = dataset["test"]
test_synth_in = dataset["test_indistribution_synthesis"]
test_synth_out = dataset["test_outdistribution_synthesis"]

print(f"Train samples: {len(train_data)}")
print(f"Test samples: {len(test_data)}")
```

### ASR Training Example

```python
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load dataset
dataset = load_dataset("sartifyllc/SUKUMA_VOICE")

# Load model and processor
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")

# Process a sample
sample = dataset["test"][0]
input_features = processor(
    sample["audio"]["array"],
    sampling_rate=16000,
    return_tensors="pt"
).input_features

# Generate a transcription
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(f"Predicted: {transcription}")
print(f"Reference: {sample['text']}")
```
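To score the transcription against the reference, word error rate can be computed with a library such as `jiwer`, or with a minimal Levenshtein-based sketch like this (`wer` is a hypothetical helper, not part of the dataset tooling):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(f"{wer('umunhu agabhalelaga chiza', 'umunhu agabhalela chiza'):.2f}")  # 0.33
```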

### TTS Evaluation Example

```python
from datasets import load_dataset

# Load human and synthesized test sets
dataset = load_dataset("sartifyllc/SUKUMA_VOICE")
human_test = dataset["test"]
synth_test = dataset["test_indistribution_synthesis"]

# Compare WER between human and synthesized speech.
# evaluate_asr is a user-supplied helper that runs your trained ASR model
# over a split and returns its word error rate (e.g. computed with jiwer).
human_wer = evaluate_asr(model, human_test)
synth_wer = evaluate_asr(model, synth_test)

print(f"Human Speech WER: {human_wer:.2%}")
print(f"Synthesized Speech WER: {synth_wer:.2%}")
```

## Dataset Creation

### Source Data

The dataset was curated from audio recordings and textual transcriptions of the Sukuma New Testament 2000 translation, sourced from the Bible.com platform.

### Why Biblical Text?

1. Standardized orthographic conventions ensuring transcription consistency
2. Diverse linguistic structures encompassing narrative, dialogue, and theological discourse
3. Cultural relevance to Sukuma-speaking communities
4. Availability of both audio recordings and verified textual transcriptions

### Synthesis Pipeline

The test_indistribution_synthesis and test_outdistribution_synthesis splits were generated using our Sukuma-TTS model, fine-tuned on Orpheus 3B with LoRA.

### Annotations

The data was rigorously annotated to ensure phonetic and orthographic consistency, with validation by native Sukuma speakers.


## Baseline Results

### ASR Performance (Whisper Large V3)

| Metric | Human Speech | Synthesized Speech |
|---|---|---|
| Final WER | 25.19% | 32.60% |
| Min WER | 22.01% | 29.97% |
| WER Reduction | 82.94% | 78.93% |

**Key Findings:**

- Strong correlation between human and synthetic learning curves (Pearson's r = 0.997)
- The performance gap narrows as training progresses (from 9.97 to 8.11 WER points)
- Synthetic speech captures essential acoustic-phonetic characteristics despite a ~28% relative performance gap

### TTS Performance (Orpheus 3B v0.1)

| Metric | Score |
|---|---|
| Mean Opinion Score (MOS) | 3.9 ± 0.15 |
| Human Recording MOS | 4.6 ± 0.1 |

Evaluated by native Sukuma speakers using a 5-point Likert scale.
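MOS figures like these are means (with standard deviations) over per-utterance Likert ratings. A minimal sketch of the aggregation (the ratings below are made up for illustration):

```python
from statistics import mean, stdev

# Hypothetical 5-point Likert ratings from native-speaker evaluators
ratings = [4, 4, 3, 5, 4, 4, 3, 4]

mos = mean(ratings)
spread = stdev(ratings)
print(f"MOS: {mos:.2f} ± {spread:.2f}")
```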


## Considerations for Using the Data

### Known Limitations

1. **Domain specificity**: the data is drawn primarily from biblical texts, which may not fully represent everyday conversational Sukuma
2. **Diacritic variations**: Sukuma has two written forms (with and without diacritics); this dataset uses the non-diacritic form
3. **Single source**: speaker diversity is limited because all recordings come from a single source

### Linguistic Challenges

- Sukuma is a tonal language with complex phonological features
- The language lacks standardized orthographic conventions across written materials
- Diacritic and non-diacritic text representations can affect vocabulary size and evaluation metrics
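When comparing diacritic and non-diacritic text (for example, before computing WER or vocabulary statistics), one option is to strip combining marks via Unicode normalization. A minimal sketch (`strip_diacritics` is a hypothetical helper and the sample word is illustrative; note it only removes combining marks, not distinct letters):

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    # Decompose to NFD, drop combining marks, recompose to NFC
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", stripped)

print(strip_diacritics("chízá"))  # chiza
```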

### Personal and Sensitive Information

The dataset contains religious text (Bible readings) and does not include personal or sensitive information about individuals.


## Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{mgonzo2025sukuma,
  title={Learning from Scarcity: Building and Benchmarking Speech Technology for Sukuma},
  author={Mgonzo, Macton and Oketch, Kezia and Etori, Naome and Mang'eni, Winnie and Nyaki, Elizabeth and Mollel, Michael S.},
  booktitle={Proceedings of the Association for Computational Linguistics},
  year={2025}
}
```

## Additional Information

### Authors

| Name | Affiliation | Contact |
|---|---|---|
| Macton Mgonzo | Brown University | macton_mgonzo@brown.edu |
| Kezia Oketch | University of Notre Dame | |
| Naome Etori | University of Minnesota - Twin Cities | |
| Winnie Mang'eni | Pawa AI | |
| Elizabeth Nyaki | Pawa AI, Sartify Company Limited | |
| Michael S. Mollel | Sartify Company Limited | |


### Acknowledgments

We would like to express our gratitude to Sartify Company Limited and Pawa AI for their instrumental role in initiating this project and for providing the data access necessary to develop and evaluate our models. We also extend our sincere thanks to all the volunteers who generously dedicated their time to the evaluation process.

### Licensing Information

This dataset is released under CC-BY-4.0.

### Contributions

We welcome contributions to expand and improve this dataset! Areas of interest include:

- Additional Sukuma speech data beyond religious content
- Conversational and everyday language recordings
- Multi-speaker recordings
- Diacritic-annotated transcriptions

### Ethical Considerations

- Consent was obtained from all human participants involved in data annotation
- Participants were informed about the technology's limitations and potential impacts
- The authors acknowledge that models trained on this data may inherit biases present in the source material



*This dataset represents an important step toward inclusive speech technology for African languages.*
