Mollel committed (verified)
Commit 216781a · Parent: d6deeb7

Update README.md

Files changed (1): README.md (+341 −41)

---
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: text
    dtype: string
  - name: gender
    dtype: string
  - name: voice
    dtype: string
  - name: filename
    dtype: string
  - name: record_id
    dtype: string
  splits:
  - name: train
    num_examples: 3257
  - name: test
    num_examples: 362
  - name: test_indistribution_synthesis
    num_examples: 362
  - name: test_outdistribution_synthesis
    num_examples: 362
  download_size: 6725563759
  dataset_size: 6935373055
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: test_indistribution_synthesis
    path: data/test_indistribution_synthesis-*
  - split: test_outdistribution_synthesis
    path: data/test_outdistribution_synthesis-*
license: cc-by-4.0
language:
- suk
task_categories:
- automatic-speech-recognition
- text-to-speech
- speech-synthesis
tags:
- sukuma
- low-resource
- african-languages
- bantu
- tanzania
- speech-corpus
- speech-evaluation
- tts-evaluation
size_categories:
- 1K<n<10K
pretty_name: Sukuma Voices
---

# Sukuma Voices Dataset 🎙️

<p align="center">
  <img src="https://img.shields.io/badge/Language-Sukuma-blue" alt="Language">
  <img src="https://img.shields.io/badge/Samples-4,343-green" alt="Samples">
  <img src="https://img.shields.io/badge/Duration-19.56%20hours-orange" alt="Duration">
  <img src="https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey" alt="License">
</p>

**The first publicly available speech corpus for Sukuma (Kisukuma)**, a Bantu language spoken by approximately 10 million people in northern Tanzania. This dataset supports speech-to-text, text-to-speech, and speech evaluation tasks.

---

## Dataset Description

**Sukuma Voices** addresses a critical gap in speech technology resources for one of Africa's most severely under-resourced languages. The dataset includes both human recordings and TTS-synthesized audio for comprehensive model evaluation.

### Dataset Summary

| Metric | Value |
|--------|-------|
| Total Samples | 4,343 |
| Total Duration | 19.56 hours |
| Average Duration | 10.25 ± 4.15 seconds |
| Duration Range | 1.40 - 30.36 seconds |
| Total Words | 140,325 |
| Unique Vocabulary | 21,366 |
| Average Words/Sample | 20.4 |
| Speaking Rate | 121.6 WPM |

### Supported Tasks

- **Automatic Speech Recognition (ASR)**: converting Sukuma speech to text
- **Text-to-Speech (TTS)**: synthesizing natural-sounding Sukuma speech
- **Speech Evaluation**: comparing human and synthesized speech quality
- **Cross-lingual Speech Processing**: transfer and comparative research between Swahili and Sukuma

### Languages

- **Sukuma** (ISO 639-3: `suk`), a Bantu language of the Niger-Congo family

---

## Dataset Structure

### Splits Overview

| Split | Samples | Description |
|-------|---------|-------------|
| `train` | 3,257 | Human recordings for training |
| `test` | 362 | Human recordings for evaluation |
| `test_indistribution_synthesis` | 362 | TTS-generated audio (in-distribution) |
| `test_outdistribution_synthesis` | 362 | TTS-generated audio (out-of-distribution) |

### Split Details

| Split | Purpose | Audio Source |
|-------|---------|--------------|
| **train** | Model training | Human recordings |
| **test** | ASR/TTS evaluation on natural speech | Human recordings |
| **test_indistribution_synthesis** | Evaluate TTS quality on seen text patterns | TTS-generated |
| **test_outdistribution_synthesis** | Evaluate TTS generalization to unseen text | TTS-generated |

### Features

All subsets contain the following features:

| Feature | Type | Description |
|---------|------|-------------|
| `audio` | Audio | Speech samples (16 kHz for ASR recordings, 24 kHz for TTS synthesis) |
| `text` | String | Text content in the Sukuma language |
| `gender` | String | Speaker gender |
| `voice` | String | Voice identifier |
| `filename` | String | Original filename |
| `record_id` | String | Unique record identifier |

+ ### Example Instance
153
+
154
+ ```python
155
+ {
156
+ "audio": {
157
+ "array": [...],
158
+ "sampling_rate": 16000
159
+ },
160
+ "text": "Umunhu ngwunuyo agabhalelaga chiza abhanhu bhakwe.",
161
+ "gender": "female",
162
+ "voice": "speaker_01",
163
+ "filename": "sukuma_001.wav",
164
+ "record_id": "suk_00001"
165
+ }
166
+ ```
167
+
168
+ ### Example Sentences
169
+
170
+ | Language | Text |
171
+ |----------|------|
172
+ | **Sukuma** | Umunhu ngwunuyo agabhalelaga chiza abhanhu bhakwe, kunguyo ya kikalile kakwe akagubhatogwa na gubhambilija abho bhali mumakoye. |
173
+ | **English** | This person raises his people well, because of his good behavior, of loving people and helping his colleagues who are in trouble, in their lives. |
174
+
175
+ ---
176
 
177
  ## Usage
178
 
179
+ ### Loading the Dataset
180
+
181
+ ```python
182
+ from datasets import load_dataset
183
+
184
+ # Load all splits
185
+ dataset = load_dataset("sartifyllc/SUKUMA_VOICE")
186
+
187
+ # Access specific splits
188
+ train_data = dataset["train"]
189
+ test_data = dataset["test"]
190
+ test_synth_in = dataset["test_indistribution_synthesis"]
191
+ test_synth_out = dataset["test_outdistribution_synthesis"]
192
+
193
+ print(f"Train samples: {len(train_data)}")
194
+ print(f"Test samples: {len(test_data)}")
195
+ ```
196
+
197
+ ### ASR Training Example
198
+
199
+ ```python
200
+ from datasets import load_dataset
201
+ from transformers import WhisperProcessor, WhisperForConditionalGeneration
202
+
203
+ # Load dataset
204
+ dataset = load_dataset("sartifyllc/SUKUMA_VOICE")
205
+
206
+ # Load model
207
+ processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
208
+ model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
209
+
210
+ # Process a sample
211
+ sample = dataset["test"][0]
212
+ input_features = processor(
213
+ sample["audio"]["array"],
214
+ sampling_rate=16000,
215
+ return_tensors="pt"
216
+ ).input_features
217
+
218
+ # Generate transcription
219
+ predicted_ids = model.generate(input_features)
220
+ transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
221
+ print(f"Predicted: {transcription}")
222
+ print(f"Reference: {sample['text']}")
223
+ ```
224
+
225
+ ### TTS Evaluation Example
226
+
227
  ```python
228
  from datasets import load_dataset
229
+ from jiwer import wer
230
 
231
+ # Load human and synthesized test sets
232
  dataset = load_dataset("sartifyllc/SUKUMA_VOICE")
233
+ human_test = dataset["test"]
234
+ synth_test = dataset["test_indistribution_synthesis"]
235
 
236
+ # Compare WER between human and synthesized speech
237
+ # (using your trained ASR model)
238
+ human_wer = evaluate_asr(model, human_test)
239
+ synth_wer = evaluate_asr(model, synth_test)
240
+
241
+ print(f"Human Speech WER: {human_wer:.2%}")
242
+ print(f"Synthesized Speech WER: {synth_wer:.2%}")
 
 
 
 
243
  ```
244
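The `evaluate_asr` helper above is not part of the dataset; here is a minimal sketch of one way to write it, pairing `jiwer` with the Whisper-style model and processor from the previous example (the signature and per-sample loop are assumptions):

```python
from jiwer import wer

def evaluate_asr(model, processor, split, sampling_rate=16000):
    """Transcribe every sample in a split and return the corpus-level WER."""
    references, hypotheses = [], []
    for sample in split:
        input_features = processor(
            sample["audio"]["array"],
            sampling_rate=sampling_rate,
            return_tensors="pt",
        ).input_features
        predicted_ids = model.generate(input_features)
        hypotheses.append(
            processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
        )
        references.append(sample["text"])
    return wer(references, hypotheses)
```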
 
---

## Dataset Creation

### Source Data

The dataset was curated from audio recordings and textual transcriptions of the **Sukuma New Testament 2000 translation**, sourced from the Bible.com platform.

#### Why Biblical Text?

1. **Consistent orthography** within a single professionally translated text, which keeps transcriptions uniform
2. **Diverse linguistic structures** encompassing narrative, dialogue, and theological discourse
3. **Cultural relevance** to Sukuma-speaking communities
4. **Availability** of both audio recordings and verified textual transcriptions

### Synthesis Pipeline

The `test_indistribution_synthesis` and `test_outdistribution_synthesis` splits were generated with our [Sukuma-TTS](https://huggingface.co/Mollel/Sukuma-TTS) model, fine-tuned from Orpheus 3B with LoRA.

### Annotations

The data was rigorously annotated to ensure phonetic and orthographic consistency, with validation by native Sukuma speakers.

---

## Baseline Results

### ASR Performance (Whisper Large V3)

| Metric | Human Speech | Synthesized Speech |
|--------|--------------|--------------------|
| Final WER | 25.19% | 32.60% |
| Min WER | 22.01% | 29.97% |
| WER Reduction | 82.94% | 78.93% |

**Key Findings:**

- Strong correlation between human and synthetic learning curves (Pearson's r = 0.997); see the sketch below
- The performance gap narrows as training progresses (from 9.97 to 8.11 WER points)
- Synthetic speech captures the essential acoustic-phonetic characteristics despite a ~28% relative performance gap

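The learning-curve correlation reported above can be reproduced from per-checkpoint WER values with `scipy.stats.pearsonr`; a small sketch with hypothetical numbers (placeholders, not the paper's measurements):

```python
from scipy.stats import pearsonr

# Hypothetical per-checkpoint WERs for the two test conditions
human_wers = [0.95, 0.62, 0.41, 0.29, 0.25]
synth_wers = [1.02, 0.71, 0.50, 0.37, 0.33]

r, p_value = pearsonr(human_wers, synth_wers)
print(f"Pearson's r = {r:.3f} (p = {p_value:.3g})")
```
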
### TTS Performance (Orpheus 3B v0.1)

| Metric | Score |
|--------|-------|
| **Mean Opinion Score (MOS), synthesized speech** | 3.9 ± 0.15 |
| Human Recording MOS | 4.6 ± 0.1 |

*Evaluated by native Sukuma speakers using a 5-point Likert scale.*

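The MOS values above are means of per-sample Likert ratings; a tiny sketch of that aggregation with made-up ratings, using only the standard library:

```python
from statistics import mean, stdev

# Hypothetical 5-point Likert ratings from native-speaker evaluators
ratings = [4, 4, 3, 5, 4, 4, 3, 4]

print(f"MOS = {mean(ratings):.1f} ± {stdev(ratings):.2f}")
```
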
---

## Considerations for Using the Data

### Known Limitations

1. **Domain specificity**: The data is drawn almost entirely from biblical text, which may not fully represent everyday conversational Sukuma
2. **Diacritic variations**: Sukuma has two written forms (with and without diacritics); this dataset uses the non-diacritic form
3. **Single source**: Speaker diversity is limited because all recordings come from a single source

### Linguistic Challenges

- Sukuma is a **tonal language** with complex phonological features
- The language lacks standardized orthographic conventions across written materials
- Diacritic and non-diacritic text representations can affect vocabulary size and evaluation metrics; a normalization sketch follows this list
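
When scoring systems against text that mixes the two representations, stripping combining diacritics before computing metrics keeps comparisons consistent. A generic sketch with Python's `unicodedata`; note that orthography-specific letters stored as distinct code points (rather than base-plus-combining-mark sequences) would need an explicit mapping instead:

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    """Drop combining marks so accented and plain spellings compare equally."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

# Illustrative only; not a word from the corpus
print(strip_diacritics("bhakwé"))  # -> "bhakwe"
```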
 
### Personal and Sensitive Information

The dataset contains religious text (Bible readings) and does not include personal or sensitive information about individuals.

---

## Citation

If you use this dataset, please cite:

```bibtex
@inproceedings{mgonzo2025sukuma,
  title     = {Learning from Scarcity: Building and Benchmarking Speech Technology for Sukuma},
  author    = {Mgonzo, Macton and Oketch, Kezia and Etori, Naome and Mang'eni, Winnie and Nyaki, Elizabeth and Mollel, Michael S.},
  booktitle = {Proceedings of the Association for Computational Linguistics},
  year      = {2025}
}
```

---

## Additional Information

### Authors

| Name | Affiliation | Contact |
|------|-------------|---------|
| **Macton Mgonzo** | Brown University | macton_mgonzo@brown.edu |
| **Kezia Oketch** | University of Notre Dame | |
| **Naome Etori** | University of Minnesota - Twin Cities | |
| **Winnie Mang'eni** | Pawa AI | |
| **Elizabeth Nyaki** | Pawa AI, Sartify Company Limited | |
| **Michael S. Mollel** | Sartify Company Limited | |

### Dataset Curators

- [Sartify Company Limited](https://www.sartify.com/)
- [Pawa AI](https://www.pawa-ai.com/)

### Acknowledgments

We are grateful to [Sartify Company Limited](https://www.sartify.com/) and [Pawa AI](https://www.pawa-ai.com/) for their instrumental role in initiating this project and for providing the data access necessary to develop and evaluate our models. We also sincerely thank the volunteers who generously dedicated their time to the evaluation process.

### Licensing Information

This dataset is released under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/).

### Contributions

We welcome contributions that expand and improve this dataset. Areas of interest include:

- Additional Sukuma speech data beyond religious content
- Conversational and everyday language recordings
- Multi-speaker recordings
- Diacritic-annotated transcriptions

### Ethical Considerations

- Consent was obtained from all human participants involved in data annotation
- Participants were informed about the technology's limitations and potential impacts
- The authors acknowledge that models trained on this data may inherit biases present in the source material

### Related Resources

- 🤖 **TTS Model:** [Mollel/Sukuma-TTS](https://huggingface.co/Mollel/Sukuma-TTS)
- 📂 **GitHub:** [sukuma-voices](https://github.com/your-username/sukuma-voices)

---

<p align="center">
  <i>This dataset represents an important step toward inclusive speech technology for African languages.</i>
</p>

<p align="center">
  <a href="https://www.sartify.com/">Sartify</a> •
  <a href="https://www.pawa-ai.com/">Pawa AI</a> •
  <a href="https://huggingface.co/Mollel/Sukuma-TTS">TTS Model</a>
</p>