Update README.md
- RobotsMali
- afvoices
- asr
pretty_name: Robots
---

# 📘 **African Next Voices – Bambara (AfVoices)**

The **AfVoices** dataset is, at its release in late 2025, the largest open corpus of spontaneous Bambara speech. It contains **423 hours** of segmented audio drawn from **612 hours** of original raw recordings collected across southern Mali. Speech was recorded in natural, conversational settings and annotated with a semi-automated transcription pipeline that combines ASR pre-labels with human corrections.

---

## 🔎 **Quick Facts**

| Category | Value |
| ------------------------------------- | ------------------------------------------------------------------ |
| **Total raw hours** | 612 h (1,777 raw recordings; publicly available on GCS) |
| **Total segmented hours** | 423 h (874,762 segments) |
| **Speakers** | 512 |
| **Regions** | Bamako, Ségou, Sikasso, Bagineda, Bougouni |
| **Avg. segment duration** | ~2 seconds |
| **Subsets** | 159 h human-corrected, 212 h model-annotated, 52 h short (<1 s) |
| **Age distribution** | Broad, from young to elderly speakers (90% between 18 and 45) |
| **Topics** | Health, agriculture, miscellaneous (art, education, history, etc.) |
| **SNR distribution (raw recordings)** | 71.75% high or very high SNR |
| **Train / test split** | 155 h / 4 h |

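The headline figures are internally consistent, which is worth verifying when scripting against the corpus. A minimal sanity check using only numbers from the table above (the variable names are illustrative):

```python
# Figures from the Quick Facts table of the AfVoices dataset card.
RAW_HOURS = 612        # original raw recordings
SEGMENTED_HOURS = 423  # after VAD segmentation
SUBSET_HOURS = {"human_corrected": 159, "model_annotated": 212, "short": 52}

# The three subsets partition the segmented audio exactly.
assert sum(SUBSET_HOURS.values()) == SEGMENTED_HOURS

# Segmentation retained roughly 70% of the raw duration,
# matching the Silero VAD figure given below.
retention = SEGMENTED_HOURS / RAW_HOURS
print(f"VAD retention: {retention:.0%}")  # VAD retention: 69%
```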
---

## **Motivation**

The **African Next Voices (ANV)** project is a multi-country effort to gather over **9,000 hours of speech** across 18 African languages. Its goal is to build high-quality datasets that empower local communities, support inclusive AI research, and provide strong foundations for ASR in underrepresented languages.

As part of this initiative, **RobotsMali** led the Bambara data collection for Mali. This dataset reflects RobotsMali's broader mission to advance AI and NLP research on Malian languages, with a long-term focus on improving education, access, and technology across Mali and the wider Manding linguistic region.

---

## 🎙️ **Characteristics of the Dataset**

### **Data Collection**

* Speech was collected by trained **facilitators** who guided participants, ensured audio quality, and encouraged natural, topic-focused conversations.
* All recordings are **spontaneous speech**, not read text.
* A custom **Flutter mobile app** ([open-source](https://github.com/RobotsMali-AI/Africa-Voice-App)) was used to simplify the process and reduce training time.
* Geographic focus: **Southern Mali**, to limit extreme accent variation and build a clean baseline corpus.

### **Segmentation and Preprocessing**

* Raw audio was segmented with **Silero VAD**, retaining ~70% of the original duration.
* Segments range from **240 ms to 30 s**.
* Voice activity detection removed long silences and improved data usability.

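If you post-filter segments yourself (for example, to drop or keep the sub-second tail), the stated duration bounds can be applied with a small helper. This is a sketch: only the 240 ms / 30 s limits come from the card, and the helper name and sample list are illustrative.

```python
# Corpus segment-duration bounds from the dataset card, in seconds.
MIN_DUR, MAX_DUR = 0.24, 30.0

def in_bounds(duration_s: float) -> bool:
    """True if a segment duration lies within the corpus limits."""
    return MIN_DUR <= duration_s <= MAX_DUR

# Illustrative durations: the first is below the 240 ms floor,
# the last exceeds the 30 s ceiling.
durations = [0.1, 0.8, 2.1, 29.5, 31.0]
kept = [d for d in durations if in_bounds(d)]
print(kept)  # [0.8, 2.1, 29.5]
```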
### **Transcriptions**

* Segments were pre-transcribed with the ASR model **soloni-114m-tdt-ctc-v0**.
* Human annotators corrected the transcripts.
* A second model (**soloni-114m-tdt-ctc-v2**) was trained on the corrected transcripts and used to regenerate improved labels.
* Two automatic transcription variants therefore exist for each sample: **v1** (from soloni-v0) and **v2** (from soloni-v2).

### **Acoustic Event Tags**

The following tags appear in transcriptions:

| Tag | Meaning |
| --------- | ------------------------------------------------------------------------------------------ |
| `[um]` | Vocalized pauses, filler sounds |
| `[cs]` | Code-switched or foreign word |
| `[noise]` | Background noise (applause, coughing, children, etc.) |
| `[?]` | Inaudible or overlapped speech |
| `[pause]` | Long silence (>5 s, or >3 s at segment boundaries); rarely used due to VAD segmentation |

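Whether you strip these tags or keep them as special tokens depends on your ASR tokenizer. A minimal sketch of stripping them with a regex, assuming the tag inventory above (note that `[?]` must be escaped); the sample sentence is invented for illustration:

```python
import re

# Tag inventory from the table above; "?" is escaped for the regex.
TAGS = ["um", "cs", "noise", r"\?", "pause"]
TAG_RE = re.compile(r"\[(?:" + "|".join(TAGS) + r")\]")

def strip_tags(transcript: str) -> str:
    """Remove acoustic-event tags and collapse leftover whitespace."""
    return " ".join(TAG_RE.sub(" ", transcript).split())

sample = "[um] a bɛ se [noise] ka taa [?]"
print(strip_tags(sample))  # a bɛ se ka taa
```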
---

## 📂 **Subsets**

### **1. Human-corrected (159 h, 260k samples)**

* Fully reviewed and corrected by annotators.
* The only subset with a definitive `text` field containing the validated transcription.

### **2. Model-annotated (212 h, 355k samples)**

* Includes automatic labels: `v1` (soloni-v0) and `v2` (soloni-v2).
* No human review.

### **3. Short subset (52 h, 259k samples)**

* Segments under 1 second (formulaic expressions, discourse markers).
* Excluded from human annotation to optimize the annotation effort.
* Automatically labeled (v1 & v2).

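When mixing subsets for training, a common pattern is to prefer the validated `text` where it exists and fall back to the stronger model labels otherwise. The field names `text`, `v1`, and `v2` follow the card, but the dict layout and helper below are illustrative, not the dataset's actual schema:

```python
def best_label(sample: dict) -> str:
    """Prefer the human-validated text; fall back to the v2 model labels.

    Model-annotated and short-subset samples may lack a usable `text`
    field, so .get() handles both a missing key and a None value.
    """
    return sample.get("text") or sample["v2"]

# Illustrative samples (Bambara snippets invented for the example).
corrected = {"text": "i ni ce", "v1": "ini ce", "v2": "i ni ce"}
model_only = {"text": None, "v1": "an ka taa", "v2": "an ka taa"}

print(best_label(corrected))   # i ni ce
print(best_label(model_only))  # an ka taa
```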
---

## ⚠️ **Limitations**

* **Clean dataset vs. real-world noise:**
  Over 70% of recordings can be categorized as relatively clean speech. Models trained solely on this dataset may underperform in the noisy street or radio environments typical in Mali. See this [report](https://zenodo.org/records/17672774) for more on the strengths and weaknesses of RobotsMali's ASR models.

* **Reduced code-switching:**
  French terms were often replaced by `[cs]` or normalized into Bambara phonology. This improves model stability but reduces realism for natural bilingual speech.

* **Geographic homogeneity:**
  Collection focused on the southern region to control accent variability; broader dialectal coverage would require additional data.

* **Simplified linguistic conditions:**
  Overlaps, multi-speaker settings, and conversational chaos are minimized, again improving training stability at the cost of deployment realism.

---

## 📑 **Citation**

```bibtex
@article{diarra2025afvoices,
  title={Dealing with the Hard Facts of Low-Resource African NLP},
  author={Diarra, Yacouba and Coulibaly, Nouhoum Souleymane and Kamaté, Panga Azazia and Tall, Madani Amadou and Koné, Emmanuel Élisé and Dembélé, Aymane and Leventhal, Michael},
  journal={Preprint},
  year={2025},
  note={arXiv preprint forthcoming}
}
```

---

You may want to download the original 612-hour dataset with its associated metadata for research purposes or to create a derivative. The code and manifest files for downloading those files from Google Cloud Storage are in this repository: [RobotsMali-AI/afvoices](https://github.com/RobotsMali-AI/afvoices). Do not hesitate to open an issue for help or suggestions 🤗