---
dataset_info:
  features:
  - name: english_text
    dtype: string
  - name: english_audio
    dtype: audio
  - name: naija_text
    dtype: string
  - name: naija_audio
    dtype: audio
  - name: speaker
    dtype: string
  splits:
  - name: igbo
    num_bytes: 77329160
    num_examples: 500
  - name: yoruba
    num_bytes: 107895468
    num_examples: 500
  - name: hausa
    num_bytes: 238658365
    num_examples: 500
  download_size: 423205037
  dataset_size: 423882993
configs:
- config_name: default
  data_files:
  - split: igbo
    path: data/igbo-*
  - split: yoruba
    path: data/yoruba-*
  - split: hausa
    path: data/hausa-*
license: apache-2.0
task_categories:
- automatic-speech-recognition
- text-to-speech
- translation
- text-classification
language:
- en
- ig
- yo
- ha
multilinguality: multilingual
language_creators:
- AfroVoices
tags:
- audio
- text
- speech-translation
- text-translation
- machine-translation
- automatic-speech-recognition
- low-resource
- derived-from-fleurs
- afrovoices
- igbo
- yoruba
- hausa
pretty_name: Hypa_Fleurs
size_categories:
- 1K<n<10K
---

# Hypa_Fleurs

**Hypa_Fleurs** is an open-source multilingual, multimodal dataset with a long-term vision of advancing speech and language technology for low-resource African languages. It leverages the English split of the [Google Fleurs](https://ai.google/tools/datasets/google-fleurs) dataset to create parallel speech and text data for a wide range of low-resource African languages. **In this initial release, professional AfroVoices experts translated the original English texts into three under-resourced African languages: Igbo (`ig`), Yoruba (`yo`), and Hausa (`ha`).**

In addition to the text-to-text translations, the dataset includes parallel speech recordings in which the experts read the corresponding English and local-language texts aloud. This dataset provides:

- **Text-to-Text Translations:** English sentences paired with their translations in Igbo, Yoruba, and Hausa.
- **Speech-to-Speech Recordings:** Audio recordings of native speakers reading both the English texts and the corresponding translated texts.

This dual modality (text and audio) supports various downstream tasks such as machine translation, automatic speech recognition (ASR), text-to-speech (TTS), language identification (LI), and cross-lingual transfer learning.

---

## Dataset Components

### Text-to-Text Translations

- **Source:** Derived from the English split of the Google Fleurs dataset.
- **Languages:** English paired with translations in:
  - Igbo
  - Yoruba
  - Hausa
- **Format:** Typically stored in CSV or JSON files where each record contains:
  - The English sentence.
  - The corresponding translations for each target language.
- **Splits:** The dataset contains one split per target language (Igbo, Yoruba, and Hausa), mirroring the Fleurs partitioning for these African languages.

### Speech-to-Speech Recordings

- **Source:** Audio recordings by AfroVoices experts.
- **Languages:** Parallel recordings for:
  - English
  - Igbo
  - Yoruba
  - Hausa
- **Format:** Audio files (e.g., WAV) with accompanying metadata files (e.g., CSV/JSON) that include:
  - Unique identifier linking to text entries.
  - Language code.
  - Duration, sample rate, and other audio properties.
- **Parallelism:** Each audio file is aligned with the corresponding text in both the source (English) and target languages.
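One way the metadata files above could be consumed is by joining audio records to their text entries via the shared identifier. The sketch below is a hypothetical illustration: the field names (`id`, `language`, `duration`) are illustrative stand-ins, not the dataset's actual metadata schema.

```python
# Hypothetical sketch: pairing audio metadata with text entries through a
# shared identifier. Field names here are illustrative, not the dataset's
# actual metadata schema.

def join_audio_to_text(audio_meta, text_meta):
    """Pair each audio record with its text entry via the shared id."""
    text_by_id = {rec["id"]: rec for rec in text_meta}
    pairs = []
    for audio in audio_meta:
        text = text_by_id.get(audio["id"])
        if text is not None:
            # Merge the two records into a single aligned example.
            pairs.append({**text, **audio})
    return pairs

text_meta = [{"id": "0001", "english_text": "Hello.", "naija_text": "Ndewo."}]
audio_meta = [{"id": "0001", "language": "ig", "duration": 2.4}]
print(join_audio_to_text(text_meta, audio_meta))
```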

---

## Data Structure

### Data Instances
A typical data instance contains the source English text, the target-language text (Igbo, Yoruba, or Hausa), the corresponding English and target-language audio recordings, and the speaker name. Language names (or codes) are embedded in the split names rather than stored as a field.

```json
{
  "Split": "igbo",
  "english_text": "A tornado is a spinning column of very low-pressure air, which sucks the surrounding air inward and upward.",
  "naija_text": "Oke ifufe bụ kọlụm na-atụgharị ikuku dị obere, nke na-amịpụta ikuku gbara ya gburugburu n'ime na elu.",
  "english_audio": {
    "path": "[hypaai/Hypa_Fleurs/english/data/0001_English.wav]", // Relative path within the dataset
    "array": [...], // Decoded audio array (when loaded with datasets)
    "sampling_rate": 16000
  },
  "naija_audio": {
    "path": "[hypaai/Hypa_Fleurs/igbo/data/0001_Igbo.wav]", // Relative path within the dataset
    "array": [...], // Decoded audio array (when loaded with datasets)
    "sampling_rate": 16000
  },
  "speaker": "Gift"
}
```

### Data Fields

- `english_text` (string): The original English transcription derived from the FLEURS dataset.
- `naija_text` (string): The human-translated text in the target language.
- `english_audio` (`datasets.Audio`): The recorded speech in English. When loaded, provides the path, decoded audio array, and sampling rate (16,000 Hz).
- `naija_audio` (`datasets.Audio`): The recorded speech in the target language. When loaded, provides the path, decoded audio array, and sampling rate (16,000 Hz).
- `speaker` (string): The name of the speaker; currently the only additional field.
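Because each audio feature decodes to an array at a fixed 16 kHz sampling rate, clip duration follows directly from the array length. A minimal stdlib sketch, using a toy array in place of a real decoded waveform:

```python
# Minimal sketch: recovering clip duration from a decoded audio feature.
# The dict mirrors the documented audio schema; the array is a toy
# stand-in for a real decoded waveform.

def audio_duration_seconds(audio):
    """Duration = number of samples / sampling rate."""
    return len(audio["array"]) / audio["sampling_rate"]

example_audio = {
    "path": "0001_Igbo.wav",
    "array": [0.0] * 32000,   # 32,000 samples at 16 kHz -> 2 seconds
    "sampling_rate": 16000,
}
print(audio_duration_seconds(example_audio))  # 2.0
```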

> Below is a bird's eye view of the directory structure for this repository:

```
Hypa_Fleurs/
├── README.md
├── LICENSE
├── data/
│   ├── text/
│   └── audio/
│       ├── english/
│       ├── igbo/
│       ├── yoruba/
│       └── hausa/
├── metadata/
│   ├── text_metadata.json
│   └── audio_metadata.json
└── examples/
    └── load_dataset.py
```

---

## Usage

### Loading with Hugging Face Datasets

The dataset is available on Hugging Face and can be loaded using the [`datasets`](https://huggingface.co/docs/datasets/) library. For example:

```python
from datasets import load_dataset

# Load the Igbo split (parallel English-Igbo text and audio)
dataset = load_dataset("hypaai/Hypa_Fleurs", split="igbo")

print(dataset[0])
```

---

## Data Preparation

- **Source Data:** We started with the English split of Google Fleurs.
- **Translation:** Professional AfroVoices experts translated the texts into Igbo, Yoruba, and Hausa.
- **Recording:** The same experts recorded high-quality audio for both the original English texts and the translations.
- **Alignment:** Each text entry is aligned with its corresponding audio recording, ensuring consistency across modalities.
- **Preprocessing:** All data were processed to ensure uniformity in encoding (UTF-8 for text, standardized audio formats) and split distribution across each language.
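As a rough sanity check on the split distribution, the `dataset_info` figures in the card's metadata imply the average on-disk footprint per example. A small stdlib sketch using those published numbers:

```python
# Average bytes per example for each split, computed from the num_bytes
# and num_examples values published in the dataset_info block above.
split_bytes = {"igbo": 77_329_160, "yoruba": 107_895_468, "hausa": 238_658_365}
num_examples = 500  # each split holds 500 examples

avg_bytes = {name: total / num_examples for name, total in split_bytes.items()}
for name, avg in avg_bytes.items():
    # Report in KiB for readability.
    print(f"{name}: ~{avg / 1024:.0f} KiB per example")
```

The Hausa split averages roughly three times the per-example size of the Igbo split, which likely reflects longer recordings rather than more examples.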

---

## Applications

The Hypa_Fleurs dataset can be used for various research and development tasks, including but not limited to:

- **Machine Translation:** Training and evaluating translation models between English and African languages.
- **Speech Recognition (ASR):** Developing systems that can transcribe speech in under-resourced languages.
- **Text-to-Speech (TTS):** Creating natural-sounding TTS systems using paired audio-text data.
- **Cross-lingual Learning:** Supporting transfer learning and multilingual model training.
- **Language Identification (LI):** Identifying spoken or written languages (speech or text).

---

## Licensing and Citation

This dataset is released under the [Apache 2.0 license](./LICENSE). Please refer to the LICENSE file for full details.

When using **Hypa_Fleurs** in your work, please cite both this dataset and the original [Google Fleurs](https://ai.google/tools/datasets/google-fleurs) dataset as follows:

```bibtex
@inproceedings{conneau2023fleurs,
  title={FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
  author={Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
  booktitle={2022 IEEE Spoken Language Technology Workshop (SLT)},
  year={2023},
  organization={IEEE}
}

@misc{hypafleurs,
  title={Hypa_Fleurs: Multilingual Text and Speech Dataset for Low-Resource Languages},
  author={AfroVoices},
  note={Open-sourced on Hugging Face},
  year={2025},
  url={https://huggingface.co/datasets/hypaai/Hypa_Fleurs}
}
```

---

## Acknowledgements

- **Google Fleurs Team:** For creating the foundational dataset.
- **AfroVoices Experts:** For their translation expertise and high-quality audio recordings.
- **Community Contributions:** We thank all contributors and users who help improve this dataset.

---

## Contact and Contributions

For any questions, issues, or contributions, please open an issue in this repository or contact [hypa.ai.ng@gmail.com](mailto:hypa.ai.ng@gmail.com). Contributions are welcome!

---

## Closing Remarks

By making Hypa_Fleurs available, we hope to empower research and development in multilingual and speech technologies for African languages.

Hypa AI remains steadfast in its mission to pioneer intelligent solutions that are not just technologically advanced but are also culturally aware, ensuring that the future of AI is as diverse and inclusive as the world it serves.

AfroVoices, a subsidiary of Hypa AI, is dedicated to amplifying African voices, languages, and cultures in the intelligence age. Focused on bridging the digital representation gap, AfroVoices curates datasets and resources for African languages, promoting inclusivity and cultural appreciation in AI technologies. Their mission goes beyond technological innovation, aiming to celebrate the richness of African linguistic diversity on a global stage.

---