---
license:
  - cc-by-sa-4.0
  - cc-by-4.0
annotation_creators:
  - human-annotated
  - crowdsourced
language_creators:
  - creator_1
tags:
  - audio
  - automatic-speech-recognition
  - text-to-speech
language:
  - ach
  - aka
  - dag
  - dga
  - ewe
  - fat
  - ful
  - hau
  - ibo
  - kpo
  - lin
  - lug
  - mas
  - mlg
  - nyn
  - sna
  - sog
  - swa
  - twi
  - yor
multilinguality:
  - multilingual
pretty_name: Waxal NLP Datasets
task_categories:
  - automatic-speech-recognition
  - text-to-speech
source_datasets:
  - UGSpeechData
  - DigitalUmuganda/AfriVoice
  - original
configs:
  - config_name: ach_asr
    data_files:
      - split: train
        path: data/ASR/ach/ach-train-*
      - split: validation
        path: data/ASR/ach/ach-validation-*
      - split: test
        path: data/ASR/ach/ach-test-*
      - split: unlabeled
        path: data/ASR/ach/ach-unlabeled-*
  - config_name: ach_tts
    data_files:
      - split: train
        path: data/TTS/ach/ach-train-*
      - split: validation
        path: data/TTS/ach/ach-validation-*
      - split: test
        path: data/TTS/ach/ach-test-*
  - config_name: aka_asr
    data_files:
      - split: train
        path: data/ASR/aka/aka-train-*
      - split: validation
        path: data/ASR/aka/aka-validation-*
      - split: test
        path: data/ASR/aka/aka-test-*
      - split: unlabeled
        path: data/ASR/aka/aka-unlabeled-*
  - config_name: dag_asr
    data_files:
      - split: train
        path: data/ASR/dag/dag-train-*
      - split: validation
        path: data/ASR/dag/dag-validation-*
      - split: test
        path: data/ASR/dag/dag-test-*
      - split: unlabeled
        path: data/ASR/dag/dag-unlabeled-*
  - config_name: dga_asr
    data_files:
      - split: train
        path: data/ASR/dga/dga-train-*
      - split: validation
        path: data/ASR/dga/dga-validation-*
      - split: test
        path: data/ASR/dga/dga-test-*
      - split: unlabeled
        path: data/ASR/dga/dga-unlabeled-*
  - config_name: ewe_asr
    data_files:
      - split: train
        path: data/ASR/ewe/ewe-train-*
      - split: validation
        path: data/ASR/ewe/ewe-validation-*
      - split: test
        path: data/ASR/ewe/ewe-test-*
      - split: unlabeled
        path: data/ASR/ewe/ewe-unlabeled-*
  - config_name: fat_tts
    data_files:
      - split: train
        path: data/TTS/fat/fat-train-*
      - split: validation
        path: data/TTS/fat/fat-validation-*
      - split: test
        path: data/TTS/fat/fat-test-*
  - config_name: ful_asr
    data_files:
      - split: train
        path: data/ASR/ful/ful-train-*
      - split: validation
        path: data/ASR/ful/ful-validation-*
      - split: test
        path: data/ASR/ful/ful-test-*
      - split: unlabeled
        path: data/ASR/ful/ful-unlabeled-*
  - config_name: ful_tts
    data_files:
      - split: train
        path: data/TTS/ful/ful-train-*
      - split: validation
        path: data/TTS/ful/ful-validation-*
      - split: test
        path: data/TTS/ful/ful-test-*
  - config_name: hau_tts
    data_files:
      - split: train
        path: data/TTS/hau/hau-train-*
      - split: validation
        path: data/TTS/hau/hau-validation-*
      - split: test
        path: data/TTS/hau/hau-test-*
  - config_name: ibo_tts
    data_files:
      - split: train
        path: data/TTS/ibo/ibo-train-*
      - split: validation
        path: data/TTS/ibo/ibo-validation-*
      - split: test
        path: data/TTS/ibo/ibo-test-*
  - config_name: kpo_asr
    data_files:
      - split: train
        path: data/ASR/kpo/kpo-train-*
      - split: validation
        path: data/ASR/kpo/kpo-validation-*
      - split: test
        path: data/ASR/kpo/kpo-test-*
      - split: unlabeled
        path: data/ASR/kpo/kpo-unlabeled-*
  - config_name: lin_asr
    data_files:
      - split: train
        path: data/ASR/lin/lin-train-*
      - split: validation
        path: data/ASR/lin/lin-validation-*
      - split: test
        path: data/ASR/lin/lin-test-*
      - split: unlabeled
        path: data/ASR/lin/lin-unlabeled-*
  - config_name: lug_asr
    data_files:
      - split: train
        path: data/ASR/lug/lug-train-*
      - split: validation
        path: data/ASR/lug/lug-validation-*
      - split: test
        path: data/ASR/lug/lug-test-*
      - split: unlabeled
        path: data/ASR/lug/lug-unlabeled-*
  - config_name: lug_tts
    data_files:
      - split: train
        path: data/TTS/lug/lug-train-*
      - split: validation
        path: data/TTS/lug/lug-validation-*
      - split: test
        path: data/TTS/lug/lug-test-*
  - config_name: mas_asr
    data_files:
      - split: train
        path: data/ASR/mas/mas-train-*
      - split: validation
        path: data/ASR/mas/mas-validation-*
      - split: test
        path: data/ASR/mas/mas-test-*
      - split: unlabeled
        path: data/ASR/mas/mas-unlabeled-*
  - config_name: mlg_asr
    data_files:
      - split: train
        path: data/ASR/mlg/mlg-train-*
      - split: validation
        path: data/ASR/mlg/mlg-validation-*
      - split: test
        path: data/ASR/mlg/mlg-test-*
      - split: unlabeled
        path: data/ASR/mlg/mlg-unlabeled-*
  - config_name: nyn_asr
    data_files:
      - split: train
        path: data/ASR/nyn/nyn-train-*
      - split: validation
        path: data/ASR/nyn/nyn-validation-*
      - split: test
        path: data/ASR/nyn/nyn-test-*
      - split: unlabeled
        path: data/ASR/nyn/nyn-unlabeled-*
  - config_name: nyn_tts
    data_files:
      - split: train
        path: data/TTS/nyn/nyn-train-*
      - split: validation
        path: data/TTS/nyn/nyn-validation-*
      - split: test
        path: data/TTS/nyn/nyn-test-*
  - config_name: sna_asr
    data_files:
      - split: train
        path: data/ASR/sna/sna-train-*
      - split: validation
        path: data/ASR/sna/sna-validation-*
      - split: test
        path: data/ASR/sna/sna-test-*
      - split: unlabeled
        path: data/ASR/sna/sna-unlabeled-*
  - config_name: sog_asr
    data_files:
      - split: train
        path: data/ASR/sog/sog-train-*
      - split: validation
        path: data/ASR/sog/sog-validation-*
      - split: test
        path: data/ASR/sog/sog-test-*
      - split: unlabeled
        path: data/ASR/sog/sog-unlabeled-*
  - config_name: swa_tts
    data_files:
      - split: train
        path: data/TTS/swa/swa-train-*
      - split: validation
        path: data/TTS/swa/swa-validation-*
      - split: test
        path: data/TTS/swa/swa-test-*
  - config_name: twi_tts
    data_files:
      - split: train
        path: data/TTS/twi/twi-train-*
      - split: validation
        path: data/TTS/twi/twi-validation-*
      - split: test
        path: data/TTS/twi/twi-test-*
  - config_name: yor_tts
    data_files:
      - split: train
        path: data/TTS/yor/yor-train-*
      - split: validation
        path: data/TTS/yor/yor-validation-*
      - split: test
        path: data/TTS/yor/yor-test-*
dataset_info:
  - config_name: ach_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: ach_tts
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: text
        dtype: string
      - name: locale
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: aka_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: dag_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: dga_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: ewe_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: fat_tts
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: text
        dtype: string
      - name: locale
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: ful_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: ful_tts
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: text
        dtype: string
      - name: locale
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: hau_tts
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: text
        dtype: string
      - name: locale
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: ibo_tts
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: text
        dtype: string
      - name: locale
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: kpo_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: lin_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: lug_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: lug_tts
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: text
        dtype: string
      - name: locale
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: mas_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: mlg_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: nyn_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: nyn_tts
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: text
        dtype: string
      - name: locale
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: sna_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: sog_asr
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: transcription
        dtype: string
      - name: language
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: swa_tts
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: text
        dtype: string
      - name: locale
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: twi_tts
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: text
        dtype: string
      - name: locale
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
  - config_name: yor_tts
    features:
      - name: id
        dtype: string
      - name: speaker_id
        dtype: string
      - name: text
        dtype: string
      - name: locale
        dtype: string
      - name: gender
        dtype: string
      - name: audio
        dtype: audio
---

# Waxal Datasets


## Dataset Description

The Waxal project provides datasets for both automatic speech recognition (ASR) and text-to-speech (TTS) in African languages. These datasets were created and released to support research that improves the accuracy and fluency of speech and language technology for these underserved languages, and to serve as a repository for digital preservation.

The Waxal datasets are collections acquired through partnerships with Makerere University, The University of Ghana, Digital Umuganda, and Media Trust. Acquisition was funded by Google and the Gates Foundation under an agreement to make the dataset openly accessible.

### ASR Dataset

The Waxal ASR dataset is a collection of data in 14 African languages. It consists of approximately 1,250 hours of transcribed natural speech from a wide variety of voices. The 14 languages in this dataset represent over 100 million speakers across 40 Sub-Saharan African countries.

| Provider | Languages | License |
| --- | --- | --- |
| Makerere University | Acholi, Luganda, Masaaba, Nyankole, Soga | CC-BY-4.0 |
| University of Ghana | Akan, Ewe, Dagbani, Dagaare, Ikposo | CC-BY-4.0 |
| Digital Umuganda | Fula, Lingala, Shona, Malagasy | CC-BY-4.0 |

### TTS Dataset

The Waxal TTS dataset is a collection of text-to-speech data in 10 African languages. It consists of approximately 240 hours of scripted natural speech from a wide variety of voices.

| Provider | Languages | License |
| --- | --- | --- |
| Makerere University | Acholi, Luganda, Kiswahili, Nyankole | CC-BY-4.0 |
| University of Ghana | Akan (Fante, Twi) | CC-BY-4.0 |
| Media Trust | Fula, Igbo, Hausa, Yoruba | CC-BY-4.0 |

## How to Use

The `datasets` library allows you to load and pre-process this dataset in pure Python, at scale.

First, ensure you have the necessary dependencies installed to handle audio data. You will need `ffmpeg` installed on your system.

**Google Colab / Ubuntu**

```bash
sudo apt-get install ffmpeg
pip install datasets[audio]
```

**macOS**

```bash
brew install ffmpeg
pip install datasets[audio]
```

**Windows**

Download and install `ffmpeg` from ffmpeg.org and ensure it is in your `PATH`, then:

```bash
pip install datasets[audio]
```

If you encounter `RuntimeError: Could not load libtorchcodec`, please ensure `ffmpeg` is correctly installed, or check for compatibility between your `torch`, `torchaudio`, and `torchcodec` versions.
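To diagnose such version mismatches, a small ad-hoc helper can report what is installed in the current environment (`installed_versions` is a sketch for this card, not part of any library):

```python
import importlib

def installed_versions(packages):
    """Return a mapping of package name -> version string (or a note if absent)."""
    versions = {}
    for pkg in packages:
        try:
            mod = importlib.import_module(pkg)
            versions[pkg] = getattr(mod, "__version__", "unknown")
        except ImportError:
            versions[pkg] = "not installed"
    return versions

print(installed_versions(("datasets", "torch", "torchaudio", "torchcodec")))
```

Comparing the reported versions against the compatibility matrix published by the decoding backend you use is usually enough to spot the mismatch.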

### Loading ASR Data

To load ASR data for a specific language, specify the configuration name, e.g. `sna_asr` for Shona ASR data.

```python
from datasets import load_dataset

# Load Shona (sna) ASR dataset
asr_data = load_dataset("google/WaxalNLP", "sna_asr")

# Access splits
train = asr_data["train"]
val = asr_data["validation"]
test = asr_data["test"]

# Example: accessing audio and other fields
example = train[0]
print(f"Transcription: {example['transcription']}")
print(f"Sampling Rate: {example['audio']['sampling_rate']}")
# 'array' contains the decoded audio samples as a NumPy array
print(f"Audio Array Shape: {example['audio']['array'].shape}")
```
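The audio is decoded at its stored sampling rate; if your model expects a different rate, the `datasets` library's `Audio(sampling_rate=...)` feature can be used to cast the column. Purely to illustrate what resampling does to the array, here is a naive linear-interpolation sketch (`resample_linear` is an illustration written for this card, not a production resampler):

```python
import numpy as np

def resample_linear(audio, orig_sr, target_sr):
    """Naive linear-interpolation resampler (illustration only)."""
    n_target = int(round(len(audio) * target_sr / orig_sr))
    old_t = np.arange(len(audio)) / orig_sr
    new_t = np.arange(n_target) / target_sr
    return np.interp(new_t, old_t, audio)

# 1 second of a 440 Hz tone at 48 kHz, downsampled to 16 kHz
audio_48k = np.sin(2 * np.pi * 440 * np.arange(48_000) / 48_000)
audio_16k = resample_linear(audio_48k, 48_000, 16_000)
print(audio_16k.shape)  # (16000,)
```

In practice, prefer a proper resampler (e.g. casting with `Audio(sampling_rate=16000)`), which applies anti-aliasing filtering that linear interpolation does not.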

### Loading TTS Data

To load TTS data for a specific language, specify the configuration name, e.g. `swa_tts` for Swahili TTS data.

```python
from datasets import load_dataset

# Load Swahili (swa) TTS dataset
tts_data = load_dataset("google/WaxalNLP", "swa_tts")

# Access splits
train = tts_data["train"]
```

## Dataset Structure

### ASR Data Fields

```python
{
  'id': 'sna_0',
  'speaker_id': '...',
  'audio': {
    'array': [...],
    'sampling_rate': 16_000
  },
  'transcription': '...',
  'language': 'sna',
  'gender': 'Female',
}
```

- `id`: Unique identifier.
- `speaker_id`: Unique identifier for the speaker.
- `audio`: Audio data.
- `transcription`: Transcription of the audio.
- `language`: ISO 639-2 language code.
- `gender`: Speaker gender (`'Male'`, `'Female'`, or empty).

### TTS Data Fields

```python
{
  'id': 'swa_0',
  'speaker_id': '...',
  'audio': {
    'array': [...],
    'sampling_rate': 16_000
  },
  'text': '...',
  'locale': 'swa',
  'gender': 'Female',
}
```

- `id`: Unique identifier.
- `speaker_id`: Unique identifier for the speaker.
- `audio`: Audio data.
- `text`: Text script.
- `locale`: ISO 639-2 language code.
- `gender`: Speaker gender.

## Data Splits

For the ASR dataset, the data with transcriptions is split as follows:

- `train`: 80% of labeled data.
- `validation`: 10% of labeled data.
- `test`: 10% of labeled data.

The `unlabeled` split contains all samples that do not have a corresponding transcription.

The TTS dataset follows a similar structure, with data split into `train`, `validation`, and `test` sets.
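The card does not document how examples were assigned to splits. One common way to produce a stable 80/10/10 partition is to hash each example id; the following is a sketch under that assumption (not necessarily how these splits were actually produced):

```python
import hashlib

def assign_split(example_id: str) -> str:
    """Deterministically map an example id to train/validation/test (80/10/10)."""
    digest = hashlib.sha256(example_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    if bucket < 80:
        return "train"
    if bucket < 90:
        return "validation"
    return "test"

splits = [assign_split(f"sna_{i}") for i in range(1000)]
print({s: splits.count(s) for s in ("train", "validation", "test")})
```

Hash-based assignment keeps each example in the same split across re-runs and newly added data, which is why it is a common choice for dataset curation pipelines.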

## Dataset Curation

The data was gathered by multiple partners:

| Provider | Dataset | License |
| --- | --- | --- |
| University of Ghana | UGSpeechData | CC-BY-4.0 |
| Digital Umuganda | AfriVoice | CC-BY-4.0 |
| Makerere University | Yogera Dataset | CC-BY-4.0 |
| Media Trust | | CC-BY-4.0 |

## Considerations for Using the Data

Please check the license for the specific languages you are using, as licenses may differ between providers.

Affiliation: Google Research

## Version and Maintenance

- **Current Version**: 1.0.0
- **Last Updated**: 01/2026