---
language:
- af
pretty_name: Afrikaans Speech Dataset for Whisper Fine-Tuning
tags:
- automatic-speech-recognition
- speech
- audio
- afrikaans
- low-resource
- multilingual
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
dataset_info:
  features:
  - name: audio_id
    dtype: string
  - name: chunk_index
    dtype: int32
  - name: transcript_word_count
    dtype: int32
  - name: transcript_char_count
    dtype: int32
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: transcript
    dtype: string
  splits:
  - name: train
    num_bytes: 5381293102
    num_examples: 5603
  - name: validation
    num_bytes: 578182293
    num_examples: 602
  - name: test
    num_bytes: 586814137
    num_examples: 611
  download_size: 6362237408
  dataset_size: 6546289532
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# Afrikaans Speech Dataset for Whisper Fine-Tuning
## Dataset Summary
This dataset consists of approximately 56 hours of Afrikaans speech extracted from church sermons, paired with cleaned and aligned transcripts. It is prepared specifically for fine-tuning multilingual ASR models such as OpenAI's Whisper (particularly large-v3) on low-resource Afrikaans speech.
The audio is segmented into fixed 30-second chunks (with 3-second overlaps for context preservation) at 16 kHz mono 16-bit PCM. Transcripts are normalized using Whisper's non-English text standardization rules (lowercase, no punctuation/diacritics, bracket/parenthesis removal, whitespace collapse).
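The normalization rules above can be approximated in a few lines of standard-library Python. This is a rough sketch of the same steps, not Whisper's exact `BasicTextNormalizer`, and the example sentence is illustrative only:

```python
import re
import unicodedata

def normalize_transcript(text: str) -> str:
    """Approximate Whisper-style non-English normalization:
    lowercase, remove bracketed/parenthesised spans, strip
    diacritics and punctuation, collapse whitespace."""
    text = text.lower()
    # Drop [bracketed] and (parenthesised) spans, e.g. event tags
    text = re.sub(r"\[[^\]]*\]|\([^)]*\)", "", text)
    # Strip diacritics (common in Afrikaans: ê, ô, ë, ...)
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    # Remove punctuation, keeping letters, digits and spaces
    text = re.sub(r"[^\w\s]", "", text)
    # Collapse runs of whitespace
    return re.sub(r"\s+", " ", text).strip()

print(normalize_transcript("Môre, broers en susters! (applous)"))
# -> more broers en susters
```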
This dataset aims to improve Whisper's performance on real-world, spontaneous Afrikaans speech (e.g., sermons, announcements, conversations) with varied accents and noise.
- Language: Afrikaans (`af`)
- Domain: Informal/spontaneous speech, primarily South African religious/community content
- Total Hours: ~56 hours (post-processing)
- Chunks: ~6,800 30-second segments across train/validation/test splits (80/10/10)
- License: CC-BY-4.0 (intended for research purposes)
## Supported Tasks and Leaderboards
- Task: Automatic Speech Recognition (ASR) fine-tuning
- Models: Optimized for Whisper (tiny to large-v3); compatible with any seq2seq ASR model
- Evaluation Metric: Word Error Rate (WER) – expect reductions vs. base Whisper on Afrikaans test sets
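For reference, WER is the word-level edit distance between hypothesis and reference, divided by the reference length. Libraries such as `jiwer` or Hugging Face `evaluate` provide it directly; a minimal from-scratch version (with a made-up Afrikaans example) looks like:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# "vanoggend" vs "van oggend" costs one substitution plus one insertion
print(wer("die son skyn helder vanoggend",
          "die son skyn helder van oggend"))  # -> 0.4
```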
## Languages
- Primary: Afrikaans (South African variety)
- Code-switching: speakers occasionally use English words within sentences (minimal)
## Dataset Structure

### Data Splits
| Split | Hours (approx.) | Chunks | Description |
|---|---|---|---|
| train | 46.6 | 5603 | Training data |
| validation | 5.0 | 602 | Validation (early stopping) |
| test | 5.1 | 611 | Held-out evaluation |
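The hour figures can be sanity-checked from the chunk counts in the metadata, since each chunk is 30 seconds long:

```python
# Each chunk is 30 s; chunk counts come from the dataset metadata.
splits = {"train": 5603, "validation": 602, "test": 611}

for name, chunks in splits.items():
    print(f"{name}: ~{chunks * 30 / 3600:.1f} hours")
# Chunks overlap by 3 s, so unique audio per split is slightly less.
```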
### Data Instances

Each instance is a 30-second audio chunk paired with its transcription:

- `audio_id`: ID of the original source audio
- `chunk_index`: sequential chunk number within that source audio
- `transcript_word_count`: number of words in the transcript
- `transcript_char_count`: number of characters in the transcript
- `audio`: audio data (16 kHz mono WAV) with `array` and `sampling_rate`
- `transcript`: normalized lowercase transcript (no punctuation)
## Dataset Visualization Examples
Below are example visualizations you can generate from the dataset using Python. These help explore audio characteristics and transcript distributions.
### 1. Audio Waveform and Mel Spectrogram (Single Sample)

```python
from datasets import load_dataset
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

dataset = load_dataset("andreoosthuizen/afrikaans-30s", split="train")
sample = dataset[0]
audio = sample["audio"]["array"]
sr = sample["audio"]["sampling_rate"]
transcript = sample["transcript"]
print(f"Transcript: {transcript}")

# Waveform
plt.figure(figsize=(14, 5))
librosa.display.waveshow(audio, sr=sr)
plt.title("Audio Waveform")
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.tight_layout()
plt.show()

# Mel spectrogram (Whisper's input representation)
S = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=128)
S_dB = librosa.power_to_db(S, ref=np.max)
plt.figure(figsize=(14, 5))
librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=sr)
plt.colorbar(format='%+2.0f dB')
plt.title("Mel Spectrogram")
plt.tight_layout()
plt.show()
```
### 2. Transcript Length Distribution (Histogram)

```python
import matplotlib.pyplot as plt
from datasets import load_dataset

dataset = load_dataset("andreoosthuizen/afrikaans-30s")

# Word counts are precomputed in the `transcript_word_count` column,
# which avoids decoding every audio example just to count words.
lengths = dataset["train"]["transcript_word_count"]

plt.figure(figsize=(10, 6))
plt.hist(lengths, bins=50, color='skyblue', edgecolor='black')
plt.title("Distribution of Transcript Word Counts")
plt.xlabel("Number of Words")
plt.ylabel("Frequency")
plt.grid(axis='y', alpha=0.75)
plt.show()
```
### 3. Word Cloud of Common Words

```python
from wordcloud import WordCloud  # pip install wordcloud
import matplotlib.pyplot as plt
from datasets import load_dataset

dataset = load_dataset("andreoosthuizen/afrikaans-30s", split="train")
text = " ".join(example["transcript"] for example in dataset)

wordcloud = WordCloud(width=800, height=400, background_color='white').generate(text)
plt.figure(figsize=(12, 6))
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.title("Word Cloud of Common Afrikaans Terms")
plt.show()
```
These examples demonstrate typical audio quality, spectrogram features, and linguistic patterns in the dataset.
## Dataset Creation

### Source Data

- Afrikaans audio from the NG Kranztkloof and NG Westville communities
- Raw audio: variable quality, resampled to 16 kHz mono
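For readers preparing additional audio in the same format: resampling to 16 kHz mono is conceptually channel averaging plus interpolation to the new rate. Production pipelines should use `ffmpeg` or `librosa.resample` (which apply proper anti-aliasing); the naive linear-interpolation sketch below is for illustration only.

```python
def to_mono(left, right):
    """Average two channels into one."""
    return [(l + r) / 2 for l, r in zip(left, right)]

def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampler (no anti-aliasing filter)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# One second at 44.1 kHz shrinks to 16000 samples at 16 kHz
mono = to_mono([0.0] * 44100, [0.0] * 44100)
print(len(resample_linear(mono, 44100, 16000)))  # -> 16000
```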
## Considerations

- Bias: content skewed toward South African religious recordings (sermons, announcements)
- Noise: real-world audio (background noise, music, overlapping speech)
- Ethics: derived from public sermons; no personal data; for research only
- Limitations: normalization removes punctuation and casing, so model output will need post-processing for display
## Additional Information

### Licensing

Released under CC-BY-4.0 (attribution required); intended primarily for research use.
### Citation

If you use this dataset, please cite:

```bibtex
@dataset{afrikaans_30s_2026,
  author    = {André Oosthuizen},
  title     = {Afrikaans Speech Dataset for Whisper Fine-Tuning},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/andreoosthuizen/afrikaans-30s}
}
```
Thank you for using this dataset to improve Afrikaans ASR!


