# Yapdo-Mini
Yapdo-Mini is a sample of the Yapdo dataset, a conversational speech corpus drawn from 109,804 hours of approved recordings from 17,008 speakers across 67 languages.
## Yapdo Data Highlights

| Metric | Value |
|---|---|
| Total approved audio | 109,804 hours |
| Unique speakers | 17,008 |
| Languages | 67 (human-verified labels) |
| Format | 48 kHz, 16-bit PCM WAV per speaker |
| Channel separation | Each speaker on a dedicated, time-aligned track |
| Speech type | Spontaneous, unscripted, multi-party conversations |
| Code-switching | Yoruba-English, Hindi-English, Swahili-English ("Sheng"), Tagalog-Cebuano, and more |
| Mean SNR | ~33 dB |
| Median RMS | -26 dBFS |
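The loudness figures above are reported in dBFS (decibels relative to digital full scale). As a minimal sketch of how such a figure is computed, here is the standard RMS-to-dBFS formula applied to a synthetic signal with NumPy; this is illustrative, not code from the Yapdo pipeline:

```python
import numpy as np

def rms_dbfs(x: np.ndarray) -> float:
    """RMS level in dBFS for a float waveform scaled to [-1.0, 1.0]."""
    rms = np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms) if rms > 0 else float("-inf")

# A full-scale sine wave has RMS = 1/sqrt(2), i.e. about -3.01 dBFS.
t = np.linspace(0.0, 1.0, 16_000, endpoint=False)
sine = np.sin(2 * np.pi * 440.0 * t)
level = rms_dbfs(sine)
print(round(level, 2))  # ≈ -3.01
```

A clip whose `rms_dbfs` is near the corpus median of -26 dBFS is therefore well below full scale, leaving ample headroom.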
## Top 10 Languages

| Language | Hours | Language | Hours |
|---|---|---|---|
| English | 31,660 | Tagalog | 2,014 |
| Hindi | 8,412 | Spanish | 1,651 |
| Arabic | 2,427 | Nigerian Pidgin | 1,382 |
| Swahili | 2,075 | Tamil | 1,288 |
| Hausa | 2,074 | Cebuano | 848 |
Yapdo's Hindi subset alone (8,412 hours) exceeds the Hindi portions of FLEURS (12 h) and Common Voice (18 h) by well over 100x.
## Combined vs. Separated Audio
Each sample in this mini dataset is a combined mix of all speakers. The parent Yapdo corpus stores each speaker on a separate, time-aligned track. Here's what that difference sounds like — a Sheng (Swahili-English) conversation with 3 speakers:
- Combined (all speakers mixed): "Juu mbona iko iko ama ni pengine nmetoa hizi earphones ndo imeacha, imepunguza kurekodi. Unaniskia clear?"
- Speaker 1 (isolated track)
- Speaker 2 (isolated track)
- Speaker 3 (isolated track)
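Because the per-speaker tracks in the parent corpus are time-aligned, a combined mix is just their sample-wise sum. A minimal sketch with NumPy, assuming equal-length float tracks (the three synthetic tracks here are stand-ins, not real Yapdo audio):

```python
import numpy as np

def mix_tracks(tracks: list[np.ndarray]) -> np.ndarray:
    """Sum time-aligned speaker tracks; peak-normalize only if the mix clips."""
    mix = np.sum(np.stack(tracks), axis=0)
    peak = np.max(np.abs(mix))
    if peak > 1.0:
        mix = mix / peak  # simple peak normalization to stay within [-1, 1]
    return mix

# Three hypothetical 1-second tracks at 16 kHz, each speaker active in one third
sr = 16_000
tracks = [np.zeros(sr) for _ in range(3)]
tracks[0][: sr // 3] = 0.5
tracks[1][sr // 3 : 2 * sr // 3] = 0.5
tracks[2][2 * sr // 3 :] = 0.5
combined = mix_tracks(tracks)
```

Going the other direction, from a combined mix back to isolated tracks, is the much harder source-separation problem, which is why the parent corpus recording each speaker on a dedicated channel matters.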
## All 17 Samples
| # | Language | Speakers | Speech % | Notes |
|---|---|---|---|---|
| 1 | sw | 3 | 62% | Sheng |
| 2 | hi | 3 | 69% | |
| 3 | tl | 3 | 59% | Tagalog-English |
| 4 | te | 3 | 56% | Telugu |
| 5 | sw | 3 | 63% | Sheng |
| 6 | te | 3 | 60% | Telugu |
| 7 | ar | 4 | 61% | Egyptian Arabic |
| 8 | ar | 4 | 68% | Egyptian Arabic |
| 9 | ta | 3 | 52% | Tamil |
| 10 | pcm | 3 | 58% | Nigerian Pidgin |
| 11 | en | 4 | 64% | Egyptian accent |
| 12 | pcm | 3 | 64% | Nigerian Pidgin |
| 13 | ta | 3 | 64% | Tamil |
| 14 | tl | 4 | 60% | Tagalog |
| 15 | hi | 3 | 66% | Hindi-English |
| 16 | en | 4 | 66% | Indian accent |
| 17 | en | 3 | 63% | Nigerian accent |
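The sample table can be summarized programmatically. A small sketch, with the (language, speech %) pairs copied from the table above:

```python
from collections import Counter

# (language, speech %) pairs copied from the 17-sample table
samples = [
    ("sw", 62), ("hi", 69), ("tl", 59), ("te", 56), ("sw", 63), ("te", 60),
    ("ar", 61), ("ar", 68), ("ta", 52), ("pcm", 58), ("en", 64), ("pcm", 64),
    ("ta", 64), ("tl", 60), ("hi", 66), ("en", 66), ("en", 63),
]

counts = Counter(lang for lang, _ in samples)      # clips per language
mean_speech = sum(p for _, p in samples) / len(samples)
print(counts.most_common(1))   # English is the most frequent, with 3 clips
print(round(mean_speech, 1))   # ≈ 62.1
```

So the mini set spans 8 languages, with roughly three-fifths of each clip's frames containing speech on average.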
## Schema
| Column | Type | Description |
|---|---|---|
| `audio` | Audio(16kHz) | Combined multi-speaker audio, 16 kHz mono |
| `text` | string | Combined transcript from all speakers (AI-generated) |
| `language` | string | Primary language code (ISO 639-1, or ISO 639-3 where no two-letter code exists, e.g. `pcm`) |
| `num_speakers` | int | Number of speakers in the clip |
| `accents_self_reported` | string | Self-reported accent/dialect from user profiles |
| `recording_id` | string | Session ID linking to the source corpus |
| `duration_s` | float | Clip duration in seconds |
| `rms_dbfs` | float | RMS loudness in dBFS |
| `peak_amplitude` | float | Peak sample amplitude (0.0–1.0) |
| `speech_ratio` | float | Fraction of frames containing speech |
| `full_recording_duration_s` | float | Total duration of the original recording session in seconds |
| `notes` | string | Additional context (accent, language variety) |
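The numeric fields above have natural ranges that can be checked per row. A minimal sketch of such sanity checks, run here on a hand-made example row (the field names come from the schema; the values are made up):

```python
# Hypothetical row using the schema's field names; values are illustrative only
example = {
    "language": "sw",
    "num_speakers": 3,
    "duration_s": 42.5,
    "rms_dbfs": -26.1,
    "peak_amplitude": 0.83,
    "speech_ratio": 0.62,
    "full_recording_duration_s": 1800.0,
}

assert 0.0 <= example["speech_ratio"] <= 1.0       # fraction of speech frames
assert 0.0 <= example["peak_amplitude"] <= 1.0     # normalized peak sample
assert example["rms_dbfs"] < 0.0                   # dBFS is negative below full scale
assert example["duration_s"] <= example["full_recording_duration_s"]
```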
## Usage

```python
from datasets import load_dataset

ds = load_dataset("liva-ai/yapdo-mini", split="train")

for example in ds:
    print(f"{example['language']:>3s} | {example['num_speakers']} speakers | {example['notes']}")
    print(f"  Transcript: {example['text'][:100]}...")
    print()
```