---
language:
  - en
  - hi
  - ar
  - sw
  - te
  - tl
  - pcm
  - es
license: cc-by-4.0
gated: auto
extra_gated_heading: Access Yapdo-Mini
extra_gated_description: >-
  Please share your contact information to access this dataset. Access is
  granted immediately.
extra_gated_button_content: Agree and access dataset
extra_gated_fields:
  Affiliation:
    type: text
    required: true
  Use case:
    type: select
    options:
      - Research
      - Commercial
      - Education
      - label: Other
        value: other
task_categories:
  - automatic-speech-recognition
  - audio-classification
tags:
  - conversational-speech
  - multilingual
  - accent
  - code-switching
  - multi-speaker
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Yapdo-Mini

Yapdo-Mini is a sample of the Yapdo dataset, a conversational speech corpus drawn from 109,804 hours of recordings from 17,008 speakers across 67 languages. The source audio is natively recorded with separate speaker channels; the samples here are presented as combined conversations.

## Yapdo Data Highlights

| Attribute | Value |
|---|---|
| Total audio | 109,804 hours |
| Unique speakers | 17,008 |
| Languages | 67 |
| Format | 48 kHz, 16-bit PCM WAV per speaker |
| Channel separation | Each speaker on a dedicated, time-aligned track |
| Speech type | Spontaneous, unscripted, multi-party conversations |
| Code-switching | Yoruba-English, Hindi-English, Swahili-English ("Sheng"), Tagalog-Cebuano, and more |
| Mean SNR | ~33 dB |
| Median RMS | -26 dBFS |
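The loudness figures above (RMS in dBFS, peak amplitude) can be recomputed directly from raw samples. Below is a minimal sketch using NumPy, with a synthetic tone standing in for real corpus audio; the exact measurement windows behind the card's figures are not documented, so this is illustrative only:

```python
import numpy as np

def rms_dbfs(samples: np.ndarray) -> float:
    """RMS level in dB relative to full scale (samples in [-1, 1])."""
    rms = np.sqrt(np.mean(samples.astype(np.float64) ** 2))
    return 20 * np.log10(max(rms, 1e-12))

def peak_amplitude(samples: np.ndarray) -> float:
    """Largest absolute sample value (0.0-1.0 for normalized audio)."""
    return float(np.max(np.abs(samples)))

# Synthetic stand-in: a 440 Hz tone at half amplitude, one second at 48 kHz.
t = np.linspace(0, 1, 48_000, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440 * t)
print(round(rms_dbfs(audio), 1))  # a half-amplitude sine sits near -9.0 dBFS
print(peak_amplitude(audio))
```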

## Top 10 Languages (estimated hours)

| Language | Hours | Language | Hours |
|---|---|---|---|
| English | 31,660 | Tagalog | 2,014 |
| Hindi | 8,412 | Spanish | 1,651 |
| Arabic | 2,427 | Nigerian Pidgin | 1,382 |
| Swahili | 2,075 | Tamil | 1,288 |
| Hausa | 2,074 | Cebuano | 848 |

Note: These are estimated hours based on automated language detection. We are in the process of obtaining human-verified language and accent labels. The total number of languages and hours per language/accent are subject to change.


## Combined vs. Separated Audio

Each sample in this mini dataset is a combined mix of all speakers. The parent Yapdo corpus stores each speaker on a separate, time-aligned track. Here's what that difference sounds like — a Telugu conversation with 2 speakers:

- Combined (all speakers mixed)
- Speaker 1 (isolated track)
- Speaker 2 (isolated track)
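Producing a combined mix from time-aligned per-speaker tracks is conceptually just a sample-wise sum. A hedged sketch of that idea with NumPy — this is not the actual Yapdo mixing pipeline, and the clipping behavior is an assumption:

```python
import numpy as np

def mix_tracks(tracks: list[np.ndarray]) -> np.ndarray:
    """Sum time-aligned per-speaker tracks and clip the result to [-1, 1]."""
    n = max(len(t) for t in tracks)
    mix = np.zeros(n, dtype=np.float64)
    for t in tracks:
        mix[: len(t)] += t  # tracks share a common time origin
    return np.clip(mix, -1.0, 1.0)

# Two hypothetical speaker tracks that overlap in the middle.
spk1 = np.concatenate([0.4 * np.ones(100), np.zeros(50)])
spk2 = np.concatenate([np.zeros(50), 0.4 * np.ones(100)])
combined = mix_tracks([spk1, spk2])
print(combined[75])  # overlap region: both speakers sum to 0.8
```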


## All 12 Samples

| # | Language | Speakers | Variety | Transcript |
|---|---|---|---|---|
| 1 | sw | 2 | Nairobi urban | Yes |
| 2 | hi | 2 | | |
| 3 | tl | 2 | Central Visayas | |
| 4 | sw | 2 | Nairobi urban | Yes |
| 5 | ar | 3 | Cairene | |
| 6 | te | 2 | Karnataka/Bangalore | Yes |
| 7 | es | 3 | Venezuelan | |
| 8 | pcm | 2 | Nigerian English | Yes |
| 9 | en | 3 | Egyptian Arabic | Yes |
| 10 | pcm | 2 | Nigerian English | Yes |
| 11 | tl | 3 | Mindoreño | |
| 12 | en | 3 | Indian | |

## Schema

| Column | Type | Description |
|---|---|---|
| `audio` | Audio(16kHz) | Combined multi-speaker audio, 16 kHz mono |
| `text` | string | Timestamped transcript with speaker IDs (human-reviewed where available, otherwise empty). Full human-validated transcripts are available upon request. |
| `language` | string | Primary language code (ISO 639-1 where one exists, ISO 639-3 otherwise, e.g. `pcm`) |
| `accent` | string | Accent or dialect label (e.g. "Nairobi urban", "Cairene", "Mindoreño") |
| `relationship` | string | Speaker relationship (friends, acquaintances, colleagues, etc.) |
| `topics` | string | Topics discussed |
| `speech_characteristics` | string | Notable audio features (code-switching, laughter, etc.) |
| `num_speakers` | int | Number of speakers in the clip |
| `duration_s` | float | Clip duration in seconds |
| `rms_dbfs` | float | RMS loudness in dBFS |
| `peak_amplitude` | float | Peak sample amplitude (0.0–1.0) |
| `speech_ratio` | float | Fraction of frames containing speech |

## Usage

```python
from datasets import load_dataset

ds = load_dataset("liva-ai/yapdo-mini", split="train")

for example in ds:
    print(f"{example['language']:>3s} | {example['num_speakers']} speakers | {example['accent']}")
    print(f"   Transcript: {example['text'][:100]}...")
    print()
```
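The `speech_ratio` column reports the fraction of frames containing speech. The card does not document the detector used, but a simple energy-threshold sketch illustrates how such a figure can be derived; the frame size and threshold below are assumptions, not the dataset's actual parameters:

```python
import numpy as np

def speech_ratio(samples: np.ndarray, sr: int = 16_000,
                 frame_ms: int = 30, threshold_db: float = -40.0) -> float:
    """Fraction of fixed-size frames whose RMS exceeds an energy threshold.

    Illustrative energy-based VAD; the dataset's real detector is undocumented.
    """
    frame = sr * frame_ms // 1000
    n = len(samples) // frame
    frames = samples[: n * frame].astype(np.float64).reshape(n, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    db = 20 * np.log10(np.maximum(rms, 1e-12))
    return float(np.mean(db > threshold_db))

# One second of silence followed by one second of a quiet tone.
sr = 16_000
t = np.linspace(0, 1, sr, endpoint=False)
audio = np.concatenate([np.zeros(sr), 0.3 * np.sin(2 * np.pi * 220 * t)])
print(speech_ratio(audio, sr))  # → 0.5
```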