---
license: cc0-1.0
task_categories:
  - feature-extraction
language:
  - ko
  - hi
tags:
  - audio
  - speech
  - prosody
  - acoustics
  - linguistics
  - phonetics
  - voice-analytics
pretty_name: "Alexandria Voice Corpus — Korean & Hindi Macro-Prosody Telemetry"
size_categories:
  - 10K<n<100K
---

# Alexandria Voice Corpus — Korean & Hindi Macro-Prosody Telemetry

A sample release from the Alexandria Voice Corpus, a multilingual acoustic telemetry database spanning 60+ languages. This pack contains macro-prosodic feature extractions for Korean (6,998 clips) and Hindi (18,447 clips), derived from the Mozilla Common Voice CV24 corpus (CC0).

No audio is included. This is a structured feature dataset for linguistic research, speech technology development, and cross-linguistic prosody analysis.


## Dataset Details

### What is macro-prosody telemetry?

Macro-prosody refers to the suprasegmental properties of speech — pitch contour, rhythm, intensity, and voice quality — measured at the clip level rather than the phoneme level. Each row in this dataset represents one spoken utterance with 20+ acoustic features extracted from it.

This is distinct from transcription, alignment, or phoneme-level data. It is designed for population-level acoustic analysis, language typology research, and training prosody-aware speech models.

### Dataset Description

- **Curated by:** Orator Forge
- **Language(s):** Korean (ko), Hindi (hi)
- **Source corpus:** Mozilla Common Voice CV24 (CC0-1.0)
- **License:** CC0-1.0
- **Clips:** 25,445 total (Korean: 6,998 | Hindi: 18,447)
- **Anonymization standard:** `orator_forge_k5_v1`

### Dataset Sources

- **Source corpus:** Mozilla Common Voice — https://commonvoice.mozilla.org


## Uses

### Direct Use

- Cross-linguistic prosody comparison between Korean (a language isolate) and Hindi (Indo-Aryan)
- Training or evaluating prosody-aware TTS and ASR models
- Rhythm typology research (e.g. syllable-timed vs stress-timed speech)
- Voice quality and breathiness studies
- Speaker demographic modeling from acoustic features (population level)
- Feature engineering for downstream speech classification tasks

### Out-of-Scope Use

- **Speaker identification or re-identification** — this dataset has been deliberately anonymized to prevent linking acoustic features back to individual speakers; any attempt to do so violates the terms of use.
- **Direct audio reconstruction** — no audio is present in this dataset.
- **Tasks requiring phoneme-level or word-level timing** — use a force-aligned corpus instead.

## Dataset Structure

Each parquet file contains one row per utterance. Files are Snappy-compressed.

| Column | Type | Description |
|---|---|---|
| `clip_id` | string | Anonymized sequential ID (e.g. `korean_cv24_004521`) |
| `lang` | string | BCP-47 language code |
| `lang_name` | string | Language name |
| `quality_tier` | int | 1 (best) or 2 (good); only T1/T2 clips included |
| `duration_ms` | int | Clip duration, bucketed to nearest 100 ms |
| `gender` | string | `male` / `female` / `unknown` |
| `gender_source` | string | `meta` (self-reported) / `inferred` (pitch-based) / `unknown` |
| `age` | string | Age bracket (CV metadata where available) |
| `syllable_count_approx` | int | Approximate syllable count (vowel-count proxy) |
| `pitch_mean` | float32 | Mean F0 (Hz) |
| `pitch_std` | float32 | F0 standard deviation (Hz) |
| `pitch_range` | float32 | F0 range (max − min) in Hz |
| `pitch_velocity_max` | float32 | Max rate of F0 change (Hz/s) |
| `intensity_mean` | float32 | Mean RMS intensity (dB) |
| `intensity_max` | float32 | Peak intensity (dB) |
| `intensity_range` | float32 | Intensity dynamic range (dB) |
| `hnr_mean` | float32 | Harmonics-to-noise ratio (dB) |
| `cpps` | float32 | Cepstral peak prominence, smoothed — breathiness indicator |
| `jitter_local` | float32 | Cycle-to-cycle pitch perturbation |
| `shimmer_local` | float32 | Cycle-to-cycle amplitude perturbation |
| `spectral_centroid_mean` | float32 | Mean spectral centroid (Hz) |
| `spectral_tilt` | float32 | Spectral slope (relates to vocal effort) |
| `mfcc_delta_mean` | float32 | Mean MFCC delta (rate of spectral change) |
| `zcr_mean` | float32 | Zero-crossing rate |
| `teo_mean` | float32 | Teager energy operator mean |
| `npvi` | float32 | Normalized pairwise variability index (rhythm metric) |
| `articulation_rate` | float32 | Syllables per second (speech time only) |
| `speaking_rate` | float32 | Syllables per second (total duration) |
| `pause_rate` | float32 | Pauses per second |
| `speech_ratio` | float32 | Proportion of clip containing voiced speech |
| `snr_median` | float32 | Median signal-to-noise ratio (Brouhaha) |
| `c50_median` | float32 | Median C50 clarity metric (Brouhaha) |
| `f1_mean` | float32 | First formant mean (Hz) — note: may be 0.0 in this release |
| `f2_mean` | float32 | Second formant mean (Hz) — note: may be 0.0 in this release |
| `f3_mean` | float32 | Third formant mean (Hz) — note: may be 0.0 in this release |
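The two rate columns are easy to confuse, so here is a minimal sketch of how they relate, assuming `articulation_rate` divides by voiced time via `speech_ratio` (an assumption about the pipeline, not a published formula):

```python
# Hypothetical relation between speaking_rate and articulation_rate.
# Assumption: articulation_rate excludes pauses by scaling total duration
# with speech_ratio; the actual extraction code is not published.

def speaking_rate(syllables: int, duration_ms: int) -> float:
    """Syllables per second over the whole clip, pauses included."""
    return syllables / (duration_ms / 1000.0)

def articulation_rate(syllables: int, duration_ms: int, speech_ratio: float) -> float:
    """Syllables per second over voiced speech only."""
    voiced_s = (duration_ms / 1000.0) * speech_ratio
    return syllables / voiced_s

# Example row: 12 syllables, 4.0 s clip, 75% voiced speech
print(speaking_rate(12, 4000))            # 3.0 syllables/s
print(articulation_rate(12, 4000, 0.75))  # 4.0 syllables/s
```

Articulation rate is always greater than or equal to speaking rate; the gap grows with pause time.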

### Quality Tiers

Clips were graded using Brouhaha (SNR + C50 + VAD scoring):

| Tier | SNR | C50 | Speech ratio | Description |
|---|---|---|---|---|
| T1 | ≥ 20 dB | ≥ 20 dB | ≥ 0.6 | Studio quality |
| T2 | ≥ 10 dB | ≥ 5 dB | ≥ 0.4 | Clean field recording |

Only T1 and T2 clips are included in this release.
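As a hedged illustration, the tier gating above can be expressed as a simple threshold function; the thresholds come from the table, but the real Brouhaha-based grading may differ in detail:

```python
# Sketch of the T1/T2 gating described above. Thresholds match the table;
# this is an illustration, not the actual release pipeline.

def assign_tier(snr_db, c50_db, speech_ratio):
    """Return 1 (T1), 2 (T2), or None (excluded from this release)."""
    if snr_db >= 20 and c50_db >= 20 and speech_ratio >= 0.6:
        return 1
    if snr_db >= 10 and c50_db >= 5 and speech_ratio >= 0.4:
        return 2
    return None  # below T2 thresholds: clip not released

print(assign_tier(25, 22, 0.8))  # 1 — studio quality
print(assign_tier(12, 6, 0.5))   # 2 — clean field recording
print(assign_tier(8, 3, 0.3))    # None — excluded
```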

### Files

- `korean_cv24.parquet` — 6,998 rows
- `hindi_cv24.parquet` — 18,447 rows

## Dataset Creation

### Curation Rationale

There is a significant gap in publicly available acoustic feature datasets for non-Western and non-European languages. Korean and Hindi together represent over 600 million speakers across two typologically distinct language families — a language isolate and an Indo-Aryan branch of Indo-European. This release provides a free, CC0-licensed baseline for researchers who need structured prosodic features without needing to process raw audio.

### Source Data

#### Data Collection and Processing

Source audio was drawn from Mozilla Common Voice CV24, a crowd-sourced corpus of read speech recorded by volunteers under a CC0 license.

Processing pipeline:

1. MP3 source audio converted to 16 kHz mono WAV (ffmpeg, -20 dBFS normalization)
2. Quality grading via Brouhaha (SNR, C50, VAD) — only T1/T2 retained
3. Acoustic feature extraction via Parselmouth/Praat at 16 kHz
4. Anonymization and precision degradation applied at export (see below)

#### Source Data Producers

Recordings were made by volunteer contributors to the Mozilla Common Voice project. Contributors self-reported demographic metadata (age, gender, accent) where willing.

### Anonymization

This dataset applies the orator_forge_k5_v1 anonymization standard:

- Original Mozilla filenames replaced with sequential anonymized clip IDs
- Transcripts removed entirely (approximate syllable count provided as a proxy)
- All continuous acoustic variables truncated to 2 decimal places and stored as float32
- Duration bucketed to the nearest 100 ms to prevent cross-referencing with source audio
- k-anonymity suppression at k=5: rows where the combination of {gender, age_bucket, duration_bucket} has fewer than 5 members are excluded
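A pure-Python sketch of the export-time degradation and k=5 suppression described above. Field names match the dataset columns, but the actual `orator_forge_k5_v1` implementation is not published, so treat this as an illustration of the idea only:

```python
from collections import Counter

K = 5  # k-anonymity threshold stated in the card

def degrade(row):
    """Bucket duration to 100 ms and truncate floats to 2 decimal places."""
    out = dict(row)
    out["duration_ms"] = round(row["duration_ms"] / 100) * 100
    for key, val in row.items():
        if isinstance(val, float):
            out[key] = int(val * 100) / 100.0  # truncate, not round
    return out

def suppress(rows):
    """Drop rows whose {gender, age, duration bucket} combination has < K members."""
    key = lambda r: (r["gender"], r["age"], r["duration_ms"])
    counts = Counter(key(r) for r in rows)
    return [r for r in rows if counts[key(r)] >= K]

# Six near-identical rows: all land in the same 4000 ms bucket, so the
# group size (6) clears the k=5 threshold and all rows survive.
rows = [degrade({"gender": "female", "age": "twenties",
                 "duration_ms": 4012 + i, "pitch_mean": 210.12345})
        for i in range(6)]
print(len(suppress(rows)))        # 6
print(rows[0]["pitch_mean"])      # 210.12
print(rows[0]["duration_ms"])     # 4000
```

Note how bucketing interacts with suppression: coarser duration buckets produce larger groups, which keeps more rows above the k threshold.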

### Personal and Sensitive Information

- No names, speaker IDs, or other directly identifying information are present
- No original audio is included
- Demographic fields (age, gender) are self-reported by Mozilla Common Voice contributors and optional — many rows will show `unknown`
- Formant data (F1/F2/F3) is present but returns 0.0 in this release due to a known extraction issue; this will be corrected in v1.1

## Bias, Risks, and Limitations

- **Gender balance:** gender is inferred from pitch for clips lacking self-reported metadata. Pitch-based inference has known limitations for speakers with atypical voices, tonal-language speakers, and non-binary individuals. The `gender_source` field distinguishes self-reported from inferred labels.
- **Recording conditions:** Common Voice is read speech recorded in uncontrolled environments; acoustic conditions vary significantly across contributors.
- **Age distribution:** CV contributor demographics skew younger and technically literate. This dataset is not a representative sample of the full speaker population of either language.
- **Regional diversity:** Hindi CV24 clips include speakers from a wide range of regional backgrounds with varying accent profiles. No regional stratification has been applied in this release.
- **Formant zeros:** F1/F2/F3 return 0.0 across all clips in this release. Do not use the formant columns until v1.1.
- **Prohibited use:** do not use this dataset to attempt speaker identification or re-linking to source audio. This violates the terms of use regardless of technical feasibility.

### Recommendations

Use the `gender_source` field to restrict analysis to self-reported gender labels when demographic accuracy matters for your use case. For cross-linguistic rhythm comparisons, nPVI and articulation rate are the most reliable features in this release; formant-dependent analyses should wait for v1.1.
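For reference, nPVI over a sequence of interval durations is commonly computed with the standard Grabe & Low formulation below. This is offered as background on the metric, not as a confirmed match to the extraction pipeline used for this release:

```python
def npvi(durations):
    """Normalized pairwise variability index over successive durations.

    nPVI = 100/(m-1) * sum(|d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2))
    Higher values indicate more variable (stress-timed-like) rhythm.
    """
    if len(durations) < 2:
        raise ValueError("need at least two intervals")
    pairs = zip(durations, durations[1:])
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

# Perfectly isochronous syllables (ms) give zero variability:
print(npvi([100, 100, 100]))            # 0.0
# Alternating long/short intervals give a high index:
print(round(npvi([100, 200, 100]), 1))  # 66.7
```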


## Citation

If you use this dataset, please cite the Mozilla Common Voice project as the source corpus:

BibTeX:

```bibtex
@dataset{alexandria_korean_hindi_prosody_2026,
  title     = {Alexandria Voice Corpus --- Korean \& Hindi Macro-Prosody Telemetry},
  author    = {Orator Forge},
  year      = {2026},
  license   = {CC0-1.0},
  note      = {Derived from Mozilla Common Voice CV24 (CC0).
               Acoustic features extracted via Parselmouth/Praat.}
}

@misc{mozilla_common_voice,
  title     = {Common Voice: A Massively-Multilingual Speech Corpus},
  author    = {Ardila, Rosana and others},
  year      = {2020},
  url       = {https://commonvoice.mozilla.org}
}
```

## Glossary

| Term | Definition |
|---|---|
| F0 / `pitch_mean` | Fundamental frequency — the perceived pitch of the voice, measured in Hz |
| HNR | Harmonics-to-noise ratio — higher values indicate cleaner, more tonal voice quality |
| CPPS | Cepstral peak prominence, smoothed — lower values indicate breathier voice |
| nPVI | Normalized pairwise variability index — durational variability between adjacent syllables; higher in stress-timed languages |
| C50 | Clarity metric from room acoustics; higher = less reverb/echo in the recording |
| SNR | Signal-to-noise ratio — higher = cleaner recording |
| Brouhaha | Quality scoring model used for grading: github.com/marianne-m/brouhaha-vad |
| T1/T2 | Quality tiers assigned by Brouhaha grading (see Dataset Structure) |
| `orator_forge_k5_v1` | Anonymization standard: k=5 suppression, sequential IDs, 2 dp truncation, 100 ms duration bucketing |

## Dataset Card Contact

c.kleingertner@gmail.com