---
language:
  - en
tags:
  - speech
  - pronunciation
  - error-detection
  - forced-alignment
license: cc-by-nc-4.0
pretty_name: 'EPADB: English Pronunciation Assessment Dataset'
size_categories:
  - 1K<n<10K
task_categories:
  - automatic-speech-recognition
  - audio-classification
task_ids:
  - speech-recognition-other
  - audio-classification-other
---

# EPADB: English Pronunciation Assessment Dataset for Batched Diagnosis

## Dataset Summary

EPADB contains curated pronunciation assessment data collected from Spanish-speaking learners of English. Each utterance has manually aligned phone-level annotations from up to two expert annotators along with per-utterance global proficiency scores. Metadata links the aligned phones with MFA (Montreal Forced Aligner) timestamps, derived error classifications, and reference transcriptions. The corpus ships with a train and a test partition and includes speaker-wise waveform recordings resampled to 16 kHz.

## Supported Tasks

- Pronunciation Assessment – predict utterance-level global scores or speaker-level proficiency tiers.
- Phone-level Error Detection – classify each phone as correct or as an insertion, deletion, distortion, or substitution.
- Alignment Analysis – leverage MFA timings to study forced-alignment quality or to refine pronunciation models (see the sketch below).
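
For instance, the MFA timestamps make it easy to cut out the audio for a single phone. Below is a minimal sketch, assuming the loading setup from the Usage section at the end of this card (the loader path and `data_dir` are placeholders):

```python
from datasets import load_dataset

# Sketch: slice the waveform of one aligned phone using MFA timestamps.
ds = load_dataset("epadb_dataset/epadb.py", data_dir="/path/to/epadb", split="train")

ex = ds[0]
audio = ex["audio"]["array"]          # 1-D float array at 16 kHz
sr = ex["audio"]["sampling_rate"]

i = 0                                 # index of the phone to inspect
start = int(ex["start_mfa"][i] * sr)  # convert seconds to samples
end = int(ex["end_mfa"][i] * sr)

segment = audio[start:end]
print(ex["reference"][i], len(segment) / sr, "seconds")
```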

## Languages

- L2 utterances: English
- Speaker L1: Spanish

## Dataset Structure

### Data Instances

Each JSON entry describes one utterance (an illustrative example follows the list):

- Phone sequences for the MFA reference (`reference`) and the annotators (`annot_1`, optional `annot_2`).
- Phone-level labels (`label_1`, `label_2`) and derived `error_type` categories.
- MFA start/end timestamps per phone (`start_mfa`, `end_mfa`).
- Per-utterance global scores (`global_1`, `global_2`) and propagated speaker levels (`level_1`, `level_2`).
- Speaker metadata (`speaker_id`, `gender`).
- Audio metadata (`duration`, `sample_rate`, `wav_path`) plus the waveform itself.
- Reference sentence transcription (`transcription`).
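
For orientation, a hypothetical (truncated) instance is sketched below as a Python dict; every value is invented for illustration and does not come from the actual corpus:

```python
# Hypothetical, truncated instance; all values below are illustrative only.
example = {
    "utt_id": "spkr28_1",
    "speaker_id": "spkr28",
    "sentence_id": "sent_001",
    "reference":  ["DH", "AH", "K", "AE", "T"],
    "annot_1":    ["DH", "AH", "-", "AE", "T"],   # "-" marks a deletion
    "label_1":    ["1", "1", "0", "1", "1"],
    "error_type": ["correct", "correct", "deletion", "correct", "correct"],
    "start_mfa":  [0.05, 0.12, 0.20, 0.27, 0.40],
    "end_mfa":    [0.12, 0.20, 0.27, 0.40, 0.52],
    "global_1": 3.0,
    "level_1": "A",
    "gender": "F",
    "duration": 0.9,
    "sample_rate": 16000,
    "wav_path": "spkr28_1.wav",
    "transcription": "the cat",
}
```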

### Data Fields

| Field | Type | Description |
|---|---|---|
| `utt_id` | string | Unique utterance identifier (e.g., `spkr28_1`). |
| `speaker_id` | string | Speaker identifier. |
| `sentence_id` | string | Reference sentence ID (matches `reference_transcriptions.txt`). |
| `phone_ids` | sequence[string] | Unique phone identifiers per utterance. |
| `reference` | sequence[string] | MFA reference phones. |
| `annot_1` | sequence[string] | First annotator's phones (`-` marks deletions). |
| `annot_2` | sequence[string] | Second annotator's phones when available, empty otherwise. |
| `label_1` | sequence[string] | First annotator's phone labels (`"1"` correct, `"0"` incorrect). |
| `label_2` | sequence[string] | Second annotator's phone labels when present. |
| `error_type` | sequence[string] | Derived categories: correct, insertion, deletion, distortion, substitution. |
| `start_mfa` | sequence[float] | Phone start times (seconds). |
| `end_mfa` | sequence[float] | Phone end times (seconds). |
| `global_1` | float or null | First annotator's utterance-level score (1–4). |
| `global_2` | float or null | Second annotator's score when available. |
| `level_1` | string or null | Speaker-level proficiency tier from the first annotator (`"A"`/`"B"`). |
| `level_2` | string or null | Speaker tier from the second annotator. |
| `gender` | string or null | Speaker gender (`"M"`/`"F"`). |
| `duration` | float | Utterance duration in seconds (after resampling to 16 kHz). |
| `sample_rate` | int | Sample rate in Hz (16,000). |
| `wav_path` | string | Waveform filename (`<utt_id>.wav`). |
| `audio` | Audio | Automatically decoded waveform (16 kHz). |
| `transcription` | string or null | Reference sentence text. |

### Data Splits

| Split | # Examples |
|---|---|
| train | 1,903 |
| test | 1,263 |

## Notes

- When the second annotator did not label an utterance, the related fields (`annot_2`, `label_2`, `global_2`, `level_2`) are absent or set to null.
- Error types come from simple heuristics contrasting MFA reference phones with the first annotator's labels (see the sketch after this list).
- Waveforms were resampled to 16 kHz using ffmpeg during manifest generation.
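
The exact heuristic lives in the preparation scripts; one plausible reading, assuming position-aligned reference/annotation pairs where `-` marks a phone absent on one side, is:

```python
# Hedged sketch of the error-type heuristic described above; the actual
# rules live in create_db.py and may differ in detail.
def derive_error_type(ref_phone: str, annot_phone: str, label: str) -> str:
    if ref_phone == "-":
        return "insertion"    # produced phone with no reference counterpart
    if annot_phone == "-":
        return "deletion"     # reference phone that was not produced
    if label == "1":
        return "correct"
    # label "0": the reference phone was realised incorrectly
    return "substitution" if annot_phone != ref_phone else "distortion"
```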

## Data Processing

1. Forced alignments and annotations were merged to produce enriched CSV files per speaker/partition.
2. `create_db.py` aggregates these into JSON manifests, adds error types, and resamples audio.
3. Global scores are averaged per speaker to derive `level_*` tiers (A if mean ≥ 3, B otherwise); see the sketch below.
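
Step 3 is simple enough to spell out. A minimal sketch, with the function name and input layout assumed for illustration:

```python
from statistics import mean

# Average one annotator's global scores per speaker, then map the mean
# to a tier: "A" if mean >= 3, "B" otherwise.
def speaker_tiers(utterances):
    scores = {}
    for utt in utterances:
        if utt["global_1"] is not None:
            scores.setdefault(utt["speaker_id"], []).append(utt["global_1"])
    return {spk: "A" if mean(vals) >= 3 else "B" for spk, vals in scores.items()}
```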

## Licensing

- Audio and annotations: CC BY-NC 4.0 (non-commercial use allowed with attribution).
- Please ensure any downstream usage complies with participant consent and institutional policies.

## Citation

```bibtex
@inproceedings{vidal2019epadb,
  title     = {EpaDB: a database for development of pronunciation assessment systems},
  author    = {Vidal, Jazmin and Ferrer, Luciana and Brambilla, Leonardo},
  booktitle = {Proc. Interspeech 2019},
  pages     = {589--593},
  year      = {2019}
}
```

## Usage

Install dependencies and load the dataset:
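
A typical environment needs `datasets` plus an audio backend such as `soundfile` (exact requirements depend on your `datasets` version):

```bash
pip install datasets soundfile
```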

```python
from datasets import load_dataset

# Local usage before uploading:
ds = load_dataset(
    "epadb_dataset/epadb.py",
    data_dir="/path/to/epadb",  # folder with train.json, test.json, WAV/
    split="train",
)
print(ds)
print(ds[0]["utt_id"], ds[0]["audio"]["sampling_rate"])  # 16000

# After pushing to the Hugging Face Hub:
# ds = load_dataset("JazminVidal/epadb", split="train")
```

# After pushing to the Hugging Face Hub:
# ds = load_dataset("JazminVidal/epadb", split="train")

## Acknowledgements

We thank the learners and expert annotators who contributed to EPADB, as well as the speech processing community for tools such as MFA and ffmpeg used in the data preparation pipeline.