---
ymi_version: 1.0
language:
- en
tags:
- speech
- pronunciation
- error-detection
- forced-alignment
license: cc-by-nc-4.0
pretty_name: "EPADB: English Pronunciation Assessment Dataset"
size_categories:
- 1K<n<10K
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- speech-recognition-other
- audio-classification-other
---

# EPADB: English Pronunciation Assessment Dataset for Batched Diagnosis

## Dataset Summary

EPADB contains curated pronunciation assessment data collected from Spanish-speaking learners of English. Each utterance carries manually aligned phone-level annotations from up to two expert annotators, along with per-utterance global proficiency scores. Metadata links the aligned phones with MFA (Montreal Forced Aligner) timestamps, derived error classifications, and reference transcriptions. The corpus ships with `train` and `test` partitions and includes per-speaker waveform recordings resampled to 16 kHz.

## Supported Tasks

- **Pronunciation Assessment** – predict utterance-level global scores or speaker-level proficiency tiers.
- **Phone-level Error Detection** – classify each phone as insertion, deletion, distortion, substitution, or correct (a sketch follows this list).
- **Alignment Analysis** – leverage MFA timings to study forced-alignment quality or to refine pronunciation models.
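
For phone-level error detection, the per-phone sequences can be zipped into `(phone, label)` pairs. A minimal sketch, assuming `ds` has been loaded as in the Usage section below and that `reference` and `error_type` are aligned index by index:

```python
# Minimal sketch: turn one utterance into per-phone pairs for error
# detection. Assumes `ds` was loaded as shown in the Usage section and
# that the per-phone sequences are index-aligned.
ex = ds[0]
pairs = list(zip(ex["reference"], ex["error_type"]))
for phone, err in pairs:
    print(f"{phone:>4s} -> {err}")
```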

## Languages

- L2 utterances: English
- Speaker L1: Spanish

## Dataset Structure

### Data Instances

Each JSON entry describes one utterance:

- Phone sequences for MFA reference (`reference`) and annotators (`annot_1`, optional `annot_2`).
- Phone-level labels (`label_1`, `label_2`) and derived `error_type` categories.
- MFA start/end timestamps per phone (`start_mfa`, `end_mfa`).
- Per-utterance global scores (`global_1`, `global_2`) and propagated speaker levels (`level_1`, `level_2`).
- Speaker metadata (`speaker_id`, `gender`).
- Audio metadata (`duration`, `sample_rate`, `wav_path`) plus the waveform itself.
- Reference sentence transcription (`transcription`).

### Data Fields

| Field | Type | Description |
|-------|------|-------------|
| `utt_id` | string | Unique utterance identifier (e.g., `spkr28_1`). |
| `speaker_id` | string | Speaker identifier. |
| `sentence_id` | string | Reference sentence ID (matches `reference_transcriptions.txt`). |
| `phone_ids` | sequence[string] | Unique phone identifiers per utterance. |
| `reference` | sequence[string] | MFA reference phones. |
| `annot_1` | sequence[string] | Annotator 1 phones (`-` marks deletions). |
| `annot_2` | sequence[string] | Annotator 2 phones when available, empty otherwise. |
| `label_1` | sequence[string] | Annotator 1 phone labels (`"1"` correct, `"0"` incorrect). |
| `label_2` | sequence[string] | Annotator 2 phone labels when present. |
| `error_type` | sequence[string] | Derived categories: `correct`, `insertion`, `deletion`, `distortion`, `substitution`. |
| `start_mfa` | sequence[float] | Phone start times (seconds). |
| `end_mfa` | sequence[float] | Phone end times (seconds). |
| `global_1` | float or null | Annotator 1 utterance-level score (1–4). |
| `global_2` | float or null | Annotator 2 utterance-level score when available. |
| `level_1` | string or null | Speaker-level proficiency tier from annotator 1 (`"A"`/`"B"`). |
| `level_2` | string or null | Speaker tier from annotator 2. |
| `gender` | string or null | Speaker gender (`"M"`/`"F"`). |
| `duration` | float | Utterance duration in seconds (after resampling to 16 kHz). |
| `sample_rate` | int | Sample rate in Hz (16,000). |
| `wav_path` | string | Waveform filename (`<utt_id>.wav`). |
| `audio` | Audio | Automatically decoded waveform (16 kHz). |
| `transcription` | string or null | Reference sentence text. |
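
Because `start_mfa`/`end_mfa` are in seconds and the audio is decoded at 16 kHz, phone segments can be sliced straight out of the waveform. A minimal sketch, again assuming `ds` is loaded as in the Usage section:

```python
# Minimal sketch: cut the audio segment for the i-th phone of an
# utterance using its MFA timestamps.
ex = ds[0]
audio = ex["audio"]["array"]       # float waveform, 16 kHz
sr = ex["audio"]["sampling_rate"]  # 16000

i = 0                              # first phone
start = int(ex["start_mfa"][i] * sr)
end = int(ex["end_mfa"][i] * sr)
segment = audio[start:end]
print(ex["reference"][i], segment.shape)
```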

### Data Splits

| Split | # Examples |
|-------|------------|
| train | 1,903 |
| test | 1,263 |

### Notes

- When annotator 2 did not label an utterance, the related fields (`annot_2`, `label_2`, `global_2`, `level_2`) are absent or set to null.
- Error types come from simple heuristics contrasting MFA reference phones with annotator 1 labels (a sketch of the idea follows this list).
- Waveforms were resampled to 16 kHz using `ffmpeg` during manifest generation.
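
The exact heuristics live in the preparation scripts and are not reproduced here; the sketch below is only an illustration of the kind of rule involved, not the released code:

```python
# Hypothetical sketch of the error-type heuristic described above; the
# released scripts may differ. "-" in annot_1 marks a deleted phone.
# (Insertions require comparing full sequences and are omitted here.)
def error_type(ref_phone: str, annot_phone: str, label: str) -> str:
    if annot_phone == "-":
        return "deletion"
    if label == "1":              # annotator marked the phone correct
        return "correct"
    if annot_phone == ref_phone:  # same phone, but labeled incorrect
        return "distortion"
    return "substitution"         # a different phone was produced

print(error_type("T", "D", "0"))  # substitution
```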

## Data Processing

1. Forced alignments and annotations were merged to produce enriched per-speaker, per-partition CSV files.
2. `create_db.py` aggregates these into JSON manifests, adds error types, and resamples audio.
3. Global scores are averaged per speaker to derive the `level_*` tiers (`A` if the mean is ≥ 3, `B` otherwise); see the sketch below.
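
A minimal sketch of the tier rule in step 3; `utt_scores` and `speaker_level` are hypothetical names used only for illustration:

```python
# Hypothetical sketch of the speaker-tier rule: average each speaker's
# utterance-level global scores and threshold at 3.
from statistics import mean

def speaker_level(scores: list[float]) -> str:
    return "A" if mean(scores) >= 3 else "B"

utt_scores = {"spkr28": [3.0, 3.5, 2.5], "spkr07": [2.0, 2.5]}  # made-up values
levels = {spk: speaker_level(s) for spk, s in utt_scores.items()}
print(levels)  # {'spkr28': 'A', 'spkr07': 'B'}
```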

## Licensing

- Audio and annotations: CC BY-NC 4.0 (non-commercial use allowed with attribution).
- Please ensure any downstream usage complies with participant consent and institutional policies.

## Citation

```bibtex
@inproceedings{vidal2019epadb,
  title     = {EpaDB: a database for development of pronunciation assessment systems},
  author    = {Vidal, Jazmin and Ferrer, Luciana and Brambilla, Leonardo},
  booktitle = {Proc. Interspeech},
  pages     = {589--593},
  year      = {2019}
}
```

## Usage

Install the `datasets` library, then load the dataset:

```python
from datasets import load_dataset

# Local usage before uploading:
ds = load_dataset(
    "epadb_dataset/epadb.py",
    data_dir="/path/to/epadb",  # folder with train.json, test.json, WAV/
    split="train",
)
print(ds)
print(ds[0]["utt_id"], ds[0]["audio"]["sampling_rate"])  # 16000

# After pushing to the Hugging Face Hub:
# ds = load_dataset("JazminVidal/epadb", split="train")
```

## Acknowledgements

We thank the learners and expert annotators who contributed to EPADB, as well as the speech processing community for tools such as MFA and ffmpeg used in the data preparation pipeline.