# EPADB: English Pronunciation by Argentinians

## Dataset Summary

EpaDB is a speech database intended for research in pronunciation assessment at the phoneme level. It includes audio from 50 Spanish speakers (25 male and 25 female) from Argentina reading phrases in English. Each speaker recorded 64 short phrases containing sounds that are hard to pronounce for this population, adding up to ~3.5 hours of speech. Each utterance has phone-level annotations from up to two expert annotators along with per-utterance global proficiency scores. In addition to MFA (Montreal Forced Aligner) timestamps, the metadata includes derived error classifications, gender, and reference orthographic transcriptions. The database is organized into `train` and `test` partitions and includes speaker-wise waveform recordings resampled to 16 kHz.

## Supported Tasks

- **Pronunciation Assessment** – predict utterance-level global scores or phone-level accuracy scores.
- **Phone-level Error Detection** – classify each phone as insertion, deletion, distortion, substitution, or correct.
- **Alignment Analysis** – leverage MFA timings to study forced alignment quality or to refine pronunciation models.
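For intuition, phone-level error detection can be framed as a per-position comparison of the reference phones against an annotator's transcription. The sketch below is only an illustration (the dataset's released `error_type` values come from its own release scripts) and covers just three of the five categories:

```python
def classify_phones(reference, annot):
    """Toy per-position phone comparison.

    A phone annotated as "-" counts as a deletion, a mismatching
    phone as a substitution, and a match as correct. Insertions and
    the distortion/substitution split need alignment information
    that this sketch does not model.
    """
    labels = []
    for ref, ann in zip(reference, annot):
        if ann == "-":
            labels.append("deletion")
        elif ann == ref:
            labels.append("correct")
        else:
            labels.append("substitution")
    return labels
```

For example, `classify_phones(["S", "IY", "T"], ["S", "-", "D"])` returns `["correct", "deletion", "substitution"]`.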
## Languages

- L2 utterances: English
- Speaker L1: Spanish from the Río de la Plata region of Argentina
## Dataset Structure

Each JSON entry describes one utterance:

- Manually chosen phoneme reference sequences (`reference`) and manual phonetic transcriptions (`annot_1`, optional `annot_2`).
- Phone-level labels (`label_1`, `label_2`) and derived `error_type` categories.
- MFA start/end timestamps per phone (`start_mfa`, `end_mfa`).
- Per-utterance global scores (`global_1`, `global_2`) and propagated speaker levels (`level_1`, `level_2`).
- Speaker metadata (`speaker_id`, `gender`).
- Audio metadata (`duration`, `sample_rate`, `wav_path`) plus the waveform itself.
- Reference orthographic transcription (`transcription`).
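As a sketch of how these parallel sequences line up, one entry can be walked phone by phone. The field names below follow the list above, but the utterance content is invented for illustration:

```python
# Hypothetical manifest entry; field names follow the dataset's schema,
# but the phones, labels, and times here are made up for illustration.
entry = {
    "utt_id": "spkr28_1",
    "reference": ["S", "IY"],
    "annot_1": ["S", "-"],       # "-" marks a deleted phone
    "label_1": ["1", "0"],       # "1" correct, "0" incorrect
    "start_mfa": [0.12, 0.30],
    "end_mfa": [0.30, 0.41],
}

rows = []
for ref, ann, lab, t0, t1 in zip(entry["reference"], entry["annot_1"],
                                 entry["label_1"], entry["start_mfa"],
                                 entry["end_mfa"]):
    status = "correct" if lab == "1" else "error"
    rows.append(f"{ref:>3} -> {ann:>3}  [{t0:.2f}-{t1:.2f}s]  {status}")

print("\n".join(rows))
```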

### Data Fields

| Field | Type | Description |
|-------|------|-------------|
| `utt_id` | string | Unique utterance identifier (e.g., `spkr28_1`). |
| `speaker_id` | string | Speaker identifier. |
| `sentence_id` | string | Reference sentence ID. |
| `phone_ids` | sequence[string] | Unique phone identifiers per utterance. |
| `reference` | sequence[string] | Reference phones. |
| `annot_1` | sequence[string] | Annotator 1 phones (`-` marks deletions). |
| `annot_2` | sequence[string] | Annotator 3 phones when available, empty otherwise. |
| `label_1` | sequence[string] | Annotator 1 phone labels (`"1"` correct, `"0"` incorrect). |
| `label_2` | sequence[string] | Annotator 3 phone labels when available. |
| `error_type` | sequence[string] | Derived categories: `correct`, `insertion`, `deletion`, `distortion`, `substitution`. |
| `start_mfa` | sequence[float] | Phone start times (seconds). |
| `end_mfa` | sequence[float] | Phone end times (seconds). |
| `global_1` | float or null | Annotator 1 utterance-level score (1–5). |
| `global_2` | float or null | Annotator 3 score when available. |
| `level_1` | string or null | Speaker-level proficiency tier from annotator 1 (`"A"`/`"B"`). |
| `level_2` | string or null | Speaker tier from annotator 3. |
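Since the waveforms are stored at 16 kHz, a phone's MFA timestamps can be converted to sample indices to cut its audio segment out of the utterance. A minimal sketch (the function name and argument layout are assumptions, not part of the dataset's tooling):

```python
def phone_segment(waveform, start_s, end_s, sample_rate=16000):
    """Slice one phone out of `waveform` (a list or 1-D array of
    samples) given its MFA start/end times in seconds."""
    i0 = int(round(start_s * sample_rate))
    i1 = int(round(end_s * sample_rate))
    return waveform[i0:i1]
```

For a phone spanning 0.12–0.30 s at 16 kHz, this yields 2880 samples starting at index 1920.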
### Notes

- When annotator 3 did not label an utterance, related fields (`annot_2`, `label_2`, `global_2`, `level_2`) are absent or set to null.
- Error types come from simple heuristics contrasting the reference phones with annotator 1 labels.
- Waveforms were resampled to 16 kHz using `ffmpeg` during data preparation.
- Global scores are averaged per speaker to derive `level_*` tiers (`A` if mean ≥ 3, `B` otherwise).
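The tier rule in the last note can be sketched directly; the function below is illustrative, not the project's own `create_db.py`:

```python
def speaker_level(global_scores):
    """Map a speaker's utterance-level global scores to a tier:
    "A" if the mean is >= 3, "B" otherwise. None entries (utterances
    the annotator skipped) are ignored."""
    scores = [s for s in global_scores if s is not None]
    if not scores:
        return None
    return "A" if sum(scores) / len(scores) >= 3 else "B"
```

For example, `speaker_level([4.0, 3.0, None])` returns `"A"` (mean 3.5), while `speaker_level([2.0, 2.5])` returns `"B"`.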
## Licensing
- Audio and annotations: CC BY-NC 4.0 (non-commercial use allowed with attribution).
## Citation

```python
print(ds)
print(ds[0]["utt_id"], ds[0]["audio"]["sampling_rate"])  # 16000
```
## Acknowledgements

The database is an effort of the Speech Lab at the Laboratorio de Inteligencia Artificial Aplicada of the Universidad de Buenos Aires and was partially funded by a Google Latin America Research Award in 2018.