hashmin committed on commit bbf9987 · verified · 1 Parent(s): 6826bf5

Update README.md

Files changed (1)
  1. README.md +12 -17
README.md CHANGED
@@ -1,5 +1,4 @@
 ---
-ymi_version: 1.0
 language:
 - en
 tags:
@@ -8,9 +7,6 @@ tags:
 - error-detection
 - forced-alignment
 license: cc-by-nc-4.0
-pretty_name: "EPADB: English Pronunciation Assessment Dataset"
-size_categories:
-- 1K<n<10K
 task_categories:
 - automatic-speech-recognition
 - audio-classification
@@ -19,15 +15,18 @@ task_ids:
 - audio-classification-other
 ---
 
-# EPADB: English Pronunciation Assessment Dataset for Batched Diagnosis
+# EpaDB: English Pronunciation by Argentinians
 
 ## Dataset Summary
 
-EPADB contains curated pronunciation assessment data collected from Spanish-speaking learners of English. Each utterance has manually aligned phone-level annotations from up to two expert annotators along with per-utterance global proficiency scores. Metadata links the aligned phones with MFA (Montreal Forced Aligner) timestamps, derived error classifications, and reference transcriptions. The corpus ships with a `train` and a `test` partition and includes speaker-wise waveform recordings resampled to 16 kHz.
+EpaDB is a speech database intended for research in pronunciation scoring. The corpus includes recordings of 50 Spanish speakers (25 male and 25 female) from
+Argentina reading phrases in English. Each speaker recorded 64 short phrases containing sounds that are hard to pronounce for this population, adding up to ~3.5 hours of speech.
+
 
 ## Supported Tasks
 
-- **Pronunciation Assessment** – predict utterance-level global scores or speaker-level proficiency tiers.
+- **Pronunciation Assessment** – predict utterance-level global scores or phoneme-level correct/incorrect labels.
+- **Phone Recognition** – predict phoneme sequences.
 - **Phone-level Error Detection** – classify each phone as insertion, deletion, distortion, substitution, or correct.
 - **Alignment Analysis** – leverage MFA timings to study forced alignment quality or to refine pronunciation models.
 
@@ -42,13 +41,13 @@ EPADB contains curated pronunciation assessment data collected from Spanish-spea
 
 Each JSON entry describes one utterance:
 
-- Phone sequences for MFA reference (`reference`) and annotators (`annot_1`, optional `annot_2`).
+- Phone sequences for the reference transcription (`reference`) and annotators (`annot_1`, optional `annot_2`).
 - Phone-level labels (`label_1`, `label_2`) and derived `error_type` categories.
 - MFA start/end timestamps per phone (`start_mfa`, `end_mfa`).
 - Per-utterance global scores (`global_1`, `global_2`) and propagated speaker levels (`level_1`, `level_2`).
 - Speaker metadata (`speaker_id`, `gender`).
 - Audio metadata (`duration`, `sample_rate`, `wav_path`) plus the waveform itself.
-- Reference sentence transcription (`transcription`).
+- Reference sentence orthographic transcription (`transcription`).
 
 ### Data Fields
 
@@ -89,17 +88,12 @@ Each JSON entry describes one utterance:
 - When annotator 3 did not label an utterance, related fields (`annot_2`, `label_2`, `global_2`, `level_2`) are absent or set to null.
 - Error types come from simple heuristics contrasting MFA reference phones with annotator 1 labels.
 - Waveforms were resampled to 16 kHz using `ffmpeg` during manifest generation.
-
-## Data Processing
-
-1. Forced alignments and annotations were merged to produce enriched CSV files per speaker/partition.
-2. `create_db.py` aggregates these into JSON manifests, adds error types, and resamples audio.
-3. Global scores are averaged per speaker to derive `level_*` tiers (`A` if mean ≥ 3, `B` otherwise).
+- Forced alignments and annotations were merged to produce enriched CSV files per speaker/partition.
+- Global scores are averaged per speaker to derive `level_*` tiers (`A` if mean ≥ 3, `B` otherwise).
 
 ## Licensing
 
 - Audio and annotations: CC BY-NC 4.0 (non-commercial use allowed with attribution).
-- Please ensure any downstream usage complies with participant consent and institutional policies.
 
 ## Citation
 
@@ -135,4 +129,5 @@ print(ds[0]["utt_id"], ds[0]["audio"]["sampling_rate"]) # 16000
 
 ## Acknowledgements
 
-We thank the learners and expert annotators who contributed to EPADB, as well as the speech processing community for tools such as MFA and ffmpeg used in the data preparation pipeline.
+The database is an effort of the Speech Lab at the Laboratorio de Inteligencia Artificial Aplicada of
+the Universidad de Buenos Aires and was partially funded by Google through a Google Latin America Research Award in 2018.
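
The per-utterance manifest fields listed in the changed README can be exercised with nothing more than the standard `json` module. The record below is a hypothetical entry shaped like that field list; every value (speaker id, phones, timestamps, path) is invented for illustration and is not real EpaDB data.

```python
import json

# Hypothetical manifest entry mirroring the field list in the README
# (all values are invented for illustration, not taken from EpaDB).
entry_json = """{
  "speaker_id": "spk_0001",
  "gender": "F",
  "reference": ["HH", "AH", "L", "OW"],
  "annot_1": ["HH", "AH", "R", "OW"],
  "label_1": [1, 1, 0, 1],
  "start_mfa": [0.00, 0.08, 0.21, 0.30],
  "end_mfa": [0.08, 0.21, 0.30, 0.45],
  "global_1": 3,
  "level_1": "A",
  "duration": 0.45,
  "sample_rate": 16000,
  "wav_path": "waveforms/spk_0001/utt_0001.wav",
  "transcription": "hello"
}"""

entry = json.loads(entry_json)

# Pair each reference phone with its MFA time span.
spans = list(zip(entry["reference"], entry["start_mfa"], entry["end_mfa"]))
print(entry["speaker_id"], entry["sample_rate"])  # spk_0001 16000
```

Each phone in `reference` lines up index-for-index with `start_mfa`/`end_mfa`, which is what makes the `zip` above well defined.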
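
The "simple heuristics contrasting MFA reference phones with annotator 1 labels" are not spelled out in the diff. The sketch below shows one plausible sequence-alignment reading of them using stdlib `difflib`; the function name and phone strings are illustrative assumptions, and distortions (which need annotator judgments rather than sequence edits) are out of scope here.

```python
from difflib import SequenceMatcher

def classify_errors(reference, annotated):
    """Label phones by contrasting a reference sequence with an
    annotated one. A simplified stand-in for the heuristics the
    README mentions: SequenceMatcher opcodes map onto the
    correct / substitution / deletion / insertion categories.
    """
    labels = []
    sm = SequenceMatcher(a=reference, b=annotated, autojunk=False)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            labels += [("correct", p) for p in reference[i1:i2]]
        elif op == "replace":
            labels += [("substitution", p) for p in reference[i1:i2]]
        elif op == "delete":
            labels += [("deletion", p) for p in reference[i1:i2]]
        elif op == "insert":
            labels += [("insertion", p) for p in annotated[j1:j2]]
    return labels

# Hypothetical ARPAbet-like phone sequences, not from the corpus.
print(classify_errors(["HH", "AH", "L", "OW"], ["HH", "AH", "R", "OW"]))
```

`get_opcodes()` yields equal/replace/delete/insert regions over the two sequences, which correspond naturally to the four sequence-derivable error categories.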
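
The `level_*` rule stated in the changed README (average a speaker's global scores, tier `A` if the mean is at least 3, otherwise `B`) is small enough to sketch directly. The helper name and the example scores below are invented for illustration.

```python
from statistics import mean

def derive_level(global_scores):
    """Map one speaker's per-utterance global scores to a tier,
    following the rule in the README: 'A' if mean >= 3, else 'B'.
    """
    return "A" if mean(global_scores) >= 3 else "B"

# Illustrative per-utterance scores for two speakers (not real data).
print(derive_level([3, 4, 2, 3]))  # A  (mean 3.0)
print(derive_level([2, 2, 3]))     # B  (mean ~2.33)
```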