---
license: cc-by-nc-4.0
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- non-native
- pronunciation
- speech
- pronunciation assessment
- phoneme
pretty_name: EpaDB
size_categories:
- 1K<n<10K
---

# EpaDB

EpaDB is a database for the development of pronunciation assessment systems, containing non-native English speech with phoneme-level pronunciation annotations.

### Data Fields

| Field | Type | Description |
|-------|------|-------------|
| `utt_id` | string | Utterance identifier (matches `<utt_id>.wav`). |
| `audio` | Audio | Automatically loaded waveform (16 kHz). |
| `transcription` | string or null | Reference sentence text. |

### Data Splits

| Split | # Examples |
|-------|------------|
| train | 1,903 |
| test | 1,263 |

### Notes

- When annotator 2 did not label an utterance, the related fields (`annot_2`, `label_2`, `global_2`, `level_2`) are absent or set to null.
- Error types come from simple heuristics contrasting MFA reference phones with annotator 1 labels.
- Waveforms were resampled to 16 kHz using `ffmpeg` during manifest generation.
- Forced alignments and annotations were merged to produce enriched CSV files per speaker/partition.
- Global scores are averaged per speaker to derive the `level_*` tiers (`A` if the mean is ≥ 3, `B` otherwise).

## Licensing

- Audio and annotations: CC BY-NC 4.0 (non-commercial use allowed with attribution).

## Citation

```
@inproceedings{vidal2019epadb,
  title     = {EpaDB: a database for development of pronunciation assessment systems},
  author    = {Vidal, Jazmin and Ferrer, Luciana and Brambilla, Leonardo},
  booktitle = {Proc. Interspeech},
  pages     = {589--593},
  year      = {2019}
}
```

## Usage

Install the dependencies (`datasets`) and load the dataset:

```python
from datasets import load_dataset

# Local usage before uploading:
ds = load_dataset(
    "epadb_dataset/epadb.py",
    data_dir="/path/to/epadb",  # folder with train.json, test.json, WAV/
    split="train",
)
print(ds)
print(ds[0]["utt_id"], ds[0]["audio"]["sampling_rate"])  # 16000

# After pushing to the Hugging Face Hub:
# ds = load_dataset("JazminVidal/epadb", split="train")
```

## Acknowledgements

The database is an effort of the Speech Lab at the Laboratorio de Inteligencia Artificial Aplicada of the Universidad de Buenos Aires and was partially funded by Google through a Google Latin America Research Award in 2018.
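As a small illustration of the speaker-level tier rule from the Notes (`A` if the speaker's mean global score is ≥ 3, `B` otherwise), the sketch below reimplements it in plain Python. The names `speaker_level` and `scores_by_speaker` are hypothetical and not part of the dataset's fields; the scores are made-up example values.

```python
from statistics import mean

def speaker_level(global_scores):
    """Tier rule from the Notes: 'A' if the speaker's mean global
    score is >= 3, 'B' otherwise. `global_scores` is the list of
    per-utterance global scores for one speaker (hypothetical helper)."""
    return "A" if mean(global_scores) >= 3 else "B"

# Made-up per-speaker scores, for illustration only.
scores_by_speaker = {
    "spk01": [3, 4, 3],  # mean ~3.33 -> 'A'
    "spk02": [2, 3, 2],  # mean ~2.33 -> 'B'
}
levels = {spk: speaker_level(s) for spk, s in scores_by_speaker.items()}
print(levels)  # {'spk01': 'A', 'spk02': 'B'}
```

In the released manifests this rule has already been applied, so the `level_*` fields can be used directly without recomputing it.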