Improve dataset card: Add metadata, links, description, and sample usage
#2
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,4 +1,19 @@
 ---
+task_categories:
+- automatic-speech-recognition
+license: mit
+language:
+- ar
+size_categories:
+- 100h<n<1kh
+tags:
+- quran
+- tajweed
+- arabic
+- speech
+- pronunciation-error-detection
+- speech-segmentation
 configs:
 - config_name: moshaf_metadata
   data_files:

@@ -1948,3 +1963,114 @@ configs:
     - dtype: float32
       name: match_ratio
 ---

# Holy Quran Recitations Dataset for Pronunciation Error Detection and Correction

This dataset provides 850+ hours of audio (approximately 300K annotated utterances) designed for automatic pronunciation error detection and correction for learners of the Holy Quran. The dataset was introduced in the paper [Automatic Pronunciation Error Detection and Correction of the Holy Quran's Learners Using Deep Learning](https://huggingface.co/papers/2509.00094).

The data generation process includes:

1. Collection of recitations from expert reciters.
2. Segmentation at pause points (waqf) using a fine-tuned wav2vec2-BERT model.
3. Transcription of the segments.
4. Transcript verification via a novel Tasmeea algorithm.
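The verification idea in step 4 can be illustrated with a toy similarity score. This sketch is not the paper's Tasmeea algorithm, whose details are not reproduced here; it only shows the general notion of scoring a transcript against a reference text span (the dataset schema exposes a `match_ratio` field), using Latin transliteration purely for readability:

```python
from difflib import SequenceMatcher

def toy_match_ratio(transcript: str, reference: str) -> float:
    """Illustrative similarity score between a transcript and a
    reference text span (NOT the paper's Tasmeea algorithm)."""
    return SequenceMatcher(None, transcript, reference).ratio()

# A truncated transcript scores below 1.0 against the full reference.
print(toy_match_ratio("bismillah alrahman", "bismillah alrahman alraheem"))
# → 0.8
```

Segments scoring below some threshold would be flagged for review; the actual verification procedure in the paper may differ substantially.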

A custom Quran Phonetic Script (QPS) is used to encode Tajweed rules, differentiating it from standard IPA for Modern Standard Arabic. QPS is a two-level script: the phoneme level encodes Arabic letters with short and long vowels, while the sifa level encodes the articulation characteristics of every phoneme.

- **Paper:** [Automatic Pronunciation Error Detection and Correction of the Holy Quran's Learners Using Deep Learning](https://huggingface.co/papers/2509.00094)
- **Project Page:** [https://obadx.github.io/prepare-quran-dataset/](https://obadx.github.io/prepare-quran-dataset/)
- **Code (Recitations Segmenter):** [https://github.com/obadx/recitations-segmenter](https://github.com/obadx/recitations-segmenter)
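The annotated tables can be loaded with the `datasets` library. A minimal sketch: only the `moshaf_metadata` config name is confirmed by the YAML front matter above, and the repo id below is a placeholder to replace with this dataset's actual Hub id.

```python
from datasets import load_dataset

# "<this-dataset-repo-id>" is a placeholder: substitute the dataset's
# actual Hub id. Only the `moshaf_metadata` config name is confirmed
# by the YAML front matter; other config names are not shown here.
moshaf = load_dataset("<this-dataset-repo-id>", "moshaf_metadata")
print(moshaf)
```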
## Sample Usage

You can use the `recitations-segmenter` library to process audio files and extract speech intervals.
First, ensure you have the necessary system-level dependencies (`ffmpeg`, `libsndfile`) installed. For Linux:

```bash
sudo apt-get update
sudo apt-get install -y ffmpeg libsndfile1 portaudio19-dev
```

For Windows and macOS, using Anaconda:

```bash
conda create -n segment python=3.12
conda activate segment
conda install -c conda-forge ffmpeg libsndfile
```

Then install the Python package:

```bash
pip install recitations-segmenter
```
You can then segment recitations programmatically:

```python
from pathlib import Path

import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioFrameClassification

from recitations_segmenter import segment_recitations, read_audio, clean_speech_intervals

if __name__ == '__main__':
    device = torch.device('cuda')
    dtype = torch.bfloat16

    processor = AutoFeatureExtractor.from_pretrained(
        'obadx/recitation-segmenter-v2')
    model = AutoModelForAudioFrameClassification.from_pretrained(
        'obadx/recitation-segmenter-v2',
    )
    model.to(device, dtype=dtype)

    # Change these to the file paths of your Holy Quran recitations.
    file_paths = [
        './assets/dussary_002282.mp3',
        './assets/hussary_053001.mp3',
    ]
    waves = [read_audio(p) for p in file_paths]

    # Extract speech intervals, measured in samples at a 16000 Hz sample rate.
    sampled_outputs = segment_recitations(
        waves,
        model,
        processor,
        device=device,
        dtype=dtype,
        batch_size=8,
    )

    for out, path in zip(sampled_outputs, file_paths):
        # Clean the speech intervals by:
        # * merging short silence durations
        # * removing short speech durations
        # * adding padding to each speech interval
        # Raises:
        # * NoSpeechIntervals: if the input is complete silence
        # * TooHighMinSpeechDruation: if `min_speech_duration` is so high
        #   that all speech intervals would be deleted
        clean_out = clean_speech_intervals(
            out.speech_intervals,
            out.is_complete,
            min_silence_duration_ms=30,
            min_speech_duration_ms=30,
            pad_duration_ms=30,
            return_seconds=True,
        )

        print(f'Speech intervals of {Path(path).name}:')
        print(clean_out.clean_speech_intervals)
        print(f'Is recitation complete: {clean_out.is_complete}')
        print('-' * 40)
```
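The cleaning performed by `clean_speech_intervals` can be sketched in plain Python. This is a simplified re-implementation of the merge/drop/pad logic described in the comments above, for illustration only; the library's actual behavior may differ in edge cases.

```python
def clean_intervals(intervals, min_silence=480, min_speech=480, pad=480):
    """Simplified interval cleaning (units: audio samples at 16 kHz,
    so 480 samples = 30 ms): merge intervals separated by short
    silences, drop short speech intervals, then pad the survivors."""
    if not intervals:
        return []
    # 1. Merge intervals whose separating silence is shorter than min_silence.
    merged = [list(intervals[0])]
    for start, end in intervals[1:]:
        if start - merged[-1][1] < min_silence:
            merged[-1][1] = end
        else:
            merged.append([start, end])
    # 2. Drop intervals shorter than min_speech.
    kept = [seg for seg in merged if seg[1] - seg[0] >= min_speech]
    # 3. Pad each remaining interval outward, clamping the start at zero.
    return [[max(0, s - pad), e + pad] for s, e in kept]

print(clean_intervals([[0, 16000], [16200, 16500], [20000, 40000]]))
# → [[0, 16980], [19520, 40480]]
```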

## Citation

```bibtex
@article{yassine2025automatic,
    title={Automatic Pronunciation Error Detection and Correction of the Holy Quran's Learners Using Deep Learning},
    author={Yassine, Obada},
    journal={arXiv preprint arXiv:2509.00094},
    year={2025},
    url={https://huggingface.co/papers/2509.00094}
}
```