Abdo-Alshoki committed · commit c87a649 · verified · 1 parent: b8d9340

Update README.md

Files changed (1): README.md (+64 −0)

README.md CHANGED
  - split: test
    path: data/test-*
---
# Ar-ASR

## Dataset Description

This dataset is designed for Automatic Speech Recognition (ASR), focusing on Arabic speech with precise transcriptions that include tashkeel (diacritics). It contains 33,607 audio samples drawn from multiple sources: the Microsoft Edge TTS API, the validated Arabic subset of Common Voice, individual contributions, manually transcribed YouTube videos, and the [ClArTTS](https://huggingface.co/datasets/AtharvA7k/ClArTTS) dataset. Each sample is paired with an aligned Arabic text transcription, and the dataset is intended for training and evaluating ASR models, such as OpenAI's Whisper, with an emphasis on accurate recognition of Arabic pronunciation and diacritics.

- **Dataset Size**: 33,607 samples
- **Audio**: 16 kHz sampling rate
- **Text**: Arabic transcriptions with tashkeel
- **Language**: Modern Standard Arabic (MSA)

## Dataset Structure

The dataset is hosted on Hugging Face and consists of two columns:

- **audio**: Audio samples (arrays at a 16 kHz sampling rate)
- **text**: Arabic text transcriptions with tashkeel, aligned with the audio

### Example

```json
{
  "audio": {"array": [...], "sampling_rate": 16000},
  "text": "ثَلَاثَةٌ فِي المِئَةِ مِنَ المَاءِ العَذْبِ فِي الأَنْهَارِ وَالبُحَيْرَاتِ وَفِي الغِلَافِ الجَوِّيّ"
}
```

## Usage

This dataset is ideal for:

- Training Arabic ASR models
- Evaluating transcription accuracy with tashkeel
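
When evaluating with tashkeel, it can also be useful to report a diacritic-insensitive score for comparison. As an illustration (this helper is not part of the dataset's tooling), Arabic tashkeel marks occupy the Unicode range U+064B–U+0652 and can be stripped with a small utility before scoring:

```python
import re

# Arabic tashkeel marks: fathatan (U+064B) through sukun (U+0652)
TASHKEEL = re.compile(r"[\u064B-\u0652]")

def strip_tashkeel(text: str) -> str:
    """Remove diacritics so accuracy can also be scored without tashkeel."""
    return TASHKEEL.sub("", text)
```

Running a WER metric on both the original and the stripped references separates diacritization errors from base-letter recognition errors.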

### Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("CUAIStudents/Ar-ASR")
```

### Training with Whisper

The audio is pre-resampled to 16 kHz for Whisper compatibility:

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load the processor (feature extractor + tokenizer) and the model
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Convert one sample's waveform into log-mel input features
sample = dataset["train"][0]
inputs = processor(sample["audio"]["array"], sampling_rate=16000, return_tensors="pt")
```
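
Internally, Whisper's feature extraction pads or trims every clip to a fixed 30-second window (480,000 samples at 16 kHz) before computing log-mel features. A minimal NumPy sketch of that step, for intuition only (the names here are illustrative, not part of the transformers API):

```python
import numpy as np

SAMPLE_RATE = 16_000
N_SAMPLES = SAMPLE_RATE * 30  # Whisper's fixed 30-second input window

def pad_or_trim(array: np.ndarray, length: int = N_SAMPLES) -> np.ndarray:
    """Zero-pad or truncate a waveform to exactly `length` samples."""
    if len(array) >= length:
        return array[:length]
    return np.pad(array, (0, length - len(array)))
```

In practice the processor call above handles this for you; the sketch only shows why clips of different durations all yield identically shaped input features.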

## Limitations

- **Quality**: Downsampling to 16 kHz may remove high-frequency detail, but speech remains clearly intelligible
- **Scope**: Sources include synthetic `edge_tts` voices, the Common Voice validated Arabic subset, individual contributions, and manually transcribed YouTube videos, so recording quality varies across samples
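
Since the clips already ship at 16 kHz, no conversion is needed for this dataset. If you mix in audio from other sources at different rates, one option is to cast the column with `datasets.Audio(sampling_rate=16000)`; the underlying resampling step can also be sketched directly with SciPy's polyphase resampler (a hedged illustration, not part of this dataset's tooling):

```python
import math

import numpy as np
from scipy.signal import resample_poly

def to_16k(array: np.ndarray, src_rate: int) -> np.ndarray:
    """Resample a waveform from src_rate to 16 kHz via polyphase filtering."""
    g = math.gcd(src_rate, 16_000)
    return resample_poly(array, up=16_000 // g, down=src_rate // g)
```

Polyphase resampling applies an anti-aliasing filter during rate conversion, which is what keeps downsampled speech intelligible despite the loss of high-frequency content noted above.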