# Egyptian Arabic ASR Clean 72 h
## Dataset Summary
This corpus contains **≈72 h** of Egyptian‑Arabic speech with aligned transcripts.
Audio has been resampled to **16 kHz mono WAV**; transcripts are normalised Arabic (diacritics and Tatweel removed, digits verbalised); and the data are split 80 / 10 / 10 into train / validation / test.
## Supported Tasks and Leaderboards
| Task | Tags | Notes |
|------|------|-------|
| **Automatic Speech Recognition** | `asr`, `speech-recognition` | Primary use‑case |
| **Forced Alignment / VAD** | `alignment`, `vad` | Clips ≤ 25 s |
## Languages
The dataset is **predominantly Egyptian Arabic** (`ar‑EG`).
Roughly 85 % of the recorded hours come from male speakers; speaker IDs are not available.
## Dataset Structure
### Data Fields
| Field | Type | Description |
|-------|------|-------------|
| `audio` | `Audio` | Pointer to WAV @ 16 kHz |
| `text` | `string` | Normalised Arabic transcript |
| `duration` | `float` | Clip length in seconds (after resampling) |
| `dataset_source` | `string` | One‑letter source code A–D (see *Source Data*) |
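A single record therefore has roughly this shape (all values below are illustrative, not taken from the dataset):

```python
# Hypothetical example of one record; the `audio` field is decoded to a dict
# with "path", "array" and "sampling_rate" keys by Hugging Face Datasets.
example = {
    "audio": {"path": "clip_0001.wav", "array": [0.0, 0.01, -0.02], "sampling_rate": 16000},
    "text": "مرحبا بيكم",
    "duration": 3.4,
    "dataset_source": "A",
}
print(example["dataset_source"])  # A
```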
### Splits
| Split | Hours |
|-------|-------|
| train | ≈57.6 |
| validation | ≈7.2 |
| test | ≈7.2 |
## Source Data
| Code | Raw Hours | Description |
|------|-----------|-------------|
| A | ~465 | Long clips, heavy overlap |
| B | ~65 | Similar to A, shorter |
| C | ~5 | Dual‑channel conversations |
| D | ~2.5 | YouTube excerpts |
| *Other* | <5 | Minor sources |
Only **≈12 %** of the original 570 h survived the cleaning pipeline.
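As a quick sanity check on that figure, using the hours quoted above:

```python
# Raw totals from the Source Data table (the "Other" row is under 5 h).
raw_hours = 465 + 65 + 5 + 2.5
cleaned_hours = 72

# Against the quoted ≈570 h raw total, retention is about 12.6 %.
print(f"{cleaned_hours / 570:.1%}")  # 12.6%
```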
## Data Collection and Processing
1. **Format Unification** – convert all audio to 16 kHz WAV.
2. **Deduplication** – drop exact audio/text duplicates; remove nulls.
3. **Metadata Pruning** – retain only core fields.
4. **Text Normalisation** – strip diacritics, Tatweel, punctuation, Latin letters; verbalise digits; fix common glyph errors; run CAMeL‑Tools morphology checks.
5. **Alignment Diagnostics** – compute chars/s and words/s; flag extreme values.
6. **Duration Filtering** – keep clips 0.5–25 s.
7. **Shuffle & Split** – 80 / 10 / 10 random split, uploaded as `datasets.DatasetDict`.
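Steps 4–6 above can be sketched as follows. This is a minimal illustration, not the exact pipeline code: the regular expression, thresholds, and helper names are assumptions, and digit verbalisation, glyph fixes, and the CAMeL‑Tools checks are omitted.

```python
import re
import unicodedata

TATWEEL = "\u0640"

def normalise(text: str) -> str:
    """Rough sketch of step 4: strip diacritics, Tatweel, punctuation, Latin letters."""
    # Arabic diacritics are combining marks (Unicode category "Mn"); drop them.
    text = "".join(
        ch for ch in unicodedata.normalize("NFD", text)
        if unicodedata.category(ch) != "Mn"
    )
    text = text.replace(TATWEEL, "")
    # Replace anything that is not an Arabic letter or whitespace with a space.
    text = re.sub(r"[^\u0621-\u064A\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def keep_clip(duration_s: float, transcript: str,
              min_s: float = 0.5, max_s: float = 25.0,
              max_chars_per_s: float = 25.0) -> bool:
    """Sketch of steps 5–6: duration window plus a chars/s sanity bound."""
    if not (min_s <= duration_s <= max_s):
        return False
    return len(transcript) / duration_s <= max_chars_per_s
```

For example, `normalise("مَرْحَبًا")` yields `"مرحبا"`, and a 0.3 s clip is rejected by `keep_clip` regardless of its transcript.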
## Usage Example
```python
from datasets import load_dataset

ds = load_dataset("your-username/egyptian_arabic_asr_clean72h")

sample = ds["train"][0]
# `audio` decodes to a dict with "array", "path" and "sampling_rate" keys.
print(sample["audio"]["sampling_rate"])  # 16000
print(sample["text"])
```