---
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 6224132402.741253
      num_examples: 33607
    - name: test
      num_bytes: 185203512.7434241
      num_examples: 1000
  download_size: 5848400127
  dataset_size: 6409335915.484677
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

# Ar-ASR

## Dataset Description

This dataset is designed for Automatic Speech Recognition (ASR), focusing on Arabic speech with precise transcriptions that include tashkeel (diacritics). It contains 33,607 audio samples drawn from multiple sources: the Microsoft Edge TTS API, the validated Arabic subset of Common Voice, individual contributions, manually transcribed YouTube videos, and the ClArTTS dataset. Each audio sample is paired with an aligned Arabic text transcription, and the dataset is intended for training and evaluating ASR models, such as OpenAI's Whisper, with an emphasis on accurate recognition of Arabic pronunciation and diacritics.

- Dataset Size: 33,607 samples
- Audio: 16 kHz
- Text: Arabic transcriptions with tashkeel
- Language: Modern Standard Arabic (MSA)

## Dataset Structure

The dataset is hosted on Hugging Face and consists of two columns:

- `audio`: Audio samples (arrays, 16 kHz sampling rate)
- `text`: Arabic text transcriptions with tashkeel, aligned with the audio

### Example

```python
{
  "audio": {"array": [...], "sampling_rate": 16000},
  "text": "ثَلَاثَةٌ فِي المِئَةِ مِنَ المَاءِ العَذْبِ فِي الأَنْهَارِ وَالبُحَيْرَاتِ وَفِي الغِلَافِ الجَوِّيّ"
}
```

## Usage

This dataset is ideal for:

- Training Arabic ASR models
- Evaluating transcription accuracy with tashkeel (see the evaluation sketch below)
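
As a rough illustration of the evaluation use case, word and character error rates can be computed on diacritized references with the Hugging Face `evaluate` library. This is a minimal sketch, assuming `evaluate` and its `jiwer` backend are installed; the example strings are illustrative only.

```python
# Minimal sketch: WER/CER against diacritized Arabic references.
# Assumes `pip install evaluate jiwer`; the strings below are illustrative.
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")

references = ["ثَلَاثَةٌ فِي المِئَةِ مِنَ المَاءِ العَذْبِ"]
predictions = ["ثلاثة في المئة من الماء العذب"]  # hypothesis without tashkeel

# With diacritized references, missing tashkeel counts as errors, so these
# scores reflect diacritization quality as well as word-level accuracy.
print("WER:", wer.compute(predictions=predictions, references=references))
print("CER:", cer.compute(predictions=predictions, references=references))
```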

### Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("CUAIStudents/Ar-ASR")
```
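
A quick sanity check on one example after loading (a minimal sketch; the field names follow the structure described above):

```python
# Inspect one training example: the transcription and the decoded audio.
sample = dataset["train"][0]
print(sample["text"])                    # diacritized Arabic transcription
print(sample["audio"]["sampling_rate"])  # 16000
print(len(sample["audio"]["array"]))     # length of the waveform in samples
```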

### Training with Whisper

The audio is pre-resampled to 16 kHz for Whisper compatibility:

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load the Whisper processor (feature extractor + tokenizer) and model
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Convert one example's waveform into log-Mel input features
sample = dataset["train"][0]
inputs = processor(sample["audio"]["array"], sampling_rate=16000, return_tensors="pt")
```
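
For fine-tuning or a quick inference check, the transcription also needs to be tokenized into label IDs. The following is a minimal sketch, assuming the stock `openai/whisper-tiny` tokenizer is used unchanged for the diacritized Arabic text; language and task prompts can additionally be forced through the generation config if needed.

```python
import torch

# Tokenize the diacritized transcription into label IDs (for fine-tuning)
labels = processor.tokenizer(sample["text"], return_tensors="pt").input_ids

# Training-style forward pass with teacher forcing; returns a loss
outputs = model(input_features=inputs.input_features, labels=labels)
print(outputs.loss)

# Inference on the same features: decode the prediction back to text
with torch.no_grad():
    generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```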

## Limitations

- Quality: Downsampling to 16 kHz may reduce high-frequency detail, but speech remains clear
- Scope: Sources include synthetic Microsoft Edge TTS voices, the validated Arabic subset of Common Voice, individual contributions, and manually transcribed YouTube videos, so recording quality varies across samples