---
language:
- ar
license: apache-2.0
task_categories:
- automatic-speech-recognition
tags:
- whisper
- arabic
- speech
- asr
- multidialect
pretty_name: Arabic Whisper Multi-Dialect (Processed) - Small
size_categories:
- 10K<n<100K
---

# Arabic Whisper Multi-Dialect - Processed (Small)

## Dataset Description

This is a **preprocessed** version of the Arabic multi-dialect speech dataset, ready for fine-tuning OpenAI's Whisper models. The dataset contains audio features extracted and formatted specifically for Whisper training.

- **Size**: 40% subset of the full `arabic-whisper-multidialect` dataset
- **Total Examples**: 43,091 samples
- **Format**: Pre-computed Whisper input features (log-mel spectrograms) and tokenized labels
- **Purpose**: Direct use with Whisper training pipelines without additional preprocessing

### Key Features

✅ **Pre-processed** - Audio already converted to Whisper input features
✅ **Multi-dialect** - Covers various Arabic dialects
✅ **Training-ready** - No additional feature extraction needed
✅ **Memory-efficient** - Optimized for faster loading during training

## Dataset Structure

### Data Splits

| Split | Examples | Size (GB) |
|-------|----------|-----------|
| Train | 37,835 | 33.8 |
| Validation | 2,628 | 2.35 |
| Test | 2,628 | 2.35 |
| **Total** | **43,091** | **38.5** |

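The split sizes follow directly from the feature shape. As a back-of-envelope check (my own arithmetic, not from the dataset card, and reading the table's "GB" as GiB):

```python
# Each example stores an 80 x 3000 float32 feature matrix; the tokenized
# labels add comparatively little, so features dominate the on-disk size.
bytes_per_example = 80 * 3000 * 4   # float32 = 4 bytes per value

train_gib = 37_835 * bytes_per_example / 2**30
val_gib = 2_628 * bytes_per_example / 2**30

print(round(train_gib, 1), round(val_gib, 2))  # 33.8 2.35, matching the table
```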
### Data Fields

- **`input_features`**: `Sequence[Sequence[float32]]`
  - Shape: `(80, 3000)` - 80 mel-frequency bins × 3000 time steps
  - Pre-computed log-mel spectrogram features for Whisper
- **`labels`**: `Sequence[int64]`
  - Tokenized Arabic transcription using Whisper's tokenizer
  - Padding positions are set to `-100` and are ignored in loss computation

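As a small illustration of the label convention, here is a hypothetical helper (not part of the dataset or any library) that right-pads label sequences with `-100`, the value that cross-entropy losses conventionally ignore:

```python
def pad_labels(batch_labels, pad_token=-100):
    """Right-pad each label sequence to the batch maximum with the
    loss ignore index, so padded positions contribute no gradient."""
    max_len = max(len(seq) for seq in batch_labels)
    return [seq + [pad_token] * (max_len - len(seq)) for seq in batch_labels]

# Toy token IDs (not real Whisper vocabulary values)
batch = pad_labels([[7, 8, 9], [7, 8]])
print(batch)  # [[7, 8, 9], [7, 8, -100]]
```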
## Usage

### Quick Start

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("MadLook/arabic-whisper-multidialect-processed-small")

print(dataset)
# DatasetDict({
#     train: Dataset with 37,835 examples
#     validation: Dataset with 2,628 examples
#     test: Dataset with 2,628 examples
# })
```

### Loading a Single Split

```python
# Load only training data
train_data = load_dataset("MadLook/arabic-whisper-multidialect-processed-small", split="train")

# Load with streaming for large datasets
train_stream = load_dataset("MadLook/arabic-whisper-multidialect-processed-small", split="train", streaming=True)
```

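A streamed split is an iterable, so you can peek at a few examples with `itertools.islice` before committing to a full pass. A minimal sketch, using a stand-in generator in place of the real stream (the real one yields dicts with `input_features` and `labels` keys in the same way):

```python
from itertools import islice

# Stand-in for the streaming dataset (illustrative only)
fake_stream = ({"labels": [i], "input_features": None} for i in range(1000))

# Take the first three examples without materializing the whole split
first_three = list(islice(fake_stream, 3))
print([ex["labels"] for ex in first_three])  # [[0], [1], [2]]
```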
## Preprocessing Details

This dataset was created from the original Arabic multi-dialect dataset using the following preprocessing pipeline:

1. **Audio Loading**: Resampled to 16 kHz mono audio
2. **Feature Extraction**: Converted to 80-channel log-mel spectrograms using Whisper's feature extractor
3. **Tokenization**: Arabic text transcriptions tokenized with Whisper's multilingual tokenizer
4. **Normalization**: Applied Whisper's standard audio normalization
5. **Subset Selection**: Selected 40% of the original dataset

### Processing Code

```python
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="ar", task="transcribe"
)

def prepare_dataset(batch):
    # Load the (already resampled) audio
    audio = batch["audio"]

    # Compute the log-mel input features
    batch["input_features"] = processor.feature_extractor(
        audio["array"],
        sampling_rate=audio["sampling_rate"],
    ).input_features[0]

    # Tokenize the transcription into label IDs
    batch["labels"] = processor.tokenizer(batch["transcription"]).input_ids

    return batch
```

## Source Dataset

This is a processed subset of the Arabic multi-dialect speech recognition dataset, which includes recordings from various Arabic-speaking regions and dialects.

## Intended Use

### Primary Use Cases

- Fine-tuning Whisper models for Arabic speech recognition
- Research on multi-dialect Arabic ASR
- Benchmarking Arabic speech recognition systems
- Transfer learning for low-resource Arabic dialects

### Out-of-Scope Use

- This dataset is **already preprocessed** - do not apply feature extraction again
- Not suitable for training non-Whisper architectures without re-processing

## Limitations

- Only 40% of the full dataset (subset for faster experimentation)
- Pre-computed features are specific to the Whisper architecture
- Fixed audio length (30 seconds maximum, Whisper's input window)

## Citation

If you use this dataset, please cite both this preprocessed version and the original source dataset:

```bibtex
@dataset{arabic_whisper_multidialect_processed,
  title={Arabic Whisper Multi-Dialect - Processed (Small)},
  author={MadLook},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/MadLook/arabic-whisper-multidialect-processed-small}
}
```

## License

Apache 2.0

## Contact

For questions or issues with this dataset, please open an issue on the dataset repository.

---

**Dataset Version**: 1.0
**Last Updated**: 2025
**Preprocessor**: Whisper Feature Extractor (openai/whisper-small)
**Compatible Models**: openai/whisper-tiny, openai/whisper-base, openai/whisper-small, openai/whisper-medium, openai/whisper-large