Arabic Whisper Multi-Dialect - Processed (Small)
Dataset Description
This is a preprocessed version of the Arabic multi-dialect speech dataset, ready for fine-tuning OpenAI's Whisper models. The dataset contains audio features extracted and formatted specifically for Whisper training.
- Size: 40% subset of the full arabic-whisper-multidialect dataset
- Total Examples: 43,091 samples
- Format: Pre-computed Whisper input features (mel spectrograms) and tokenized labels
- Purpose: Direct use with Whisper training pipelines without additional preprocessing
Key Features
✅ Pre-processed - Audio already converted to Whisper input features
✅ Multi-dialect - Covers various Arabic dialects
✅ Training-ready - No additional feature extraction needed
✅ Memory-efficient - Optimized for faster loading during training
Dataset Structure
Data Splits
| Split | Examples | Size (GB) |
|---|---|---|
| Train | 37,835 | 33.8 |
| Validation | 2,628 | 2.35 |
| Test | 2,628 | 2.35 |
| Total | 43,091 | 38.5 |
Data Fields
input_features: Sequence[Sequence[float32]]
- Shape: (80, 3000) - 80 mel-frequency bins × 3000 time steps
- Pre-computed log-mel spectrogram features for Whisper

labels: Sequence[int64]
- Tokenized Arabic transcription using Whisper's tokenizer
- Special tokens: -100 for padding (ignored in loss computation)
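For orientation, the shapes above can be checked on a single example. The sample below is synthetic (real values come from the dataset); the token IDs shown are Whisper's special tokens as they appear in the data: 50258 (`<|startoftranscript|>`), 50363 (`<|notimestamps|>`), and 50257 (`<|endoftext|>`).

```python
# Synthetic example mirroring the dataset's field layout.
example = {
    "input_features": [[0.0] * 3000 for _ in range(80)],  # 80 mel bins x 3000 frames
    "labels": [50258, 50363, 995, 50257],  # <|startoftranscript|> <|notimestamps|> ... <|endoftext|>
}

n_mels = len(example["input_features"])
n_frames = len(example["input_features"][0])
print(n_mels, n_frames)  # 80 3000
```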
Usage
Quick Start
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("MadLook/arabic-whisper-multidialect-processed-small")

print(dataset)
# DatasetDict({
#     train: Dataset with 37,835 examples
#     validation: Dataset with 2,628 examples
#     test: Dataset with 2,628 examples
# })
```
Loading a Single Split
```python
# Load only training data
train_data = load_dataset("MadLook/arabic-whisper-multidialect-processed-small", split="train")

# Load with streaming for large datasets
train_stream = load_dataset("MadLook/arabic-whisper-multidialect-processed-small", split="train", streaming=True)
```
Preprocessing Details
This dataset was created from the original Arabic multi-dialect dataset using the following preprocessing pipeline:
- Audio Loading: Resampled to 16kHz mono audio
- Feature Extraction: Converted to 80-channel log-mel spectrograms using Whisper's feature extractor
- Tokenization: Arabic text transcriptions tokenized with Whisper's multilingual tokenizer
- Normalization: Applied Whisper's standard audio normalization
- Subset Selection: Selected 40% of the original dataset
Processing Code
```python
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small", language="ar", task="transcribe")

def prepare_dataset(batch):
    # Load and resample audio
    audio = batch["audio"]

    # Compute input features
    batch["input_features"] = processor.feature_extractor(
        audio["array"],
        sampling_rate=audio["sampling_rate"]
    ).input_features[0]

    # Tokenize transcription
    batch["labels"] = processor.tokenizer(batch["transcription"]).input_ids

    return batch
```
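Note that the code above does not pad the labels; the -100 padding mentioned under Data Fields is normally applied at batch time by a data collator. A minimal standalone sketch of that step (the real training pipelines typically wrap this in a collator class; this plain-Python version is for illustration only):

```python
# Pad each label sequence in a batch to the batch maximum with -100,
# so that padded positions are ignored by the cross-entropy loss.
def pad_labels(label_batch, pad_id=-100):
    max_len = max(len(labels) for labels in label_batch)
    return [labels + [pad_id] * (max_len - len(labels)) for labels in label_batch]

batch = [[50258, 50363, 995, 50257], [50258, 50363, 50257]]
padded = pad_labels(batch)
print(padded[1])  # [50258, 50363, 50257, -100]
```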
Source Dataset
This is a processed subset of the Arabic multi-dialect speech recognition dataset, which includes recordings from various Arabic-speaking regions and dialects.
Intended Use
Primary Use Cases
- Fine-tuning Whisper models for Arabic speech recognition
- Research on multi-dialect Arabic ASR
- Benchmarking Arabic speech recognition systems
- Transfer learning for low-resource Arabic dialects
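As a starting point for the first use case, a sketch of a fine-tuning configuration is shown below. The hyperparameter values are illustrative only, not tuned for this dataset; adjust them for your hardware and goals.

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative hyperparameters; not tuned for this dataset.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ar",
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=4000,
    fp16=True,
    predict_with_generate=True,
)
```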
Out-of-Scope Use
- This dataset is already preprocessed - do not apply feature extraction again
- Not suitable for training non-Whisper architectures without re-processing
Limitations
- Only 40% of the full dataset (subset for faster experimentation)
- Pre-computed features are specific to Whisper architecture
- Fixed audio length (30 seconds max due to Whisper constraints)
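The fixed 30-second length is also where the 3000 time steps in the feature shape come from: Whisper's feature extractor operates on 16 kHz audio with a hop length of 160 samples (10 ms), so a 30-second clip yields 30 × 16000 / 160 = 3000 spectrogram frames.

```python
# Frame count for a fixed 30-second Whisper input.
sampling_rate = 16_000  # Hz
hop_length = 160        # samples per hop (10 ms)
clip_seconds = 30

n_frames = clip_seconds * sampling_rate // hop_length
print(n_frames)  # 3000
```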
Citation
If you use this dataset, please cite both this preprocessed version and the original source dataset:
```bibtex
@dataset{arabic_whisper_multidialect_processed,
  title={Arabic Whisper Multi-Dialect - Processed (Small)},
  author={MadLook},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/MadLook/arabic-whisper-multidialect-processed-small}
}
```
License
Apache 2.0
Contact
For questions or issues with this dataset, please open an issue on the dataset repository.
Dataset Version: 1.0
Last Updated: 2025
Preprocessor: Whisper Feature Extractor (openai/whisper-small)
Compatible Models: openai/whisper-tiny, openai/whisper-base, openai/whisper-small, openai/whisper-medium, openai/whisper-large