Urdu-aud0 Dataset
Dataset Summary
The Urdu-aud0 dataset is a high-quality collection of synthetic Urdu audio samples paired with transcripts, designed for Text-to-Speech (TTS) model training, evaluation, and fine-tuning. The dataset contains approximately 45,000 audio clips generated using OpenAI's Audio API with the "dan" voice model.
Each entry includes:
- High-quality audio in WAV format (22,050 Hz, mono, 16-bit)
- Corresponding Urdu text transcripts
- Generation timestamps
- Voice model identifier
With approximately 147 hours (98 × 1.5 hours) of total audio duration, this substantial dataset addresses the scarcity of Urdu speech data, making it valuable for developing speech technologies in this low-resource language. The audio has been validated for quality, with corrupted entries removed and proper WAV headers added during processing.
Supported Tasks
Primary Tasks
- Text-to-Speech (TTS): Train or fine-tune TTS models to generate natural-sounding Urdu speech
- Automatic Speech Recognition (ASR): Train or evaluate ASR models on Urdu audio-transcript pairs
- Speech Synthesis Evaluation: Benchmark using metrics like Mean Opinion Score (MOS) or Word Error Rate (WER)
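For quick WER evaluation without pulling in a dedicated toolkit, a word-level edit distance is enough. The sketch below is a minimal, self-contained implementation (not tied to any particular ASR library); the example strings are illustrative:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate via word-level Levenshtein (edit) distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One deleted word out of five -> WER of 0.2
print(wer("یہ ایک نمونہ جملہ ہے", "یہ ایک جملہ ہے"))
```

Libraries such as `jiwer` offer the same metric with normalization options; the hand-rolled version above just keeps the benchmark reproducible without extra dependencies.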
Secondary Tasks
- Language Modeling: Urdu text transcripts can support language model training
- Prosody Analysis: Study speech patterns in synthetic Urdu audio
- Voice Cloning Research: Reference data for Urdu voice synthesis
- Cross-voice Model Comparison: Compare with other voice models (e.g., "sage" in Urdu-aud01)
Languages
- Primary Language: Urdu (ur)
- Script: Urdu script (Nastaliq/Arabic-based)
- Variety: Modern Standard Urdu
- Audio Source: OpenAI Audio API (voice: "dan")
All transcripts are in Urdu script, and audio follows standard Urdu pronunciation patterns.
Dataset Structure
Data Instances
Each instance represents a single audio clip with metadata:
```json
{
  "id": 1,
  "audio": {
    "path": null,
    "array": null,
    "sampling_rate": 22050
  },
  "transcript": "\"یہ ایک نمونہ جملہ ہے۔\"",
  "voice": "dan",
  "text": "یہ ایک نمونہ جملہ ہے۔",
  "timestamp": "2025-12-01T10:30:45.123456",
  "error": null
}
```
Data Fields
| Field | Type | Description |
|---|---|---|
| `id` | int64 | Unique identifier for the audio sample |
| `audio` | audio | Audio data dictionary with bytes (WAV format) at 22,050 Hz sampling rate |
| `transcript` | string | Quoted version of the transcript text |
| `voice` | string | Voice model used (OpenAI "dan" voice) |
| `text` | string | Clean, unquoted Urdu transcript |
| `timestamp` | string | ISO-formatted generation timestamp |
| `error` | string | Error message if any (mostly null) |
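The `transcript` field is simply the `text` value wrapped in literal quotation marks. A minimal sketch of recovering one from the other (assuming straight double quotes, as in the sample instance above):

```python
# The transcript field stores the clean text wrapped in literal double quotes
transcript = "\"یہ ایک نمونہ جملہ ہے۔\""

# Stripping the surrounding quotes recovers the text field's value
text = transcript.strip('"')
print(text)
```

For training, the unquoted `text` field is usually the right input; `transcript` is kept for provenance.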
Audio Specifications
- Format: WAV (RIFF header)
- Sampling Rate: 22,050 Hz
- Channels: Mono (1)
- Bit Depth: 16-bit PCM
- Minimum Duration: ~0.1 seconds per clip
- Total Duration: Approximately 147 hours (98 × 1.5 hours)
- Average Duration: ~11.76 seconds per clip (147 hours ÷ 45,000 clips)
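These specifications can be verified programmatically with Python's standard `wave` module. The sketch below builds a tiny silent clip matching the stated format and reads its parameters back (the helper name `inspect_wav` is ours, not part of the dataset tooling):

```python
import io
import wave

def inspect_wav(wav_bytes: bytes) -> dict:
    """Read format parameters from a WAV byte string."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as wf:
        return {
            "sampling_rate": wf.getframerate(),
            "channels": wf.getnchannels(),
            "bit_depth": wf.getsampwidth() * 8,
            "duration_s": wf.getnframes() / wf.getframerate(),
        }

# Build 0.1 s of silence matching the dataset spec (22,050 Hz, mono, 16-bit)
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)          # 2 bytes = 16-bit PCM
    wf.setframerate(22050)
    wf.writeframes(b"\x00\x00" * 2205)

info = inspect_wav(buf.getvalue())
print(info)
```

The same `inspect_wav` call can be run over the dataset's `audio` bytes to confirm each clip meets the 22,050 Hz mono 16-bit specification.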
Data Splits
- Train: ~45,000 samples (full dataset)
- No predefined validation or test splits
Recommendation: Create custom splits (e.g., 80/10/10 for train/validation/test) based on your specific requirements.
Dataset Creation
Curation Rationale
This dataset was created to address the critical shortage of high-quality Urdu speech data for TTS and ASR applications. By leveraging OpenAI's Audio API with the "dan" voice model, we generated consistent, high-quality synthetic speech that can serve as training data for various speech processing tasks.
Key objectives:
- Provide reliable Urdu audio-text pairs for model training
- Ensure data quality through rigorous validation
- Support development of Urdu language technologies
- Enable research in low-resource language speech processing
- Offer voice diversity (complement to "sage" voice in Urdu-aud01)
Source Data
Generation Method: Synthetic audio generated using OpenAI's Audio API (voice model: "dan")
Processing Pipeline:
- Source data loaded from the humair025/aud0 repository
- Audio validation (minimum length, byte count verification)
- WAV header addition to raw PCM data
- Corrupted entries removed
- Batching by size (1.5-2GB per batch)
- Upload to Hugging Face Hub
Processing Script: Custom Python pipeline using:
- Hugging Face Datasets library
- Pandas for data manipulation
- Hugging Face Hub API for uploads
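The "WAV header addition" step amounts to prepending a standard 44-byte RIFF header to the raw PCM bytes. The pipeline's actual code is not published here; this is a minimal sketch assuming 16-bit little-endian mono PCM at 22,050 Hz:

```python
import struct

def add_wav_header(pcm: bytes, sample_rate: int = 22050,
                   channels: int = 1, bits: int = 16) -> bytes:
    """Prepend a minimal 44-byte RIFF/WAVE header to raw PCM data."""
    byte_rate = sample_rate * channels * bits // 8
    block_align = channels * bits // 8
    header = struct.pack(
        "<4sI4s4sIHHIIHH4sI",
        b"RIFF", 36 + len(pcm), b"WAVE",   # RIFF chunk
        b"fmt ", 16, 1,                    # fmt chunk, PCM format (1)
        channels, sample_rate, byte_rate, block_align, bits,
        b"data", len(pcm),                 # data chunk
    )
    return header + pcm

# 0.1 s of silent 16-bit mono PCM gains a 44-byte header
wav = add_wav_header(b"\x00\x00" * 2205)
print(len(wav))
```

Any WAV reader (including the `wave` module) will then accept the resulting bytes as a well-formed file.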
Annotations
Annotation Type: Automated during TTS generation
Annotators: OpenAI Audio API system (no human annotation)
Quality Assurance:
- Automated validation removes audio files smaller than 0.1 seconds
- Even byte count verification for PCM data
- WAV header verification after conversion
- Integrity checks during batching process
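The checks above can be sketched as simple predicates. The constants and function names below are illustrative (the pipeline's actual validation code is not published), but they implement the same rules: even byte count for 16-bit PCM, a minimum duration of 0.1 seconds, and RIFF/WAVE magic bytes after header addition:

```python
SAMPLE_RATE = 22050
BYTES_PER_SAMPLE = 2   # 16-bit mono PCM
MIN_DURATION_S = 0.1

def is_valid_pcm(pcm: bytes) -> bool:
    """Even byte count and minimum-duration checks on raw PCM."""
    if len(pcm) % 2 != 0:  # 16-bit samples require an even byte count
        return False
    duration = len(pcm) / (SAMPLE_RATE * BYTES_PER_SAMPLE)
    return duration >= MIN_DURATION_S

def has_wav_header(data: bytes) -> bool:
    """Verify RIFF/WAVE magic bytes after header addition."""
    return data[:4] == b"RIFF" and data[8:12] == b"WAVE"

print(is_valid_pcm(b"\x00\x00" * 2205))  # exactly 0.1 s passes
```

Entries failing either predicate were dropped before batching and upload.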
Usage
Loading the Dataset
```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("humairawan/Urdu-aud0")

# Access a sample
sample = dataset["train"][0]
print(f"Text: {sample['text']}")
print(f"Voice: {sample['voice']}")
print(f"Audio sampling rate: {sample['audio']['sampling_rate']}")
```
Creating Train/Val/Test Splits
```python
from datasets import load_dataset

dataset = load_dataset("humairawan/Urdu-aud0")

# Create 80/10/10 splits
train_test = dataset["train"].train_test_split(test_size=0.2, seed=42)
test_val = train_test["test"].train_test_split(test_size=0.5, seed=42)

splits = {
    "train": train_test["train"],      # ~36,000 samples
    "validation": test_val["train"],   # ~4,500 samples
    "test": test_val["test"],          # ~4,500 samples
}

print(f"Train: {len(splits['train'])} samples")
print(f"Validation: {len(splits['validation'])} samples")
print(f"Test: {len(splits['test'])} samples")
```
Example: Fine-tuning a TTS Model
```python
from datasets import load_dataset

# Load dataset
dataset = load_dataset("humairawan/Urdu-aud0", split="train")

# Prepare for training
def prepare_example(example):
    return {
        "text": example["text"],
        "audio": example["audio"]["array"],
        "sampling_rate": example["audio"]["sampling_rate"],
    }

prepared_dataset = dataset.map(prepare_example)
```
Combining Multiple Voice Models
```python
from datasets import load_dataset, concatenate_datasets

# Load datasets with different voices
aud0_dan = load_dataset("humairawan/Urdu-aud0", split="train")    # dan voice
aud01_sage = load_dataset("humairawan/Urdu-aud01", split="train") # sage voice

# Combine for multi-voice training
combined = concatenate_datasets([aud0_dan, aud01_sage])
print(f"Combined dataset: {len(combined)} samples")
```
Considerations for Using the Data
Social Impact
Positive Impacts:
- Enhances accessibility for Urdu speakers through voice assistants and screen readers
- Supports AI development in underrepresented languages
- Enables educational tools for Urdu language learning
- Facilitates content creation in Urdu
- Provides a substantial amount of training data (~147 hours)
Potential Concerns:
- Synthetic data may not capture the full range of human speech variation
- Limited to single voice model "dan" (may not represent dialectal diversity)
- Should be supplemented with natural speech data for production systems
Bias Considerations
Voice Diversity: All audio generated using a single voice model ("dan"), which limits:
- Accent representation
- Gender diversity (male-leaning voice characteristics)
- Age range variation
- Emotional expression variety
Content Bias: Text content reflects the source material and may have:
- Topic skew towards certain domains
- Formality bias in language use
- Potential cultural or regional biases
Recommendations:
- Combine with natural speech datasets when possible
- Use alongside Urdu-aud01 (sage voice) for voice diversity
- Be aware of synthetic nature when deploying in real-world applications
- Test models on diverse natural speech samples before production deployment
Known Limitations
- Synthetic Nature: Does not capture real-world acoustic conditions, background noise, or natural prosody variations
- Single Voice: Limited diversity in voice characteristics (all "dan" voice)
- Language Variety: Focuses on Modern Standard Urdu; may not represent all dialects
- No Speaker Metadata: No demographic information about hypothetical speakers
- Error Field: Some entries may have error flags (check before use)
- Voice Gender: "Dan" voice has male-leaning characteristics, limiting gender representation
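To act on the error-field caveat above, entries with a non-null `error` can be filtered out before training. A minimal sketch over hypothetical rows mirroring the card's schema (the values are illustrative, not actual dataset entries):

```python
# Hypothetical rows following the card's schema; in practice these
# come from load_dataset("humairawan/Urdu-aud0", split="train")
rows = [
    {"id": 1, "text": "یہ ایک نمونہ جملہ ہے۔", "error": None},
    {"id": 2, "text": "", "error": "empty audio"},
]

# Keep only entries whose error field is null
clean = [r for r in rows if r["error"] is None]
print(len(clean))
```

With the loaded dataset object, the equivalent one-liner is `dataset.filter(lambda ex: ex["error"] is None)`.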
Ethical Considerations
- This dataset uses synthetic audio generated by OpenAI's API, which is subject to OpenAI's terms of service
- Users should be transparent about using synthetic data when deploying models
- For applications requiring natural speech, supplement with human-recorded data
- Consider privacy implications when generating similar synthetic datasets
- Be mindful of potential bias towards male voice characteristics
Additional Information
Dataset Curators
- Primary Curator: Humair Munir (@humairawan)
- Original Source: humair025 (@humair025)
Licensing Information
License: Apache 2.0
This dataset is released under the Apache License 2.0, which permits:
- Commercial use
- Modification
- Distribution
- Private use
See the LICENSE file for full terms.
Note: Audio generated using OpenAI's Audio API is subject to OpenAI's terms of service and usage policies.
Citation Information
If you use this dataset in your research or projects, please cite it as:
```bibtex
@dataset{humairmunir_urdu_aud0_2025,
  author       = {Humair Munir},
  title        = {Urdu-aud0: Synthetic Urdu TTS Audio Dataset with Dan Voice},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/humairawan/Urdu-aud0}},
  note         = {Audio generated using OpenAI Audio API (dan voice), 147 hours}
}
```
Acknowledgments
- OpenAI for providing the Audio API used to generate this dataset
- The Hugging Face team for hosting infrastructure and tools
- The original data curators and processors
- The Urdu language community for supporting low-resource language development
Contributions
We welcome contributions to improve this dataset:
- Issue Reports: Report problems or suggest improvements via GitHub Issues
- Pull Requests: Submit improvements to processing scripts or documentation
- Extensions: Consider creating complementary datasets (e.g., different voice models, domains)
Contact: For questions or collaboration, reach out through Hugging Face discussions or create an issue in the repository.
Version History
- v1.0 (December 2025): Initial release with ~45,000 validated audio samples
- Audio source: OpenAI Audio API (dan voice)
- Format: WAV, 22,050 Hz, mono, 16-bit
- Total duration: ~147 hours
- Processing: Validation, WAV header addition, batching
Related Resources
- Urdu-aud01 Dataset: Companion dataset with "sage" voice (~59,000 samples)
Dataset Statistics
| Metric | Value |
|---|---|
| Total Samples | ~45,000 |
| Voice Model | dan (OpenAI) |
| Total Duration | ~147 hours |
| Average Clip Length | ~11.76 seconds |
| Sampling Rate | 22,050 Hz |
| Audio Format | WAV (16-bit PCM mono) |
| Language | Urdu (ur) |
| License | Apache 2.0 |
Last Updated: December 2025