
Urdu-aud01 Dataset

Dataset Summary

The Urdu-aud01 dataset is a high-quality collection of synthetic Urdu audio samples paired with transcripts, designed for Text-to-Speech (TTS) model training, evaluation, and fine-tuning. The dataset contains approximately 59,000 audio clips generated using OpenAI's Audio API with the "sage" voice model.

Each entry includes:

  • High-quality audio in WAV format (22,050 Hz, mono, 16-bit)
  • Corresponding Urdu text transcripts
  • Generation timestamps
  • Voice model identifier

This dataset addresses the scarcity of Urdu speech data, making it valuable for developing speech technologies in this low-resource language. The audio has been validated for quality, with corrupted entries removed and proper WAV headers added during processing.

Supported Tasks

Primary Tasks

  • Text-to-Speech (TTS): Train or fine-tune TTS models to generate natural-sounding Urdu speech
  • Automatic Speech Recognition (ASR): Train or evaluate ASR models on Urdu audio-transcript pairs
  • Speech Synthesis Evaluation: Benchmark using metrics like Mean Opinion Score (MOS) or Word Error Rate (WER)
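For the evaluation task above, Word Error Rate can be computed without any external dependency. The sketch below implements the standard WER definition (word-level edit distance divided by reference length); it is an illustrative helper, not part of this dataset's tooling.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance between the two word sequences
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

Libraries such as `jiwer` provide the same metric with additional normalization options; the hand-rolled version keeps the example self-contained.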

Secondary Tasks

  • Language Modeling: Urdu text transcripts can support language model training
  • Prosody Analysis: Study speech patterns in synthetic Urdu audio
  • Voice Cloning Research: Reference data for Urdu voice synthesis

Languages

  • Primary Language: Urdu (ur)
  • Script: Urdu script (Nastaliq/Arabic-based)
  • Variety: Modern Standard Urdu
  • Audio Source: OpenAI Audio API (voice: "sage")

All transcripts are in Urdu script, and audio follows standard Urdu pronunciation patterns.

Dataset Structure

Data Instances

Each instance represents a single audio clip with metadata:

{
  "id": 50001,
  "audio": {
    "path": null,
    "array": null,
    "sampling_rate": 22050
  },
  "transcript": "\"جب انسان کھیتی نہیں کرے گا، تو وہ کھائے گا کیا، اور پھر وہ زندہ کیسے رہے گا۔\"",
  "voice": "sage",
  "text": "جب انسان کھیتی نہیں کرے گا، تو وہ کھائے گا کیا، اور پھر وہ زندہ کیسے رہے گا۔",
  "timestamp": "2025-12-01T20:46:59.484163",
  "error": null
}

Data Fields

  • id (int64): Unique identifier for the audio sample
  • audio (Audio): Audio data dictionary with bytes (WAV format) at a 22,050 Hz sampling rate
  • transcript (string): Quoted version of the transcript text
  • voice (string): Voice model used (OpenAI "sage" voice)
  • text (string): Clean, unquoted Urdu transcript
  • timestamp (string): ISO-formatted generation timestamp
  • error (string): Error message, if any (null for valid entries)

Audio Specifications

  • Format: WAV (RIFF header)
  • Sampling Rate: 22,050 Hz
  • Channels: Mono (1)
  • Bit Depth: 16-bit PCM
  • Minimum Duration: ~0.1 seconds per clip
  • Total Duration: an estimated 130-170 hours
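The specifications above can be verified against a clip's WAV header using only the Python standard library. The sketch below reads the header fields and, for demonstration, first synthesizes one second of silence with the stated parameters (the dataset's real clips would be checked the same way).

```python
import io
import wave

def check_specs(wav_bytes: bytes) -> dict:
    """Read a WAV file's header and return its key parameters."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as w:
        return {
            "sampling_rate": w.getframerate(),
            "channels": w.getnchannels(),
            "bit_depth": w.getsampwidth() * 8,
            "duration_s": w.getnframes() / w.getframerate(),
        }

# Demo input: one second of silence matching the dataset's stated specs
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)         # mono
    w.setsampwidth(2)         # 16-bit PCM
    w.setframerate(22050)     # 22,050 Hz
    w.writeframes(b"\x00\x00" * 22050)

specs = check_specs(buf.getvalue())
```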

Data Splits

  • Train: ~59,000 samples (full dataset)
  • No predefined validation or test splits

Recommendation: Create custom splits (e.g., 80/10/10 for train/validation/test) based on your specific requirements.

Dataset Creation

Curation Rationale

This dataset was created to address the critical shortage of high-quality Urdu speech data for TTS and ASR applications. By leveraging OpenAI's Audio API, we generated consistent, high-quality synthetic speech that can serve as training data for various speech processing tasks.

Key objectives:

  • Provide reliable Urdu audio-text pairs for model training
  • Ensure data quality through rigorous validation
  • Support development of Urdu language technologies
  • Enable research in low-resource language speech processing

Source Data

Generation Method: Synthetic audio generated using OpenAI's Audio API (voice model: "sage")

Processing Pipeline:

  1. Source data loaded from humair025/aud01 repository
  2. Audio validation (minimum length, byte count verification)
  3. WAV header addition to raw PCM data
  4. Corrupted entries removed
  5. Batching by size (1.5-2GB per batch)
  6. Upload to Hugging Face Hub
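The original pipeline script is not included in this repository, but step 3 (wrapping raw PCM in a WAV container) can be sketched with the standard library's wave module; the parameter defaults below assume the dataset's stated audio specs.

```python
import io
import wave

def add_wav_header(pcm: bytes, rate: int = 22050, channels: int = 1,
                   sample_width: int = 2) -> bytes:
    """Wrap raw 16-bit PCM bytes in a RIFF/WAV container."""
    if len(pcm) % (sample_width * channels) != 0:
        raise ValueError("PCM byte count is not a whole number of frames")
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(channels)
        w.setsampwidth(sample_width)
        w.setframerate(rate)
        w.writeframes(pcm)      # wave writes the RIFF header for us
    return buf.getvalue()

wav = add_wav_header(b"\x00\x00" * 100)
```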

Processing Script: Custom Python pipeline using:

  • Hugging Face Datasets library
  • Pandas for data manipulation
  • Hugging Face Hub API for uploads

Annotations

Annotation Type: Automated during TTS generation

Annotators: OpenAI Audio API system (no human annotation)

Quality Assurance:

  • Automated validation removes audio files smaller than 0.1 seconds
  • PCM data verified to have an even byte count (each 16-bit sample occupies two bytes)
  • WAV header verification after conversion
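The first two checks reduce to a simple predicate on the raw PCM bytes: at 22,050 Hz, 16-bit mono, 0.1 seconds corresponds to 4,410 bytes. This is a minimal reconstruction of the validation logic, not the original script.

```python
MIN_SECONDS = 0.1
RATE = 22050
BYTES_PER_SAMPLE = 2  # 16-bit mono

def is_valid_pcm(pcm: bytes) -> bool:
    """Minimum-duration and even-byte-count checks on raw PCM data."""
    if len(pcm) % 2 != 0:   # 16-bit samples must come in whole pairs of bytes
        return False
    min_bytes = int(MIN_SECONDS * RATE) * BYTES_PER_SAMPLE  # 4410 bytes
    return len(pcm) >= min_bytes
```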

Usage

Loading the Dataset

from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("humairawan/Urdu-aud01")

# Access a sample
sample = dataset["train"][0]
print(f"Text: {sample['text']}")
print(f"Audio sampling rate: {sample['audio']['sampling_rate']}")

Creating Train/Val/Test Splits

from datasets import load_dataset

dataset = load_dataset("humairawan/Urdu-aud01")

# Create 80/10/10 splits
train_test = dataset["train"].train_test_split(test_size=0.2, seed=42)
test_val = train_test["test"].train_test_split(test_size=0.5, seed=42)

splits = {
    "train": train_test["train"],
    "validation": test_val["train"],
    "test": test_val["test"]
}

Example: Fine-tuning a TTS Model

from datasets import load_dataset

# Load dataset
dataset = load_dataset("humairawan/Urdu-aud01", split="train")

# Prepare for training
def prepare_example(example):
    return {
        "text": example["text"],
        "audio": example["audio"]["array"],
        "sampling_rate": example["audio"]["sampling_rate"]
    }

prepared_dataset = dataset.map(prepare_example)

Considerations for Using the Data

Social Impact

Positive Impacts:

  • Enhances accessibility for Urdu speakers through voice assistants and screen readers
  • Supports AI development in underrepresented languages
  • Enables educational tools for Urdu language learning
  • Facilitates content creation in Urdu

Potential Concerns:

  • Synthetic data may not capture full range of human speech variation
  • Limited to single voice model (may not represent dialectal diversity)
  • Should be supplemented with natural speech data for production systems

Bias Considerations

Voice Diversity: All audio generated using a single voice model ("sage"), which limits:

  • Accent representation
  • Gender diversity
  • Age range variation
  • Emotional expression variety

Content Bias: Text content reflects the source material and may have:

  • Topic skew towards certain domains
  • Formality bias in language use
  • Potential cultural or regional biases

Recommendations:

  • Combine with natural speech datasets when possible
  • Be aware of synthetic nature when deploying in real-world applications
  • Test models on diverse natural speech samples before production deployment

Known Limitations

  1. Synthetic Nature: Does not capture real-world acoustic conditions, background noise, or natural prosody variations
  2. Single Voice: Limited diversity in voice characteristics
  3. Language Variety: Focuses on Modern Standard Urdu; may not represent all dialects
  4. No Speaker Metadata: No demographic information about hypothetical speakers
  5. Error Field: Some entries may have error flags (check before use)
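Limitation 5 can be handled with a small predicate before training. The sketch below shows the check on plain dicts; with the Hugging Face Datasets library the same predicate plugs into `dataset.filter` (the sample records are hypothetical).

```python
def has_no_error(example: dict) -> bool:
    """Keep only entries whose error field is empty."""
    return example.get("error") in (None, "")

# With Hugging Face Datasets:
#   clean = dataset.filter(has_no_error)
records = [
    {"id": 1, "error": None},
    {"id": 2, "error": "generation failed"},
]
clean = [r for r in records if has_no_error(r)]
```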

Ethical Considerations

  • This dataset uses synthetic audio generated by OpenAI's API, which is subject to OpenAI's terms of service
  • Users should be transparent about using synthetic data when deploying models
  • For applications requiring natural speech, supplement with human-recorded data
  • Consider privacy implications when generating similar synthetic datasets

Additional Information

Dataset Curators

Curated by Humair Munir (humairawan on Hugging Face).

Licensing Information

License: Apache 2.0

This dataset is released under the Apache License 2.0, which permits:

  • Commercial use
  • Modification
  • Distribution
  • Private use

See the LICENSE file for full terms.

Note: Audio generated using OpenAI's Audio API is subject to OpenAI's terms of service and usage policies.

Citation Information

If you use this dataset in your research or projects, please cite it as:

@dataset{humairmunir_urdu_aud01_2025,
  author = {Humair Munir},
  title = {Urdu-aud01: Synthetic Urdu TTS Audio Dataset},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/humairawan/Urdu-aud01}},
  note = {Audio generated using OpenAI Audio API}
}

Acknowledgments

  • OpenAI for providing the Audio API used to generate this dataset
  • The Hugging Face team for hosting infrastructure and tools
  • The original data curators and processors
  • The Urdu language community for supporting low-resource language development

Contributions

We welcome contributions to improve this dataset:

  • Issue Reports: Report problems or suggest improvements via GitHub Issues
  • Pull Requests: Submit improvements to processing scripts or documentation
  • Extensions: Consider creating complementary datasets (e.g., different voice models, domains)

Contact: For questions or collaboration, reach out through Hugging Face discussions or create an issue in the repository.

Version History

  • v1.0 (December 2025): Initial release with ~59,000 validated audio samples
    • Audio source: OpenAI Audio API (sage voice)
    • Format: WAV, 22,050 Hz, mono, 16-bit
    • Processing: Validation, WAV header addition, batching

Last Updated: December 2025
