---
license: cc-by-4.0
task_categories:
- text-to-speech
- automatic-speech-recognition
- audio-to-audio
language:
- ur
tags:
- Urdu
- TTS
- LargeScaleDataset
pretty_name: Munch
---
# Munch - Large-Scale Urdu Text-to-Speech Dataset

[Dataset](https://huggingface.co/datasets/humair025/Munch)
[Hashed Index](https://huggingface.co/datasets/humair025/hashed_data)

## Dataset Description

**Munch** is a large-scale Urdu Text-to-Speech (TTS) dataset containing high-quality audio recordings paired with Urdu text transcripts. The dataset features multiple voice variations and natural pronunciation patterns suitable for training and evaluating Urdu TTS models.

### Rough Assumption

4.17 million audio clips, at roughly 10 seconds each, total about 11,500+ hours of audio.

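As a quick sanity check on that estimate (the 10-second average is an assumption, not a measured statistic):

```python
# Back-of-envelope estimate of total audio duration
clips = 4_167_500   # total rows in the dataset
avg_seconds = 10    # assumed average clip length
print(f"{clips * avg_seconds / 3600:,.0f} hours")  # ~11,576 hours
```
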
### Key Features

- **13 Different Voices**: alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan
- **Natural Urdu Pronunciation**: Proper handling of Urdu script, punctuation, and intonation
- **Large Scale**: 4,167,500 audio-text pairs
- **High-Quality Audio**: PCM16 format, 22,050 Hz sample rate
- **Efficient Storage**: Parquet format with compression
- **Lightweight Index Available**: [Hashed index](https://huggingface.co/datasets/humair025/hashed_data) for exploration without downloading the full dataset

### Dataset Statistics

| Metric | Value |
|--------|-------|
| Total Size | 1.27 TB |
| Total Rows | 4,167,500 |
| Number of Files | ~8,300 parquet files |
| Audio Format | PCM16 (raw audio bytes) |
| Sample Rate | 22,050 Hz |
| Bit Depth | 16-bit signed integer |
| Text Language | Urdu (with occasional mixed language) |
| Voice Count | 13 unique voices |
| Audio Size Range | ~50 KB to 5 MB per sample |
| Avg Duration | ~6-14 seconds per sample (estimated) |
| Total Duration | ~7,500-15,800 hours of audio |

### Companion Dataset

For efficient exploration without downloading the full 1.27 TB dataset, use the [**Munch Hashed Index**](https://huggingface.co/datasets/humair025/hashed_data):

- Contains all metadata plus SHA-256 hashes of the audio
- Only ~1 GB (99.92% smaller)
- Search 4.17M records in seconds
- Selectively download only what you need

### Related Datasets

- **This Dataset (v1)**: [humair025/Munch](https://huggingface.co/datasets/humair025/Munch) - 1.27 TB, 4.17M samples
- **Munch-1 (v2)**: [humair025/munch-1](https://huggingface.co/datasets/humair025/munch-1) - 3.28 TB, 3.86M samples (newer version)
- **Hashed Index (v1)**: [humair025/hashed_data](https://huggingface.co/datasets/humair025/hashed_data) - Index for this dataset
- **Hashed Index (v2)**: [humair025/hashed_data_munch_1](https://huggingface.co/datasets/humair025/hashed_data_munch_1) - Index for Munch-1

---

## Quick Start

### Installation

```bash
pip install datasets pandas numpy scipy
```

### Basic Usage

```python
from datasets import load_dataset
import numpy as np
import io
from scipy.io import wavfile
import IPython.display as ipd

# Load a specific file
ds = load_dataset(
    "humair025/Munch",
    data_files="tts_data_20251203_125841_0a26c418.parquet",
    split="train"
)

# Helper function to wrap PCM16 bytes in an in-memory WAV file
def pcm16_bytes_to_wav(pcm_bytes, sample_rate=22050):
    audio_array = np.frombuffer(pcm_bytes, dtype=np.int16)
    wav_io = io.BytesIO()
    wavfile.write(wav_io, sample_rate, audio_array)
    wav_io.seek(0)
    return wav_io

# Play the first audio sample (pass the raw samples and rate directly)
row = ds[0]
audio_array = np.frombuffer(row['audio_bytes'], dtype=np.int16)
ipd.display(ipd.Audio(audio_array, rate=22050))

print(f"Text: {row['text']}")
print(f"Voice: {row['voice']}")
```

### Efficient Exploration (Recommended)

Instead of downloading the full 1.27 TB dataset, start with the hashed index:

```python
from datasets import load_dataset

# Load the lightweight index (~1 GB)
index_ds = load_dataset("humair025/hashed_data", split="train")
index_df = index_ds.to_pandas()

# Explore the dataset
print(f"Total samples: {len(index_df)}")
print(f"Voices: {index_df['voice'].unique()}")
print(f"Voice distribution:\n{index_df['voice'].value_counts()}")

# Find specific samples
ash_samples = index_df[index_df['voice'] == 'ash']
short_audio = index_df[index_df['audio_size_bytes'] < 40000]

# Download only the files that contain what you need
files_needed = ash_samples['parquet_file_name'].unique()[:10]
ds = load_dataset(
    "humair025/Munch",
    data_files=list(files_needed),
    split="train"
)
```

### Load Multiple Files

```python
# Load all files matching a wildcard pattern (here: everything from Dec 3rd)
ds = load_dataset(
    "humair025/Munch",
    data_files="tts_data_20251203_*.parquet",
    split="train"
)

print(f"Total samples: {len(ds)}")
```

### Batch Processing

```python
from datasets import load_dataset
from huggingface_hub import HfApi

# List every parquet file in the repository
api = HfApi()
files = api.list_repo_files(repo_id="humair025/Munch", repo_type="dataset")
parquet_files = [f for f in files if f.endswith('.parquet')]

print(f"Total files: {len(parquet_files)}")

# Load the first 20 files
batch = parquet_files[:20]
ds = load_dataset(
    "humair025/Munch",
    data_files=batch,
    split="train"
)
```

---

## Dataset Structure

### Data Fields

Each row in the dataset contains:

| Field | Type | Description |
|-------|------|-------------|
| `id` | int | Paragraph ID (sequential) |
| `text` | string | Original Urdu text |
| `transcript` | string | TTS transcript (may differ slightly from the input) |
| `voice` | string | Voice name used (e.g., "ash", "sage", "coral") |
| `audio_bytes` | bytes | Raw PCM16 audio data |
| `timestamp` | string | ISO-format timestamp of generation (nullable) |
| `error` | string | Error message if generation failed (nullable) |

### Example Row

```python
{
    'id': 42,
    'text': 'یہ ایک نمونہ متن ہے۔',
    'transcript': 'یہ ایک نمونہ متن ہے۔',
    'voice': 'ash',
    'audio_bytes': b'\x00\x01...',  # PCM16 bytes
    'timestamp': '2025-12-03T13:03:14.123456',
    'error': None
}
```

---

## Use Cases

### 1. **TTS Model Training**
Train Urdu text-to-speech models with diverse voice samples:
- Fine-tune existing TTS models
- Train voice cloning systems
- Develop multi-speaker TTS
- Create voice conversion models

### 2. **Speech Recognition**
Develop Urdu ASR systems:
- Train speech-to-text models
- Evaluate transcription accuracy
- Research Urdu phonetics
- Build pronunciation dictionaries

### 3. **Voice Research**
Study voice characteristics and patterns:
- Analyze voice similarity
- Research pronunciation patterns
- Study Urdu phonetics and prosody
- Compare voice quality metrics

### 4. **Audio Processing**
Develop audio processing pipelines:
- Audio enhancement
- Noise reduction
- Speech synthesis evaluation
- Audio quality assessment

### 5. **Linguistic Analysis**
Explore linguistic patterns:
- Text analysis and corpus linguistics
- Punctuation usage patterns
- Sentence structure analysis
- Code-switching research (Urdu-English)

---

## Advanced Usage

### Voice Distribution Analysis

```python
from datasets import load_dataset

# Using the hashed index (recommended)
index_ds = load_dataset("humair025/hashed_data", split="train")
index_df = index_ds.to_pandas()

# Count voice usage
voice_counts = index_df['voice'].value_counts()
print("Voice Distribution:")
for voice, count in voice_counts.items():
    percentage = (count / len(index_df)) * 100
    print(f"  {voice}: {count:,} samples ({percentage:.2f}%)")
```

### Audio Length Analysis

```python
# Using the hashed index: PCM16 mono is 2 bytes per sample at 22,050 Hz
avg_size = index_df['audio_size_bytes'].mean()
avg_duration = (avg_size / 2) / 22050  # bytes to seconds

print(f"Average audio size: {avg_size/1024:.2f} KB")
print(f"Average duration: {avg_duration:.2f} seconds")

# Duration distribution
durations = (index_df['audio_size_bytes'] / 2) / 22050
print(f"Min duration: {durations.min():.2f}s")
print(f"Max duration: {durations.max():.2f}s")
print(f"Median duration: {durations.median():.2f}s")
```

### Text Statistics

```python
# Text length analysis
text_lengths = index_df['text'].str.len()
word_counts = index_df['text'].str.split().str.len()

print(f"Average characters: {text_lengths.mean():.0f}")
print(f"Average words: {word_counts.mean():.0f}")
print(f"Longest text: {text_lengths.max()} characters")
```

### Duplicate Detection

```python
# Find duplicate audio using the SHA-256 hashes in the index
duplicates = index_df[index_df.duplicated(subset=['audio_bytes_hash'], keep=False)]

if len(duplicates) > 0:
    print(f"Found {len(duplicates):,} duplicate rows")
    print(f"Unique audio: {index_df['audio_bytes_hash'].nunique():,}")
    redundancy = (1 - index_df['audio_bytes_hash'].nunique() / len(index_df)) * 100
    print(f"Redundancy: {redundancy:.2f}%")
else:
    print("No duplicates found!")
```

### Export to WAV Files

```python
import os
from datasets import load_dataset
from tqdm import tqdm

# Load specific samples
ds = load_dataset(
    "humair025/Munch",
    data_files="tts_data_20251203_*.parquet",
    split="train"
)

os.makedirs("audio_files", exist_ok=True)

# Iterate over rows with ds.select() (slicing like ds[:100] returns
# a dict of columns, not rows)
for i, row in enumerate(tqdm(ds.select(range(100)))):  # first 100 samples
    wav_io = pcm16_bytes_to_wav(row['audio_bytes'])  # helper from Basic Usage
    filename = f"audio_files/sample_{i:04d}_{row['voice']}.wav"
    with open(filename, 'wb') as f:
        f.write(wav_io.read())
```

### Selective Download by Voice

```python
# Use the hashed index to find the relevant files
voice_of_interest = 'ash'
ash_files = index_df[index_df['voice'] == voice_of_interest]['parquet_file_name'].unique()

print(f"Files containing '{voice_of_interest}' voice: {len(ash_files)}")

# Download the first 10 files containing that voice
ds = load_dataset(
    "humair025/Munch",
    data_files=list(ash_files[:10]),
    split="train"
)

print(f"Loaded {len(ds)} samples")
```

---

## Dataset Creation

This dataset was generated using a high-performance parallel TTS pipeline with the following characteristics:

### Generation Pipeline

- **Concurrent Processing**: 10-20 parallel workers
- **Voice Rotation**: Sequential rotation through 13 voices
- **Quality Control**: Automatic retry with exponential backoff (see the sketch below)
- **Fault Tolerance**: Checkpoint-based resumption
- **Smart Batching**: Efficient 500-row batches
- **API**: OpenAI-compatible TTS endpoints

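The pipeline's source code is not included here; the snippet below is a minimal sketch of the retry-with-exponential-backoff pattern described above. The `synthesize` callable is a hypothetical stand-in for a request to an OpenAI-compatible TTS endpoint, not an actual API shipped with this dataset.

```python
import random
import time

def synthesize_with_retry(synthesize, text, voice, max_retries=5):
    """Retry a TTS call with exponential backoff and jitter.

    `synthesize` is a hypothetical callable returning raw PCM16 bytes;
    substitute whatever client your TTS endpoint provides.
    """
    for attempt in range(max_retries):
        try:
            return synthesize(text=text, voice=voice)
        except Exception as exc:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            delay = 2 ** attempt + random.uniform(0, 1)  # 1s, 2s, 4s, ... plus jitter
            print(f"Attempt {attempt + 1} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)
```
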
### Pipeline Features

- ✅ Natural Urdu pronunciation with proper intonation
- ✅ Punctuation-aware pausing (illustrated below):
  - `؟` (question mark): 400ms pause with higher pitch
  - `!` (exclamation): 300ms pause with emphasis
  - `،` (comma): 500ms pause
  - `۔` (full stop): 1000ms pause
- ✅ Mixed-language support for technical terms
- ✅ Variable pacing for natural flow
- ✅ Error handling and logging

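To make those pause durations concrete in terms of this dataset's audio format (PCM16 mono at 22,050 Hz), here is a small illustrative snippet, not the pipeline's actual code, that generates a pause of a given length as raw bytes:

```python
import numpy as np

SAMPLE_RATE = 22050  # Hz, per the dataset statistics

def make_pause(ms: int) -> bytes:
    """Return `ms` milliseconds of PCM16 silence at 22,050 Hz."""
    n_samples = int(SAMPLE_RATE * ms / 1000)
    return np.zeros(n_samples, dtype=np.int16).tobytes()

# A 500ms comma pause is 11,025 samples, i.e. 22,050 bytes of silence
print(len(make_pause(500)))  # 22050
```
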
---

## Important Notes

### Audio Format
- Audio is stored as **raw PCM16 bytes** (not WAV files)
- Must be converted before playback (see the examples above, or the stdlib sketch below)
- Sample rate: 22,050 Hz
- Bit depth: 16-bit signed integer
- Channels: Mono (1 channel)

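If you prefer to avoid the scipy dependency, the standard-library `wave` module can wrap the same bytes; a minimal sketch:

```python
import wave

def pcm16_to_wav_file(pcm_bytes: bytes, path: str, sample_rate: int = 22050) -> None:
    """Write raw PCM16 mono bytes into a WAV container using only the stdlib."""
    with wave.open(path, 'wb') as wf:
        wf.setnchannels(1)      # mono
        wf.setsampwidth(2)      # 16-bit = 2 bytes per sample
        wf.setframerate(sample_rate)
        wf.writeframes(pcm_bytes)

# Usage: pcm16_to_wav_file(row['audio_bytes'], 'sample.wav')
```
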
### Large Dataset Considerations
- **Size**: 1.27 TB total - download selectively
- **Files**: ~8,300 individual parquet files
- **Streaming**: Recommended for full dataset access
- **Batching**: Load files in batches to manage memory
- **Index First**: Use the [hashed index](https://huggingface.co/datasets/humair025/hashed_data) to explore before downloading

### Recommended Workflow

1. **Explore**: Load the [hashed index](https://huggingface.co/datasets/humair025/hashed_data) (~1 GB)
2. **Filter**: Find samples matching your criteria
3. **Download**: Selectively download only the parquet files you need
4. **Process**: Work with manageable subsets

### Potential Data Issues

⚠️ **Duplicates**: This dataset may contain duplicate audio samples. Use the hashed index for deduplication:

```python
# Keep only one row per unique audio hash
unique_df = index_df.drop_duplicates(subset=['audio_bytes_hash'], keep='first')
unique_files = unique_df['parquet_file_name'].unique()
```

⚠️ **Quality Variance**: Some samples may have:
- Low volume or clipping
- Mispronunciations (especially of rare words)
- Background noise
- Transcription differences from the input text

---

## Performance Tips

### Memory Management

```python
from datasets import load_dataset

# DON'T: load the entire dataset at once
# ds = load_dataset("humair025/Munch", split="train")  # 1.27 TB!

# DO: use streaming mode
ds = load_dataset(
    "humair025/Munch",
    data_files="tts_data_20251203_*.parquet",
    split="train",
    streaming=True  # stream data instead of loading it all into memory
)

# Process in batches of 100 samples
for i, batch in enumerate(ds.iter(batch_size=100)):
    if i >= 10:  # stop after the first 1,000 samples (10 batches of 100)
        break
    # ... process the 100-sample batch here ...
```

### Efficient File Selection

```python
# Select a specific date range
ds = load_dataset(
    "humair025/Munch",
    data_files="tts_data_20251203_*.parquet",  # only Dec 3rd files
    split="train"
)

# Or a specific time range
ds = load_dataset(
    "humair025/Munch",
    data_files="tts_data_20251203_1303*.parquet",  # around 13:03
    split="train"
)

# Or use the index to find specific files
target_files = index_df[index_df['voice'] == 'ash']['parquet_file_name'].unique()[:5]
ds = load_dataset("humair025/Munch", data_files=list(target_files), split="train")
```

### Storage Optimization

If storage is limited, consider:

1. Downloading only specific voices
2. Downloading in batches and processing incrementally (see the sketch below)
3. Using the hashed index for metadata-only analysis
4. Deleting processed files after feature extraction

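A minimal sketch of the batch-and-delete pattern (points 2 and 4), using `hf_hub_download` from `huggingface_hub`; `extract_features` is a hypothetical placeholder for your own processing step:

```python
import os
import pandas as pd
from huggingface_hub import hf_hub_download

def process_incrementally(file_names, extract_features):
    """Download one parquet file at a time, process it, then delete it."""
    for name in file_names:
        path = hf_hub_download(
            repo_id="humair025/Munch",
            filename=name,
            repo_type="dataset",
            local_dir="munch_tmp",  # materialize outside the HF cache so deleting frees space
        )
        df = pd.read_parquet(path)
        extract_features(df)  # hypothetical: your feature-extraction logic
        os.remove(path)       # free disk space before the next file
```
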
---

## Citation

If you use this dataset in your research, please cite:

### BibTeX

```bibtex
@dataset{munch_urdu_tts_2025,
  title={Munch: Large-Scale Urdu Text-to-Speech Dataset},
  author={Munir, Humair},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/humair025/Munch}},
  note={4.17M audio-text pairs across 13 voices}
}
```

### APA Format

```
Munir, H. (2025). Munch: Large-Scale Urdu Text-to-Speech Dataset [Dataset].
Hugging Face. https://huggingface.co/datasets/humair025/Munch
```

### MLA Format

```
Munir, Humair. "Munch: Large-Scale Urdu Text-to-Speech Dataset." Hugging Face, 2025,
https://huggingface.co/datasets/humair025/Munch.
```

---

## Contributing

Issues, suggestions, and contributions are welcome! Please:
- Report data quality issues
- Suggest improvements
- Share your use cases and research
- Contribute analysis scripts or tools

## License

This dataset is released under the **Creative Commons Attribution 4.0 International (CC-BY-4.0)** license.

You are free to:
- ✅ **Share** - copy and redistribute the material in any medium or format
- ✅ **Adapt** - remix, transform, and build upon the material for any purpose
- ✅ **Commercial use** - use the dataset for commercial purposes

Under the following terms:
- **Attribution** - You must give appropriate credit, provide a link to the license, and indicate if changes were made

---

## Important Links

- [**This Dataset (Full Audio)**](https://huggingface.co/datasets/humair025/Munch) - 1.27 TB
- [**Hashed Index**](https://huggingface.co/datasets/humair025/hashed_data) - ~1 GB metadata + hashes
- [**Munch-1 (Newer Version)**](https://huggingface.co/datasets/humair025/munch-1) - 3.28 TB, 3.86M samples
- [**Discussions**](https://huggingface.co/datasets/humair025/Munch/discussions) - Ask questions, share research
- [**Report Issues**](https://huggingface.co/datasets/humair025/Munch/discussions) - Data quality problems

---

## Acknowledgments

- **TTS Generation**: OpenAI-compatible API endpoints
- **Voices**: 13 high-quality voice models (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
- **Infrastructure**: Hugging Face Datasets platform
- **Tools**: Python, datasets, pandas, numpy, scipy

---

## Usage Statistics

Help us understand how the dataset is used:
- Training TTS models
- Speech recognition research
- Voice cloning experiments
- Linguistic analysis
- Educational purposes
- Other (please share in the discussions!)

---

## Quick Start Tips

1. **First-Time Users**: Start with the [hashed index](https://huggingface.co/datasets/humair025/hashed_data) (~1 GB) to explore the dataset
2. **Download Smart**: Use the index to find specific samples, then download only those parquet files
3. **Memory Matters**: Use streaming mode if working with large subsets
4. **Deduplication**: Check for duplicates using audio hashes before training
5. **Voice Selection**: Each voice has ~320k samples - choose based on your needs
6. **Consider Munch-1**: A newer version with 3.86M samples is also available at [humair025/munch-1](https://huggingface.co/datasets/humair025/munch-1)

---

**Note**: This is a large dataset (1.27 TB, 4.17M samples). Please download selectively based on your needs. Consider using the [hashed index](https://huggingface.co/datasets/humair025/hashed_data) for exploration and selective downloading.

**Last Updated**: December 2025

**Status**: ✅ Complete - all ~8,300 files uploaded

---

**Pro Tip**: Download the lightweight [hashed index](https://huggingface.co/datasets/humair025/hashed_data) first to explore the dataset, find duplicates, and identify exactly which files you need - then download only those specific parquet files from this dataset!