🇮🇳 Indian Languages Speech Dataset

License: CC BY-NC 4.0

🎯 Overview

The Indian Languages Speech Dataset provides high-quality, real-world speech recordings from native speakers across 9 major Indian languages. We have 16 Indian languages available off-the-shelf, representing over 1.4 billion speakers globally. This dataset is designed for training and evaluating automatic speech recognition (ASR), text-to-speech (TTS), and speaker identification systems for Indic languages.

Key Features

✅ Native speakers only - 100% L1 (first language) speakers
✅ Rich demographic metadata - Gender, age, occupation, location, dialect
✅ Free speech recordings - Spontaneous, natural speech with real-world disfluencies
✅ Real-world acoustic conditions - Natural environments with background noise
✅ Crowdsourced at scale - Collected from 2M+ global contributors
✅ Diverse speaker demographics - Age range 18-70+, balanced gender distribution
✅ Regional dialect coverage - Multiple dialect variants per language

Supported Languages

This dataset includes samples from 9 major Indian languages. We have 16 Indian languages available off-the-shelf (see inventory table below).

Languages with samples in this dataset:

  • 🔵 Hindi (hi) - Indo-Aryan, 600M speakers
  • 🔵 Bengali (bn) - Indo-Aryan, 265M speakers
  • 🔵 Urdu (ur) - Indo-Aryan, 170M speakers
  • 🔵 Marathi (mr) - Indo-Aryan, 83M speakers
  • 🔵 Telugu (te) - Dravidian, 82M speakers
  • 🔵 Gujarati (gu) - Indo-Aryan, 56M speakers
  • 🔵 Kannada (kn) - Dravidian, 44M speakers
  • 🔵 Odia (or) - Indo-Aryan, 38M speakers
  • 🔵 Malayalam (ml) - Dravidian, 38M speakers

Additional Indian languages available OTS (contact for samples): English (India accent), Assamese, Kashmiri, Santali, Nepali, Sanskrit, and Sindhi (7 additional languages, 2,500+ hours available).

๐Ÿ—‚๏ธ This is a Sample Dataset

These 276 recordings represent a tiny sample of Silencio's capabilities. This dataset showcases our data quality and collection methodology, but it is not our full inventory.

Full Off-the-Shelf Inventory - All Indian Languages:

| Language | Script | OTS Recordings | OTS Hours | In This Sample? |
|---|---|---|---|---|
| Hindi | Devanagari | 184,636 | 2,000 hours | ✅ 41 files |
| English (India) | Latin | 115,609 | 2,428 hours | ❌ Contact |
| Bengali | Bengali | 32,857 | 319 hours | ✅ 45 files |
| Telugu | Telugu | 23,224 | 293 hours | ✅ 25 files |
| Marathi | Devanagari | 14,994 | 198 hours | ✅ 25 files |
| Urdu | Perso-Arabic | 10,395 | 111 hours | ✅ 51 files |
| Odia | Odia | 9,408 | 105 hours | ✅ 21 files |
| Assamese | Assamese | 8,793 | 65 hours | ❌ Contact |
| Gujarati | Gujarati | 7,789 | 82 hours | ✅ 18 files |
| Kannada | Kannada | 7,731 | 248 hours | ✅ 25 files |
| Malayalam | Malayalam | 5,318 | 87 hours | ✅ 25 files |
| Kashmiri | Perso-Arabic | 2,733 | 35 hours | ❌ Contact |
| Santali | Devanagari/Ol Chiki | 1,481 | 10 hours | ❌ Contact |
| Nepali | Devanagari | 736 | 11 hours | ❌ Contact |
| Sanskrit | Devanagari | 55 | 0.4 hours | ❌ Contact |
| Sindhi | Devanagari/Arabic | 37 | 1 hour | ❌ Contact |
| **TOTAL** | 16 languages | 425,756 | 5,993 hours | 276 files (~15 hours) |

This sample = 0.06% of available inventory for Indian languages (updated: March 30, 2026)

Samples Available For:

  • ✅ Hindi (41 files) - 600M speakers, India's most widely spoken language
  • ✅ Bengali (45 files) - 265M speakers, West Bengal & Bangladesh
  • ✅ Urdu (51 files) - 170M speakers, Pakistan & India
  • ✅ Telugu (25 files) - 82M speakers, Andhra Pradesh & Telangana
  • ✅ Marathi (25 files) - 83M speakers, Maharashtra
  • ✅ Odia (21 files) - 38M speakers, Odisha
  • ✅ Gujarati (18 files) - 56M speakers, Gujarat
  • ✅ Kannada (25 files) - 44M speakers, Karnataka
  • ✅ Malayalam (25 files) - 38M speakers, Kerala

Available OTS (Contact for Samples):

  • English (India accent) - 2,428 hours available
  • Assamese - 65 hours
  • Kashmiri - 35 hours
  • Santali - 10 hours
  • Nepali - 11 hours
  • Sanskrit - 0.4 hours (ancient/classical language)
  • Sindhi - 1 hour

See our other datasets: south-asian-speech for additional South Asian language samples

Silencio's Complete OTS Catalog:

  • 📊 3.8M recordings across 1,708 language/country combinations
  • ⏱️ 48,000 hours of audio (2,000+ days)
  • 🌍 147,000+ unique speakers from 180+ countries
  • ✅ Immediate commercial licensing available

On-Demand Sourcing:

  • ๐ŸŒ Any language, any accent - leveraging our 2M+ contributor network
  • ๐Ÿ“Š Custom specifications - demographic targeting, acoustic environments, speech types
  • โšก Rapid turnaround - weeks, not months

Explore more datasets: View our other HuggingFace listings to see the breadth of our capabilities, or contact sofia@silencioai.com for the full off-the-shelf catalog and pricing.

📊 Dataset Statistics

| Metric | Value |
|---|---|
| Total Audio Files | 276 |
| Total Languages | 9 |
| Total Speakers | 220+ unique speakers |
| Native Speaker Population | 1.4B speakers represented |
| Audio Format | WAV (16-bit PCM) |
| Sample Rate | 16 kHz / 44.1 kHz |
| Total Duration | ~15 hours |
| Avg. File Duration | 5-30 seconds |
| Geographic Coverage | Pan-India (major language regions) |

Language Distribution

| Language | Files | Speakers | Primary Regions |
|---|---|---|---|
| Urdu | 51 | 40+ | Pan-India, Pakistan |
| Bengali | 45 | 35+ | West Bengal, Bangladesh |
| Hindi | 41 | 35+ | Hindi belt (Delhi, UP, MP, Bihar, etc.) |
| Telugu | 25 | 20+ | Andhra Pradesh, Telangana |
| Marathi | 25 | 20+ | Maharashtra, Goa |
| Malayalam | 25 | 20+ | Kerala, Lakshadweep |
| Kannada | 25 | 20+ | Karnataka |
| Odia | 21 | 15+ | Odisha |
| Gujarati | 18 | 15+ | Gujarat, Mumbai |
| **Total** | **276** | **220+** | Pan-India |

๐ŸŒ Language Coverage

Indo-Aryan Languages

| Language | ISO 639-1 | Speakers | Script | Primary Regions | In Sample? |
|---|---|---|---|---|---|
| Hindi | hi | 600M | Devanagari | Delhi, UP, MP, Bihar, Rajasthan, Haryana, Uttarakhand, HP, Jharkhand | ✅ 41 files |
| Bengali | bn | 265M | Bengali | West Bengal, Bangladesh, Tripura, Assam | ✅ 45 files |
| Urdu | ur | 170M | Perso-Arabic | Pan-India, Pakistan | ✅ 51 files |
| Marathi | mr | 83M | Devanagari | Maharashtra, Goa, Dadra & Nagar Haveli | ✅ 25 files |
| Gujarati | gu | 56M | Gujarati | Gujarat, Daman & Diu, Dadra & Nagar Haveli | ✅ 18 files |
| Odia | or | 38M | Odia | Odisha, West Bengal | ✅ 21 files |
| Assamese | as | 15M | Assamese | Assam, Arunachal Pradesh | ❌ Contact |
| Kashmiri | ks | 7M | Perso-Arabic | Jammu & Kashmir | ❌ Contact |
| Nepali | ne | 16M | Devanagari | Sikkim, West Bengal, Nepal | ❌ Contact |
| Sindhi | sd | 2.7M | Devanagari/Arabic | Gujarat, Rajasthan, Pakistan | ❌ Contact |
| Sanskrit | sa | - | Devanagari | Classical/liturgical language | ❌ Contact |

Dravidian Languages

| Language | ISO 639-1 | Speakers | Script | Primary Regions | In Sample? |
|---|---|---|---|---|---|
| Telugu | te | 82M | Telugu | Andhra Pradesh, Telangana, Puducherry | ✅ 25 files |
| Kannada | kn | 44M | Kannada | Karnataka | ✅ 25 files |
| Malayalam | ml | 38M | Malayalam | Kerala, Lakshadweep, Puducherry | ✅ 25 files |

Other Language Families

| Language | ISO 639-1 | Speakers | Family | Script | Primary Regions | In Sample? |
|---|---|---|---|---|---|---|
| Santali | sat | 7.6M | Austroasiatic | Devanagari/Ol Chiki | Jharkhand, West Bengal, Odisha, Assam | ❌ Contact |
| English (India) | en-IN | 125M+ | Germanic | Latin | Pan-India (second language) | ❌ Contact |

Dialect Coverage

The dataset includes multiple regional dialects per language:

  • Hindi: Delhi, Mumbai, Patna, Lucknow, Jaipur
  • Bengali: Kolkata, Dhaka, Sylhet
  • Urdu: Delhi, Hyderabad, Lucknow, Karachi
  • Telugu: Hyderabad, Vijayawada, Visakhapatnam
  • Marathi: Mumbai, Pune, Nagpur
  • Malayalam: Thiruvananthapuram, Kochi, Kozhikode
  • Kannada: Bangalore, Mysore, Mangalore
  • Odia: Cuttack, Bhubaneswar, Ganjam
  • Gujarati: Ahmedabad, Surat, Vadodara

🎯 Use Cases

Automatic Speech Recognition (ASR)

  • Train ASR models for 9 major Indian languages
  • Evaluate existing models on real-world Indic speech
  • Build multilingual ASR systems for South Asian markets
  • Develop dialect-aware recognition systems

Text-to-Speech (TTS)

  • Train neural TTS models for Indian languages
  • Create natural-sounding voice synthesis systems
  • Develop multi-speaker TTS with demographic control
  • Build dialect-specific TTS engines

Speaker Recognition & Verification

  • Train speaker identification models for Indic languages
  • Evaluate speaker verification systems on diverse demographics
  • Build voice biometrics for multilingual applications

Language Identification

  • Train language detection models for South Asian languages
  • Distinguish between closely related Indic languages
  • Build multilingual language identification systems

Accent & Dialect Research

  • Study phonetic variations across Indian regions
  • Analyze code-switching patterns in multilingual contexts
  • Research sociolinguistic patterns in Indian languages

Model Evaluation & Benchmarking

  • Evaluate multilingual models on underrepresented languages
  • Benchmark ASR/TTS performance on real-world Indic speech
  • Test robustness to background noise and speaker diversity

๐Ÿ“ Dataset Structure

```
indian-languages-speech/
├── hindi_india/
│   └── free_speech/
│       ├── data/
│       │   └── recording_*.wav
│       └── metadata.csv
├── bengali_india/
│   └── free_speech/
│       ├── data/
│       └── metadata.csv
├── urdu_india/
│   └── free_speech/
│       ├── data/
│       └── metadata.csv
├── telugu_india/
│   └── free_speech/
│       ├── data/
│       │   └── audio_*.wav
│       └── metadata.csv
├── marathi_india/
│   └── [same structure as Telugu]
├── malayalam_india/
│   └── [same structure]
├── kannada_india/
│   └── [same structure]
├── odia_india/
│   └── [same structure]
└── gujarati_india/
    └── [same structure]
```

File Naming Convention

  • Audio files: recording_*.wav or audio_*.wav (depending on source)
  • Metadata: metadata.csv (one per language, located in {language}_india/free_speech/)
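
With a local copy of the dataset, the layout above can be traversed with the standard library alone. A minimal sketch; the root path is hypothetical, and the CSV columns are assumed to match the Data Fields section:

```python
import csv
from pathlib import Path

def load_language_metadata(root):
    """Map each {language}_india directory to its parsed metadata.csv rows."""
    metadata = {}
    for meta_csv in sorted(Path(root).glob("*_india/free_speech/metadata.csv")):
        language = meta_csv.parts[-3]  # e.g. "gujarati_india"
        with open(meta_csv, newline="", encoding="utf-8") as f:
            metadata[language] = list(csv.DictReader(f))
    return metadata

# Hypothetical local path to a downloaded copy of the dataset
all_meta = load_language_metadata("indian-languages-speech")
for language, rows in all_meta.items():
    print(f"{language}: {len(rows)} recordings")
```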

📋 Data Fields

Each audio sample is accompanied by comprehensive metadata in CSV format:

| Field | Type | Description |
|---|---|---|
| file_name | string | Relative path to audio file |
| id | integer | Unique recording ID |
| audio | audio | Audio data (automatically loaded by HF datasets) |
| gender | string | Speaker gender (male/female/other) |
| ethnicity | string | Speaker ethnicity |
| occupation | string | Speaker occupation |
| country_code | string | ISO country code (IN for India) |
| birth_place | string | Speaker's birthplace |
| mother_tongue | string | Speaker's native language |
| dialect | string | Regional dialect variant |
| year_of_birth | integer | Birth year (for age calculation) |
| years_at_birth_place | integer | Years lived at birthplace |
| languages_data | json | Other languages spoken (with proficiency) |
| os | string | Recording device OS |
| device | string | Device type (Mobile/Desktop/Tablet) |
| browser | string | Browser used for recording |
| duration | float | Audio duration in seconds |
| emotions | json | Emotional content labels |
| language | string | Recording language |
| location | string | Recording location type (office/outdoor/home) |
| noise_sources | json | Background noise description |
| script_id | integer | Script identifier (for prompted speech) |
| type_of_script | string | Script category (all recordings are free_speech) |
| script | string | Original script text (Indic script) |
| transcript | string | Speech transcription (may be "unknown" for free speech) |
| speaker_id | string | Unique speaker identifier (UUID) |

Example Metadata Entry

```json
{
  "file_name": "data/audio_6258613.wav",
  "id": 6258613,
  "gender": "female",
  "ethnicity": "Asian",
  "occupation": "voice over artist",
  "country_code": "IN",
  "birth_place": "India",
  "mother_tongue": "Gujarati",
  "dialect": "Gujarat - Surat",
  "year_of_birth": 1982,
  "years_at_birth_place": 44,
  "languages_data": [{"level": "native", "language": "Gujarati"}],
  "os": "Windows",
  "device": "Desktop",
  "browser": "Chrome",
  "duration": 300.0,
  "emotions": "{happy}",
  "language": "Gujarati",
  "location": "office",
  "noise_sources": "{silence}",
  "script_id": 77581,
  "type_of_script": "free_speech",
  "script": "# તમે ચિત્ર દોરવાનું પસંદ કરો છો?",
  "transcript": "unknown",
  "speaker_id": "e8bf8999-7c74-559c-863d-00a8e5531767"
}
```
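
Derived values such as speaker age follow directly from an entry like this one. A sketch, assuming the card's 2026 reference year and that languages_data may arrive either as a parsed list or as a JSON string, depending on how the CSV was read:

```python
import json

def speaker_summary(entry, reference_year=2026):
    """Summarize one metadata entry (age, native languages, minutes of audio)."""
    langs = entry["languages_data"]
    if isinstance(langs, str):  # CSV readers hand this field over as a JSON string
        langs = json.loads(langs)
    return {
        "age": reference_year - entry["year_of_birth"],
        "native_languages": [l["language"] for l in langs if l["level"] == "native"],
        "minutes": entry["duration"] / 60.0,
    }

summary = speaker_summary({
    "year_of_birth": 1982,
    "duration": 300.0,
    "languages_data": [{"level": "native", "language": "Gujarati"}],
})
print(summary)
```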

🚀 Getting Started

Installation

```bash
pip install datasets soundfile librosa
```

Basic Usage

```python
from datasets import load_dataset

# Load entire dataset
dataset = load_dataset("SilencioNetwork/indian-languages-speech")

# Load specific language
gujarati = load_dataset("SilencioNetwork/indian-languages-speech", name="gujarati_india")

# Load the free_speech split (all recordings are free speech)
telugu_free = load_dataset(
    "SilencioNetwork/indian-languages-speech",
    name="telugu_india",
    split="free_speech"
)
```

Iterate Through Examples

```python
from datasets import load_dataset

dataset = load_dataset("SilencioNetwork/indian-languages-speech", name="kannada_india")

for example in dataset["free_speech"]:
    print(f"File: {example['file_name']}")
    print(f"Language: {example['language']}")
    # Approximate age, using the card's 2026 reference year
    print(f"Speaker: {example['gender']}, {2026 - example['year_of_birth']} years old")
    print(f"Dialect: {example['dialect']}")
    print(f"Duration: {example['duration']}s")
    print(f"Audio shape: {example['audio']['array'].shape}")
    print("---")
```

Load Audio with Librosa

```python
import librosa
from datasets import load_dataset

dataset = load_dataset("SilencioNetwork/indian-languages-speech", name="malayalam_india")

for example in dataset["free_speech"]:
    # Audio is decoded automatically by HuggingFace datasets
    audio_array = example["audio"]["array"]
    sampling_rate = example["audio"]["sampling_rate"]

    # Compute MFCC features from the decoded array
    # (librosa.load(path, sr=None) also works if you keep local WAV copies)
    mfcc = librosa.feature.mfcc(y=audio_array, sr=sampling_rate, n_mfcc=13)
    print(f"MFCC shape: {mfcc.shape}")
```

Filter by Demographics

```python
from datasets import load_dataset

dataset = load_dataset("SilencioNetwork/indian-languages-speech", name="marathi_india")

# Filter female speakers
female_speakers = dataset["free_speech"].filter(lambda x: x["gender"] == "female")

# Filter by age (speakers born after 1990)
young_speakers = dataset["free_speech"].filter(lambda x: x["year_of_birth"] > 1990)

# Filter by location
outdoor_recordings = dataset["free_speech"].filter(lambda x: x["location"] == "outdoor")

print(f"Female speakers: {len(female_speakers)}")
print(f"Young speakers: {len(young_speakers)}")
print(f"Outdoor recordings: {len(outdoor_recordings)}")
```

Training Example (PyTorch)

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Load dataset
dataset = load_dataset("SilencioNetwork/indian-languages-speech", name="telugu_india")

# Load pre-trained model and processor
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base")

# Prepare data
def prepare_dataset(batch):
    audio = batch["audio"]
    batch["input_values"] = processor(
        audio["array"],
        sampling_rate=audio["sampling_rate"]
    ).input_values[0]
    return batch

dataset = dataset.map(prepare_dataset, remove_columns=["audio"])

# Training loop (simplified)
for batch in dataset["free_speech"]:
    inputs = torch.tensor(batch["input_values"]).unsqueeze(0)
    outputs = model(inputs)
    # ... training logic
```

🔬 Data Collection Methodology

Collection Platform

All recordings were collected through Silencio's crowdsourcing platform, which enables large-scale, diverse data collection from native speakers globally.

Contributor Requirements

  • Native speakers only: All contributors must be L1 (first language) speakers
  • Age verification: Contributors 18+ years old
  • Device requirements: Smartphone, tablet, or desktop with microphone
  • Consent: All contributors provided informed consent for data use

Recording Process

  1. Speaker onboarding: Contributors provide demographic information and language proficiency
  2. Script assignment: Random assignment of free speech prompts (spontaneous, unscripted speech)
  3. Recording: Browser-based or mobile app recording with quality checks
  4. Validation: Automated quality checks (duration, amplitude, noise level)
  5. Human review: Manual review of flagged recordings

Quality Control Checkpoints

  • Minimum duration requirements (5-30 seconds for free speech)
  • Amplitude threshold checks to filter silent recordings
  • Noise level assessment to ensure audibility
  • Language verification by native speaker reviewers
  • Duplicate detection and removal
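
The duration and amplitude checkpoints above can be sketched as a single predicate. The 5-30 second bounds come from this card; the RMS floor is an assumption, since the production threshold is not published:

```python
import numpy as np

def passes_basic_qc(samples, sample_rate, min_s=5.0, max_s=30.0, min_rms=0.01):
    """Reject clips outside the duration window or with near-silent audio."""
    duration = len(samples) / sample_rate
    if not (min_s <= duration <= max_s):
        return False
    # Root-mean-square amplitude; silent recordings fall below the floor
    rms = float(np.sqrt(np.mean(np.square(samples, dtype=np.float64))))
    return rms >= min_rms
```

Noise-level and language checks would layer on top of this; as described above, those need either a reference noise model or a human reviewer.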

✅ Quality Assurance

Automated Quality Checks

  • Audio format validation: All files validated for correct format (WAV, 16-bit PCM)
  • Duration checks: Minimum and maximum duration thresholds enforced
  • Amplitude analysis: Files with insufficient audio energy flagged and removed
  • Metadata completeness: All required fields validated for presence and format

Manual Review Process

  • Language verification: Native speakers verify language accuracy
  • Transcription spot-checks: Random sample transcriptions reviewed for accuracy
  • Demographic validation: Spot-checks on demographic metadata consistency
  • Noise assessment: Recordings with excessive noise flagged for review

Known Issues

  • Some transcript fields marked as "unknown" for free speech recordings (spontaneous speech)
  • Occasional background noise in outdoor and office recordings (intentional for real-world robustness)
  • Minor audio clipping in a small subset of recordings (<1%)

โš ๏ธ Limitations & Biases

Geographic Limitations

  • Urban bias: Majority of speakers from urban centers (Ahmedabad, Bangalore, Mumbai, Hyderabad, etc.)
  • State coverage: Limited coverage from smaller towns and rural areas
  • Accent diversity: Regional accents within each state may be underrepresented

Demographic Limitations

  • Age distribution: Skewed toward 25-45 age group (typical of crowdsourcing platforms)
  • Occupation bias: Overrepresentation of tech workers and voice-over artists
  • Education level: Higher education levels than general population (self-selection bias)

Technical Limitations

  • Device variability: Mix of mobile and desktop recordings with varying microphone quality
  • Sampling rates: Non-uniform sampling rates (16 kHz and 44.1 kHz)
  • Background noise: Variable noise levels across recordings (office, outdoor, home)
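
Because the corpus mixes 16 kHz and 44.1 kHz recordings, pipelines typically normalize to one rate before feature extraction. A minimal numpy sketch using linear interpolation; in practice a polyphase resampler (e.g. `scipy.signal.resample_poly`) or `datasets`' `cast_column("audio", Audio(sampling_rate=16000))` is preferable:

```python
import numpy as np

def resample_linear(samples, orig_sr, target_sr=16000):
    """Naive linear-interpolation resampling; adequate as a sketch only."""
    samples = np.asarray(samples, dtype=np.float64)
    if orig_sr == target_sr:
        return samples.astype(np.float32)
    n_out = int(round(len(samples) * target_sr / orig_sr))
    x_old = np.linspace(0.0, 1.0, num=len(samples), endpoint=False)
    x_new = np.linspace(0.0, 1.0, num=n_out, endpoint=False)
    return np.interp(x_new, x_old, samples).astype(np.float32)

clip_441 = np.zeros(44100)  # one second at 44.1 kHz
print(len(resample_linear(clip_441, 44100)))  # → 16000
```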

Language & Linguistic Limitations

  • Script coverage: Limited coverage of some script variations and orthographic styles
  • Code-switching: Minimal English-Indic code-switching examples
  • Dialect representation: Some minor dialects may be underrepresented

Recommendations for Use

  • Data augmentation: Apply data augmentation techniques to address demographic imbalances
  • Domain-specific fine-tuning: Fine-tune on domain-specific data for production use cases
  • Bias mitigation: Monitor model performance across demographic groups and dialects
  • Validation on target domain: Test models on your specific use case data before deployment
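
Monitoring performance across demographic groups, as recommended above, can start as a simple grouped average of a per-utterance metric. A sketch; the wer field and the values in the usage example are hypothetical:

```python
from collections import defaultdict

def metric_by_group(records, group_key, metric_key):
    """Average a per-utterance metric (e.g. WER) over a demographic field."""
    sums = defaultdict(lambda: [0.0, 0])
    for record in records:
        bucket = sums[record[group_key]]
        bucket[0] += record[metric_key]
        bucket[1] += 1
    return {group: total / count for group, (total, count) in sums.items()}

# Hypothetical evaluation results
results = [
    {"gender": "female", "wer": 0.21},
    {"gender": "female", "wer": 0.27},
    {"gender": "male", "wer": 0.33},
]
print(metric_by_group(results, "gender", "wer"))
```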

โš–๏ธ License & Usage

License: CC BY-NC 4.0 (Creative Commons Attribution-NonCommercial 4.0 International)

✅ Allowed Uses

  • ✅ Research and academic use
  • ✅ Model training for non-commercial applications
  • ✅ Educational purposes and teaching
  • ✅ Personal projects and experimentation
  • ✅ Open-source projects (non-commercial)

โŒ Prohibited Uses

  • โŒ Commercial products or services
  • โŒ Selling or licensing the dataset
  • โŒ Commercial ASR/TTS systems without separate licensing
  • โŒ Voice cloning for commercial applications
  • โŒ Surveillance or law enforcement applications without consent

💼 Commercial Licensing

For commercial licensing inquiries, including:

  • Commercial ASR/TTS deployments
  • Enterprise voice AI applications
  • Voice biometrics systems
  • Custom data collection

Contact: sofia@silencioai.com

๐Ÿข About Silencio

Silencio is a voice AI data company providing scaled sourcing of real-world audio and speech data. With 2M+ contributors across 180+ countries, we specialize in:

  • ✅ Diverse accent and dialect coverage for 100+ languages
  • ✅ Real-world acoustic environments (office, outdoor, home, car)
  • ✅ Comprehensive demographic and sociolinguistic metadata
  • ✅ On-demand data collection for underrepresented languages
  • ✅ Rapid turnaround for custom dataset requirements

Our global contributor network enables us to source:

  • Rare languages and dialects
  • Specific demographic segments
  • Custom recording scenarios
  • Large-scale datasets in weeks, not months

Learn more: silencioai.com

📚 Citation

If you use this dataset in your research or applications, please cite:

```bibtex
@dataset{silencio_indian_languages_2026,
  title={Indian Languages Speech Dataset},
  author={Silencio Network},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/SilencioNetwork/indian-languages-speech},
  note={9 major Indian languages (Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Odia, Telugu, Urdu) with native speaker recordings and demographic metadata}
}
```

Example Citation in Text

"We trained our multilingual ASR model on the Indian Languages Speech Dataset (Silencio Network, 2026), which provides native speaker recordings across 6 major Indian languages with comprehensive demographic metadata."

๐Ÿค Contact

Dataset Issues & Feedback

Found issues with the data? Have suggestions for improvement? Open a discussion on the dataset's HuggingFace page.

Commercial Inquiries

For commercial licensing, custom data collection, or partnership opportunities, contact sofia@silencioai.com.

Contributing

We welcome contributions to improve the dataset:

  • Report errors: Open a discussion on HuggingFace
  • Suggest improvements: Share feedback via email or discussion forum
  • Request features: Let us know what additional data or metadata would be valuable

Built by Silencio | Licensed under CC BY-NC 4.0


Tags: #IndianLanguages #Hindi #Bengali #Urdu #Gujarati #Kannada #Malayalam #Marathi #Odia #Telugu #IndicLanguages #SpeechData #ASR #TTS #Multilingual #VoiceAI #SouthAsia #CrowdsourcedData
