Indian Languages Speech Dataset
Table of Contents
- Overview
- Off-the-Shelf Availability
- Dataset Statistics
- Language Coverage
- Use Cases
- Dataset Structure
- Data Fields
- Getting Started
- Data Collection
- Quality Assurance
- Limitations & Biases
- License & Usage
- Citation
- Contact
Overview
The Indian Languages Speech Dataset provides high-quality, real-world speech recordings from native speakers of 9 major Indian languages; 16 Indian languages, representing over 1.4 billion speakers, are available off-the-shelf. The dataset is designed for training and evaluating automatic speech recognition (ASR), text-to-speech (TTS), and speaker identification systems for Indic languages.
Key Features
- ✅ Native speakers only - 100% L1 (first language) speakers
- ✅ Rich demographic metadata - Gender, age, occupation, location, dialect
- ✅ Free speech recordings - Spontaneous, natural speech with real-world disfluencies
- ✅ Real-world acoustic conditions - Natural environments with background noise
- ✅ Crowdsourced at scale - Collected from 2M+ global contributors
- ✅ Diverse speaker demographics - Age range 18-70+, balanced gender distribution
- ✅ Regional dialect coverage - Multiple dialect variants per language
Supported Languages
This dataset includes samples from 9 major Indian languages. We have 16 Indian languages available off-the-shelf (see inventory table below).
Languages with samples in this dataset:
- Hindi (hi) - Indo-Aryan, 600M speakers
- Bengali (bn) - Indo-Aryan, 265M speakers
- Urdu (ur) - Indo-Aryan, 170M speakers
- Marathi (mr) - Indo-Aryan, 83M speakers
- Telugu (te) - Dravidian, 82M speakers
- Gujarati (gu) - Indo-Aryan, 56M speakers
- Kannada (kn) - Dravidian, 44M speakers
- Odia (or) - Indo-Aryan, 38M speakers
- Malayalam (ml) - Dravidian, 38M speakers
Additional Indian languages available OTS (contact for samples): English (India accent), Assamese, Kashmiri, Santali, Nepali, Sanskrit, and Sindhi, totaling 7 additional languages with 2,500+ hours available.
This is a Sample Dataset
These 276 recordings represent a tiny sample of Silencio's capabilities. This dataset showcases our data quality and collection methodology, but it is not our full inventory.
Full Off-the-Shelf Inventory - All Indian Languages:
| Language | Script | OTS Recordings | OTS Hours | In This Sample? |
|---|---|---|---|---|
| Hindi | Devanagari | 184,636 | 2,000 hours | ✅ 41 files |
| English (India) | Latin | 115,609 | 2,428 hours | ❌ Contact |
| Bengali | Bengali | 32,857 | 319 hours | ✅ 45 files |
| Telugu | Telugu | 23,224 | 293 hours | ✅ 25 files |
| Marathi | Devanagari | 14,994 | 198 hours | ✅ 25 files |
| Urdu | Perso-Arabic | 10,395 | 111 hours | ✅ 51 files |
| Odia | Odia | 9,408 | 105 hours | ✅ 21 files |
| Assamese | Assamese | 8,793 | 65 hours | ❌ Contact |
| Gujarati | Gujarati | 7,789 | 82 hours | ✅ 18 files |
| Kannada | Kannada | 7,731 | 248 hours | ✅ 25 files |
| Malayalam | Malayalam | 5,318 | 87 hours | ✅ 25 files |
| Kashmiri | Perso-Arabic | 2,733 | 35 hours | ❌ Contact |
| Santali | Devanagari/Ol Chiki | 1,481 | 10 hours | ❌ Contact |
| Nepali | Devanagari | 736 | 11 hours | ❌ Contact |
| Sanskrit | Devanagari | 55 | 0.4 hours | ❌ Contact |
| Sindhi | Devanagari/Arabic | 37 | 1 hour | ❌ Contact |
| TOTAL | 16 languages | 425,756 | 5,993 hours | 276 files (~15 hours) |
This sample = 0.06% of available inventory for Indian languages (updated: March 30, 2026)
Samples Available For:
- ✅ Hindi (41 files) - 600M speakers, India's most widely spoken language
- ✅ Bengali (45 files) - 265M speakers, West Bengal & Bangladesh
- ✅ Urdu (51 files) - 170M speakers, Pakistan & India
- ✅ Telugu (25 files) - 82M speakers, Andhra Pradesh & Telangana
- ✅ Marathi (25 files) - 83M speakers, Maharashtra
- ✅ Odia (21 files) - 38M speakers, Odisha
- ✅ Gujarati (18 files) - 56M speakers, Gujarat
- ✅ Kannada (25 files) - 44M speakers, Karnataka
- ✅ Malayalam (25 files) - 38M speakers, Kerala
Available OTS (Contact for Samples):
- English (India accent) - 2,428 hours available
- Assamese - 65 hours
- Kashmiri - 35 hours
- Santali - 10 hours
- Nepali - 11 hours
- Sanskrit - 0.4 hours (ancient/classical language)
- Sindhi - 1 hour
See our other datasets: south-asian-speech for additional South Asian language samples
Silencio's Complete OTS Catalog:
- 3.8M recordings across 1,708 language/country combinations
- 48,000 hours of audio (2,000+ days)
- 147,000+ unique speakers from 180+ countries
- Immediate commercial licensing available
On-Demand Sourcing:
- Any language, any accent - leveraging our 2M+ contributor network
- Custom specifications - demographic targeting, acoustic environments, speech types
- Rapid turnaround - weeks, not months
Explore more datasets: View our other HuggingFace listings to see the breadth of our capabilities, or contact sofia@silencioai.com for the full off-the-shelf catalog and pricing.
Dataset Statistics
| Metric | Value |
|---|---|
| Total Audio Files | 276 |
| Total Languages | 9 |
| Total Speakers | 220+ unique speakers |
| Native Speaker Population | 1.4B speakers represented |
| Audio Format | WAV (16-bit PCM) |
| Sample Rate | 16 kHz / 44.1 kHz |
| Total Duration | ~15 hours |
| Avg. File Duration | 5-30 seconds |
| Geographic Coverage | Pan-India (major language regions) |
Language Distribution
| Language | Files | Speakers | Primary Regions |
|---|---|---|---|
| Urdu | 51 | 40+ | Pan-India, Pakistan |
| Bengali | 45 | 35+ | West Bengal, Bangladesh |
| Hindi | 41 | 35+ | Hindi belt (Delhi, UP, MP, Bihar, etc.) |
| Telugu | 25 | 20+ | Andhra Pradesh, Telangana |
| Marathi | 25 | 20+ | Maharashtra, Goa |
| Malayalam | 25 | 20+ | Kerala, Lakshadweep |
| Kannada | 25 | 20+ | Karnataka |
| Odia | 21 | 15+ | Odisha |
| Gujarati | 18 | 15+ | Gujarat, Mumbai |
| Total | 276 | 220+ | Pan-India |
Language Coverage
Indo-Aryan Languages
| Language | ISO 639-1 | Speakers | Script | Primary Regions | In Sample? |
|---|---|---|---|---|---|
| Hindi | hi | 600M | Devanagari | Delhi, UP, MP, Bihar, Rajasthan, Haryana, Uttarakhand, HP, Jharkhand | ✅ 41 files |
| Bengali | bn | 265M | Bengali | West Bengal, Bangladesh, Tripura, Assam | ✅ 45 files |
| Urdu | ur | 170M | Perso-Arabic | Pan-India, Pakistan | ✅ 51 files |
| Marathi | mr | 83M | Devanagari | Maharashtra, Goa, Dadra & Nagar Haveli | ✅ 25 files |
| Gujarati | gu | 56M | Gujarati | Gujarat, Daman & Diu, Dadra & Nagar Haveli | ✅ 18 files |
| Odia | or | 38M | Odia | Odisha, West Bengal | ✅ 21 files |
| Assamese | as | 15M | Assamese | Assam, Arunachal Pradesh | ❌ Contact |
| Kashmiri | ks | 7M | Perso-Arabic | Jammu & Kashmir | ❌ Contact |
| Nepali | ne | 16M | Devanagari | Sikkim, West Bengal, Nepal | ❌ Contact |
| Sindhi | sd | 2.7M | Devanagari/Arabic | Gujarat, Rajasthan, Pakistan | ❌ Contact |
| Sanskrit | sa | - | Devanagari | Classical/liturgical language | ❌ Contact |
Dravidian Languages
| Language | ISO 639-1 | Speakers | Script | Primary Regions | In Sample? |
|---|---|---|---|---|---|
| Telugu | te | 82M | Telugu | Andhra Pradesh, Telangana, Puducherry | ✅ 25 files |
| Kannada | kn | 44M | Kannada | Karnataka | ✅ 25 files |
| Malayalam | ml | 38M | Malayalam | Kerala, Lakshadweep, Puducherry | ✅ 25 files |
Other Language Families
| Language | ISO 639-1 | Speakers | Family | Script | Primary Regions | In Sample? |
|---|---|---|---|---|---|---|
| Santali | sat | 7.6M | Austroasiatic | Devanagari/Ol Chiki | Jharkhand, West Bengal, Odisha, Assam | ❌ Contact |
| English (India) | en-IN | 125M+ | Germanic | Latin | Pan-India (second language) | ❌ Contact |
Dialect Coverage
The dataset includes multiple regional dialects per language:
- Hindi: Delhi, Mumbai, Patna, Lucknow, Jaipur
- Bengali: Kolkata, Dhaka, Sylhet
- Urdu: Delhi, Hyderabad, Lucknow, Karachi
- Telugu: Hyderabad, Vijayawada, Visakhapatnam
- Marathi: Mumbai, Pune, Nagpur
- Malayalam: Thiruvananthapuram, Kochi, Kozhikode
- Kannada: Bangalore, Mysore, Mangalore
- Odia: Cuttack, Bhubaneswar, Ganjam
- Gujarati: Ahmedabad, Surat, Vadodara
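The `dialect` metadata field encodes these variants as "State - City" strings (e.g. `"Gujarat - Surat"`, as in the Example Metadata Entry below). A small illustrative helper for grouping metadata rows by city; `by_dialect` is not part of any dataset tooling, just a sketch:

```python
def by_dialect(rows, city):
    # `dialect` values look like "Gujarat - Surat" (state - city),
    # so a substring match on the city name is enough here.
    return [r for r in rows if city in r.get("dialect", "")]

# Toy rows mimicking the metadata schema
rows = [
    {"dialect": "Gujarat - Surat"},
    {"dialect": "Gujarat - Ahmedabad"},
    {"dialect": "Maharashtra - Pune"},
]

surat_rows = by_dialect(rows, "Surat")
```

Matching on the state name (e.g. `"Gujarat"`) instead returns every recording from that state, which can be useful for coarser regional splits.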
Use Cases
Automatic Speech Recognition (ASR)
- Train ASR models for 9 major Indian languages
- Evaluate existing models on real-world Indic speech
- Build multilingual ASR systems for South Asian markets
- Develop dialect-aware recognition systems
Text-to-Speech (TTS)
- Train neural TTS models for Indian languages
- Create natural-sounding voice synthesis systems
- Develop multi-speaker TTS with demographic control
- Build dialect-specific TTS engines
Speaker Recognition & Verification
- Train speaker identification models for Indic languages
- Evaluate speaker verification systems on diverse demographics
- Build voice biometrics for multilingual applications
Language Identification
- Train language detection models for South Asian languages
- Distinguish between closely related Indic languages
- Build multilingual language identification systems
Accent & Dialect Research
- Study phonetic variations across Indian regions
- Analyze code-switching patterns in multilingual contexts
- Research sociolinguistic patterns in Indian languages
Model Evaluation & Benchmarking
- Evaluate multilingual models on underrepresented languages
- Benchmark ASR/TTS performance on real-world Indic speech
- Test robustness to background noise and speaker diversity
Dataset Structure
```
indian-languages-speech/
├── hindi_india/
│   └── free_speech/
│       ├── data/
│       │   └── recording_*.wav
│       └── metadata.csv
├── bengali_india/
│   └── free_speech/
│       ├── data/
│       └── metadata.csv
├── urdu_india/
│   └── free_speech/
│       ├── data/
│       └── metadata.csv
├── telugu_india/
│   └── free_speech/
│       ├── data/
│       │   └── audio_*.wav
│       └── metadata.csv
├── marathi_india/
│   └── [same structure as Telugu]
├── malayalam_india/
│   └── [same structure]
├── kannada_india/
│   └── [same structure]
├── odia_india/
│   └── [same structure]
└── gujarati_india/
    └── [same structure]
```
File Naming Convention
- Audio files: `recording_*.wav` or `audio_*.wav` (depending on source)
- Metadata: `metadata.csv` (one per language, located in `{language}_india/free_speech/`)
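Given the layout and naming convention above, a language's metadata can be read directly from a local copy of the repository. A minimal sketch; `metadata_path` and `load_metadata_rows` are hypothetical helpers, and `root` is assumed to point at a local clone of the dataset:

```python
import csv
from pathlib import Path


def metadata_path(root, language):
    # Per the layout above: {language}_india/free_speech/metadata.csv
    return Path(root) / f"{language}_india" / "free_speech" / "metadata.csv"


def load_metadata_rows(root, language):
    # Read one language's metadata.csv into a list of dicts,
    # one dict per recording.
    with open(metadata_path(root, language), newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))
```

For example, `load_metadata_rows("indian-languages-speech", "telugu")` would return the Telugu metadata rows; audio paths in `file_name` are relative to that language's `free_speech/` directory.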
Data Fields
Each audio sample is accompanied by comprehensive metadata in CSV format:
| Field | Type | Description |
|---|---|---|
| `file_name` | string | Relative path to audio file |
| `id` | integer | Unique recording ID |
| `audio` | audio | Audio data (automatically loaded by HF datasets) |
| `gender` | string | Speaker gender (male/female/other) |
| `ethnicity` | string | Speaker ethnicity |
| `occupation` | string | Speaker occupation |
| `country_code` | string | ISO country code (IN for India) |
| `birth_place` | string | Speaker's birthplace |
| `mother_tongue` | string | Speaker's native language |
| `dialect` | string | Regional dialect variant |
| `year_of_birth` | integer | Birth year (for age calculation) |
| `years_at_birth_place` | integer | Years lived at birthplace |
| `languages_data` | json | Other languages spoken (with proficiency) |
| `os` | string | Recording device OS |
| `device` | string | Device type (Mobile/Desktop/Tablet) |
| `browser` | string | Browser used for recording |
| `duration` | float | Audio duration in seconds |
| `emotions` | json | Emotional content labels |
| `language` | string | Recording language |
| `location` | string | Recording location type (office/outdoor/home) |
| `noise_sources` | json | Background noise description |
| `script_id` | integer | Script identifier (for prompted speech) |
| `type_of_script` | string | Script category (all recordings are free_speech) |
| `script` | string | Original script text (Indic script) |
| `transcript` | string | Speech transcription (may be "unknown" for free speech) |
| `speaker_id` | string | Unique speaker identifier (UUID) |
Example Metadata Entry
```json
{
  "file_name": "data/audio_6258613.wav",
  "id": 6258613,
  "gender": "female",
  "ethnicity": "Asian",
  "occupation": "voice over artist",
  "country_code": "IN",
  "birth_place": "India",
  "mother_tongue": "Gujarati",
  "dialect": "Gujarat - Surat",
  "year_of_birth": 1982,
  "years_at_birth_place": 44,
  "languages_data": [{"level": "native", "language": "Gujarati"}],
  "os": "Windows",
  "device": "Desktop",
  "browser": "Chrome",
  "duration": 300.0,
  "emotions": "{happy}",
  "language": "Gujarati",
  "location": "office",
  "noise_sources": "{silence}",
  "script_id": 77581,
  "type_of_script": "free_speech",
  "script": "# เชคเชฎเซ เชเชฟเชคเซเชฐ เชฆเซเชฐเชตเชพเชจเซเช เชชเชธเชเชฆ เชเชฐเซ เชเซ?",
  "transcript": "unknown",
  "speaker_id": "e8bf8999-7c74-559c-863d-00a8e5531767"
}
```
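An entry like the one above can be queried with a couple of small helpers. This is an illustrative sketch, not part of any dataset tooling; the reference year 2026 is an assumption based on the dataset's last-update date:

```python
# Subset of the example metadata entry above
EXAMPLE = {
    "year_of_birth": 1982,
    "duration": 300.0,
    "languages_data": [{"level": "native", "language": "Gujarati"}],
}


def speaker_age(row, reference_year=2026):
    # Approximate age from birth year; no birth month is recorded,
    # so this can be off by up to one year.
    return reference_year - row["year_of_birth"]


def is_native(row, language):
    # languages_data holds entries like
    # {"level": "native", "language": "Gujarati"}.
    return any(
        e["level"] == "native" and e["language"] == language
        for e in row["languages_data"]
    )
```

These helpers assume `languages_data` has already been parsed into a Python list; when reading raw `metadata.csv` rows, that field may arrive as a JSON string and need `json.loads` first.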
Getting Started
Installation
```shell
pip install datasets soundfile librosa
```
Basic Usage
```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("SilencioNetwork/indian-languages-speech")

# Load a specific language
gujarati = load_dataset("SilencioNetwork/indian-languages-speech", name="gujarati_india")

# Load the free_speech split (all recordings are free speech)
telugu_free = load_dataset(
    "SilencioNetwork/indian-languages-speech",
    name="telugu_india",
    split="free_speech",
)
```
Iterate Through Examples
```python
from datasets import load_dataset

dataset = load_dataset("SilencioNetwork/indian-languages-speech", name="kannada_india")

for example in dataset["free_speech"]:
    print(f"File: {example['file_name']}")
    print(f"Language: {example['language']}")
    # Approximate age relative to the 2026 dataset snapshot
    print(f"Speaker: {example['gender']}, {2026 - example['year_of_birth']} years old")
    print(f"Dialect: {example['dialect']}")
    print(f"Duration: {example['duration']}s")
    print(f"Audio shape: {example['audio']['array'].shape}")
    print("---")
```
Load Audio with Librosa
```python
import librosa
from datasets import load_dataset

dataset = load_dataset("SilencioNetwork/indian-languages-speech", name="malayalam_india")

for example in dataset["free_speech"]:
    # Audio is pre-loaded by HuggingFace datasets
    audio_array = example["audio"]["array"]
    sampling_rate = example["audio"]["sampling_rate"]

    # Alternatively, load from disk with librosa; note that file_name is
    # relative to the language's free_speech/ directory in a local copy:
    # audio, sr = librosa.load(example["file_name"], sr=None)

    # Compute MFCC features on the pre-loaded audio
    mfcc = librosa.feature.mfcc(y=audio_array, sr=sampling_rate, n_mfcc=13)
    print(f"MFCC shape: {mfcc.shape}")
```
Filter by Demographics
```python
from datasets import load_dataset

dataset = load_dataset("SilencioNetwork/indian-languages-speech", name="marathi_india")

# Filter female speakers
female_speakers = dataset["free_speech"].filter(lambda x: x["gender"] == "female")

# Filter by age (speakers born after 1990)
young_speakers = dataset["free_speech"].filter(lambda x: x["year_of_birth"] > 1990)

# Filter by recording location
outdoor_recordings = dataset["free_speech"].filter(lambda x: x["location"] == "outdoor")

print(f"Female speakers: {len(female_speakers)}")
print(f"Young speakers: {len(young_speakers)}")
print(f"Outdoor recordings: {len(outdoor_recordings)}")
```
Training Example (PyTorch)
```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Load dataset
dataset = load_dataset("SilencioNetwork/indian-languages-speech", name="telugu_india")

# Load pre-trained model and processor
# Note: wav2vec2 expects 16 kHz input; resample any 44.1 kHz files first
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base")

# Prepare data
def prepare_dataset(batch):
    audio = batch["audio"]
    batch["input_values"] = processor(
        audio["array"],
        sampling_rate=audio["sampling_rate"],
    ).input_values[0]
    return batch

dataset = dataset.map(prepare_dataset, remove_columns=["audio"])

# Training loop (simplified)
for batch in dataset["free_speech"]:
    inputs = torch.tensor(batch["input_values"]).unsqueeze(0)
    outputs = model(inputs)
    # ... training logic
```
Data Collection Methodology
Collection Platform
All recordings were collected through Silencio's crowdsourcing platform, which enables large-scale, diverse data collection from native speakers globally.
Contributor Requirements
- Native speakers only: All contributors must be L1 (first language) speakers
- Age verification: Contributors 18+ years old
- Device requirements: Smartphone, tablet, or desktop with microphone
- Consent: All contributors provided informed consent for data use
Recording Process
- Speaker onboarding: Contributors provide demographic information and language proficiency
- Script assignment: Random assignment of free speech prompts (spontaneous, unscripted speech)
- Recording: Browser-based or mobile app recording with quality checks
- Validation: Automated quality checks (duration, amplitude, noise level)
- Human review: Manual review of flagged recordings
Quality Control Checkpoints
- Minimum duration requirements (5-30 seconds for free speech)
- Amplitude threshold checks to filter silent recordings
- Noise level assessment to ensure audibility
- Language verification by native speaker reviewers
- Duplicate detection and removal
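The duration and amplitude gates above can be sketched as a single check. The 5-30 s range is taken from the checkpoint list; the RMS floor is an illustrative assumption, since the actual pipeline's thresholds are not published:

```python
import numpy as np


def passes_quality_checks(audio, sr, min_s=5.0, max_s=30.0, min_rms=1e-3):
    # Duration gate: free-speech clips are expected to be 5-30 seconds.
    duration = len(audio) / sr
    if not (min_s <= duration <= max_s):
        return False
    # Amplitude gate: reject near-silent recordings via RMS energy.
    # min_rms is an illustrative threshold, not the production value.
    rms = float(np.sqrt(np.mean(np.square(audio))))
    return rms >= min_rms
```

A real pipeline would add the remaining checkpoints (noise assessment, language verification, duplicate detection), but those require models or human review rather than a simple signal test.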
Quality Assurance
Automated Quality Checks
- Audio format validation: All files validated for correct format (WAV, 16-bit PCM)
- Duration checks: Minimum and maximum duration thresholds enforced
- Amplitude analysis: Files with insufficient audio energy flagged and removed
- Metadata completeness: All required fields validated for presence and format
Manual Review Process
- Language verification: Native speakers verify language accuracy
- Transcription spot-checks: Random sample transcriptions reviewed for accuracy
- Demographic validation: Spot-checks on demographic metadata consistency
- Noise assessment: Recordings with excessive noise flagged for review
Known Issues
- Some `transcript` fields marked as "unknown" for free speech recordings (spontaneous speech)
- Occasional background noise in outdoor and office recordings (intentional for real-world robustness)
- Minor audio clipping in a small subset of recordings (<1%)
Limitations & Biases
Geographic Limitations
- Urban bias: Majority of speakers from urban centers (Ahmedabad, Bangalore, Mumbai, Hyderabad, etc.)
- State coverage: Limited coverage from smaller towns and rural areas
- Accent diversity: Regional accents within each state may be underrepresented
Demographic Limitations
- Age distribution: Skewed toward 25-45 age group (typical of crowdsourcing platforms)
- Occupation bias: Overrepresentation of tech workers and voice-over artists
- Education level: Higher education levels than general population (self-selection bias)
Technical Limitations
- Device variability: Mix of mobile and desktop recordings with varying microphone quality
- Sampling rates: Non-uniform sampling rates (16 kHz and 44.1 kHz)
- Background noise: Variable noise levels across recordings (office, outdoor, home)
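Because the corpus mixes 16 kHz and 44.1 kHz files, a common first preprocessing step is normalizing everything to one rate. A minimal linear-interpolation sketch; for production quality, prefer `librosa.resample` or `scipy.signal.resample_poly`:

```python
import numpy as np


def resample_linear(audio, sr_in, sr_out=16000):
    # Minimal linear-interpolation resampler to normalize mixed
    # 16 kHz / 44.1 kHz recordings to a single rate. No anti-aliasing
    # filter is applied, so this is a sketch, not production code.
    if sr_in == sr_out:
        return audio
    n_out = int(round(len(audio) * sr_out / sr_in))
    # Positions of the output samples on the input sample grid
    t_out = np.linspace(0, len(audio) - 1, n_out)
    return np.interp(t_out, np.arange(len(audio)), audio)
```

Downsampling 44.1 kHz audio without low-pass filtering aliases content above 8 kHz, which is why a polyphase resampler is the better choice when model quality matters.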
Language & Linguistic Limitations
- Script coverage: Limited coverage of some script variations and orthographic styles
- Code-switching: Minimal English-Indic code-switching examples
- Dialect representation: Some minor dialects may be underrepresented
Recommendations for Use
- Data augmentation: Apply data augmentation techniques to address demographic imbalances
- Domain-specific fine-tuning: Fine-tune on domain-specific data for production use cases
- Bias mitigation: Monitor model performance across demographic groups and dialects
- Validation on target domain: Test models on your specific use case data before deployment
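As one concrete augmentation along the lines recommended above, adding noise at a controlled signal-to-noise ratio is a common way to even out the variable noise conditions in this set. A sketch; `add_noise` is illustrative, not part of any dataset tooling:

```python
import numpy as np


def add_noise(audio, snr_db, rng=None):
    # Additive white-noise augmentation at a target SNR (in dB).
    # Noise power is derived from the clean signal's mean power.
    if rng is None:
        rng = np.random.default_rng(0)
    signal_power = np.mean(np.square(audio))
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return audio + noise
```

White noise is only a starting point; mixing in recorded office/outdoor ambiences (matching this dataset's `location` and `noise_sources` labels) generally transfers better to real deployments.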
License & Usage
License: CC BY-NC 4.0 (Creative Commons Attribution-NonCommercial 4.0 International)
Allowed Uses
- ✅ Research and academic use
- ✅ Model training for non-commercial applications
- ✅ Educational purposes and teaching
- ✅ Personal projects and experimentation
- ✅ Open-source projects (non-commercial)
Prohibited Uses
- ❌ Commercial products or services
- ❌ Selling or licensing the dataset
- ❌ Commercial ASR/TTS systems without separate licensing
- ❌ Voice cloning for commercial applications
- ❌ Surveillance or law enforcement applications without consent
Commercial Licensing
For commercial licensing inquiries, including:
- Commercial ASR/TTS deployments
- Enterprise voice AI applications
- Voice biometrics systems
- Custom data collection
Contact: sofia@silencioai.com
About Silencio
Silencio is a voice AI data company providing scaled sourcing of real-world audio and speech data. With 2M+ contributors across 180+ countries, we specialize in:
- ✅ Diverse accent and dialect coverage for 100+ languages
- ✅ Real-world acoustic environments (office, outdoor, home, car)
- ✅ Comprehensive demographic and sociolinguistic metadata
- ✅ On-demand data collection for underrepresented languages
- ✅ Rapid turnaround for custom dataset requirements
Our global contributor network enables us to source:
- Rare languages and dialects
- Specific demographic segments
- Custom recording scenarios
- Large-scale datasets in weeks, not months
Learn more: silencioai.com
Citation
If you use this dataset in your research or applications, please cite:
```bibtex
@dataset{silencio_indian_languages_2026,
  title={Indian Languages Speech Dataset},
  author={Silencio Network},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/SilencioNetwork/indian-languages-speech},
  note={9 major Indian languages (Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Odia, Telugu, Urdu) with native speaker recordings and demographic metadata}
}
```
Example Citation in Text
"We trained our multilingual ASR model on the Indian Languages Speech Dataset (Silencio Network, 2026), which provides native speaker recordings across 9 major Indian languages with comprehensive demographic metadata."
Contact
Dataset Issues & Feedback
Found issues with the data? Have suggestions for improvement?
- Email: sofia@silencioai.com
- HuggingFace Community: Discussion Forum
Commercial Inquiries
For commercial licensing, custom data collection, or partnership opportunities:
- Email: sofia@silencioai.com
- Website: silencioai.com
Contributing
We welcome contributions to improve the dataset:
- Report errors: Open a discussion on HuggingFace
- Suggest improvements: Share feedback via email or discussion forum
- Request features: Let us know what additional data or metadata would be valuable
Built by Silencio | Licensed under CC BY-NC 4.0
Tags: #IndianLanguages #Hindi #Bengali #Urdu #Gujarati #Kannada #Malayalam #Marathi #Odia #Telugu #IndicLanguages #SpeechData #ASR #TTS #Multilingual #VoiceAI #SouthAsia #CrowdsourcedData