This dataset contains audio from multiple sources with different licenses. Some sources (TSync2, GigaSpeech2) are restricted to non-commercial use only. By accessing this dataset, you agree to comply with each source's license terms.
# Vaja-Thai (วาจา) — Combined Thai TTS Dataset

A unified, quality-filtered Thai speech dataset combining multiple sources for Text-to-Speech (TTS) research. All audio is resampled to 24 kHz WAV format.
## Dataset Summary
| Metric | Value |
|---|---|
| Total samples | 289,916 |
| Total hours | 554.6h |
| Sampling rate | 24,000 Hz |
| Format | WAV 16-bit PCM |
| Language | Thai (ภาษาไทย) |
## Sources
| Source | Samples | Hours | License | Description |
|---|---|---|---|---|
| tsync2 | 1,823 | 3.7h | CC-BY-NC-SA-3.0 | NECTEC professional TTS corpus, single female speaker |
| porjai_central | 177,714 | 412.5h | CC-BY-SA-4.0 | CMKL crowdsourced Central Thai speech |
| gigaspeech2 | 5,656 | 8.1h | non-commercial-research-only | GigaSpeech2 Thai dev+test (human-annotated) |
| commonvoice | 104,723 | 130.3h | CC-0 | Mozilla Common Voice Thai (validated split) |
## Loading the Dataset

```python
from datasets import load_dataset

# Load a specific source
ds = load_dataset("dubbing-ai/vaja-thai", "tsync2")
ds = load_dataset("dubbing-ai/vaja-thai", "porjai_central")
ds = load_dataset("dubbing-ai/vaja-thai", "gigaspeech2")
ds = load_dataset("dubbing-ai/vaja-thai", "commonvoice")

# Streaming mode (no full download needed)
ds = load_dataset("dubbing-ai/vaja-thai", "porjai_central", streaming=True)
for sample in ds["train"].take(10):
    print(sample["text"])

# Load all sources combined
ds = load_dataset("dubbing-ai/vaja-thai", "all")
```
## Schema

| Column | Type | Description |
|---|---|---|
| `id` | string | Unique sample ID (`{source}_{original_id}`) |
| `audio` | Audio(24000) | Audio waveform |
| `text` | string | Thai transcription |
| `source` | string | Origin dataset name |
| `speaker_id` | string | Speaker identifier |
| `speaker_gender` | string | Gender if known (`male`/`female`/`None`) |
| `duration_s` | float | Duration in seconds |
| `original_sr` | int | Original sampling rate before resampling |
| `quality_tier` | int | 1–4 refined quality tier (see below) |
| `snr_db` | float | Estimated signal-to-noise ratio in dB |
| `whisper_cer` | float | Character error rate from Whisper validation (`None` if skipped) |
| `license` | string | License of the source dataset |
## Quality Filtering

- Whisper validation: all sources were transcribed with `openai/whisper-large-v3-turbo` and filtered by character error rate (CER ≤ 0.15).
- CER normalization (via `pythainlp`): Thai character normalization, Arabic digits converted to Thai number words, spaces removed, non-Thai characters stripped. Whisper sometimes outputs English for loan words (~4–6% of samples); these are stripped, which may inflate CER for the affected samples.
- Duration: 1.0 s – 30.0 s
- Audio energy: minimum RMS > -50 dBFS (removes near-silent clips)
- Clipping: < 1% clipped samples
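The CER filter above can be sketched in a few lines of plain Python. This is a simplified stand-in, not the actual pipeline: the real normalization uses `pythainlp` (including Thai digit spell-out), while here we only drop non-Thai characters and spaces and use a hand-rolled edit distance.

```python
# Hedged sketch of the CER-based filtering described above. The real
# pipeline normalizes with pythainlp; this version only keeps Thai
# characters (U+0E00–U+0E7F) and uses a basic Levenshtein distance.

def normalize_thai(text: str) -> str:
    """Keep Thai characters only; drop spaces, Latin, and digits."""
    return "".join(ch for ch in text if "\u0e00" <= ch <= "\u0e7f")

def cer(ref: str, hyp: str) -> float:
    """Character error rate = edit_distance / len(ref), after normalization."""
    ref, hyp = normalize_thai(ref), normalize_thai(hyp)
    if not ref:
        return 0.0 if not hyp else 1.0
    # Standard Levenshtein DP over characters
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (r != h)))
        prev = cur
    return prev[-1] / len(ref)

def passes_filter(ref: str, hyp: str, max_cer: float = 0.15) -> bool:
    """True if the Whisper hypothesis is close enough to the reference."""
    return cer(ref, hyp) <= max_cer

print(passes_filter("สวัสดีครับ", "สวัสดีครับ"))  # identical → True
```

Note that stripping non-Thai characters is exactly why English loan-word outputs inflate CER: the English span disappears from the hypothesis, leaving a length mismatch against the reference.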
## Resampling

- Sources at 16 kHz (Porjai, GigaSpeech2) were upsampled using AP-BWE (IEEE/ACM Trans. ASLP 2024), a GAN-based bandwidth-extension model with dual-stream amplitude-phase prediction (~292x real-time on GPU).
- TSync2 (44.1 kHz) was downsampled with `librosa` (`res_type="kaiser_best"`).
- Common Voice (48 kHz MP3) was decoded and downsampled with `librosa`.
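The sample-count bookkeeping involved in resampling can be illustrated with a crude linear-interpolation resampler. This is a stand-in only: the actual pipeline uses librosa's `kaiser_best` filter and AP-BWE, which apply proper band-limited filtering rather than naive interpolation.

```python
# Hedged sketch: naive linear-interpolation resampling, shown only to
# illustrate the 16 kHz → 24 kHz length arithmetic. The real pipeline
# uses librosa (kaiser_best) and AP-BWE, not this.
import math

def resample_linear(samples, orig_sr, target_sr):
    """Resample a list of float samples by linear interpolation."""
    n_out = int(round(len(samples) * target_sr / orig_sr))
    out = []
    for i in range(n_out):
        # Map output index i onto a fractional position in the input
        pos = i * (len(samples) - 1) / max(n_out - 1, 1)
        lo = int(math.floor(pos))
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# One second of 16 kHz audio becomes 24,000 samples at 24 kHz
one_sec_16k = [0.0] * 16000
print(len(resample_linear(one_sec_16k, 16000, 24000)))  # 24000
```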
## Quality Tiers

Each sample has a `quality_tier` column (1–4) assigned based on both source provenance and measured audio quality (CER + SNR). This ensures noisy ASR-origin samples don't pollute TTS training, while clean ASR samples can still be promoted.
| Tier | Criteria | Description | Use case |
|---|---|---|---|
| 1 | Studio/human-annotated, OR ASR with CER ≤ 0.03 + SNR ≥ 25 dB | Highest quality | Fine-tuning, high-quality single/few-speaker TTS |
| 2 | CER ≤ 0.08 + SNR ≥ 15 dB | Clean ASR samples | Multi-speaker TTS with verified transcriptions |
| 3 | CER ≤ 0.15 + SNR ≥ 10 dB | Acceptable quality | Pre-training, data augmentation |
| 4 | Passes basic filters but lower measured quality | Marginal | Large-scale pre-training only, use with caution |
Base assignments (before refinement by CER + SNR):
- TSync2 → Tier 2 (studio recording, but some transcription issues found via CER validation)
- GigaSpeech2 → Tier 3 ("human-annotated" but high CER variance in Thai)
- Common Voice → Tier 2 (community up/down vote validated)
- Porjai Central → Tier 3 (crowdsourced, Whisper-filtered only)
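The measured-quality side of the refinement can be sketched as a simple threshold cascade. This is a hypothetical reconstruction from the table's criteria, not the published assignment code; the interplay with the provenance-based base tiers above is handled separately.

```python
# Hedged sketch of the CER + SNR refinement rules from the table above.
# Thresholds are taken from the table; provenance-based promotion
# (studio / human-annotated sources) is not modeled here.

def tier_from_measurements(cer: float, snr_db: float) -> int:
    """Map measured CER and SNR to a quality tier in 1–4."""
    if cer <= 0.03 and snr_db >= 25:
        return 1  # clean ASR promoted to top tier
    if cer <= 0.08 and snr_db >= 15:
        return 2
    if cer <= 0.15 and snr_db >= 10:
        return 3
    return 4  # passes basic filters only

print(tier_from_measurements(0.02, 30))  # 1
print(tier_from_measurements(0.12, 12))  # 3
```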
Example — train only on tiers 1+2 (recommended for TTS):

```python
ds = load_dataset("dubbing-ai/vaja-thai", "porjai_central")
ds_high_quality = ds.filter(lambda x: x["quality_tier"] <= 2)
```

Example — filter by SNR directly:

```python
ds_clean = ds.filter(lambda x: x["snr_db"] >= 20)
```
## Speaker Labels

- tsync2: Single known professional female speaker (`tsync2_nun`)
- porjai_central: No speaker labels available (`porjai_central_unknown`)
- gigaspeech2: YouTube channel ID used as speaker proxy
- commonvoice: `client_id` hash used as speaker proxy, with optional gender metadata
## License

Each config has its own license. When combining configs, the most restrictive license applies (non-commercial):

| Config | License | Commercial use |
|---|---|---|
| `tsync2` | CC-BY-NC-SA 3.0 | No |
| `porjai_central` | CC-BY-SA 4.0 | Yes |
| `gigaspeech2` | Non-commercial research/education only | No |
| `commonvoice` | CC-0 (public domain) | Yes |

Check the `license` column in each sample for per-sample license info.
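For commercial projects, the per-sample `license` column can drive filtering. A minimal sketch over plain dicts follows; the allow-list strings are assumptions about the exact values stored in the column — inspect a real sample before relying on them.

```python
# Hedged sketch: keep only samples whose source license permits commercial
# use, per the license table above. The exact license strings stored in
# the `license` column are assumptions — check a real sample first.
COMMERCIAL_OK = {"CC-BY-SA-4.0", "CC-0"}

def commercially_usable(sample: dict) -> bool:
    """True if the sample's source license allows commercial use."""
    return sample["license"] in COMMERCIAL_OK

samples = [
    {"id": "tsync2_0001", "license": "CC-BY-NC-SA-3.0"},
    {"id": "commonvoice_0001", "license": "CC-0"},
]
print([s["id"] for s in samples if commercially_usable(s)])  # ['commonvoice_0001']
```

With the `datasets` library, the same predicate can be passed to `ds.filter`.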
## Citation

If you use this dataset, please cite the original source datasets:
```bibtex
@inproceedings{ardila-etal-2020-common,
  title = "Common Voice: A Massively-Multilingual Speech Corpus",
  author = "Ardila, Rosana and Branson, Megan and Davis, Kelly and Kohler, Michael
    and Meyer, Josh and Henretty, Michael and Morais, Reuben and Saunders, Lindsay
    and Tyers, Francis and Weber, Gregor",
  booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
  year = "2020",
  publisher = "European Language Resources Association",
  url = "https://aclanthology.org/2020.lrec-1.520/",
  pages = "4218--4222"
}

@inproceedings{suwanbandit23_interspeech,
  title = "Thai Dialect Corpus and Transfer-based Curriculum Learning
    Investigation for Dialect Automatic Speech Recognition",
  author = "Suwanbandit, Artit and Naowarat, Burin and Sangpetch, Orathai
    and Chuangsuwanich, Ekapol",
  booktitle = "Interspeech 2023",
  year = "2023",
  pages = "4069--4073",
  doi = "10.21437/Interspeech.2023-1828"
}

@article{gigaspeech2,
  title = "GigaSpeech 2: An Evolving, Large-Scale and Multi-domain ASR Corpus
    for Low-Resource Languages with Automated Crawling, Transcription and Refinement",
  author = "Yang, Yifan and Song, Zheshu and Zhuo, Jianheng and Cui, Mingyu
    and Li, Jinpeng and Yang, Bo and Du, Yexing and Ma, Ziyang
    and Liu, Xunying and Wang, Ziyuan and Li, Ke and Fan, Shuai
    and Yu, Kai and Zhang, Wei-Qiang and Chen, Guoguo and Chen, Xie",
  journal = "arXiv preprint arXiv:2406.11546",
  year = "2024"
}

@inproceedings{wutiwiwatchai2007tsync,
  title = "An Intensive Design of a Thai Speech Synthesis Corpus",
  author = "Wutiwiwatchai, Chai and Saychum, Sudaporn and Rugchatjaroen, Anocha",
  booktitle = "International Symposium on Natural Language Processing (SNLP 2007)",
  year = "2007"
}
```