---
dataset_info:
  features:
  - name: ID
    dtype: string
  - name: speaker_id
    dtype: string
  - name: Language
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: transcript
    dtype: string
  - name: length
    dtype: float32
  - name: dataset_name
    dtype: string
  - name: confidence_score
    dtype: float64
  splits:
  - name: train
    num_examples: 120
  download_size: 0
  dataset_size: 0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train/*.parquet
---
# Thanarit/Thai-Voice

Combined Thai audio dataset from multiple sources.

## Dataset Details
- Total samples: 120
- Total duration: 0.13 hours
- Language: Thai (th)
- Audio format: 16kHz mono WAV
- Volume normalization: -20dB
## Sources

Processed 1 dataset in streaming mode.

### Source Datasets

- GigaSpeech2: Large-scale multilingual speech corpus
## Usage

```python
from datasets import load_dataset

# Load with streaming to avoid downloading everything
dataset = load_dataset("Thanarit/Thai-Voice-Test-Viewer-Fix", streaming=True)

# Iterate through samples
for sample in dataset['train']:
    print(sample['ID'], sample['transcript'][:50])
    # Process audio: sample['audio']
    break
```
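When iterated this way, the `audio` column decodes to a dict holding the raw samples (`array`) and the `sampling_rate`. As a minimal sketch of working with that structure without downloading the dataset, the snippet below mimics one decoded sample with a synthetic 16 kHz tone and recomputes its duration, which should match the `length` field:

```python
import numpy as np

# Mimic one decoded `audio` value: a dict with "array" and "sampling_rate",
# here a synthetic 2-second 440 Hz tone at 16 kHz (no download needed).
sampling_rate = 16000
t = np.linspace(0, 2.0, 2 * sampling_rate, endpoint=False)
sample_audio = {"array": 0.1 * np.sin(2 * np.pi * 440 * t),
                "sampling_rate": sampling_rate}

def duration_seconds(audio: dict) -> float:
    """Duration in seconds of a decoded audio sample."""
    return len(audio["array"]) / audio["sampling_rate"]

print(duration_seconds(sample_audio))  # 2.0
```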
## Schema

- `ID`: Unique identifier (S1, S2, S3, ...)
- `speaker_id`: Speaker identifier (SPK_00001, SPK_00002, ...)
- `Language`: Language code (always "th" for Thai)
- `audio`: Audio data with 16kHz sampling rate
- `transcript`: Text transcript of the audio
- `length`: Duration in seconds
- `dataset_name`: Source dataset name (e.g., "GigaSpeech2", "ProcessedVoiceTH", "MozillaCommonVoice")
- `confidence_score`: Confidence score of the transcript (0.0-1.0)
  - 1.0: Original transcript from source dataset
  - <1.0: STT-generated transcript
  - 0.0: Fallback transcript (e.g., [NO_TRANSCRIPT])
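The `confidence_score` convention above makes it easy to keep only transcripts that came verbatim from a source dataset. A small sketch, shown on an in-memory list mirroring the schema rather than the real dataset:

```python
# Keep only samples whose transcript is original (confidence_score == 1.0),
# dropping STT-generated (<1.0) and fallback (0.0) transcripts.
# Illustrative in-memory samples following the schema above:
samples = [
    {"ID": "S1", "transcript": "สวัสดีครับ", "confidence_score": 1.0},
    {"ID": "S2", "transcript": "ทดสอบ", "confidence_score": 0.85},
    {"ID": "S3", "transcript": "[NO_TRANSCRIPT]", "confidence_score": 0.0},
]

original_only = [s for s in samples if s["confidence_score"] == 1.0]
print([s["ID"] for s in original_only])  # ['S1']
```

With a streaming dataset the same predicate can be passed to `dataset['train'].filter(...)`.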
## Processing Details
This dataset was created using streaming processing to handle large-scale data without requiring full downloads. Audio has been standardized to 16kHz mono with -20dB volume normalization.
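The -20 dB normalization step can be sketched as scaling each clip to a target level. This is only an illustration of the idea, assuming an RMS (dBFS) target; the actual pipeline may measure level differently (e.g., peak):

```python
import numpy as np

def normalize_to_db(audio: np.ndarray, target_db: float = -20.0) -> np.ndarray:
    """Scale audio so its RMS level is target_db dBFS (assumed interpretation)."""
    rms = np.sqrt(np.mean(audio ** 2))
    if rms == 0:
        return audio  # silent clip: nothing to scale
    target_rms = 10 ** (target_db / 20)  # -20 dBFS -> amplitude 0.1
    return audio * (target_rms / rms)

# One second of a 100 Hz tone at 16 kHz, then normalized to -20 dBFS.
tone = 0.5 * np.sin(np.linspace(0, 2 * np.pi * 100, 16000))
normalized = normalize_to_db(tone)
print(round(float(np.sqrt(np.mean(normalized ** 2))), 3))  # 0.1
```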