---
arxiv: 2601.13044
---

# TVSpeech (Thai Video Speech)

TVSpeech is a Thai speech recognition benchmark designed as a Robustness Track for evaluating ASR models on real-world, in-the-wild Thai audio. The dataset consists of 570 utterances (3.75 hours) curated from diverse public media channels on YouTube under the Creative Commons Attribution (CC-BY) license, and captures the acoustic and semantic complexity found in natural speech.

## Dataset Overview

- Language: Thai
- Domain: In-the-wild video speech from public media (YouTube)
- Samples: 570 utterances
- Total Duration: 3.75 hours
- Sampling Rate: 16 kHz
- Task: ASR Robustness Evaluation (Robustness Track)
- License: Creative Commons Attribution (CC-BY)
- Source: Diverse public media channels on YouTube

## Dataset Structure

The dataset contains a single test split intended for evaluation purposes only.

### Data Fields

- `audio_id` (string): Unique identifier for each audio sample
- `path` (string): Original file path reference
- `audio` (Audio): Audio file sampled at 16 kHz
- `sentence` (string): Manual transcription of the audio

### Data Statistics

- Total samples: 570
- Average transcription length: 279.72 characters
- Min/Max transcription length: 12 / 505 characters
- Average word count: 14.04 words
- Min/Max word count: 1 / 36 words
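Statistics like these can be derived directly from the `sentence` column. A minimal sketch (the two strings below are placeholders, not actual dataset entries; in practice you would use the loaded test split):

```python
# Sketch: deriving transcription-length statistics from the "sentence" column.
# The strings below are placeholders; in practice, use dataset["test"]["sentence"].
sentences = [
    "ของการเมือง มันมีปัจจัยสำคัญอยู่สามสี่อย่าง",
    "เทคโนโลยีกับการเงิน",
]

char_lengths = [len(s) for s in sentences]
# Note: Thai is written without spaces, so whitespace tokens under-segment
# true words; word counts here reflect space-separated chunks.
word_counts = [len(s.split()) for s in sentences]

stats = {
    "avg_chars": sum(char_lengths) / len(char_lengths),
    "min_chars": min(char_lengths),
    "max_chars": max(char_lengths),
    "avg_words": sum(word_counts) / len(word_counts),
    "min_words": min(word_counts),
    "max_words": max(word_counts),
}
print(stats)
```
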

## Example

```python
from datasets import load_dataset

dataset = load_dataset("typhoon-ai/TVSpeech")

# Access the test split
test_data = dataset["test"]

# View a sample
print(test_data[0])
# {
#   'audio_id': '5c6hg2Vn43M_24416_53728',
#   'path': '...',
#   'audio': {'path': '5c6hg2Vn43M_24416_53728.wav', 'sampling_rate': 16000, 'array': [...]},
#   'sentence': 'ของการเมือง มันมีปัจจัยสำคัญอยู่สามสี่อย่าง...'
# }
```

## Dataset Characteristics

### Robustness Track Design

TVSpeech is specifically designed to measure ASR performance in "in-the-wild" conditions that challenge both acoustic and semantic understanding.

### Manual Curation Strategy

Unlike random sampling approaches, this test set was manually curated to maximize both acoustic and semantic complexity:

- High Lexical Density: Annotators prioritized segments containing domain-specific terminology, proper names, and technical jargon
- Diverse Content Categories: Includes Finance, Technology, and Variety Vlogs
- Low-Frequency Entity Focus: Tests the model's ability to accurately resolve uncommon terms and proper nouns
- Minimal Language Model Dependence: Designed so that accurate transcription cannot be achieved by language-model guessing (hallucination) alone

### Key Evaluation Aspects

TVSpeech evaluates ASR models on:

  1. Acoustic Robustness: Performance on noisy, real-world audio conditions typical of video content
  2. Semantic Complexity: Handling of domain-specific terminology and technical vocabulary
  3. Entity Resolution: Accurate transcription of proper names and low-frequency terms
  4. Lexical Coverage: Ability to transcribe specialized content across multiple domains

This makes TVSpeech particularly valuable for:

- Benchmarking ASR robustness in challenging real-world scenarios
- Evaluating model performance on specialized and technical Thai content
- Testing transcription accuracy on low-frequency vocabulary without language model over-correction
- Comparing model performance across diverse content categories

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("typhoon-ai/TVSpeech")

# Access test samples
for sample in dataset["test"]:
    audio = sample["audio"]
    transcription = sample["sentence"]
    # Your evaluation code here
```
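For the evaluation step in the loop above, word error rate can be computed via edit distance. This is an illustrative sketch only: libraries such as `jiwer` are commonly used in practice, and because Thai is written without spaces, real Thai evaluations typically apply a word tokenizer first or report character error rate instead.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over whitespace tokens,
    divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost,  # match / substitution
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("a b c d", "a x c"))  # one substitution + one deletion over 4 words -> 0.5
```
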

## Dataset Creation

### Source and Curation

TVSpeech was curated from diverse public media channels on YouTube under the Creative Commons Attribution (CC-BY) license. Unlike random sampling approaches, the dataset was manually curated to maximize both acoustic and semantic complexity.

### Selection Criteria

Annotators prioritized segments with high lexical density, specifically selecting clips containing:

- Domain-specific terminology across Finance, Technology, and Variety Vlogs
- Proper names and entity references
- Technical jargon and specialized vocabulary
- Low-frequency terms that challenge language model predictions

This selection strategy ensures the dataset evaluates ASR models not just on robustness to background noise, but on their ability to accurately resolve low-frequency entities without falling back on language-model hallucination or over-correction.

### Processing

All audio samples were:

- Sourced from YouTube public media channels (CC-BY licensed)
- Manually transcribed by native Thai speakers
- Normalized to a 16 kHz sampling rate
- Validated for transcription quality and accuracy
- Processed using the Typhoon data pipeline for consistent formatting
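The 16 kHz normalization step can be illustrated with a naive linear-interpolation resampler. This is a sketch only (the actual Typhoon pipeline is not described here); production code would use a polyphase or sinc resampler such as those in `torchaudio` or `librosa`.

```python
import numpy as np

def resample_linear(audio: np.ndarray, orig_sr: int, target_sr: int = 16000) -> np.ndarray:
    """Naive linear-interpolation resampling to target_sr (illustration only;
    lacks the anti-aliasing filter a real resampler applies when downsampling)."""
    if orig_sr == target_sr:
        return audio
    duration = len(audio) / orig_sr
    n_out = int(round(duration * target_sr))
    old_t = np.arange(len(audio)) / orig_sr   # original sample times (s)
    new_t = np.arange(n_out) / target_sr      # target sample times (s)
    return np.interp(new_t, old_t, audio)

# 1 second of a 440 Hz tone at 44.1 kHz, resampled to 16 kHz
tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
resampled = resample_linear(tone, 44100)
print(len(resampled))  # 16000 samples
```
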

## Limitations

- Test-only dataset: Contains only 570 samples intended for evaluation, not training
- Domain specificity: Focuses on conversational media speech and may not represent all Thai speech domains
- Size: Relatively small; suitable for benchmarking, not comprehensive coverage
- Regional variations: May not fully represent all Thai dialects and accents

## Citation

If you use this dataset in your research or application, please cite our technical report:


```bibtex
@misc{warit2026typhoonasr,
      title={Typhoon ASR Real-time: FastConformer-Transducer for Thai Automatic Speech Recognition},
      author={Warit Sirichotedumrong and Adisai Na-Thalang and Potsawee Manakul and Pittawat Taveekitworachai and Sittipong Sripaisarnmongkol and Kunat Pipatanakul},
      year={2026},
      eprint={2601.13044},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2601.13044},
}
```

## Contact

For questions or feedback about this dataset, please visit opentyphoon.ai or open an issue on the dataset repository.