---
dataset_info:
  features:
    - name: ID
      dtype: string
    - name: speaker_id
      dtype: string
    - name: Language
      dtype: string
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: transcript
      dtype: string
    - name: length
      dtype: float32
    - name: dataset_name
      dtype: string
    - name: confidence_score
      dtype: float64
  splits:
    - name: train
      num_examples: 0
  download_size: 0
  dataset_size: 0
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train/*.parquet
---

Thanarit/Thai-Voice

Combined Thai audio dataset from multiple sources

Dataset Details

  • Total samples: 0
  • Total duration: 0.00 hours
  • Language: Thai (th)
  • Audio format: 16kHz mono WAV
  • Volume normalization: -20dB

Sources

Three source datasets were processed in streaming mode.

Source Datasets

  1. GigaSpeech2: Large-scale multilingual speech corpus
  2. ProcessedVoiceTH: Thai voice dataset with processed audio
  3. MozillaCommonVoice: Mozilla Common Voice Thai dataset
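Since the sources above are combined in streaming mode, merging can be pictured as round-robin interleaving of iterators (the `datasets` library provides `interleave_datasets` for the real thing). Below is a dependency-free sketch; the `roundrobin` helper and the toy samples are hypothetical stand-ins, not part of this dataset's pipeline:

```python
def roundrobin(*iterables):
    """Yield one item from each iterable in turn until all are exhausted."""
    iterators = [iter(it) for it in iterables]
    while iterators:
        for it in list(iterators):
            try:
                yield next(it)
            except StopIteration:
                iterators.remove(it)

# Toy stand-ins for the three streaming sources
giga = ({"dataset_name": "GigaSpeech2", "ID": f"G{i}"} for i in range(2))
voice_th = ({"dataset_name": "ProcessedVoiceTH", "ID": f"P{i}"} for i in range(2))
cv = ({"dataset_name": "MozillaCommonVoice", "ID": f"M{i}"} for i in range(2))

merged = list(roundrobin(giga, voice_th, cv))
print([s["ID"] for s in merged])  # ['G0', 'P0', 'M0', 'G1', 'P1', 'M1']
```

With the actual Hub datasets, `datasets.interleave_datasets` on streaming splits achieves the same effect without loading everything into memory.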

Usage

```python
from datasets import load_dataset

# Load with streaming to avoid downloading everything
dataset = load_dataset("Thanarit/Thai-Voice-Test2", streaming=True)

# Iterate through samples
for sample in dataset['train']:
    print(sample['ID'], sample['transcript'][:50])
    # Process audio: sample['audio']
    break
```
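When the `audio` field is decoded, the `datasets` library returns a dict with an `array` and a `sampling_rate`, and the `length` field should equal `len(array) / sampling_rate`. A minimal sketch with a synthetic stand-in sample (no download needed; the values below are illustrative, not real data):

```python
import numpy as np

# Hypothetical stand-in for one decoded sample
sample = {
    "audio": {
        "array": np.zeros(32000, dtype=np.float32),  # 2 s of 16 kHz silence
        "sampling_rate": 16000,
    },
    "length": 2.0,
}

# Recover duration in seconds from the raw audio array
duration = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(duration)  # 2.0
```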

Schema

  • ID: Unique identifier (S1, S2, S3, ...)
  • speaker_id: Speaker identifier (SPK_00001, SPK_00002, ...)
  • Language: Language code (always "th" for Thai)
  • audio: Audio data with 16kHz sampling rate
  • transcript: Text transcript of the audio
  • length: Duration in seconds
  • dataset_name: Source dataset name (e.g., "GigaSpeech2", "ProcessedVoiceTH", "MozillaCommonVoice")
  • confidence_score: Confidence score of the transcript (0.0-1.0)
    • 1.0: Original transcript from source dataset
    • <1.0: STT-generated transcript
    • 0.0: Fallback transcript (e.g., [NO_TRANSCRIPT])
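Given the confidence scheme above, one common need is to keep only transcripts that came straight from the source dataset (`confidence_score == 1.0`). A sketch using in-memory stand-in samples (the records below are illustrative, not real entries):

```python
# Hypothetical stand-ins mirroring the schema's confidence_score semantics
samples = [
    {"ID": "S1", "transcript": "...", "confidence_score": 1.0},   # original
    {"ID": "S2", "transcript": "...", "confidence_score": 0.87},  # STT-generated
    {"ID": "S3", "transcript": "[NO_TRANSCRIPT]", "confidence_score": 0.0},  # fallback
]

original_only = [s for s in samples if s["confidence_score"] == 1.0]
print([s["ID"] for s in original_only])  # ['S1']
```

With the real streaming dataset, the same predicate can be passed to the split's `filter` method (supported on streaming datasets in recent `datasets` versions).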

Processing Details

This dataset was created using streaming processing to handle large-scale data without requiring full downloads. Audio has been standardized to 16kHz mono with -20dB volume normalization.
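The -20dB volume normalization mentioned above can be sketched as RMS gain scaling, assuming the target is an RMS level in dBFS (the exact method used by the pipeline is not specified here, so this is an illustrative reimplementation):

```python
import numpy as np

def normalize_volume(audio: np.ndarray, target_db: float = -20.0) -> np.ndarray:
    """Scale audio so its RMS level lands at target_db dBFS (sketch)."""
    rms = np.sqrt(np.mean(audio ** 2))
    if rms == 0:
        return audio  # silence: nothing to scale
    current_db = 20 * np.log10(rms)
    gain = 10 ** ((target_db - current_db) / 20)
    return audio * gain

# Example: a 1 kHz sine at 16 kHz, normalized to -20 dBFS RMS
sr = 16000
t = np.arange(sr) / sr
tone = 0.9 * np.sin(2 * np.pi * 1000 * t)
normalized = normalize_volume(tone)
rms_db = 20 * np.log10(np.sqrt(np.mean(normalized ** 2)))
print(round(rms_db, 2))  # -20.0
```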