---
dataset_info:
  - config_name: en
    features:
      - name: audio_output_path
        dtype: string
      - name: prompt_text
        dtype: string
      - name: prompt_audio
        dtype: string
      - name: text_input
        dtype: string
      - name: audio_ground_truth
        dtype: string
    splits:
      - name: test_wer
        num_examples: 1088
      - name: test_sim
        num_examples: 1086
  - config_name: zh
    features:
      - name: audio_output_path
        dtype: string
      - name: prompt_text
        dtype: string
      - name: prompt_audio
        dtype: string
      - name: text_input
        dtype: string
      - name: audio_ground_truth
        dtype: string
    splits:
      - name: test_wer
        num_examples: 2020
      - name: test_sim
        num_examples: 2018
      - name: test_wer_hardcase
        num_examples: 400
configs:
  - config_name: en
    data_files:
      - split: test_wer
        path: en/meta.jsonl
      - split: test_sim
        path: en/non_para_reconstruct_meta.jsonl
  - config_name: zh
    data_files:
      - split: test_wer
        path: zh/meta.jsonl
      - split: test_sim
        path: zh/non_para_reconstruct_meta.jsonl
      - split: test_wer_hardcase
        path: zh/hardcase.jsonl
---

# SeedTTS Evaluation Dataset

This dataset contains evaluation data for testing the SeedTTS text-to-speech model in English and Chinese.

Original repository: https://github.com/BytedanceSpeech/seed-tts-eval


## Languages

- **English (`en`)**: `test_wer` and `test_sim` splits
- **Chinese (`zh`)**: `test_wer`, `test_sim`, and `test_wer_hardcase` splits
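The `test_wer` splits are scored by word error rate against the ground-truth transcripts (the official scoring pipeline lives in the seed-tts-eval repository linked above). Purely as an illustration of the metric, a minimal word-level WER can be sketched as:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance (substitution, insertion, deletion).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, `wer("the cat sat", "the dog sat")` is 1/3 (one substitution over three reference words).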

## Usage

```python
# Make sure the datasets library is installed: pip install datasets==3.5.1
import os

from datasets import load_dataset

repo_dir = "hhqx/seedtts_testset"

ds_en = load_dataset(repo_dir, "en", trust_remote_code=True)
print(ds_en["test_wer"][0])

ds_zh = load_dataset(repo_dir, "zh", trust_remote_code=True)
print(ds_zh["test_sim"][0])

# Access specific splits
en_wer = ds_en["test_wer"]
en_sim = ds_en["test_sim"]

zh_wer = ds_zh["test_wer"]
zh_sim = ds_zh["test_sim"]
zh_hardcase = ds_zh["test_wer_hardcase"]

# Verify that every referenced audio file exists on disk
for config, split in [
    ("en", "test_wer"),
    ("en", "test_sim"),
    ("zh", "test_wer"),
    ("zh", "test_sim"),
    ("zh", "test_wer_hardcase"),
]:
    data = load_dataset(repo_dir, config, trust_remote_code=True, split=split)
    for item in data:
        for key, value in item.items():
            if key in ("audio_ground_truth", "prompt_audio") and value:
                assert os.path.exists(value), f"path does not exist: {value}"
    print("len of {} {}: {}".format(config, split, len(data)))
```

## Data Structure

### Dataset Info (example)

```yaml
dataset_info:
  - config_name: en
    features:
      - audio_output_path: string
      - prompt_text: string
      - prompt_audio: string
      - text_input: string
      - audio_ground_truth: string
    splits:
      - name: test_wer
        num_examples: 1088
      - name: test_sim
        num_examples: 1086

  - config_name: zh
    features:
      - audio_output_path: string
      - prompt_text: string
      - prompt_audio: string
      - text_input: string
      - audio_ground_truth: string
    splits:
      - name: test_wer
        num_examples: 2020
      - name: test_sim
        num_examples: 2018
      - name: test_wer_hardcase
        num_examples: 400
```

### Configs & Data Files Mapping

```yaml
configs:
  - config_name: en
    data_files:
      - split: test_wer
        path: data/en_meta.jsonl
      - split: test_sim
        path: data/en_non_para_reconstruct_meta.jsonl

  - config_name: zh
    data_files:
      - split: test_wer
        path: data/zh_meta.jsonl
      - split: test_sim
        path: data/zh_non_para_reconstruct_meta.jsonl
      - split: test_wer_hardcase
        path: data/zh_hardcase.jsonl
```
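Each metadata file is JSON Lines: one JSON object per line whose keys match the five string features above. The record below is purely illustrative (the paths are hypothetical, not taken from the dataset):

```python
import json

# Hypothetical example record; the real files are the .jsonl paths listed above.
record = {
    "audio_output_path": "output/0001.wav",
    "prompt_text": "Reference transcript of the prompt audio.",
    "prompt_audio": "wavs/prompt_0001.wav",
    "text_input": "Text the TTS model should synthesize.",
    "audio_ground_truth": "wavs/gt_0001.wav",
}

line = json.dumps(record, ensure_ascii=False)  # one line of a meta .jsonl file
```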

## File Structure

```text
.
├── seedtts_dataset.py          # Dataset loading script
├── README.md                   # This file
├── data/
│   ├── en_meta.jsonl
│   ├── en_non_para_reconstruct_meta.jsonl
│   ├── en.tgz                  # Compressed audio (.wav) files for English
│   ├── zh_meta.jsonl
│   ├── zh_non_para_reconstruct_meta.jsonl
│   ├── zh_hardcase.jsonl
│   └── zh.tgz                  # Compressed audio (.wav) files for Chinese
├── convert_seedtts_to_dataset.py
└── test_demo.py
```

## Notes

- The `.tgz` files contain the audio `.wav` files and are automatically extracted to the local Hugging Face cache directory when the dataset is loaded.
- To control where the archives are extracted and cached, pass the `cache_dir` argument to `load_dataset`:

  ```python
  ds = load_dataset("hhqx/seedtts_testset", "en", cache_dir="/your/fast/storage/path")
  ```