
SeedTTS Evaluation Dataset

This dataset contains evaluation data for testing the SeedTTS text-to-speech model in English and Chinese.

Original repository: https://github.com/BytedanceSpeech/seed-tts-eval


Languages

  • English (en): Contains test_wer and test_sim splits
  • Chinese (zh): Contains test_wer, test_sim, and test_wer_hardcase splits
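
The config-to-splits layout above can be captured as a small mapping, which is handy for iterating over every evaluation set (a sketch; the names are taken directly from the list above):

```python
# Config -> splits mapping, as listed in the Languages section.
CONFIG_SPLITS = {
    "en": ["test_wer", "test_sim"],
    "zh": ["test_wer", "test_sim", "test_wer_hardcase"],
}

# Enumerate every (config, split) pair to evaluate.
all_pairs = [(cfg, s) for cfg, splits in CONFIG_SPLITS.items() for s in splits]
print(len(all_pairs))  # -> 5
```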

Usage


# Make sure a compatible version is installed: pip install datasets==3.5.1

import os
from datasets import load_dataset

repo_dir = "hhqx/seedtts_testset"

ds_en = load_dataset(repo_dir, 'en', trust_remote_code=True)
print(ds_en['test_wer'][0])

ds_zh = load_dataset(repo_dir, 'zh', trust_remote_code=True)
print(ds_zh['test_sim'][0])


# Access specific splits
en_wer = ds_en['test_wer']
en_sim = ds_en['test_sim']

zh_wer = ds_zh['test_wer']
zh_sim = ds_zh['test_sim']
zh_hardcase = ds_zh['test_wer_hardcase']


# Verify that every referenced audio file exists on disk, then report split sizes.
for config, split in [
    ('en', 'test_wer'),
    ('en', 'test_sim'),
    ('zh', 'test_wer'),
    ('zh', 'test_sim'),
    ('zh', 'test_wer_hardcase'),
]:
    data = load_dataset(repo_dir, config, trust_remote_code=True, split=split)
    for item in data:
        for key, value in item.items():
            if key in ('audio_ground_truth', 'prompt_audio') and value:
                assert os.path.exists(value), f'path does not exist: {value}'
    print(f'len of {config} {split}: {len(data)}')

Data Structure

Dataset Info (example)

dataset_info:
  - config_name: en
    features:
      - audio_output_path: string
      - prompt_text: string
      - prompt_audio: string
      - text_input: string
      - audio_ground_truth: string
    splits:
      - name: test_wer
        num_examples: 1088  # Update with actual numbers
      - name: test_sim
        num_examples: 1086  # Update with actual numbers

  - config_name: zh
    features:
      - audio_output_path: string
      - prompt_text: string
      - prompt_audio: string
      - text_input: string
      - audio_ground_truth: string
    splits:
      - name: test_wer
        num_examples: 2020  # Update with actual numbers
      - name: test_sim
        num_examples: 2018  # Update with actual numbers
      - name: test_wer_hardcase
        num_examples: 400   # Update with actual numbers
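
Concretely, every example under the schema above is a flat record of string fields, where the audio columns hold file paths rather than decoded waveforms. A minimal sketch of what one row looks like (the values here are hypothetical placeholders, not real dataset entries):

```python
# Illustrative record matching the feature schema above.
# All values are hypothetical placeholders, not actual dataset rows.
example = {
    "audio_output_path": "output/en/0001.wav",
    "prompt_text": "A short reference transcript.",
    "prompt_audio": "en/prompt-wavs/0001.wav",
    "text_input": "Text the TTS model should synthesize.",
    "audio_ground_truth": "en/wavs/0001.wav",
}

EXPECTED_FEATURES = {
    "audio_output_path", "prompt_text", "prompt_audio",
    "text_input", "audio_ground_truth",
}

# Every feature is a plain string (a path or a piece of text).
assert set(example) == EXPECTED_FEATURES
assert all(isinstance(v, str) for v in example.values())
```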

Configs & Data Files Mapping

configs:
  - config_name: en
    data_files:
      - split: test_wer
        path: data/en_meta.jsonl
      - split: test_sim
        path: data/en_non_para_reconstruct_meta.jsonl

  - config_name: zh
    data_files:
      - split: test_wer
        path: data/zh_meta.jsonl
      - split: test_sim
        path: data/zh_non_para_reconstruct_meta.jsonl
      - split: test_wer_hardcase
        path: data/zh_hardcase.jsonl
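
Because each split is backed by a plain JSONL file (one JSON object per line), the metadata can also be inspected without the `datasets` library. A standard-library sketch, using a hypothetical line in the shape of `data/en_meta.jsonl` rather than the real file:

```python
import io
import json

# Hypothetical JSONL content shaped like data/en_meta.jsonl (one JSON object per line).
sample_jsonl = io.StringIO(
    '{"audio_output_path": "output/0001.wav", "prompt_text": "hi", '
    '"prompt_audio": "p/0001.wav", "text_input": "hello world", '
    '"audio_ground_truth": "gt/0001.wav"}\n'
)

# For the real file, replace sample_jsonl with open("data/en_meta.jsonl").
records = [json.loads(line) for line in sample_jsonl if line.strip()]
print(records[0]["text_input"])  # -> hello world
```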

File Structure

.
β”œβ”€β”€ seedtts_testset.py          # Dataset loading script
β”œβ”€β”€ README.md                   # This file
β”œβ”€β”€ data/
β”‚   β”œβ”€β”€ en_meta.jsonl
β”‚   β”œβ”€β”€ en_non_para_reconstruct_meta.jsonl
β”‚   β”œβ”€β”€ en.tgz                  # Compressed wav/audio files for English
β”‚   β”œβ”€β”€ zh_meta.jsonl
β”‚   β”œβ”€β”€ zh_non_para_reconstruct_meta.jsonl
β”‚   β”œβ”€β”€ zh_hardcase.jsonl
β”‚   └── zh.tgz                  # Compressed wav/audio files for Chinese
β”œβ”€β”€ convert_seedtts_to_dataset.py
└── test_demo.py

Notes

  • The .tgz files contain the audio .wav files and will be automatically extracted to the local Hugging Face cache directory during dataset loading.
  • To control where the data archive is extracted and cached, use the cache_dir argument in load_dataset, e.g.:
ds = load_dataset("path/to/seedtts-dataset-repo", "en", cache_dir="/your/fast/storage/path")
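
The automatic extraction described above is essentially an untar into the cache directory. A minimal standard-library sketch of the same operation, using a throwaway archive in a temporary directory rather than the real en.tgz:

```python
import tarfile
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)

    # Build a tiny stand-in for data/en.tgz containing one fake wav member.
    fake_wav = tmp / "0001.wav"
    fake_wav.write_bytes(b"not-a-real-wav")  # placeholder bytes, not valid audio
    archive = tmp / "en.tgz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(fake_wav, arcname="en/wavs/0001.wav")

    # Extract, as load_dataset does into its cache directory.
    cache_dir = tmp / "cache"
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(cache_dir)

    ok = (cache_dir / "en" / "wavs" / "0001.wav").exists()

print(ok)  # -> True
```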