Dataset Description
This dataset contains 50 Mandarin Chinese sentences recorded for Text-to-Speech (TTS) model training, totaling 3 minutes and 48 seconds of clean audio. Each sentence is saved as an individual 16-bit PCM WAV file at a 44.1 kHz sample rate, with an accurate transcription and an English translation listed in the accompanying metadata.csv file. Collection involved four stages: script preparation focused on phonetic diversity, professional recording in a controlled acoustic environment using a Blue Yeti USB microphone, careful audio segmentation to isolate complete sentences, and thorough quality validation to ensure clarity and accuracy.
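For the dataset viewer (and the Hugging Face AudioFolder loader) to pick up the transcriptions, metadata.csv must contain a `file_name` column pointing at each WAV file; the other column names are free-form. A minimal sketch of the expected layout, using hypothetical file names and example sentences rather than the actual dataset rows:

```python
import csv
import io

# Hypothetical rows illustrating the metadata.csv layout. The `file_name`
# column is required by the AudioFolder convention; `transcription` and
# `translation` are illustrative names for the other two columns.
metadata = io.StringIO()
writer = csv.writer(metadata)
writer.writerow(["file_name", "transcription", "translation"])
writer.writerow(["audio/sentence_001.wav", "你好，很高兴认识你。", "Hello, nice to meet you."])
writer.writerow(["audio/sentence_002.wav", "今天天气很好。", "The weather is nice today."])

# Reading it back the way a loader would:
metadata.seek(0)
rows = list(csv.DictReader(metadata))
print(rows[0]["file_name"])  # audio/sentence_001.wav
```

With that column in place, the set can be loaded as an audio folder dataset and each row resolves to one WAV clip plus its text.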
Issues Encountered & Solution
Two primary technical challenges arose during dataset preparation. The first was inaccurate audio segmentation using automatic silence detection. The tool frequently misidentified short pauses within sentences as boundaries, creating fragmented clips, or missed quick transitions between sentences, resulting in merged audio. To resolve this, I abandoned full automation. I used a conservative silence threshold (-40 dB, 300 ms) for an initial rough split and then manually listened to and adjusted every segment boundary in Audacity, ensuring each clip contained exactly one complete sentence.
The second issue was WAV format conversion and compatibility. My original high-quality recordings were in 48kHz, 32-bit format, but the specification required 16-bit PCM at 44.1kHz. Direct conversion sometimes introduced audible artifacts. My solution was a two-step process: I first used Audacity's export function with high-quality sample rate conversion and dithering enabled to create a correctly formatted test file. After confirming its integrity, I applied the same settings in a batch export process to convert all files consistently, followed by a validation check to ensure every file was playable and met the required specifications.
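The final validation check can be sketched with Python's standard-library wave module; the checks mirror the target specification (uncompressed 16-bit PCM at 44.1 kHz), though the function name and the throwaway test file here are mine, not part of the actual pipeline.

```python
import os
import struct
import tempfile
import wave

def validate_wav(path, rate=44100, sample_width=2):
    """Check that a WAV file is non-empty, uncompressed 16-bit PCM at the
    expected sample rate. Raises wave.Error if the file is unreadable."""
    with wave.open(path, "rb") as wf:
        return (wf.getframerate() == rate
                and wf.getsampwidth() == sample_width
                and wf.getcomptype() == "NONE"   # plain PCM, no compression
                and wf.getnframes() > 0)

# Write a 10 ms silent file in the target format, then validate it.
path = os.path.join(tempfile.mkdtemp(), "test.wav")
with wave.open(path, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)        # 16-bit
    wf.setframerate(44100)
    wf.writeframes(struct.pack("<" + "h" * 441, *([0] * 441)))

print(validate_wav(path))  # True
```

Because wave.open both parses the header and raises on malformed files, running this over every exported clip catches both wrong-format files and corrupt ones in a single pass.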