Step 1: Audio Acquisition and Preparation

  • Source: The primary audio source was a YouTube video titled "মানুহৰ চকু আৰু বাৰে বৰণীয়া পৃথিৱী Class 10 Science Chapter11||The Human Eye and The colourful World" available at https://www.youtube.com/watch?v=vhjQOBZIJlQ&t=1566s.
  • Download & Format: The video's audio track was downloaded and saved as master.mp3. While a lossless format such as WAV is generally preferred for audio processing, MP3 was chosen for practical reasons, with the acknowledged trade-off that the source audio from YouTube is already lossy-compressed. A minimal download sketch follows below.
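
As an illustration only (the card does not name the download tool), here is a minimal sketch using yt-dlp's Python API; it assumes ffmpeg is installed for the MP3 conversion:

```python
import yt_dlp

URL = "https://www.youtube.com/watch?v=vhjQOBZIJlQ"

ydl_opts = {
    "format": "bestaudio/best",       # pick the best available audio stream
    "outtmpl": "master.%(ext)s",      # post-processing yields master.mp3
    "postprocessors": [{
        "key": "FFmpegExtractAudio",  # requires ffmpeg on PATH
        "preferredcodec": "mp3",
    }],
}

with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download([URL])
```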

Step 2: Accurate Transcription (Text Ground Truth)

  • Process: The master.mp3 audio was manually transcribed into a plain text file (transcript.txt) to ensure 100% accuracy. This manual transcription is crucial as it serves as the "ground truth" for all subsequent alignment and training.
  • Challenge: Initial attempts using generic AI transcription services failed to accurately process the Assamese language, often producing broken Bengali or mixed-language output. This highlighted the necessity of a human-verified transcript for low-resource languages.

Step 3: Forced Alignment (Word-Level Timestamps)

  • Goal: To achieve a "gold standard" dataset by obtaining word-level timestamps, linking every spoken word in master.mp3 to its precise start and end time.
  • Tool: The stable-whisper library was selected for this forced alignment task.
  • Challenges & Solutions:
    • Local GPU Failure: Initial attempts to run the process on a local Mac failed due to low-level incompatibilities in PyTorch's Metal (MPS) backend (NotImplementedError, "Invalid buffer size" errors), demonstrating the current limitations of that hardware/software stack for this task.
    • Cloud GPU Solution: To overcome local issues, the process was moved to Google Colab with a T4 GPU runtime. This provided a stable, pre-configured environment.
    • Library Bugs: The stable-whisper library exhibited internal bugs related to FFmpeg audio loading and to its own API. These were bypassed by modifying the Python script to load the audio with the base whisper library while still using stable-whisper for the alignment itself (see the sketch after this list).
    • Output: The process successfully generated an aligned_transcription.json file containing the required word-level timestamp data.
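
A minimal sketch of that workaround, assuming stable-whisper's model.align() API and Whisper's "as" language code for Assamese; the author's actual script may differ:

```python
import stable_whisper
import whisper  # base openai-whisper, used here only to load the audio

# Load the audio with the base whisper loader (a 16 kHz mono float array),
# sidestepping the FFmpeg-related issues in stable-whisper's own loader.
audio = whisper.load_audio("master.mp3")

with open("transcript.txt", encoding="utf-8") as f:
    text = f.read()

# Force-align the human-verified transcript against the audio.
model = stable_whisper.load_model("base")
result = model.align(audio, text, language="as")  # "as" = Assamese

# Persist word-level timestamps for the segmentation step.
result.save_as_json("aligned_transcription.json")
```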

Step 4: Audio Segmentation & Metadata Generation

  • Process: The aligned_transcription.json file was used by a Python script (segment.py) to automate the segmentation.
    • Using the pydub library, the master.mp3 file was loaded.
    • For each word entry in the JSON, the corresponding audio segment was extracted (chunked).
    • These audio chunks were saved as individual .wav files in a wavs/ directory.
    • A metadata.csv file was generated in the LJSpeech format (path|text), linking each .wav file to its corresponding transcribed word.
  • Audio Sample Rate: A critical aspect of audio preparation is the sample rate: the Whisper model family expects 16 kHz audio, so the workflow resampled the master recording from its original rate (e.g., 44.1 kHz) down to 16 kHz before processing. Both the resampling and the chunking are sketched below.
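
A minimal sketch of what segment.py plausibly does; the JSON layout (segments containing words with start/end times in seconds) matches what stable-whisper writes, but the chunk naming scheme here is illustrative, not the author's:

```python
import csv
import json
from pathlib import Path

from pydub import AudioSegment

# Load the master recording; resample to 16 kHz mono as described above.
audio = AudioSegment.from_mp3("master.mp3").set_frame_rate(16000).set_channels(1)

Path("wavs").mkdir(exist_ok=True)

with open("aligned_transcription.json", encoding="utf-8") as f:
    aligned = json.load(f)

with open("metadata.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="|")
    idx = 0
    for segment in aligned["segments"]:
        for word in segment["words"]:
            start_ms = int(word["start"] * 1000)  # pydub slices in milliseconds
            end_ms = int(word["end"] * 1000)
            chunk = audio[start_ms:end_ms]
            wav_path = f"wavs/chunk_{idx:05d}.wav"
            chunk.export(wav_path, format="wav")
            writer.writerow([wav_path, word["word"].strip()])  # LJSpeech path|text
            idx += 1
```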

Conclusion

This structured dataset, with its word/segment-level timestamps, is now complete and hosted for demonstration purposes.
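
For inspection, a minimal loading sketch using pandas and the Hugging Face datasets library, assuming metadata.csv sits alongside the wavs/ directory (note that this pipe-separated LJSpeech layout differs from the file_name-based audiofolder format the Hub's dataset viewer expects, so the data is loaded manually here):

```python
import pandas as pd
from datasets import Audio, Dataset

# Read the pipe-separated LJSpeech-style metadata (path|text).
df = pd.read_csv("metadata.csv", sep="|", names=["audio", "text"])

# Build a dataset and decode each wav path into a 16 kHz audio array.
ds = Dataset.from_pandas(df).cast_column("audio", Audio(sampling_rate=16000))

print(ds[0]["text"])
print(ds[0]["audio"]["array"].shape)
```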
