---
license: mit
tags:
- assamese
- audio
- physics
- class10
delimiter: '|'
column_names:
- file_name
- text
features:
- name: file_name
  dtype: audio
- name: text
  dtype: string
---
## Step 1: Audio Acquisition and Preparation
- Source: The primary audio source was a YouTube video titled "মানুহৰ চকু আৰু বাৰে বৰণীয়া পৃথিৱী Class 10 Science Chapter11||The Human Eye and The colourful World", available at https://www.youtube.com/watch?v=vhjQOBZIJlQ&t=1566s.
- Download & Format: The video's audio was downloaded and converted to `master.mp3`. While an uncompressed or losslessly compressed format (such as WAV) is generally preferred for audio processing, MP3 was chosen for practical reasons, since the source audio was already lossily compressed. (A reproduction sketch of this step is shown below.)
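The original workflow does not state which tool performed the download and conversion; the snippet below is a minimal sketch of one way to reproduce it, assuming the `yt-dlp` Python package and FFmpeg are installed.

```python
# Sketch only: downloads the cited video's audio track and converts it to
# master.mp3 via FFmpeg. The actual tool used in the workflow may differ.
from yt_dlp import YoutubeDL

options = {
    "format": "bestaudio/best",          # grab the best available audio stream
    "outtmpl": "master.%(ext)s",          # final file becomes master.mp3
    "postprocessors": [
        {"key": "FFmpegExtractAudio", "preferredcodec": "mp3"}
    ],
}

with YoutubeDL(options) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=vhjQOBZIJlQ&t=1566s"])
```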
## Step 2: Accurate Transcription (Text Ground Truth)
- Process: The `master.mp3` audio was manually transcribed into a plain text file (`transcript.txt`) to ensure 100% accuracy. This manual transcription is crucial, as it serves as the "ground truth" for all subsequent alignment and training.
- Challenge: Initial attempts using generic AI transcription services failed to accurately process the Assamese language, often producing broken Bengali or mixed-language output. This highlighted the necessity of a human-verified transcript for low-resource languages.
## Step 3: Forced Alignment (Word-Level Timestamps)
- Goal: To build a "gold standard" dataset by obtaining word-level timestamps, linking every spoken word in `master.mp3` to its precise start and end time.
- Tool: The `stable-whisper` library was selected for this forced-alignment task.
- Challenges & Solutions:
  - Local GPU Failure: Initial attempts to run the process on a local Mac with a Metal GPU failed due to deep-level PyTorch incompatibilities (`NotImplementedError`, `Invalid buffer size`). This demonstrated the current limitations of the local hardware/software stack for this specific task.
  - Cloud GPU Solution: To overcome the local issues, the process was moved to Google Colab with a T4 GPU runtime, which provided a stable, pre-configured environment.
  - Library Bugs: The `stable-whisper` library exhibited internal bugs related to FFmpeg and its own API. These were bypassed by patching the Python script to use the base `whisper` library for audio loading while still using `stable-whisper` for the alignment itself (see the sketch after this list).
- Output: The process successfully generated an `aligned_transcription.json` file containing the required word-level timestamp data.
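The workaround described above (loading audio with the base `whisper` library, aligning with `stable-whisper`) can be sketched roughly as follows. The model size (`large-v2`) is an assumption, and the actual Colab script may pass different options.

```python
# Minimal alignment sketch, assuming openai-whisper and stable-whisper (stable-ts)
# are installed; file names follow the workflow described above.
import whisper            # base library, used only for robust audio loading
import stable_whisper     # provides the forced-alignment API

# whisper.load_audio decodes and resamples to the 16 kHz mono float32 array the
# model expects, sidestepping stable-whisper's own FFmpeg handling.
audio = whisper.load_audio("master.mp3")

with open("transcript.txt", encoding="utf-8") as f:
    text = f.read()

model = stable_whisper.load_model("large-v2")      # model size is an assumption
result = model.align(audio, text, language="as")   # "as" = Assamese
result.save_as_json("aligned_transcription.json")  # word-level timestamps
```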
## Step 4: Audio Segmentation & Metadata Generation
- Process: The `aligned_transcription.json` file was used by a Python script (`segment.py`) to automate the segmentation:
  - Using the `pydub` library, the `master.mp3` file was loaded.
  - For each word entry in the JSON, the corresponding audio segment was extracted (chunked).
  - These audio chunks were saved as individual `.wav` files in a `wavs/` directory.
  - A `metadata.csv` file was generated in the LJSpeech format (`path|text`), linking each `.wav` file to its corresponding transcribed word.
- Audio Sample Rate: A critical aspect of audio preparation is ensuring the correct sample rate. The Whisper model family expects audio at 16 kHz, so the workflow resampled the audio from its original rate (e.g., 44.1 kHz) to 16 kHz before processing (see the sketch below).
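A minimal sketch of what `segment.py` does is given below, assuming `pydub` (with FFmpeg) is installed. The JSON layout shown (segments containing words with `start`/`end` in seconds) reflects stable-whisper's JSON output, and the zero-padded file naming is an assumption; the actual script may differ in details.

```python
# Sketch of the segmentation step: slice master.mp3 into per-word WAV chunks
# and write an LJSpeech-style metadata.csv (path|text).
import csv
import json
import os

from pydub import AudioSegment

audio = AudioSegment.from_mp3("master.mp3")
audio = audio.set_frame_rate(16000).set_channels(1)  # resample to 16 kHz mono

with open("aligned_transcription.json", encoding="utf-8") as f:
    aligned = json.load(f)

os.makedirs("wavs", exist_ok=True)
rows = []
index = 0
for segment in aligned["segments"]:
    for word in segment.get("words", []):
        start_ms = int(word["start"] * 1000)  # pydub slices in milliseconds
        end_ms = int(word["end"] * 1000)
        chunk = audio[start_ms:end_ms]
        file_name = f"wavs/{index:05d}.wav"   # naming scheme is an assumption
        chunk.export(file_name, format="wav")
        rows.append((file_name, word["word"].strip()))
        index += 1

# Pipe-delimited metadata, matching the delimiter declared in the card's YAML.
with open("metadata.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="|")
    writer.writerows(rows)
```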
## Conclusion
This structured dataset, with its word/segment-level timestamps, is now complete and hosted for demonstration purposes.