---
license: mit
tags:
- assamese
- audio
- physics
- class10
delimiter: "|"
column_names:
- "file_name"
- "text"
features:
  - name: file_name
    dtype: audio
  - name: text
    dtype: string
---
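The front matter above describes how the pipe-delimited `metadata.csv` is meant to be read. For reference, a minimal local-loading sketch with the `datasets` library (assuming the `wavs/` files sit next to `metadata.csv`; the 16 kHz decode rate follows Step 4 below):

```python
from datasets import Audio, load_dataset

# Read the pipe-delimited metadata with the csv builder; column names
# mirror the YAML front matter (the file itself has no header row).
ds = load_dataset(
    "csv",
    data_files="metadata.csv",
    delimiter="|",
    column_names=["file_name", "text"],
    split="train",
)

# Decode each file path into an audio array at the 16 kHz rate used in Step 4.
ds = ds.cast_column("file_name", Audio(sampling_rate=16000))
print(ds[0]["text"])
```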
### Step 1: Audio Acquisition and Preparation
* **Source:** The primary audio source was a YouTube video titled "মানুহৰ চকু আৰু বাৰে বৰণীয়া পৃথিৱী Class 10 Science Chapter11||The Human Eye and The colourful World" available at `https://www.youtube.com/watch?v=vhjQOBZIJlQ&t=1566s`.
* **Download & Format:** The video's audio was downloaded and converted to `master.mp3` (see the sketch below). Although an uncompressed format such as WAV is generally preferred for audio processing, MP3 was kept for practical reasons: the YouTube source audio is already lossily compressed, so converting to WAV would not recover any quality.
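The exact download tool isn't recorded in this README; a minimal sketch using the `yt-dlp` Python API (one common choice, not necessarily what was used here):

```python
from yt_dlp import YoutubeDL

# Download the best available audio stream and convert it to MP3
# with yt-dlp's FFmpeg post-processor (requires ffmpeg on PATH).
ydl_opts = {
    "format": "bestaudio/best",
    "outtmpl": "master.%(ext)s",  # becomes master.mp3 after extraction
    "postprocessors": [
        {"key": "FFmpegExtractAudio", "preferredcodec": "mp3"},
    ],
}

with YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=vhjQOBZIJlQ"])
```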
---
### Step 2: Accurate Transcription (Text Ground Truth)
* **Process:** The `master.mp3` audio was manually transcribed into a plain text file (`transcript.txt`) to ensure **100% accuracy**. This manual transcription is crucial as it serves as the "ground truth" for all subsequent alignment and training.
* **Challenge:** Initial attempts using generic AI transcription services failed to accurately process the Assamese language, often producing broken Bengali or mixed-language output. This highlighted the necessity of a human-verified transcript for low-resource languages.
---
### Step 3: Forced Alignment (Word-Level Timestamps)
* **Goal:** To achieve a "gold standard" dataset by obtaining **word-level timestamps**, linking every spoken word in `master.mp3` to its precise start and end time.
* **Tool:** The `stable-whisper` library was selected for this forced alignment task.
* **Challenges & Solutions:**
    * **Local GPU Failure:** Initial attempts to run the process on a local Mac with a Metal GPU failed due to deep-level PyTorch incompatibilities (`NotImplementedError`, `Invalid buffer size`), demonstrating the current limitations of that local hardware/software stack for this task.
    * **Cloud GPU Solution:** To overcome the local issues, the process was moved to **Google Colab** with a **T4 GPU** runtime, which provided a stable, pre-configured environment.
    * **Library Bugs:** The `stable-whisper` library exhibited internal bugs related to FFmpeg and its own API. These were bypassed by modifying the Python script to load audio with the base `whisper` library while still using `stable-whisper` for the alignment itself (see the sketch after this list).
* **Output:** The process successfully generated an `aligned_transcription.json` file containing the required word-level timestamp data.
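The alignment script itself isn't included in this README; the sketch below illustrates the workaround described above using the stable-ts (`stable_whisper`) API. The model size (`large-v2`) is an assumption, not recorded here:

```python
import stable_whisper
import whisper  # base whisper is used only to load audio (FFmpeg workaround)

# Load the alignment model on the Colab T4 GPU ("large-v2" is assumed).
model = stable_whisper.load_model("large-v2", device="cuda")

# Bypass stable-whisper's buggy audio loading: whisper.load_audio decodes
# the MP3 and resamples it to 16 kHz mono float32.
audio = whisper.load_audio("master.mp3")

# The human-verified ground-truth transcript from Step 2.
with open("transcript.txt", encoding="utf-8") as f:
    text = f.read()

# Forced alignment: attach a start/end timestamp to every word ("as" = Assamese).
result = model.align(audio, text, language="as")

# Persist the word-level timestamps for Step 4.
result.save_as_json("aligned_transcription.json")
```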
---
### Step 4: Audio Segmentation & Metadata Generation
* **Process:** A Python script (`segment.py`) used the `aligned_transcription.json` file to automate the segmentation (a sketch follows this list):
    * The `master.mp3` file was loaded with the `pydub` library.
    * For each word entry in the JSON, the corresponding audio segment was sliced out.
    * Each audio chunk was saved as an individual `.wav` file in a `wavs/` directory.
    * A `metadata.csv` file was generated in the **LJSpeech-style format (`path|text`)**, linking each `.wav` file to its transcribed word.
* **Audio Sample Rate:** A critical aspect of audio preparation is ensuring the correct sample rate. The Whisper model family requires audio at **16 kHz**. The workflow handled this by resampling the audio from its original rate (e.g., 44.1 kHz) to 16 kHz before processing.
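`segment.py` is not reproduced in this README; the sketch below shows the approach, assuming the stable-ts JSON layout (`segments` → `words` entries with `word`/`start`/`end` keys) and illustrative zero-padded chunk file names:

```python
import csv
import json
from pathlib import Path

from pydub import AudioSegment

# Word-level timestamps from Step 3 (assumed stable-ts JSON layout).
with open("aligned_transcription.json", encoding="utf-8") as f:
    alignment = json.load(f)
words = [w for seg in alignment["segments"] for w in seg["words"]]

# Load the master audio and resample to the 16 kHz mono Whisper expects.
master = AudioSegment.from_mp3("master.mp3").set_frame_rate(16000).set_channels(1)

Path("wavs").mkdir(exist_ok=True)

# LJSpeech-style metadata: path|text, one row per chunk, no header row
# (column names are supplied by the YAML front matter above).
with open("metadata.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="|")
    for i, word in enumerate(words):
        start_ms = int(word["start"] * 1000)  # pydub slices in milliseconds
        end_ms = int(word["end"] * 1000)
        out_path = f"wavs/chunk_{i:05d}.wav"  # illustrative naming scheme
        master[start_ms:end_ms].export(out_path, format="wav")
        writer.writerow([out_path, word["word"].strip()])
```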
---
### Conclusion
This structured dataset, with its word-level timestamps and per-word audio segments, is now complete and hosted here for demonstration purposes.