parquet #2
by tuklu - opened

README.md CHANGED

@@ -5,55 +5,12 @@ tags:
- audio
- physics
- class10
---
### Step 1: Audio Acquisition and Preparation
* **Source:** The primary audio source was a YouTube video titled "মানুহৰ চকু আৰু বাৰে বৰণীয়া পৃথিৱী Class 10 Science Chapter11||The Human Eye and The colourful World" available at `https://www.youtube.com/watch?v=vhjQOBZIJlQ&t=1566s`.
* **Download & Format:** The video's audio was downloaded and converted to `master.mp3`. Although an uncompressed or losslessly compressed format (such as WAV or FLAC) is generally preferred for audio processing, MP3 was chosen for practicality, with the understanding that it is a lossy, already-compressed format.
---
### Step 2: Accurate Transcription (Text Ground Truth)
* **Process:** The `master.mp3` audio was manually transcribed into a plain text file (`transcript.txt`) to ensure **100% accuracy**. This manual transcription is crucial as it serves as the "ground truth" for all subsequent alignment and training.
* **Challenge:** Initial attempts using generic AI transcription services failed to accurately process the Assamese language, often producing broken Bengali or mixed-language output. This highlighted the necessity of a human-verified transcript for low-resource languages.
---
### Step 3: Forced Alignment (Word-Level Timestamps)
* **Goal:** To achieve a "gold standard" dataset by obtaining **word-level timestamps**, linking every spoken word in `master.mp3` to its precise start and end time.
* **Tool:** The `stable-whisper` library was selected for this forced alignment task.
* **Challenges & Solutions:**
* **Local GPU Failure:** Initial attempts to run the process on a local Mac with a Metal GPU failed due to deep-level PyTorch incompatibilities (`NotImplementedError`, `Invalid buffer size`). This demonstrated the current limitations of the local hardware/software stack for this specific task.
* **Cloud GPU Solution:** To overcome local issues, the process was moved to **Google Colab** with a **T4 GPU** runtime. This provided a stable, pre-configured environment.
* **Library Bugs:** The `stable-whisper` library exhibited internal bugs related to FFmpeg and its own API. These were bypassed by surgically modifying the Python script to use the base `whisper` library for audio loading while still using `stable-whisper` for the alignment itself.
* **Output:** The process successfully generated an `aligned_transcription.json` file containing the required word-level timestamp data.
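The README does not show the JSON schema, but word-level output from `stable-whisper` conventionally nests a `words` list (with `word`, `start`, and `end` fields) inside each entry of a top-level `segments` list. Assuming that layout (the field names are an assumption, not confirmed by the source), a small helper can flatten `aligned_transcription.json` for the segmentation step:

```python
import json


def load_word_timestamps(path):
    """Flatten a word-aligned transcription JSON into (word, start_s, end_s) tuples.

    Assumes the conventional stable-whisper layout: a top-level "segments"
    list, each segment holding a "words" list of {"word", "start", "end"} dicts.
    """
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    words = []
    for segment in data.get("segments", []):
        for w in segment.get("words", []):
            # Whisper-style word strings often carry a leading space; strip it.
            words.append((w["word"].strip(), w["start"], w["end"]))
    return words
```

Each returned tuple gives one spoken word and its start/end time in seconds, ready to be converted into audio slice boundaries.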
---
### Step 4: Audio Segmentation & Metadata Generation
* **Process:** The `aligned_transcription.json` file was used by a Python script (`segment.py`) to automate the segmentation.
* Using the `pydub` library, the `master.mp3` file was loaded.
* For each word entry in the JSON, the corresponding audio segment was extracted (chunked).
* These audio chunks were saved as individual `.wav` files in a `wavs/` directory.
* A `metadata.csv` file was generated in the **LJSpeech format (`path|text`)**, linking each `.wav` file to its corresponding transcribed word.
* **Audio Sample Rate:** A critical aspect of audio preparation is ensuring the correct sample rate. The Whisper model family requires audio at **16 kHz**. The workflow handled this by resampling the audio from its original rate (e.g., 44.1 kHz) to 16 kHz before processing.
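`segment.py` itself is not included here, but its two non-audio concerns, converting word timestamps into the millisecond offsets that `pydub` slices by, and writing the LJSpeech-style `metadata.csv`, can be sketched without any audio libraries (the helper names and the `wavs/NNNN.wav` naming scheme below are assumptions, not taken from the actual script):

```python
from pathlib import Path


def plan_segments(words, out_dir="wavs"):
    """Turn (word, start_s, end_s) tuples into pydub-style slice plans.

    pydub slices AudioSegment objects by milliseconds (audio[start_ms:end_ms]),
    so times are converted from seconds to ms. Returns (wav_path, start_ms,
    end_ms, text) tuples; the actual slicing and export happen elsewhere.
    """
    plans = []
    for i, (word, start, end) in enumerate(words):
        wav_path = f"{out_dir}/{i:04d}.wav"
        plans.append((wav_path, int(start * 1000), int(end * 1000), word))
    return plans


def write_metadata(plans, csv_path="metadata.csv"):
    """Write LJSpeech-format metadata: one `path|text` line per audio chunk."""
    lines = [f"{wav}|{text}" for wav, _, _, text in plans]
    Path(csv_path).write_text("\n".join(lines) + "\n", encoding="utf-8")
```

With `pydub`, each planned slice would then be exported via `audio[start_ms:end_ms].export(wav_path, format="wav")`.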
---
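The sample-rate note above can be made concrete. In practice the 44.1 kHz → 16 kHz conversion is delegated to FFmpeg (for example via `whisper.load_audio`, which decodes to 16 kHz mono); purely as an illustration of what resampling does, and not the workflow's actual method, here is a naive linear-interpolation resampler:

```python
def resample_linear(samples, src_rate=44100, dst_rate=16000):
    """Naive linear-interpolation resampler.

    Illustration only: real pipelines use FFmpeg or librosa, which also apply
    a low-pass filter to avoid aliasing when downsampling.
    """
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # fractional position in source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```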
### Conclusion
This structured dataset, with its word/segment-level timestamps, is now complete and hosted for demonstration purposes.
configs:
- config_name: default
  data_files:
  - split: train
    path: metadata.csv
  names:
  - audio
  - text
---
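Given the LJSpeech `path|text` convention and the `audio`/`text` column names, rows in `metadata.csv` take this shape (the paths and words below are hypothetical examples, not actual dataset rows):

```
wavs/0000.wav|মানুহৰ
wavs/0001.wav|চকু
```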
align.py CHANGED

@@ -1,4 +1,3 @@
-## this file need nvidia gpu, use google colab or your local with nvidia gpu
 import stable_whisper
 import whisper
 import json