Commit 779a041 (verified, parent 8e9f10e) committed by spapi

Add YouTube-Commons README

Files changed (1): scripts/YouTube-Commons-README.md (added, +113 −0)

# FAMA Training Data: YouTube-Commons

This is the README for the [YouTube-Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons) part of
the [FAMA training data](https://huggingface.co/datasets/FBK-MT/fama-data).
Refer to the [main FAMA data card](https://huggingface.co/datasets/FBK-MT/fama-data) for general information and the data format.

## Prerequisites

Install the following dependencies:

- **sox**: Audio conversion utility
- [**yt-dlp**](https://github.com/yt-dlp/yt-dlp): YouTube video downloader

```bash
# Install sox (example for Ubuntu/Debian)
sudo apt-get install sox

# Install yt-dlp; more detailed instructions at: https://github.com/yt-dlp/yt-dlp/wiki/Installation
pip install -U "yt-dlp[default]"
```

## Files Included

- `SplitAudioUsingSileroLog.pl` - Perl script that splits the audio using the Silero logs
- `train_youtubecommons-en.ids` - List of English file IDs to download
- `yt-commons-en.silero.json.gz` - Compressed Silero log for English
- `train_youtubecommons-it.ids` - List of Italian file IDs to download
- `yt-commons-it.silero.json.gz` - Compressed Silero log for Italian

⚠️ **Time markers in the final TSV files stored in this repository refer to the *reduced* audio files, not the original YouTube audio files.**

## Instructions

Follow the steps below to generate the audio segments starting from the logs available in this folder.
If you are interested in replicating the logs, also follow the optional step for generating them.

### Download Audio Files
Download the audio files listed in `${VIDEO_IDS}` into the folder `${DOWNLOAD_DIR}` using yt-dlp by running:
```bash
for id in $(cat "${VIDEO_IDS}") ; do
  download_output=${DOWNLOAD_DIR}/${id}.wav
  if ! test -f "${download_output}" ; then
    echo "Saving audio track in ${download_output}"
    "${YT_DLP_PATH}"/yt-dlp \
      -r 500K \
      --cookies-from-browser firefox \
      --extract-audio \
      --audio-format wav \
      --postprocessor-args "-ar 16000 -ac 1" \
      -o "${download_output}" \
      "https://www.youtube.com/watch?v=${id}"
  else
    echo "Skipping... ${download_output} already saved"
  fi
done
```
Where `${VIDEO_IDS}` is `train_youtubecommons-en.ids` for English, and `train_youtubecommons-it.ids` for Italian,
and `${YT_DLP_PATH}` is the folder containing the `yt-dlp` executable (note that hyphens are not valid in shell variable names).

**Note**: Some videos may no longer be available on YouTube, resulting in a subset of the original dataset.
If an original audio file is missing due to a download failure, it will be automatically skipped during processing.

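Since some downloads fail, it can help to check which IDs are still missing before proceeding. The following Python sketch is not part of the released scripts; `missing_ids` is an illustrative helper that compares the ID list against the contents of the download folder:

```python
from pathlib import Path

def missing_ids(ids_file: str, download_dir: str) -> list[str]:
    """Return the video IDs listed in ids_file that have no .wav file in download_dir."""
    # One ID per line in the .ids file; skip blank lines.
    wanted = [line.strip() for line in Path(ids_file).read_text().splitlines() if line.strip()]
    # Downloaded files are named <id>.wav, so the stem is the video ID.
    have = {p.stem for p in Path(download_dir).glob("*.wav")}
    return [vid for vid in wanted if vid not in have]
```

Re-running the download loop above fetches only the files that are still missing, since already-saved files are skipped.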
### (Optional) Generation of the Voice Activity Detection Logs
To reproduce the logs, install [**Silero**](https://github.com/snakers4/silero-vad),
which is used to remove non-speech phenomena (silence, noise, music):
```bash
pip install silero-vad
```

Once installed, run the script `speech_only.py` present in this folder:
```bash
python ./speech_only.py \
  --folder ${DOWNLOAD_FOLDER} \
  --sfx ${WAV_SUFFIX} \
  --out_folder ${OUT_FOLDER} \
  --out_file ${OUT_JSON_FILE}
```

The script processes the audio files in `${DOWNLOAD_FOLDER}` with suffix `${WAV_SUFFIX}` and stores the VAD-processed audio
(in WAV format at 16 kHz) in `${OUT_FOLDER}`, along with the associated segmentation file (in JSON format) in `${OUT_JSON_FILE}`.

### Segmentation based on Voice Activity Detection Logs
Segment the audio downloaded in `${DOWNLOAD_DIR}` using the log `${LOG_FILE}` and store the segments in `${AUDIO_SEGMENT_DIR}` by running:
```bash
perl ./SplitAudioUsingSileroLog.pl ${LOG_FILE} ${DOWNLOAD_DIR} ${AUDIO_SEGMENT_DIR}
```
Where `${LOG_FILE}` is `yt-commons-en.silero.json.gz` for English, and `yt-commons-it.silero.json.gz` for Italian.

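The internal structure of the logs is consumed by `SplitAudioUsingSileroLog.pl`; to inspect a log before running the script, the gzipped JSON can be read directly. A generic sketch (the `peek_gz` helper is illustrative, not part of the released scripts, and makes no assumption about the log's schema):

```python
import gzip

def peek_gz(path: str, n_chars: int = 300) -> str:
    """Return the first n_chars characters of a gzip-compressed text file."""
    # "rt" transparently decompresses and decodes the stream as text.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return f.read(n_chars)

# Example: print(peek_gz("yt-commons-en.silero.json.gz"))
```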
### Audio Segmentation using SHAS
To split the reduced audio into segments with a controlled duration, download and install SHAS into the `${SHAS_ROOT}` folder,
following [the official README](https://github.com/mt-upc/SHAS?tab=readme-ov-file#usage).

```bash
python ${SHAS_ROOT}/src/supervised_hybrid/segment.py \
  -wavs ${PATH_TO_WAVS} \
  -ckpt ${CHECKPOINT_PATH} \
  -yaml ${OUTPUT_YAML} \
  -max 30
```

Where `${PATH_TO_WAVS}` is the path to the WAV files obtained from Silero and stored in `${AUDIO_SEGMENT_DIR}`,
`${CHECKPOINT_PATH}` is the path to the
[SHAS Multilingual model](https://drive.google.com/u/0/uc?export=download&confirm=x9hB&id=1GzwhzbHBFtwDmQPKoDOdAfESvWBrv_wB),
and `${OUTPUT_YAML}` is the path where the final audio segmentation, saved as YAML files and used for training, is stored.

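The resulting YAML can also be applied manually. Assuming MuST-C-style segment entries (each with a `wav` name, an `offset`, and a `duration` in seconds; verify this against your actual YAML), the sox commands that cut the segments can be built as follows. `sox_trim_commands` is an illustrative helper, not part of the released scripts:

```python
from pathlib import Path

def sox_trim_commands(segments: list[dict], wav_dir: str, out_dir: str) -> list[list[str]]:
    """Build one sox command per segment, cutting [offset, offset + duration) from the source wav."""
    cmds = []
    for i, seg in enumerate(segments):
        src = str(Path(wav_dir) / seg["wav"])
        # Name output segments <source-stem>_<index>.wav to keep them unique and ordered.
        dst = str(Path(out_dir) / f"{Path(seg['wav']).stem}_{i:04d}.wav")
        # sox's trim effect takes a start position and a length, both in seconds.
        cmds.append(["sox", src, dst, "trim", str(seg["offset"]), str(seg["duration"])])
    return cmds
```

Each returned command can be executed with `subprocess.run`, after loading the YAML with `yaml.safe_load`.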
### Transcription and Translation
Transcription is done following the same processing used for the [MOSEL dataset](https://huggingface.co/datasets/FBK-MT/mosel).
Translation is done following the same processing as the other ASR datasets,
described in the [FAMA data card](https://huggingface.co/datasets/FBK-MT/fama-data).

## License and Citation

Please refer to the [main FAMA data card](https://huggingface.co/datasets/FBK-MT/fama-data) for licensing and citation information.