Rijgersberg committed on commit 13b417d (verified · 1 parent: ed6a8df)

Update README.md

Files changed (1): README.md (+126 −0)
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- audio-to-audio
- audio-classification
- text-to-speech
- text-to-audio
language:
- nl
pretty_name: Youtube Commons NL Audio
---

# YouTube Commons NL Audio
This dataset contains the audio files for the Dutch-language videos in [Rijgersberg/YouTube-Commons-nl-transcriptions](https://huggingface.co/datasets/Rijgersberg/YouTube-Commons-nl-transcriptions),
all under a CC BY 4.0 license.

It contains 11,669 files for a total runtime of 2493h 43m 5s, coming in at approximately 130 GB.
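Those totals imply some useful per-file figures; a quick back-of-the-envelope check (a sketch, taking 130 GB as 130·10⁹ bytes, so the numbers are approximate):

```python
# Approximate per-file statistics derived from the dataset totals above.
total_seconds = 2493 * 3600 + 43 * 60 + 5   # 2493h 43m 5s
n_files = 11_669
total_bytes = 130e9                          # ~130 GB

avg_minutes = total_seconds / n_files / 60            # average runtime per file
avg_bitrate_kbps = total_bytes * 8 / total_seconds / 1000

print(f"average file length: {avg_minutes:.1f} min")    # ~12.8 min
print(f"average bitrate: {avg_bitrate_kbps:.0f} kbps")  # ~116 kbps
```

So a typical file is roughly a 13-minute audio track at a bitrate consistent with YouTube's compressed audio streams.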

## Source
The original source of the dataset (minus the titles, descriptions and audio files) is [YouTube Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons):
> YouTube-Commons is a collection of audio transcripts of 2,063,066 videos shared on YouTube under a CC BY 4.0 license.
>
> **Content**
>
> The collection comprises 22,709,724 original and automatically translated transcripts from 3,156,703 videos (721,136 individual channels).

## Usage
This dataset is actually two datasets wrapped into one.
Firstly, it is a HuggingFace Dataset of [Rijgersberg/YouTube-Commons-nl-transcriptions](https://huggingface.co/datasets/Rijgersberg/YouTube-Commons-nl-transcriptions).
Secondly, all the audio files are present in the `audiofiles` subfolder of the dataset.

To use it, it is recommended to use Git LFS to clone the dataset locally as follows:

```bash
$ git lfs install
$ git clone https://huggingface.co/datasets/Rijgersberg/YouTube-Commons-nl-audio
```

You can then load the HuggingFace Dataset with the video info and transcriptions from disk just as you normally would,
and access the audio files directly with a bit of path mapping:

```python
from pathlib import Path
from pprint import pprint

import datasets


folder = Path('/path/to/where/you/cloned/YouTube-Commons-nl-audio')

# load the dataset containing everything but the audio files
dataset = datasets.load_dataset('parquet', data_dir=folder, split='train')

# build up an index of audio file paths
paths = {
    path.stem: path  # the stem of the file is the video_id
    for path in (folder / 'audiofiles').rglob('*')
    if path.suffix in {'.m4a', '.webm', '.mp4'}
}

# example: get the path to the audio file of the first video in the data set
video = dataset[0]
pprint(video)
print(f"The file can be found at {paths[video['video_id']]}")
```
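Note that the files are stored in their original `.m4a`/`.webm`/`.mp4` containers, which many audio libraries cannot read directly. One way to handle this (a sketch, assuming `ffmpeg` is installed and on your `PATH`; the helper names are hypothetical) is to decode a file to mono 16 kHz WAV first:

```python
import subprocess


def decode_command(src, dst, sample_rate=16000):
    """Build an ffmpeg command that decodes `src` to mono WAV at `sample_rate` Hz."""
    return ['ffmpeg', '-y', '-i', str(src),
            '-ac', '1', '-ar', str(sample_rate), str(dst)]


def decode(src, dst):
    """Decode one audio file; requires ffmpeg to be installed."""
    subprocess.run(decode_command(src, dst), check=True)


# example, reusing the path index built above (hypothetical output location):
# decode(paths[video['video_id']], f"/tmp/{video['video_id']}.wav")
```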

## Acquisition

The audio files were acquired using yt-dlp and the following code:

```python
import random
import time
from pathlib import Path

from datasets import load_dataset
from yt_dlp import YoutubeDL, DownloadError


def main():
    dataset = load_dataset('Rijgersberg/YouTube-Commons-descriptions', split='train')
    dataset = dataset.filter(lambda row: row['language'] == 'nl')

    output_dir = Path('/path/to/output/dir/youtube-audio')
    output_dir.mkdir(exist_ok=True, parents=True)

    # clean up old partial downloads
    for f in list(output_dir.glob('*.part')):
        f.unlink()

    # skip ids that have already been downloaded
    existing = set(p.stem for p in output_dir.glob('*.*'))
    to_get = set(dataset['id']) - existing

    urls = [f'https://www.youtube.com/watch?v={video_id}' for video_id in to_get]
    random.shuffle(urls)

    t = 5  # current backoff time in seconds
    while urls:
        print(len(urls))
        url = urls.pop()

        try:
            with YoutubeDL({'format': 'bestaudio/best',
                            'extractaudio': True,
                            'outtmpl': str(output_dir / '%(id)s.%(ext)s'),
                            'quiet': True}) as ydl:
                ydl.download([url])
            t = 5  # reset the backoff time after a successful download
        except DownloadError as e:
            print(e)
            if 'try again later' in str(e):
                # rate limited: put the url back to retry it later
                urls.append(url)
                random.shuffle(urls)

            time.sleep(t)
            t = min(2*t, 10*60)  # double the backoff time, capped at 10 minutes


if __name__ == "__main__":
    main()
```
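On failures, the loop above sleeps with exponential backoff: the delay doubles after each consecutive error, starting at 5 seconds and capped at 10 minutes, and resets after a success. The resulting schedule can be sketched as a standalone function (the helper name is hypothetical):

```python
def backoff_schedule(start=5, cap=600, steps=8):
    """Successive sleep times (in seconds) after consecutive failures."""
    delays, t = [], start
    for _ in range(steps):
        delays.append(t)
        t = min(2 * t, cap)
    return delays


print(backoff_schedule())  # [5, 10, 20, 40, 80, 160, 320, 600]
```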