Satya10 committed b3c6724 (verified, 0 parents)

Duplicate from LIUM/tedlium

Co-authored-by: Satya <Satya10@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,37 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,233 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ language:
+ - en
+ license: []
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - automatic-speech-recognition
+ task_ids: []
+ pretty_name: TED-LIUM
+ ---
+
+ # Dataset Card for tedlium
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [TED-LIUM homepage](https://www.openslr.org/7/)
+ - **Repository:** [Needs More Information]
+ - **Paper:** [TED-LIUM: an Automatic Speech Recognition dedicated corpus](https://aclanthology.org/L12-1405/)
+ - **Leaderboard:** [Paperswithcode Leaderboard](https://paperswithcode.com/sota/speech-recognition-on-tedlium)
+ - **Point of Contact:** [Sanchit Gandhi](mailto:sanchit@huggingface.co)
+
+ ### Dataset Summary
+
+ The TED-LIUM corpus consists of English-language TED talks with transcriptions, sampled at 16 kHz. The three releases of the corpus range from 118 to 452 hours of transcribed speech data.
+
+
+ ### Example
+
+ ```python
+ from datasets import load_dataset
+
+ tedlium = load_dataset("LIUM/tedlium", "release1") # for Release 1
+
+ # see structure
+ print(tedlium)
+
+ # load audio sample on the fly
+ audio_input = tedlium["train"][0]["audio"] # first decoded audio sample
+ transcription = tedlium["train"][0]["text"] # first transcription
+ ```
+
+ ### Supported Tasks and Leaderboards
+
+ - `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR): the model is presented with an audio file and asked to transcribe it into written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard at https://paperswithcode.com/sota/speech-recognition-on-tedlium, which ranks models by their WER.
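WER is the word-level edit distance between a reference transcription and a model's hypothesis, divided by the number of reference words. A minimal pure-Python sketch of the metric (not part of the dataset tooling; dedicated packages exist for production use):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # substitution
            )
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("but i was so utterly unqualified", "but i was utterly unqualified"))
# 1 deletion over 6 reference words, i.e. about 0.167
```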
+
+ ### Languages
+
+ The audio and transcriptions are in English, as per the TED talks at http://www.ted.com.
+
+ ## Dataset Structure
+
+ ### Data Instances
+ ```
+ {'audio': {'path': '/home/sanchitgandhi/cache/downloads/extracted/6e3655f9e735ae3c467deed1df788e0dabd671c1f3e2e386e30aa3b571bd9761/TEDLIUM_release1/train/sph/PaulaScher_2008P.sph',
+ 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
+ 0.00091553, 0.00085449], dtype=float32),
+ 'sampling_rate': 16000},
+ 'text': '{COUGH} but <sil> i was so {COUGH} utterly unqualified for(2) this project and {NOISE} so utterly ridiculous {SMACK} and ignored the brief {SMACK} <sil>',
+ 'speaker_id': 'PaulaScher_2008P',
+ 'gender': 'female',
+ 'file': '/home/sanchitgandhi/cache/downloads/extracted/6e3655f9e735ae3c467deed1df788e0dabd671c1f3e2e386e30aa3b571bd9761/TEDLIUM_release1/train/sph/PaulaScher_2008P.sph',
+ 'id': 'PaulaScher_2008P-1003.35-1011.16-<o,f0,female>'}
+ ```
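Each decoded `array` covers a talk segment; the loading script cuts segments out of a decoded talk by converting STM start/end times (in seconds) into sample indices. A sketch of that arithmetic, using a plain list as a stand-in for the 16 kHz mono sample array:

```python
def extract_segment(samples, sampling_rate, start_sec, end_sec):
    # Convert second offsets to sample indices, clamping the end to the array length.
    start = int(start_sec * sampling_rate)
    end = min(int(end_sec * sampling_rate), len(samples))
    return samples[start:end]

talk = list(range(32000))  # stand-in for 2 s of 16 kHz mono audio
clip = extract_segment(talk, 16000, 0.5, 1.0)
print(len(clip))  # 8000 samples = 0.5 s
```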
+ ### Data Fields
+
+ - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column: `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
+ - file: A path to the downloaded audio file in .sph format.
+ - text: The transcription of the audio file.
+ - gender: The gender of the speaker. One of: male, female or N/A.
+ - id: Unique id of the data sample.
+ - speaker_id: Unique id of the speaker. The same speaker id can appear in multiple data samples.
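The `id` field is assembled from STM metadata: each STM transcript line holds the filename, channel, speaker, start time, end time, a `<...>` label, and the transcript, and the loading script splits on the first six spaces. A sketch with a hypothetical STM line in the format shown above:

```python
import re

# Hypothetical STM line in the release format (not taken verbatim from the corpus).
line = "PaulaScher_2008P 1 PaulaScher_2008P 1003.35 1011.16 <o,f0,female> but i was so utterly unqualified"

fn, channel, speaker, start, end, label, transcript = line.split(" ", 6)
key = "-".join([speaker, start, end, label])  # the sample's unique id
gender = re.split(",|_", label)[-1][:-1]      # "female" from "<o,f0,female>"

print(key)  # PaulaScher_2008P-1003.35-1011.16-<o,f0,female>
print(gender)
```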
+
+ ### Data Splits
+ There are three releases of the TED-LIUM corpus, progressively increasing the amount of transcribed speech training data from 118 hours (Release 1), to 207 hours (Release 2), to 452 hours (Release 3).
+
+ Release 1:
+ - 774 audio talks and automatically aligned transcriptions.
+ - Contains 118 hours of speech audio data.
+ - Homepage: https://www.openslr.org/7/
+
+ Release 2:
+ - 1495 audio talks and automatically aligned transcriptions.
+ - Contains 207 hours of speech audio data.
+ - Dictionary with pronunciations (159848 entries).
+ - Selected monolingual data for language modeling from WMT12 publicly available corpora.
+ - Homepage: https://www.openslr.org/19/
+
+ Release 3:
+ - 2351 audio talks and automatically aligned transcriptions.
+ - Contains 452 hours of speech audio data.
+ - TED-LIUM 2 validation and test data: 19 TED talks with their corresponding manual transcriptions.
+ - Dictionary with pronunciations (159848 entries), the same file as the one included in TED-LIUM 2.
+ - Selected monolingual data for language modeling from WMT12 publicly available corpora: these files come from the TED-LIUM 2 release, but have been modified to produce a tokenization more relevant for English.
+ - Homepage: https://www.openslr.org/51/
+
+ Release 3 contains two different corpus distributions:
+ - The 'legacy' one, in which the dev and test datasets are the same as in TED-LIUM 2 (and TED-LIUM 1).
+ - The 'speaker adaptation' one, specially designed for experiments on speaker adaptation.
+
+ Each release is split into a training, validation and test set:
+
+ | Split      | Release 1 | Release 2 | Release 3 |
+ |------------|-----------|-----------|-----------|
+ | Train      | 56,803    | 92,973    | 268,263   |
+ | Validation | 591       | 591       | 591       |
+ | Test       | 1,469     | 1,469     | 1,469     |
+
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ TED-LIUM was built during the [International Workshop on Spoken Language Translation (IWSLT) 2011 Evaluation Campaign](https://aclanthology.org/2011.iwslt-evaluation.1/), an annual workshop focused on the automatic translation of public talks, with tracks for speech recognition, speech translation, text translation, and system combination.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The data was obtained from publicly available TED talks at http://www.ted.com. Proper alignments between the speech and the transcribed text were generated using an in-house speaker segmentation and clustering tool (_LIUM_SpkDiarization_). Speech disfluencies (e.g. repetitions, hesitations, false starts) were treated as follows: repetitions were transcribed, hesitations were mapped to a specific filler word, and false starts were not taken into account. For full details on the data collection and processing, refer to the [TED-LIUM paper](https://aclanthology.org/L12-1405/).
+
+ #### Who are the source language producers?
+
+ TED Talks are influential videos from expert speakers on education, business, science, tech and creativity.
+
+ ### Annotations
+
+ #### Annotation process
+
+ [Needs More Information]
+
+ #### Who are the annotators?
+
+ [Needs More Information]
+
+ ### Personal and Sensitive Information
+
+ [Needs More Information]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ Licensed under Creative Commons BY-NC-ND 3.0 (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.en).
+
+ ### Citation Information
+
+ Release 1:
+ ```
+ @inproceedings{rousseau2012tedlium,
+   title={TED-LIUM: an Automatic Speech Recognition dedicated corpus},
+   author={Rousseau, Anthony and Del{\'e}glise, Paul and Est{\`e}ve, Yannick},
+   booktitle={Conference on Language Resources and Evaluation (LREC)},
+   pages={125--129},
+   year={2012}
+ }
+ ```
+
+ Release 2:
+ ```
+ @inproceedings{rousseau2014enhancing,
+   title={Enhancing the TED-LIUM corpus with selected data for language modeling and more TED talks},
+   author={Rousseau, Anthony and Del{\'e}glise, Paul and Est{\`e}ve, Yannick},
+   booktitle={Conference on Language Resources and Evaluation (LREC)},
+   pages={3935--3939},
+   year={2014}
+ }
+ ```
+
+ Release 3:
+ ```
+ @inproceedings{hernandez2018ted,
+   author={Hernandez, Fran{\c{c}}ois and Nguyen, Vincent and Ghannay, Sahar and Tomashenko, Natalia and Est{\`e}ve, Yannick},
+   title={TED-LIUM 3: Twice as Much Data and Corpus Repartition for Experiments on Speaker Adaptation},
+   booktitle={Speech and Computer},
+   year={2018},
+   publisher={Springer International Publishing},
+   pages={198--208}
+ }
+ ```
TEDLIUM_release1/dev.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5632d2b3d397180b3251bf3dc840bc0d371ef3c8d105d0dc650ac62cde8f4d8b
+ size 174796814
TEDLIUM_release1/test.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a9d948189597bc5d35b3aa4f0ad7e36dad82acbefe113b1d0c4b588dbb215b5a
+ size 306583283
TEDLIUM_release1/train.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4dad97dac99a7e95f679cc08c8e2d341860270eeeb985a67d1d3f4d84a15eb65
+ size 20804239354
TEDLIUM_release2/dev.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4388c9d98747a355de2598b0c2b5d9900f0133a784d218fe39a148253d5221b2
+ size 174796815
TEDLIUM_release2/test.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ba622758c0a489739ae0ffc045232eaf698fc557e7697bc6555f77f3fee6804e
+ size 306583350
TEDLIUM_release2/train_1.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed23fc9c0ae3f05ce9a3e726072383c9ac4fddc4f5ffedfb5482aaac37129575
+ size 18341102128
TEDLIUM_release2/train_2.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60f14d9b7ac5a483aeb57f27e4c7259edf4ee740f98e197b50510225deb76bb7
+ size 17463269560
TEDLIUM_release3/legacy/dev.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4333d359ec3a7271b1fae03acd22a49de7b6cf293835efeef4a0c4c04fa192d6
+ size 174796847
TEDLIUM_release3/legacy/test.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6be5e49a4dfc11ca4f4404acfefeadea3dda02f6d615342df4bb2e3cc37e530d
+ size 306583330
TEDLIUM_release3/legacy/train_1.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c8574e308126d0acf9f300596ef73e0889d86aa50ba2a385d59837f7f2f913d
+ size 34954279648
TEDLIUM_release3/legacy/train_2.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fdf1bb869615688995bb239fbd3f928c5167b793f521c85327cdfa6aedcb1e06
+ size 18378707926
TEDLIUM_release3/speaker-adaptation/dev.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:31adec52f4cf0cfe060e29ab7f382d27ee1013ce9072fbcfa542edace0c32d4a
+ size 489586089
TEDLIUM_release3/speaker-adaptation/test.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad1ae65c4c9eaabe5de41982fee31730b8014ce048a5e258221316b051900182
+ size 493451609
TEDLIUM_release3/speaker-adaptation/train_1.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1fdcc13fbbcf5d8da2320af8c887a3303c198b06862f71752eacfe99472ea266
+ size 35911030324
TEDLIUM_release3/speaker-adaptation/train_2.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:77d4d4f45f0bc8aad3de0f0995aae1d437c5141410dbf2260b7c0c828a5a67a0
+ size 15719586307
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"LIUM--tedlium": {"description": " The TED-LIUM corpus is English-language TED talks, with transcriptions,\n sampled at 16kHz. It contains about 118 hours of speech.\n\n This is the TED-LIUM corpus release 1,\n licensed under Creative Commons BY-NC-ND 3.0\n (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.en).\n ", "citation": " @inproceedings{rousseau2012tedlium,\n title={TED-LIUM: an Automatic Speech Recognition dedicated corpus},\n author={Rousseau, Anthony and Del{\\'e}glise, Paul and Est{\\`e}ve, Yannick},\n booktitle={Conference on Language Resources and Evaluation (LREC)},\n pages={125--129},\n year={2012}\n }\n ", "homepage": "https://www.openslr.org/7/", "license": "licensed under Creative Commons BY-NC-ND 3.0 (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.en)", "features": {"audio": {"sampling_rate": 16000, "mono": true, "decode": true, "id": null, "_type": "Audio"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "speaker_id": {"dtype": "string", "id": null, "_type": "Value"}, "gender": {"num_classes": 3, "names": ["unknown", "female", "male"], "id": null, "_type": "ClassLabel"}, "file": {"dtype": "string", "id": null, "_type": "Value"}, "id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "audio", "output": "text"}, "task_templates": [{"task": "automatic-speech-recognition", "audio_column": "audio", "transcription_column": "text"}], "builder_name": "ted_lium", "config_name": "release1", "version": {"version_str": "1.0.1", "description": null, "major": 1, "minor": 0, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 13629609845.625, "num_examples": 56803, "dataset_name": "tedlium"}, "validation": {"name": "validation", "num_bytes": 197792095.0, "num_examples": 591, "dataset_name": "tedlium"}, "test": {"name": "test", "num_bytes": 352788386.375, "num_examples": 1469, "dataset_name": "tedlium"}}, "download_checksums": null, "download_size": 14151200272, "post_processing_size": null, "dataset_size": 14180190327.0, "size_in_bytes": 28331390599.0}}
tedlium.py ADDED
@@ -0,0 +1,383 @@
+ # Copyright 2022 The HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """TED-LIUM speech recognition dataset."""
+
+ import os
+ import re
+ from collections import defaultdict
+ from io import BytesIO
+ from pathlib import Path
+
+ import numpy as np
+ import soundfile as sf
+
+ import datasets
+
+
+ _DL_URL = "https://huggingface.co/datasets/LIUM/tedlium/resolve/main/"
+
+ _LICENSE = "licensed under Creative Commons BY-NC-ND 3.0 (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.en)"
+
+
+ class TedliumReleaseConfig(datasets.BuilderConfig):
+     """BuilderConfig for a release of the TED-LIUM dataset."""
+
+     def __init__(self, *, url, download_urls, split_paths, citation, **kwargs):
+         super(TedliumReleaseConfig, self).__init__(version=datasets.Version("1.0.1"), **kwargs)
+         self.url = url
+         self.download_urls = download_urls
+         # List of split, path pairs containing the relative path within the
+         # extracted tarball to the data for each split.
+         self.split_paths = split_paths
+         self.citation = citation
+
+
+ def _make_builder_configs():
+     """Creates builder configs for all supported Tedlium dataset releases."""
+     release1 = TedliumReleaseConfig(
+         name="release1",
+         description="""\
+         The TED-LIUM corpus is English-language TED talks, with transcriptions,
+         sampled at 16kHz. It contains about 118 hours of speech.
+
+         This is the TED-LIUM corpus release 1,
+         licensed under Creative Commons BY-NC-ND 3.0
+         (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.en).
+         """,
+         citation="""\
+         @inproceedings{rousseau2012tedlium,
+           title={TED-LIUM: an Automatic Speech Recognition dedicated corpus},
+           author={Rousseau, Anthony and Del{\\'e}glise, Paul and Est{\\`e}ve, Yannick},
+           booktitle={Conference on Language Resources and Evaluation (LREC)},
+           pages={125--129},
+           year={2012}
+         }
+         """,
+         url="https://www.openslr.org/7/",
+         download_urls={
+             "train": [_DL_URL + os.path.join("TEDLIUM_release1", "train.tar.gz")],
+             "validation": [_DL_URL + os.path.join("TEDLIUM_release1", "dev.tar.gz")],
+             "test": [_DL_URL + os.path.join("TEDLIUM_release1", "test.tar.gz")],
+         },
+         split_paths=[
+             (datasets.Split.TRAIN, "train"),
+             (datasets.Split.VALIDATION, "dev"),
+             (datasets.Split.TEST, "test"),
+         ],
+     )
+
+     release2 = TedliumReleaseConfig(
+         name="release2",
+         description="""\
+         This is the TED-LIUM corpus release 2,
+         licensed under Creative Commons BY-NC-ND 3.0
+         (http://creativecommons.org/licenses/by-nc-nd/3.0/deed.en).
+
+         All talks and text are property of TED Conferences LLC.
+
+         The TED-LIUM corpus was made from audio talks and their transcriptions
+         available on the TED website. We have prepared and filtered these data
+         in order to train acoustic models to participate to the International
+         Workshop on Spoken Language Translation 2011 (the LIUM English/French
+         SLT system reached the first rank in the SLT task).
+
+         Contains 1495 talks and transcripts.
+         """,
+         citation="""\
+         @inproceedings{rousseau2014tedlium2,
+           title={Enhancing the {TED-LIUM} Corpus with Selected Data for Language Modeling and More {TED} Talks},
+           author={Rousseau, Anthony and Del{\\'e}glise, Paul and Est{\\`e}ve, Yannick},
+           booktitle={Conference on Language Resources and Evaluation (LREC)},
+           year={2014}
+         }
+         """,
+         url="https://www.openslr.org/19/",
+         download_urls={
+             "train": [
+                 _DL_URL + os.path.join("TEDLIUM_release2", "train_1.tar.gz"),
+                 _DL_URL + os.path.join("TEDLIUM_release2", "train_2.tar.gz"),
+             ],
+             "validation": [_DL_URL + os.path.join("TEDLIUM_release2", "dev.tar.gz")],
+             "test": [_DL_URL + os.path.join("TEDLIUM_release2", "test.tar.gz")],
+         },
+         split_paths=[
+             (datasets.Split.TRAIN, "train"),
+             (datasets.Split.VALIDATION, "dev"),
+             (datasets.Split.TEST, "test"),
+         ],
+     )
+
+     release3 = TedliumReleaseConfig(
+         name="release3",
+         description="""\
+         This is the TED-LIUM corpus release 3, licensed under Creative Commons
+         BY-NC-ND 3.0. This is the 'legacy' version of the corpus, in which the dev and test datasets are the same as in
+         TED-LIUM 2 (and TED-LIUM 1).
+
+         All talks and text are property of TED Conferences LLC.
+
+         This new TED-LIUM release was made through a collaboration between the
+         Ubiqus company and the LIUM (University of Le Mans, France)
+
+         Contents:
+
+         - 2351 audio talks in NIST sphere format (SPH), including talks from
+           TED-LIUM 2: be careful, same talks but not same audio files (only
+           these audio files must be used with the TED-LIUM 3 STM files)
+         - 452 hours of audio
+         - 2351 aligned automatic transcripts in STM format
+         - TEDLIUM 2 dev and test data: 19 TED talks in SPH format with
+           corresponding manual transcriptions.
+         - Dictionary with pronunciations (159848 entries), same file as the one
+           included in TED-LIUM 2
+         - Selected monolingual data for language modeling from WMT12 publicly
+           available corpora: these files come from the TED-LIUM 2 release, but
+           have been modified to get a tokenization more relevant for English
+           language
+
+         """,
+         citation="""\
+         @inproceedings{hernandez2018tedlium3,
+           title={TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation},
+           author={Hernandez, Fran{\\c{c}}ois and Nguyen, Vincent and Ghannay, Sahar and Tomashenko, Natalia and Est{\\`e}ve, Yannick},
+           booktitle={International Conference on Speech and Computer},
+           pages={198--208},
+           year={2018},
+           organization={Springer}
+         }
+         """,
+         url="https://www.openslr.org/51/",
+         download_urls={
+             "train": [
+                 _DL_URL + os.path.join("TEDLIUM_release3", "legacy", "train_1.tar.gz"),
+                 _DL_URL + os.path.join("TEDLIUM_release3", "legacy", "train_2.tar.gz"),
+             ],
+             "validation": [_DL_URL + os.path.join("TEDLIUM_release3", "legacy", "dev.tar.gz")],
+             "test": [_DL_URL + os.path.join("TEDLIUM_release3", "legacy", "test.tar.gz")],
+         },
+         split_paths=[
+             (datasets.Split.TRAIN, "train"),
+             (datasets.Split.VALIDATION, "dev"),
+             (datasets.Split.TEST, "test"),
+         ],
+     )
+
+     release3_speaker_adaptation = TedliumReleaseConfig(
+         name="release3-speaker-adaptation",
+         description="""\
+         This is the TED-LIUM corpus release 3, licensed under Creative Commons
+         BY-NC-ND 3.0. This is the 'speaker adaptation' version of the corpus, specially designed for experiments on
+         speaker adaptation.
+
+         All talks and text are property of TED Conferences LLC.
+
+         This new TED-LIUM release was made through a collaboration between the
+         Ubiqus company and the LIUM (University of Le Mans, France)
+         """,
+         citation="""\
+         @inproceedings{hernandez2018tedlium3,
+           title={TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation},
+           author={Hernandez, Fran{\\c{c}}ois and Nguyen, Vincent and Ghannay, Sahar and Tomashenko, Natalia and Est{\\`e}ve, Yannick},
+           booktitle={International Conference on Speech and Computer},
+           pages={198--208},
+           year={2018},
+           organization={Springer}
+         }
+         """,
+         url="https://www.openslr.org/51/",
+         download_urls={
+             "train": [
+                 _DL_URL + os.path.join("TEDLIUM_release3", "speaker-adaptation", "train_1.tar.gz"),
+                 _DL_URL + os.path.join("TEDLIUM_release3", "speaker-adaptation", "train_2.tar.gz"),
+             ],
+             "validation": [_DL_URL + os.path.join("TEDLIUM_release3", "speaker-adaptation", "dev.tar.gz")],
+             "test": [_DL_URL + os.path.join("TEDLIUM_release3", "speaker-adaptation", "test.tar.gz")],
+         },
+         split_paths=[
+             (datasets.Split.TRAIN, "train"),
+             (datasets.Split.VALIDATION, "dev"),
+             (datasets.Split.TEST, "test"),
+         ],
+     )
+
+     return [release1, release2, release3, release3_speaker_adaptation]
+
+
+ class TedLium(datasets.GeneratorBasedBuilder):
+     """The TED-LIUM corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIGS = _make_builder_configs()
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "audio": datasets.features.Audio(sampling_rate=16_000),
+                 "text": datasets.Value("string"),
+                 "speaker_id": datasets.Value("string"),
+                 "gender": datasets.features.ClassLabel(names=["unknown", "female", "male"]),
+                 "file": datasets.Value("string"),
+                 "id": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=self.config.description,
+             features=features,
+             supervised_keys=("audio", "text"),
+             homepage=self.config.url,
+             license=_LICENSE,
+             citation=self.config.citation,
+         )
+
+     def _split_generators(self, dl_manager):
+         archive_path = dl_manager.download(self.config.download_urls)
+         # (Optional) In non-streaming mode, we can extract the archive locally to have actual local audio files:
+         local_extracted_archive = dl_manager.extract(archive_path) if not dl_manager.is_streaming else {}
+         splits = []
+         for split, path in self.config.split_paths:
+             kwargs = {
+                 "filepath": [dl_manager.iter_archive(sharded_path) for sharded_path in archive_path[split]],
+                 "local_extracted_archive": local_extracted_archive.get(split),
+                 "split_path": path,
+             }
+             splits.append(datasets.SplitGenerator(name=split, gen_kwargs=kwargs))
+         return splits
+
+     def _generate_examples(self, filepath, local_extracted_archive, split_path):
+         """Generate examples from a TED-LIUM stm file."""
+         if local_extracted_archive:
+             for local_archive in local_extracted_archive:
+                 # The stm directory houses the speaker and transcription information in .stm format
+                 split_dir = os.path.join(local_archive, split_path)
+                 stm_files = [os.path.join(split_dir, f) for f in os.listdir(split_dir) if f.endswith(".stm")]
+                 for file in stm_files:
+                     # the .sph speaker file almost always has the same file name as the .stm file
+                     speaker_file = Path(file).stem
+                     audio_file = os.path.join(split_dir, speaker_file + ".sph")
+                     segment, sampling_rate = sf.read(audio_file, dtype=np.int16)
+                     with open(file) as f:
+                         for line in f:
+                             line = line.strip()
+                             fn, channel, speaker, start, end, label, transcript = line.split(" ", 6)
+                             transcript = _maybe_trim_suffix(transcript)
+                             if speaker_file != fn:
+                                 # handle the case where the stm file does not have the same file name as the transcript
+                                 speaker_file = fn
+                                 audio_file = os.path.join(split_dir, speaker_file + ".sph")
+                                 segment, sampling_rate = sf.read(audio_file, dtype=np.int16)
+                             samples = _extract_audio_segment(segment, sampling_rate, float(start), float(end))
+                             key = "-".join([speaker, start, end, label])
+                             example = {
+                                 "audio": {"path": audio_file, "array": samples, "sampling_rate": sampling_rate},
+                                 "text": transcript,
+                                 "speaker_id": speaker,
+                                 "gender": _parse_gender(label),
+                                 "file": audio_file,
+                                 "id": key,
+                             }
+                             yield key, example
+
+         else:
+             audio_data = {}
+             transcripts = defaultdict(list)
+             for file in filepath:
+                 for path, f in file:
+                     if path.endswith(".sph"):
+                         # get the speaker id from the file name (str.strip(".sph") would
+                         # also strip matching leading/trailing characters, so slice instead)
+                         fn = path.split("/")[-1][: -len(".sph")]
+                         # read the audio data from raw byte form and add key-value pair to dict
+                         audio_data[fn] = sf.read(BytesIO(f.read()), dtype=np.int16)
+                     elif path.endswith(".stm"):
+                         for line in f:
+                             if line:
+                                 line = line.decode("utf-8").strip()
+                                 fn, channel, speaker, start, end, label, transcript = line.split(" ", 6)
+                                 transcript = _maybe_trim_suffix(transcript)
+                                 audio_file = path.replace("stm", "sph")
+                                 key = "-".join([speaker, start, end, label])
+                                 # append metadata information to the dict of transcripts for the associated speaker
+                                 transcripts[fn].append(
+                                     {
+                                         "text": transcript,
+                                         "speaker_id": speaker,
+                                         "gender": _parse_gender(label),
+                                         "file": audio_file,
+                                         "id": key,
+                                         "start": start,
+                                         "end": end,
+                                         "channel": channel,
+                                         "fn": fn,
+                                     }
+                                 )
+
+                     if audio_data and audio_data.keys() == transcripts.keys():
+                         for fn, speaker in transcripts.items():
+                             for transcript in speaker:
+                                 segment, sampling_rate = audio_data[transcript["fn"]]
+                                 samples = _extract_audio_segment(
+                                     segment,
+                                     sampling_rate,
+                                     float(transcript["start"]),
+                                     float(transcript["end"]),
+                                 )
+                                 audio = {"path": transcript["file"], "array": samples, "sampling_rate": sampling_rate}
+                                 key = transcript["id"]
+                                 yield key, {
+                                     "audio": audio,
+                                     "text": transcript["text"],
+                                     "speaker_id": transcript["speaker_id"],
+                                     "gender": transcript["gender"],
+                                     "file": transcript["file"],
+                                     "id": transcript["id"],
+                                 }
+                         audio_data = {}
+                         transcripts = defaultdict(list)
+
+
+ def _maybe_trim_suffix(transcript):
+     # stm files for the TEDLIUM release 1 train split contain a key (enclosed in
+     # parens) at the end.
+     splits = transcript.rsplit(" ", 1)
+     transcript = splits[0]
+     if len(splits) > 1:
+         suffix = splits[-1]
+         if not suffix.startswith("("):
+             transcript += " " + suffix
+     return transcript
+
+
+ def _extract_audio_segment(segment, sampling_rate, start_sec, end_sec):
+     """Extracts segment of audio samples (as an ndarray) from the given segment."""
+     # The dataset only contains mono audio.
+     start_sample = int(start_sec * sampling_rate)
+     end_sample = min(int(end_sec * sampling_rate), segment.shape[0])
+     samples = segment[start_sample:end_sample]
+     return samples
+
+
+ def _parse_gender(label_str):
+     """Parse gender string from STM "<label>" field."""
+     gender = re.split(",|_", label_str)[-1][:-1]
+     # Fix inconsistencies in the data.
+     if not gender:
+         gender = -1  # Missing label.
+     elif gender == "<NA":  # In TEDLIUM release 3 training data.
+         gender = -1  # Missing label.
+     elif gender == "F":
+         gender = "female"
+     elif gender == "M":
+         gender = "male"
+     return gender