kfajdsl polinaeterna committed on
Commit bb76698 · verified · 0 Parent(s):

Duplicate from facebook/voxpopuli


Co-authored-by: Polina Kazakova <polinaeterna@users.noreply.huggingface.co>

Files changed (50)
  1. .gitattributes +39 -0
  2. README.md +294 -0
  3. data/cs/asr_dev.tsv +3 -0
  4. data/cs/asr_test.tsv +3 -0
  5. data/cs/asr_train.tsv +3 -0
  6. data/cs/dev/dev_part_0.tar.gz +3 -0
  7. data/cs/test/test_part_0.tar.gz +3 -0
  8. data/cs/train/train_part_0.tar.gz +3 -0
  9. data/cs/train/train_part_1.tar.gz +3 -0
  10. data/cs/train/train_part_2.tar.gz +3 -0
  11. data/cs/train/train_part_3.tar.gz +3 -0
  12. data/de/asr_dev.tsv +3 -0
  13. data/de/asr_test.tsv +3 -0
  14. data/de/asr_train.tsv +3 -0
  15. data/de/dev/dev_part_0.tar.gz +3 -0
  16. data/de/test/test_part_0.tar.gz +3 -0
  17. data/de/train/train_part_0.tar.gz +3 -0
  18. data/de/train/train_part_1.tar.gz +3 -0
  19. data/de/train/train_part_10.tar.gz +3 -0
  20. data/de/train/train_part_11.tar.gz +3 -0
  21. data/de/train/train_part_12.tar.gz +3 -0
  22. data/de/train/train_part_13.tar.gz +3 -0
  23. data/de/train/train_part_14.tar.gz +3 -0
  24. data/de/train/train_part_15.tar.gz +3 -0
  25. data/de/train/train_part_16.tar.gz +3 -0
  26. data/de/train/train_part_17.tar.gz +3 -0
  27. data/de/train/train_part_18.tar.gz +3 -0
  28. data/de/train/train_part_19.tar.gz +3 -0
  29. data/de/train/train_part_2.tar.gz +3 -0
  30. data/de/train/train_part_20.tar.gz +3 -0
  31. data/de/train/train_part_21.tar.gz +3 -0
  32. data/de/train/train_part_3.tar.gz +3 -0
  33. data/de/train/train_part_4.tar.gz +3 -0
  34. data/de/train/train_part_5.tar.gz +3 -0
  35. data/de/train/train_part_6.tar.gz +3 -0
  36. data/de/train/train_part_7.tar.gz +3 -0
  37. data/de/train/train_part_8.tar.gz +3 -0
  38. data/de/train/train_part_9.tar.gz +3 -0
  39. data/en/asr_dev.tsv +3 -0
  40. data/en/asr_test.tsv +3 -0
  41. data/en/asr_train.tsv +3 -0
  42. data/en/dev/dev_part_0.tar.gz +3 -0
  43. data/en/test/test_part_0.tar.gz +3 -0
  44. data/en/train/train_part_0.tar.gz +3 -0
  45. data/en/train/train_part_1.tar.gz +3 -0
  46. data/en/train/train_part_10.tar.gz +3 -0
  47. data/en/train/train_part_11.tar.gz +3 -0
  48. data/en/train/train_part_12.tar.gz +3 -0
  49. data/en/train/train_part_13.tar.gz +3 -0
  50. data/en/train/train_part_14.tar.gz +3 -0
.gitattributes ADDED
@@ -0,0 +1,39 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Voxpopuli audio meta
+ *.tsv filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,294 @@
+ ---
+ annotations_creators: []
+ language:
+ - en
+ - de
+ - fr
+ - es
+ - pl
+ - it
+ - ro
+ - hu
+ - cs
+ - nl
+ - fi
+ - hr
+ - sk
+ - sl
+ - et
+ - lt
+ language_creators: []
+ license:
+ - cc0-1.0
+ - other
+ multilinguality:
+ - multilingual
+ pretty_name: VoxPopuli
+ size_categories: []
+ source_datasets: []
+ tags: []
+ task_categories:
+ - automatic-speech-recognition
+ task_ids: []
+ ---
+
+ # Dataset Card for Voxpopuli
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+ - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/facebookresearch/voxpopuli
+ - **Repository:** https://github.com/facebookresearch/voxpopuli
+ - **Paper:** https://arxiv.org/abs/2101.00390
+ - **Point of Contact:** [changhan@fb.com](mailto:changhan@fb.com), [mriviere@fb.com](mailto:mriviere@fb.com), [annl@fb.com](mailto:annl@fb.com)
+
+ ### Dataset Summary
+
+ VoxPopuli is a large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation.
+ The raw data is collected from 2009-2020 [European Parliament event recordings](https://multimedia.europarl.europa.eu/en/home). We acknowledge the European Parliament for creating and sharing these materials.
+ This implementation contains transcribed speech data for 18 languages.
+ It also contains 29 hours of transcribed speech data of non-native English intended for research in ASR for accented speech (15 L2 accents).
+
+ ### Example usage
+
+ VoxPopuli contains labelled data for 18 languages. To load a specific language, pass its code as the config name:
+
+ ```python
+ from datasets import load_dataset
+
+ voxpopuli_croatian = load_dataset("facebook/voxpopuli", "hr")
+ ```
+
+ To load all the languages in a single dataset, use the "multilang" config name:
+
+ ```python
+ voxpopuli_all = load_dataset("facebook/voxpopuli", "multilang")
+ ```
+
+ To load a specific set of languages, use the "multilang" config name and pass a list of the required languages to the `languages` parameter:
+
+ ```python
+ voxpopuli_slavic = load_dataset("facebook/voxpopuli", "multilang", languages=["hr", "sk", "sl", "cs", "pl"])
+ ```
+
+ To load the accented English data, use the "en_accented" config name:
+
+ ```python
+ voxpopuli_accented = load_dataset("facebook/voxpopuli", "en_accented")
+ ```
+
+ **Note that the L2 English subset contains only a `test` split.**
+
+
+ ### Supported Tasks and Leaderboards
+
+ * automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER).
+
+ The accented English subset can also be used for research in ASR for accented speech (15 L2 accents).
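As a rough illustration of the WER metric mentioned above, here is a minimal word-level edit-distance implementation (a sketch only; in practice one would use a library such as `jiwer` or 🤗 `evaluate`, and the strings below are made up):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```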
+
+ ### Languages
+
+ VoxPopuli contains labelled (transcribed) data for 18 languages:
+
+ | Language | Code | Transcribed Hours | Transcribed Speakers | Transcribed Tokens |
+ |:---:|:---:|:---:|:---:|:---:|
+ | English | En | 543 | 1313 | 4.8M |
+ | German | De | 282 | 531 | 2.3M |
+ | French | Fr | 211 | 534 | 2.1M |
+ | Spanish | Es | 166 | 305 | 1.6M |
+ | Polish | Pl | 111 | 282 | 802K |
+ | Italian | It | 91 | 306 | 757K |
+ | Romanian | Ro | 89 | 164 | 739K |
+ | Hungarian | Hu | 63 | 143 | 431K |
+ | Czech | Cs | 62 | 138 | 461K |
+ | Dutch | Nl | 53 | 221 | 488K |
+ | Finnish | Fi | 27 | 84 | 160K |
+ | Croatian | Hr | 43 | 83 | 337K |
+ | Slovak | Sk | 35 | 96 | 270K |
+ | Slovene | Sl | 10 | 45 | 76K |
+ | Estonian | Et | 3 | 29 | 18K |
+ | Lithuanian | Lt | 2 | 21 | 10K |
+ | Total | | 1791 | 4295 | 15M |
+
+
+ The accented transcribed speech data covers 15 L2 accents:
+
+ | Accent | Code | Transcribed Hours | Transcribed Speakers |
+ |:---:|:---:|:---:|:---:|
+ | Dutch | en_nl | 3.52 | 45 |
+ | German | en_de | 3.52 | 84 |
+ | Czech | en_cs | 3.30 | 26 |
+ | Polish | en_pl | 3.23 | 33 |
+ | French | en_fr | 2.56 | 27 |
+ | Hungarian | en_hu | 2.33 | 23 |
+ | Finnish | en_fi | 2.18 | 20 |
+ | Romanian | en_ro | 1.85 | 27 |
+ | Slovak | en_sk | 1.46 | 17 |
+ | Spanish | en_es | 1.42 | 18 |
+ | Italian | en_it | 1.11 | 15 |
+ | Estonian | en_et | 1.08 | 6 |
+ | Lithuanian | en_lt | 0.65 | 7 |
+ | Croatian | en_hr | 0.42 | 9 |
+ | Slovene | en_sl | 0.25 | 7 |
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```python
+ {
+ 'audio_id': '20180206-0900-PLENARY-15-hr_20180206-16:10:06_5',
+ 'language': 11, # "hr"
+ 'audio': {
+ 'path': '/home/polina/.cache/huggingface/datasets/downloads/extracted/44aedc80bb053f67f957a5f68e23509e9b181cc9e30c8030f110daaedf9c510e/train_part_0/20180206-0900-PLENARY-15-hr_20180206-16:10:06_5.wav',
+ 'array': array([-0.01434326, -0.01055908, 0.00106812, ..., 0.00646973], dtype=float32),
+ 'sampling_rate': 16000
+ },
+ 'raw_text': '',
+ 'normalized_text': 'pošast genitalnog sakaćenja žena u europi tek je jedna od manifestacija takve štetne politike.',
+ 'gender': 'female',
+ 'speaker_id': '119431',
+ 'is_gold_transcript': True,
+ 'accent': 'None'
+ }
+ ```
+
+ ### Data Fields
+
+ * `audio_id` (string) - id of the audio segment
+ * `language` (datasets.ClassLabel) - numerical id of the language of the audio segment
+ * `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio inside its archive (as files are not downloaded and extracted locally).
+ * `raw_text` (string) - original (orthographic) audio segment text
+ * `normalized_text` (string) - normalized audio segment transcription
+ * `gender` (string) - gender of the speaker
+ * `speaker_id` (string) - id of the speaker
+ * `is_gold_transcript` (bool) - ?
+ * `accent` (string) - type of accent, for example "en_lt", if applicable, else "None".
+
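The fields above can be read straight off each example dict. A small sketch, using a hand-built stand-in for one example (with a real dataset you would index e.g. `dataset["train"][0]` instead, and `audio["array"]` would be a NumPy float32 array):

```python
# Stand-in for one example as returned by the loader (see Data Instances above);
# the audio array here is dummy silence, not real VoxPopuli data.
sample = {
    "audio_id": "20180206-0900-PLENARY-15-hr_20180206-16:10:06_5",
    "language": 11,  # class-label index, here corresponding to "hr"
    "audio": {
        "array": [0.0] * 32000,  # decoded samples
        "sampling_rate": 16000,
    },
    "normalized_text": "...",
    "gender": "female",
    "speaker_id": "119431",
}

# Duration in seconds = number of samples / sampling rate.
duration = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(f"{sample['audio_id']}: {duration:.1f}s, speaker {sample['speaker_id']}")
```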
+ ### Data Splits
+
+ All configs (languages) except for accented English contain data in three splits: train, validation and test. The accented English `en_accented` config contains only a test split.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ The raw data is collected from 2009-2020 [European Parliament event recordings](https://multimedia.europarl.europa.eu/en/home).
+
+ #### Initial Data Collection and Normalization
+
+ The VoxPopuli transcribed set comes from aligning the full-event source speech audio with the transcripts for plenary sessions. Official timestamps
+ are available for locating speeches by speaker in the full session, but they are frequently inaccurate, resulting in truncation of the speech or mixture
+ of fragments from the preceding or the succeeding speeches. To calibrate the original timestamps,
+ we perform speaker diarization (SD) on the full-session audio using pyannote.audio (Bredin et al., 2020) and adopt the nearest SD timestamps (by L1 distance to the original ones) instead for segmentation.
+ Full-session audios are segmented into speech paragraphs by speaker, each of which has a transcript available.
+
+ The speech paragraphs have an average duration of 197 seconds, which is too long for most speech models to process. We hence further segment these paragraphs into utterances with a
+ maximum duration of 20 seconds. We leverage speech recognition (ASR) systems to force-align speech paragraphs to the given transcripts.
+ The ASR systems are TDS models (Hannun et al., 2019) trained with the ASG criterion (Collobert et al., 2016) on audio tracks from in-house de-identified video data.
+
+ The resulting utterance segments may have incorrect transcriptions due to incomplete raw transcripts or inaccurate ASR force-alignment.
+ We use the predictions from the same ASR systems as references and filter the candidate segments by a maximum threshold of 20% character error rate (CER).
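A toy sketch of this CER-based filtering criterion (a simplified stand-in for the actual VoxPopuli pipeline; the transcript/prediction pairs below are made up):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance / reference length."""
    prev = list(range(len(hypothesis) + 1))
    for i, r in enumerate(reference, 1):
        cur = [i]
        for j, h in enumerate(hypothesis, 1):
            cur.append(min(prev[j] + 1,             # deletion
                           cur[j - 1] + 1,          # insertion
                           prev[j - 1] + (r != h))) # substitution
        prev = cur
    return prev[-1] / len(reference)

# Keep only candidate segments whose transcript agrees with the ASR
# prediction within the 20% CER threshold (illustrative data only).
segments = [
    ("hello world", "hello world"),     # CER 0.0   -> kept
    ("hello world", "hxllo world"),     # CER ~0.09 -> kept
    ("hello world", "completely off"),  # high CER  -> dropped
]
kept = [(ref, hyp) for ref, hyp in segments if cer(ref, hyp) <= 0.2]
```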
+
+ #### Who are the source language producers?
+
+ Speakers are participants of the European Parliament events, many of whom are EU officials.
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ The speaker gender distribution is imbalanced: the percentage of female speakers is below 50% for most languages, with a minimum of 15% for the Lithuanian language data.
+
+ VoxPopuli includes all available speeches from the 2009-2020 EP events without any selection of topics or speakers.
+ The speech contents represent the standpoints of the speakers in the EP events, many of whom are EU officials.
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The dataset is distributed under the CC0 license; see also the [European Parliament's legal notice](https://www.europarl.europa.eu/legal-notice/en/) for the raw data.
+
+ ### Citation Information
+
+ Please cite this paper:
+
+ ```bibtex
+ @inproceedings{wang-etal-2021-voxpopuli,
+ title = "{V}ox{P}opuli: A Large-Scale Multilingual Speech Corpus for Representation Learning, Semi-Supervised Learning and Interpretation",
+ author = "Wang, Changhan and
+ Riviere, Morgane and
+ Lee, Ann and
+ Wu, Anne and
+ Talnikar, Chaitanya and
+ Haziza, Daniel and
+ Williamson, Mary and
+ Pino, Juan and
+ Dupoux, Emmanuel",
+ booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
+ month = aug,
+ year = "2021",
+ address = "Online",
+ publisher = "Association for Computational Linguistics",
+ url = "https://aclanthology.org/2021.acl-long.80",
+ pages = "993--1003",
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
data/cs/asr_dev.tsv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3b32aae1163416f089acaeef2e86da31f52dcca8920941ed77f9cc9b6f5daa63
+ size 436996
data/cs/asr_test.tsv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6ca7483d43af0d815872c817131ba06b580ad6c7e1ebdcfb4a0fc9e70b39a8d
+ size 430811
data/cs/asr_train.tsv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c72ed3fd840134cef285be4140bd5c5036722312d02a99cd3d624572ed125cc0
+ size 7495148
data/cs/dev/dev_part_0.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:75142d043cf2fecd8691036270028243dd3a2ff987291b5185d0bf977fcc9206
+ size 369932654
data/cs/test/test_part_0.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d21bd818ff20152f7fab13984cb3b002bda035c3aba85988be182151a8c301e8
+ size 376276050
data/cs/train/train_part_0.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f951c78ca68ff9b1b09f35f48f76bda8ad0a92e3f74a403d79208b7b4b48d41c
+ size 1753165337
data/cs/train/train_part_1.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dc9ecba9b792cacf00881116367106460b53e317f04cb2ee0413232965cc66ad
+ size 1735751268
data/cs/train/train_part_2.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a4d83c19fac7bc0b388973a8172778f184395d44de6a154760e707cc4281e27
+ size 1734441888
data/cs/train/train_part_3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad34cd4a5c42af09455559260e5f9356509df5e102e6c296b65ae8d999b63d54
+ size 1365548092
data/de/asr_dev.tsv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ac0b2eb2a294ba9efff7d3ea5cf0ea4a2a51087f8a90d0eb7d773cdeb97140b
+ size 761187
data/de/asr_test.tsv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce7f53304297487d3ddf8f52acf164baca203d36a6656ead2e33797c82beccd7
+ size 727782
data/de/asr_train.tsv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8bbcf9c2cc27d285f6d3b39feb3528d4518f14bce6a4c42385399c76ac184662
+ size 40207714
data/de/dev/dev_part_0.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:583a2d2214cdd036f7a988a35c907cba0df88804b23724dfb4fdea5d53735985
+ size 603198668
data/de/test/test_part_0.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:714e50769cd8bc65f9bb496d26edc0fba27d1cc08c4824449811a6241dc857b3
+ size 594556098
data/de/train/train_part_0.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2416c672a1632ea973b4d4d545301dbe9e58cc38c7f63b58b0136ba4aa29de48
+ size 1496814624
data/de/train/train_part_1.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f1ba0447026639262ed448914830c7a7d4cccef18c8e0b3bd1c30b34bb9fe847
+ size 1471928174
data/de/train/train_part_10.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d205c97e887e5f6149ae8ffc9cad0c71b1cd96a0cec8662acf200e044ded097
+ size 1474115177
data/de/train/train_part_11.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4c459cb41031fef91c25faf7392c649f8c2d95c12ada30e71b724ce8d1cc1e7a
+ size 1508315768
data/de/train/train_part_12.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac19f642bd6b9d31b4a22ef23cbd1804e51052cec30f52cf2f1c6589cb073479
+ size 1468315310
data/de/train/train_part_13.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:82ef2de882a440fc8f0e300e062de2b7bb31dca7f2e099187739e45073f84381
+ size 1481303225
data/de/train/train_part_14.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4741ed0b784cba4c722d7680cbca62aa0f54dd1daa282d87fc68bcf828131d19
+ size 1479352737
data/de/train/train_part_15.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d38c8b5537558062a78ed81931315d3c61ab087c0477cc9daebda162e3ab4eb
+ size 1476547571
data/de/train/train_part_16.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:22e3b012273cf4436ab2d05ddd48fdfb04a481807fe0e8232ea3df728ad44db5
+ size 1497656936
data/de/train/train_part_17.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f51edf66455893c07f48c087f1fb3b6427cc326533f5d71e703ab0d03ba1577
+ size 1484004412
data/de/train/train_part_18.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:44b4114698ec288ca16b449abf571d9f3b8f36555db1b6dee03e2133edaa71f7
+ size 1473197453
data/de/train/train_part_19.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4106827ac27bd020523012c69b860492b4ed934f7a9ee12b3811bdd1a8c8d595
+ size 1484576338
data/de/train/train_part_2.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:772e84987f00d4952ac230fbdc8095ba1cbb70409e9d38236368ee99d1d798b5
+ size 1518286436
data/de/train/train_part_20.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9ad0bf8c4e930d482fa2dfbf1aaff55fa1d2354da9e9e57a782d72e053d3d35
+ size 1465099805
data/de/train/train_part_21.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1428e44c57f8a9c8f094b033a0667e3d6503e431580caf43d1d017fd6a317d2
+ size 1025781293
data/de/train/train_part_3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f7d447076efa0819ff1ec225ab0c2d765a887da07872fc06a46f7a215b72d639
+ size 1520329494
data/de/train/train_part_4.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e84e3c04a1a7fbfabe13779263c40d2ebe0978f7efd5754ebc8bcc55759415ef
+ size 1485032795
data/de/train/train_part_5.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:85e5f78385b9ddd6ad5243f903d5ba963259d574dfd1d2f152995925194d54ea
+ size 1504901660
data/de/train/train_part_6.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:894c64d7ab9a9f5ae8febfb824813950a5f22195964b29ab7a7d39b9ebbe512a
+ size 1508091867
data/de/train/train_part_7.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea799355fe7d2fcaee625ed9956f3e5ca72018e0484949ee312fe72b889f8646
+ size 1452622973
data/de/train/train_part_8.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d05e85d4c4c26672db186961953632bbf105563b4a21cac54d6b18c6707c9e11
+ size 1450566259
data/de/train/train_part_9.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:06afcd6d153a37e40d871d96918de35611a56198249fc6f0c9aa25ff9e45ab0e
+ size 1475955827
data/en/asr_dev.tsv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ccd1d31a6e9f2bebb8194727f7dc3662466bdca77bec5be8ed6d5e65f9b73f4f
+ size 637274
data/en/asr_test.tsv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:00244dafa3acf8a092aa3aa1b0a0af43961d3184028a3930d9b2445c24be1c52
+ size 649106
data/en/asr_train.tsv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f08960b3a033cbce11871ce7f083f001539744de07ce1e5401422c075ad0b168
+ size 69127031
data/en/dev/dev_part_0.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cdd07e938c464b45d08b36d8b07a1eb0a5b8ec2eb35f42367699ab08ef1422bd
+ size 590789273
data/en/test/test_part_0.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9215acb6c49bbbb6ab4e20048faafae229aed2665fff7a8d559c7c9335e0ef73
+ size 595445510
data/en/train/train_part_0.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a8fdea55a9ddbe44a901809d9a4c56020a6043ea934b74a3b6a27181d2b5d973
+ size 1694185607
data/en/train/train_part_1.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c64b4d92d45865da1a8b3810e73212840489cbfcb4745222567bc4f10d78f5d
+ size 1706724335
data/en/train/train_part_10.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c61ee83ce2ffcd9b55dcc1788ba44568cd75fde7000fb9ec2b29b06f3312850
+ size 1695328783
data/en/train/train_part_11.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8822a5d8e94f33319dfee6497c63534f93b0c9e94f6807e343c575ea7109f898
+ size 1665729683
data/en/train/train_part_12.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35bb7d572b009411a7434b21220271057cd2dc3354abf33e66e86cbd856eaeea
+ size 1692740211
data/en/train/train_part_13.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:66c8079fefb10309a693219890eca039a0eda3bb6c0c09cec3065e9c79d53b24
+ size 1701655428
data/en/train/train_part_14.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4cc1736fee4c8ef6134c0657204df1dea40481c5769a8634da449d36821d4107
+ size 1694727067