Commit 3566d6a by Tabys Shiry (0 parents)

Duplicate from Shiry/ATC_combined

Co-authored-by: Yonash <Shiry@users.noreply.huggingface.co>
.gitattributes ADDED
@@ -0,0 +1,54 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,143 @@
+ ---
+ dataset_info:
+   features:
+     - name: id
+       dtype: string
+     - name: audio
+       dtype:
+         audio:
+           sampling_rate: 16000
+     - name: text
+       dtype: string
+     - name: segment_start_time
+       dtype: float32
+     - name: segment_end_time
+       dtype: float32
+     - name: duration
+       dtype: float32
+   splits:
+     - name: test
+       num_bytes: 612270626
+       num_examples: 4723
+     - name: train
+       num_bytes: 2543440112
+       num_examples: 18929
+ tags:
+   - audio
+   - automatic-speech-recognition
+   - en-atc
+   - en
+   - noisy-speech-recognition
+   - speech-recognition
+ task_categories:
+   - automatic-speech-recognition
+ language:
+   - en
+ multilinguality:
+   - monolingual
+ license:
+   - cc-by-nc-sa-4.0
+ ---
+
+ # Dataset Card for UWB-ATCC corpus
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages and Other Details](#languages-and-other-details)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Fields](#data-fields)
+ - [Additional Information](#additional-information)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+
+ ## Dataset Description
+ - **Homepage:** [UWB-ATCC corpus homepage](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0)
+ - **Repository:** [GitHub repository (used in research)](https://github.com/idiap/w2v2-air-traffic)
+ - **Paper:** [Air traffic control communication (ATCC) speech corpora and their use for ASR and TTS development](https://link.springer.com/article/10.1007/s10579-019-09449-5)
+ - **Paper of this research:** [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822)
+
+ ### Dataset Summary
+
+ The UWB-ATCC Corpus is provided by the University of West Bohemia, Department of Cybernetics. The corpus contains recordings of communication between air traffic controllers and pilots. The speech is manually transcribed and labeled with information about the speaker (pilot/controller, not the full identity of the person). The corpus is currently small (20 hours), but the authors plan to search for additional data next year. The audio data format is: 8 kHz, 16-bit PCM, mono.
+
+ Importantly, from the `id (string)` field you can obtain the speaker role. For instance:
+ - `_PI`: segment with only pilot speech
+ - `_AT`: segment with only ATCO speech
+ - `PIAT`: segment with both ATCO and pilot speech
+
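The suffix conventions above can be parsed directly from the `id` field. A minimal sketch (the utterance ids below are invented for illustration; real ids come from the dataset):

```python
def speaker_role(utt_id: str) -> str:
    """Map an utterance id to the speaker role encoded in its suffix."""
    if utt_id.endswith("PIAT"):
        return "pilot+atco"  # mixed segment
    if utt_id.endswith("_PI"):
        return "pilot"
    if utt_id.endswith("_AT"):
        return "atco"
    return "unknown"

# Hypothetical ids; only the suffix matters:
print(speaker_role("UWB_0001_PI"))    # pilot
print(speaker_role("UWB_0002_PIAT"))  # pilot+atco
```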
+ ### Supported Tasks and Leaderboards
+
+ - `automatic-speech-recognition`. Already adapted/fine-tuned models are available here: [XLS-R-300m](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-atcosim).
+
+ ### Languages and Other Details
+
+ The text and the recordings are in English. The authors took advantage of the fact that one of their industrial partners develops complex IT solutions for several ATC authorities and airports and, as such, has access to the ATC communication recordings collected in the Czech airspace. This partner was able to secure the following data:
+
+ - Ground control (communication before takeoff and after landing): 19.2 h of data.
+ - Tower control (communication during takeoff, landing and landing standby): 22.5 h.
+ - Approach control (communication during landing approach): 25.5 h.
+ - Area control (communication during overflights and cruises): 71.3 h.
+
+ (Not all of this data is released; see the corpus [website](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0).)
+
+ ## Dataset Structure
+
+ ### Data Fields
+
+ - `id (string)`: a unique recording identifier for each example.
+ - `audio (audio)`: audio data for the given ID.
+ - `text (string)`: transcript of the file, already normalized. See these repositories for more details: [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc).
+ - `segment_start_time (float32)`: segment start time (normally 0).
+ - `segment_end_time (float32)`: segment end time.
+ - `duration (float32)`: duration of the recording, computed as segment_end_time - segment_start_time.
+
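A row can be sanity-checked against the duration relation above. A small sketch over a hand-made example row (the field values are invented; real ones come from the dataset):

```python
# Example row mirroring the schema above; the values are made up.
row = {
    "id": "example_0001_PI",
    "segment_start_time": 0.0,
    "segment_end_time": 4.125,
    "duration": 4.125,
}

# duration = segment_end_time - segment_start_time (rounded to milliseconds)
expected = round(row["segment_end_time"] - row["segment_start_time"], 3)
assert abs(row["duration"] - expected) < 1e-6
print(expected)  # 4.125
```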
+ ## Additional Information
+
+ ### Licensing Information
+
+ The licensing status of the dataset follows that of the original [UWB-ATCC corpus](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0) and its creators.
+
+ They used the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
+
+ ### Citation Information
+
+ Contributors who prepared, processed, normalized and uploaded the dataset to HuggingFace:
+
+ ```
+ @article{zuluaga2022how,
+   title={How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
+   author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
+   journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
+   year={2022}
+ }
+
+ @article{zuluaga2022bertraffic,
+   title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
+   author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
+   journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
+   year={2022}
+ }
+
+ @article{zuluaga2022atco2,
+   title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
+   author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
+   journal={arXiv preprint arXiv:2211.04054},
+   year={2022}
+ }
+ ```
+
+ Authors of the dataset:
+ ```
+ @article{vsmidl2019air,
+   title={Air traffic control communication (ATCC) speech corpora and their use for ASR and TTS development},
+   author={{\v{S}}m{\'\i}dl, Lubo{\v{s}} and {\v{S}}vec, Jan and Tihelka, Daniel and Matou{\v{s}}ek, Jind{\v{r}}ich and Romportl, Jan and Ircing, Pavel},
+   journal={Language Resources and Evaluation},
+   volume={53},
+   number={3},
+   pages={449--464},
+   year={2019},
+   publisher={Springer}
+ }
+ ```
atc_data_loader.py ADDED
@@ -0,0 +1,275 @@
+ #!/usr/bin/env python3
+ # -*- coding: utf-8 -*-
+ #
+ # SPDX-FileCopyrightText: Copyright © <2022> Idiap Research Institute <contact@idiap.ch>
+ #
+ # SPDX-FileContributor: Juan Zuluaga-Gomez <jzuluaga@idiap.ch>
+ #
+ # SPDX-License-Identifier: MIT-License
+
+ """\
+ Script for loading air traffic control (ATC) speech datasets for automatic speech recognition (ASR).
+ This script has been designed for ATC datasets that are in Kaldi format.
+
+ Required files: text, wav.scp and segments files
+
+ - Databases
+     - Training:
+         - ATCOSIM, LDC-ATCC and UWB-ATCC corpora.
+     - Testing:
+         - ATCO2-test-set, ATCOSIM, LDC-ATCC and UWB-ATCC corpora.
+ """
+
+ import os
+ import re
+
+ import datasets
+ import numpy as np
+ import soundfile as sf
+ from datasets.tasks import AutomaticSpeechRecognition
+
+ _CITATION = """\
+ @article{zuluaga2022does,
+   title={How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
+   author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and Motlicek, Petr and Kleinert, Matthias and Helmke, Hartmut and Ohneiser, Oliver and Zhan, Qingran},
+   journal={2022 IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
+   year={2022}
+ }
+ @article{zuluagabertraffic,
+   title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications (submitted to @ SLT-2022)},
+   author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and Nigmatulina, Iuliia and Motlicek, Petr and Ohneiser, Oliver and Helmke, Hartmut},
+   journal={2022 IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
+   year={2022}
+ }
+ """
+
+ _DESCRIPTION = """\
+ ATC speech DATASET. This DataLoader works with data in Kaldi format.
+     - We use the following files: text, segments and wav.scp
+     - text --> utt_id transcript
+     - segments --> utt_id recording_id t_begin t_end
+     - wav.scp --> recording_id /path/to/wav/
+ The default dataset is from the ATCO2 project, a 1-hour sample: https://www.replaywell.com/atco2/download/ATCO2-ASRdataset-v1_beta.tgz
+ """
+
+ _DATA_URL = "http://catalog.elra.info/en-us/repository/browse/ELRA-S0484/"
+
+ _HOMEPAGE = "https://github.com/idiap/w2v2-air-traffic"
+
+ logger = datasets.logging.get_logger(__name__)
+
+ # Our models work with audio data at 16 kHz.
+ _SAMPLING_RATE = int(16000)
+
+
+ class ATCDataASRConfig(datasets.BuilderConfig):
+     """BuilderConfig for air traffic control datasets."""
+
+     def __init__(self, **kwargs):
+         """
+         Args:
+             data_dir: `string`, the path to the folder containing the files required to read: json or wav.scp
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(ATCDataASRConfig, self).__init__(**kwargs)
+
+
+ class ATCDataASR(datasets.GeneratorBasedBuilder):
+
+     DEFAULT_WRITER_BATCH_SIZE = 256
+     DEFAULT_CONFIG_NAME = "all"
+     BUILDER_CONFIGS = [
+         # TRAIN, DEV AND TEST DATASETS
+         ATCDataASRConfig(name="train", description="ATC train dataset."),
+         ATCDataASRConfig(name="dev", description="ATC dev dataset."),
+         ATCDataASRConfig(name="test", description="ATC test dataset."),
+         # UNSUPERVISED DATASETS
+         ATCDataASRConfig(name="unsupervised", description="ATC unsupervised dataset."),
+     ]
+
+     # provide some information about the Dataset we just gathered
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "file": datasets.Value("string"),
+                     "audio": datasets.features.Audio(sampling_rate=_SAMPLING_RATE),
+                     "text": datasets.Value("string"),
+                     "segment_start_time": datasets.Value("float"),
+                     "segment_end_time": datasets.Value("float"),
+                     "duration": datasets.Value("float"),
+                 }
+             ),
+             supervised_keys=("audio", "text"),
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+             task_templates=[
+                 AutomaticSpeechRecognition(
+                     audio_column="audio", transcription_column="text"
+                 )
+             ],
+         )
+
+     def _split_generators(self, dlmanager):
+         """Returns SplitGenerators."""
+
+         split = self.config.name
+
+         # UNSUPERVISED set (used only for decoding)
+         if "unsupervised" in split:
+             split_name = datasets.Split.TEST
+         elif "test" in split or "dev" in split or "dummy" in split:
+             split_name = datasets.Split.TEST
+         # The last option left is: Train set
+         else:
+             split_name = datasets.Split.TRAIN
+
+         # you need to pass a data directory where the Kaldi folder is stored
+         filepath = self.config.data_dir
+
+         return [
+             datasets.SplitGenerator(
+                 name=split_name,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": filepath,
+                     "split": split,
+                 },
+             )
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """You need to pass a path with the Kaldi data; the folder should have
+         audio: wav.scp,
+         transcripts: text,
+         timing information: segments
+         """
+
+         logger.info("Generating examples located in: %s", filepath)
+
+         text_file = os.path.join(filepath, "text")
+         wavscp = os.path.join(filepath, "wav.scp")
+         segments = os.path.join(filepath, "segments")
+
+         id_ = ""
+         text_dict, wav_dict = {}, {}
+         segments_dict, utt2wav_id = {}, {}
+
+         line = 0
+         # get the text file
+         with open(text_file) as text_f:
+             for line in text_f:
+                 if len(line.split(" ")) > 1:
+                     id_, transcript = line.split(" ", maxsplit=1)
+                     transcript = _remove_special_characters(transcript)
+                     if len(transcript.split(" ")) == 0:
+                         continue
+                     if len(transcript) < 2:
+                         continue
+                     text_dict[id_] = transcript
+                 else:  # line is empty
+                     # if unsupervised set, then it's normal. else, continue
+                     if "test_unsup" not in self.config.name:
+                         continue
+                     id_ = line.rstrip().split(" ")[0]
+                     text_dict[id_] = ""
+
+         # get wav.scp and load data into memory
+         with open(wavscp) as text_f:
+             for line in text_f:
+                 if line:
+                     if len(line.split()) < 2:
+                         continue
+                     id_, wavpath = line.split(" ", maxsplit=1)
+                     # only select the part that ends in wav, flac or sph
+                     wavpath = [
+                         x
+                         for x in wavpath.split(" ")
+                         if ".wav" in x or ".WAV" in x or ".flac" in x or ".sph" in x
+                     ][0].rstrip()
+
+                     # make the output
+                     segment, sampling_rate = sf.read(wavpath, dtype=np.int16)
+                     wav_dict[id_] = [wavpath.rstrip(), segment, sampling_rate]
+
+         # get segments dictionary
+         with open(segments) as text_f:
+             for line in text_f:
+                 if line:
+                     if len(line.split()) < 4:
+                         continue
+                     id_, wavid_, start, end = line.rstrip().split(" ")
+                     segments_dict[id_] = start.rstrip(), end.rstrip()
+                     utt2wav_id[id_] = wavid_
+
+         for rec_id, text in text_dict.items():
+             if rec_id in utt2wav_id and rec_id in segments_dict:
+
+                 # get audio data from memory and the path of the file
+                 wavpath, segment, sampling_rate = wav_dict[utt2wav_id[rec_id]]
+                 # get timing information
+                 seg_start, seg_end = segments_dict[rec_id]
+                 seg_start, seg_end = float(seg_start), float(seg_end)
+                 duration = round((seg_end - seg_start), 3)
+
+                 # get the samples, already cropped by segment
+                 samples = _extract_audio_segment(
+                     segment, sampling_rate, float(seg_start), float(seg_end)
+                 )
+
+                 # output data for given dataset
+                 example = {
+                     "audio": {
+                         "path": wavpath,
+                         "array": samples,
+                         "sampling_rate": sampling_rate,
+                     },
+                     "id": rec_id,
+                     "file": wavpath,
+                     "text": text,
+                     "segment_start_time": format(float(seg_start), ".3f"),
+                     "segment_end_time": format(float(seg_end), ".3f"),
+                     "duration": format(float(duration), ".3f"),
+                 }
+
+                 yield rec_id, example
+
+
+ def _remove_special_characters(text):
+     """Remove some special chars/symbols from the given transcript."""
+
+     text = text.split(" ")
+     # first remove words between [] and <>
+     text = " ".join(
+         [
+             x
+             for x in text
+             if "[" not in x and "]" not in x and "<" not in x and ">" not in x
+         ]
+     )
+
+     # regex with predefined symbols to ignore/remove
+     chars_to_ignore_regex2 = '[\{\[\]\<\>\/\,\?\.\!\u00AC\;\:"\\%\\\]|[0-9]'
+
+     text = re.sub(chars_to_ignore_regex2, "", text).lower()
+     sentence = text.replace("\u2013", "-")
+     sentence = sentence.replace("\u2014", "-")
+     sentence = sentence.replace("\u2018", "'")
+     sentence = sentence.replace("\u201C", "")
+     sentence = sentence.replace("\u201D", "")
+     sentence = sentence.replace("ñ", "n")
+     sentence = sentence.replace(" - ", " ")
+     sentence = sentence.replace("-", "")
+     sentence = sentence.replace("'", " ")
+     return sentence.lower().rstrip()
+
+
+ def _extract_audio_segment(segment, sampling_rate, start_sec, end_sec):
+     """Extract a segment of audio samples (as an ndarray) from the given recording."""
+     # The dataset only contains mono audio.
+     start_sample = int(start_sec * sampling_rate)
+     end_sample = min(int(end_sec * sampling_rate), segment.shape[0])
+     samples = segment[start_sample:end_sample]
+     return samples
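The loader above expects a Kaldi-style directory containing `text`, `segments`, and `wav.scp`. A minimal sketch of that layout and of the `segments` parsing step (file contents are invented for illustration; actual audio files and the `soundfile` read are omitted):

```python
import os
import tempfile

# Kaldi-style inputs, one line each (contents made up):
#   text     -> "utt_id transcript"
#   segments -> "utt_id recording_id t_begin t_end"
#   wav.scp  -> "recording_id /path/to/wav"
files = {
    "text": "rec1_0001_PI contact tower one one eight decimal one\n",
    "segments": "rec1_0001_PI rec1 0.00 3.25\n",
    "wav.scp": "rec1 /data/audio/rec1.wav\n",
}

with tempfile.TemporaryDirectory() as d:
    for name, content in files.items():
        with open(os.path.join(d, name), "w") as f:
            f.write(content)

    # parse `segments` the same way _generate_examples does
    with open(os.path.join(d, "segments")) as f:
        utt_id, rec_id, start, end = f.readline().rstrip().split(" ")

duration = round(float(end) - float(start), 3)
print(utt_id, rec_id, duration)  # rec1_0001_PI rec1 3.25
```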
data/test-00000-of-00001-01544bdf54b4ccf3.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52ad36731ede5f6cc528e95aa69e32c0f7be101646615590e86e7d84539c4653
+ size 470060009
data/test-00000-of-00001-3a021115ca23c2a5.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a637e5b035c4e22bdf880554c900c3a775eac46c10dc9713e827dd6cbf38783
+ size 131266472
data/train-00000-of-00002-4a9602acfde9f517.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0dcb6fbecc22116d74c8e079e367a379225b517874f5de803373389a9b0aa2b3
+ size 302140417
data/train-00000-of-00004-c1d7fb31dcbf644a.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d4948175e5ab8b3e221ad485854f73f47068891f61bba31f6360264186a45b08
+ size 488301521
data/train-00001-of-00002-91082fb03180a296.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca9535c5ccf2d8a318b4835a9dfece6e00026d69b664e5002b16e8e83d0f5829
+ size 278058025
data/train-00001-of-00004-f165730df6bf7253.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:855589e16589edbd45c21fb07dba5ff53ec7ceae1cbd187cbae413988e6c3a6b
+ size 468054363
data/train-00002-of-00004-67e682f17e32b703.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a3bb0a59d896f9b4897b5349936d0acd134a8965c9035ae766f50c8ce8b9d5e
+ size 495020975
data/train-00003-of-00004-b0b05d4b243c95c6.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:333c1426084c3837d7b42c59b08618a9df1c25925dd636b8c0ad836dd641fea3
+ size 473692438