jimregan parquet-converter committed on
Commit bc2f0ae · 0 Parent(s):

Duplicate from CSTR-Edinburgh/vctk


Co-authored-by: Parquet-converter (BOT) <parquet-converter@users.noreply.huggingface.co>

Files changed (3)
  1. .gitattributes +27 -0
  2. README.md +230 -0
  3. vctk.py +131 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,230 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - crowdsourced
+ language:
+ - en
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: VCTK
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - automatic-speech-recognition
+ - text-to-speech
+ - text-to-audio
+ task_ids: []
+ paperswithcode_id: vctk
+ train-eval-index:
+ - config: main
+   task: automatic-speech-recognition
+   task_id: speech_recognition
+   splits:
+     train_split: train
+   col_mapping:
+     file: path
+     text: text
+   metrics:
+   - type: wer
+     name: WER
+   - type: cer
+     name: CER
+ dataset_info:
+   features:
+   - name: speaker_id
+     dtype: string
+   - name: audio
+     dtype:
+       audio:
+         sampling_rate: 48000
+   - name: file
+     dtype: string
+   - name: text
+     dtype: string
+   - name: text_id
+     dtype: string
+   - name: age
+     dtype: string
+   - name: gender
+     dtype: string
+   - name: accent
+     dtype: string
+   - name: region
+     dtype: string
+   - name: comment
+     dtype: string
+   config_name: main
+   splits:
+   - name: train
+     num_bytes: 40103111
+     num_examples: 88156
+   download_size: 11747302977
+   dataset_size: 40103111
+ ---
+
+ # Dataset Card for VCTK
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [Edinburgh DataShare](https://doi.org/10.7488/ds/2645)
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ The CSTR VCTK Corpus includes around 44 hours of speech data uttered by 110 English speakers with various accents. Each speaker reads about 400 sentences, selected from a newspaper, the rainbow passage, and an elicitation paragraph used for the Speech Accent Archive.
+
+ ### Supported Tasks and Leaderboards
+
+ - `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER).
+ - `text-to-speech`, `text-to-audio`: The dataset can also be used to train a model for Text-To-Speech (TTS). A minimal loading sketch for both tasks follows this list.
+
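+ As a rough sketch, the corpus can be loaded with the `datasets` library and resampled for ASR pipelines that expect 16 kHz input. The repository ID `vctk` below is an assumption; substitute the actual Hub ID of this dataset.
+
+ ```python
+ from datasets import Audio, load_dataset
+
+ # Load the single "main" configuration (the repository ID is assumed here).
+ vctk = load_dataset("vctk", "main", split="train")
+
+ # The audio is 48 kHz FLAC; many ASR models expect 16 kHz, so resample on the fly.
+ vctk = vctk.cast_column("audio", Audio(sampling_rate=16_000))
+
+ print(vctk[0]["text"])
+ ```
+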
+ ### Languages
+
+ The corpus is in English (`en`), spoken with a variety of accents.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A data point comprises the path to the audio file, called `file`, and its transcription, called `text`, together with speaker metadata.
+
+ ```
+ {
+     'speaker_id': 'p225',
+     'text_id': '001',
+     'text': 'Please call Stella.',
+     'age': '23',
+     'gender': 'F',
+     'accent': 'English',
+     'region': 'Southern England',
+     'file': '/datasets/downloads/extracted/8ed7dad05dfffdb552a3699777442af8e8ed11e656feb277f35bf9aea448f49e/wav48_silence_trimmed/p225/p225_001_mic1.flac',
+     'audio': {
+         'path': '/datasets/downloads/extracted/8ed7dad05dfffdb552a3699777442af8e8ed11e656feb277f35bf9aea448f49e/wav48_silence_trimmed/p225/p225_001_mic1.flac',
+         'array': array([0.00485229, 0.00689697, 0.00619507, ..., 0.00811768, 0.00836182, 0.00854492], dtype=float32),
+         'sampling_rate': 48000
+     },
+     'comment': ''
+ }
+ ```
+
+ Each audio file is a single-channel FLAC with a sample rate of 48,000 Hz.
+
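+ In the `datasets` library, the `audio` column is typically decoded when it is accessed. A small sketch of inspecting the first record, assuming the dataset has been loaded as `vctk` as in the example above:
+
+ ```python
+ example = vctk[0]
+
+ # Accessing the "audio" column decodes the FLAC file; "array" holds the waveform.
+ waveform = example["audio"]["array"]
+ print(example["speaker_id"], example["text"])
+ print(waveform.shape, example["audio"]["sampling_rate"])
+ ```
+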
+ ### Data Fields
+
+ Each row consists of the following fields (a short filtering example follows the list):
+
+ - `speaker_id`: Speaker ID
+ - `audio`: Audio recording
+ - `file`: Path to the audio file
+ - `text`: Text transcription of the corresponding audio
+ - `text_id`: Text ID
+ - `age`: Speaker's age
+ - `gender`: Speaker's gender
+ - `accent`: Speaker's accent
+ - `region`: Speaker's region, if the annotation exists
+ - `comment`: Miscellaneous comments, if any
+
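+ The metadata fields make it easy to slice the corpus, for example by speaker or by gender. A sketch, again assuming the dataset is loaded as `vctk`:
+
+ ```python
+ from collections import Counter
+
+ # Keep only utterances from one speaker (speaker ID taken from the record shown above).
+ p225 = vctk.filter(lambda ex: ex["speaker_id"] == "p225")
+
+ # Count utterances per gender using the metadata column.
+ counts = Counter(vctk["gender"])
+ print(len(p225), counts)
+ ```
+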
+ ### Data Splits
+
+ The dataset has no predefined splits; all 88,156 examples are exposed as a single `train` split. A held-out set can be created locally, as in the sketch below.
+
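+ One option is a random split; for speaker-independent evaluation, holding out whole speakers is usually preferable. Both variants are sketched here, with the held-out speaker IDs chosen purely for illustration.
+
+ ```python
+ # Random 90/10 split (mixes speakers across both sides).
+ splits = vctk.train_test_split(test_size=0.1, seed=42)
+ train_ds, test_ds = splits["train"], splits["test"]
+
+ # Speaker-independent alternative: hold out whole speakers.
+ held_out = {"p225", "p226"}  # hypothetical choice of speakers
+ test_spk = vctk.filter(lambda ex: ex["speaker_id"] in held_out)
+ train_spk = vctk.filter(lambda ex: ex["speaker_id"] not in held_out)
+ ```
+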
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ The dataset consists of recordings from people who have donated their voices online. By using this dataset, you agree not to attempt to determine the identity of the speakers.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ Creative Commons Attribution 4.0 International Public License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode))
+
+ ### Citation Information
+
+ ```bibtex
+ @inproceedings{Veaux2017CSTRVC,
+   title = {CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit},
+   author = {Christophe Veaux and Junichi Yamagishi and Kirsten MacDonald},
+   year = 2017
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.
vctk.py ADDED
@@ -0,0 +1,131 @@
+ # coding=utf-8
+ # Copyright 2021 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """VCTK dataset."""
+
+
+ import os
+ import re
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{Veaux2017CSTRVC,
+   title = {CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit},
+   author = {Christophe Veaux and Junichi Yamagishi and Kirsten MacDonald},
+   year = 2017
+ }
+ """
+
+ _DESCRIPTION = """\
+ The CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents.
+ """
+
+ _URL = "https://datashare.ed.ac.uk/handle/10283/3443"
+ _DL_URL = "https://datashare.is.ed.ac.uk/bitstream/handle/10283/3443/VCTK-Corpus-0.92.zip"
+
+
+ class VCTK(datasets.GeneratorBasedBuilder):
+     """VCTK dataset."""
+
+     VERSION = datasets.Version("0.9.2")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="main", version=VERSION, description="VCTK dataset"),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "speaker_id": datasets.Value("string"),
+                     "audio": datasets.features.Audio(sampling_rate=48_000),
+                     "file": datasets.Value("string"),
+                     "text": datasets.Value("string"),
+                     "text_id": datasets.Value("string"),
+                     "age": datasets.Value("string"),
+                     "gender": datasets.Value("string"),
+                     "accent": datasets.Value("string"),
+                     "region": datasets.Value("string"),
+                     "comment": datasets.Value("string"),
+                 }
+             ),
+             supervised_keys=("file", "text"),
+             homepage=_URL,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         root_path = dl_manager.download_and_extract(_DL_URL)
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"root_path": root_path}),
+         ]
+
+     def _generate_examples(self, root_path):
+         """Generate examples from the VCTK corpus root path."""
+
+         meta_path = os.path.join(root_path, "speaker-info.txt")
+         txt_root = os.path.join(root_path, "txt")
+         wav_root = os.path.join(root_path, "wav48_silence_trimmed")
+         # NOTE: "comment" is handled separately in logic below
+         fields = ["speaker_id", "age", "gender", "accent", "region"]
+
+         key = 0
+         with open(meta_path, encoding="utf-8") as meta_file:
+             # Skip the header row of speaker-info.txt
+             _ = next(iter(meta_file))
+             for line in meta_file:
+                 data = {}
+                 line = line.strip()
+                 # A trailing parenthesised fragment, if present, is a free-form comment
+                 search = re.search(r"\(.*\)", line)
+                 if search is None:
+                     data["comment"] = ""
+                 else:
+                     start, _ = search.span()
+                     data["comment"] = line[start:]
+                     line = line[:start]
+                 values = line.split()
+                 for i, field in enumerate(fields):
+                     if field == "region":
+                         data[field] = " ".join(values[i:])
+                     else:
+                         data[field] = values[i] if i < len(values) else ""
+                 speaker_id = data["speaker_id"]
+                 speaker_txt_path = os.path.join(txt_root, speaker_id)
+                 speaker_wav_path = os.path.join(wav_root, speaker_id)
+                 # NOTE: p315 does not have text
+                 if not os.path.exists(speaker_txt_path):
+                     continue
+                 for txt_file in sorted(os.listdir(speaker_txt_path)):
+                     filename, _ = os.path.splitext(txt_file)
+                     _, text_id = filename.split("_")
+                     for i in [1, 2]:
+                         wav_file = os.path.join(speaker_wav_path, f"{filename}_mic{i}.flac")
+                         # NOTE: p280 does not have mic2 files
+                         if not os.path.exists(wav_file):
+                             continue
+                         with open(os.path.join(speaker_txt_path, txt_file), encoding="utf-8") as text_file:
+                             text = text_file.readline().strip()
+                         more_data = {
+                             "file": wav_file,
+                             "audio": wav_file,
+                             "text": text,
+                             "text_id": text_id,
+                         }
+                         yield key, {**data, **more_data}
+                         key += 1
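
For completeness, a rough sketch of exercising this loader script locally. Recent `datasets` releases require `trust_remote_code=True` for script-based datasets, and the newest releases may no longer support them at all, so treat this as illustrative only.

```python
from datasets import load_dataset

# Point load_dataset at the script above; "main" is the only configuration it defines.
ds = load_dataset("./vctk.py", "main", split="train", trust_remote_code=True)
print(ds)
```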