Commit 950862d by DavidErikMollberg (0 parents)
.gitattributes ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
README.md ADDED
---
annotations_creators:
- crowdsourced
language:
- is
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: "Samrómur Icelandic Speech 1.0."
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- crowd-sourced icelandic
- "samrómur"
- icelandic speech
- samromur
- iceland
task_categories:
- automatic-speech-recognition
task_ids: []
---

# Dataset Card for samromur_asr

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description
- **Homepage:** [Samrómur 21.05]
- **Repository:** [OpenSLR](http://www.openslr.org/112/)
- **Paper:** [Samrómur: Crowd-sourcing Data Collection for Icelandic Speech Recognition](https://aclanthology.org/2020.lrec-1.425.pdf)
- **Point of Contact:** [Jón Guðnason](mailto:jg@ru.is)

### Dataset Summary
This is the first release of the Samrómur Icelandic Speech corpus, which contains 100,000 validated utterances.

The corpus is the result of a crowd-sourcing effort run by the Language and Voice Lab at Reykjavik University, in cooperation with Almannarómur, Center for Language Technology.

### Example Usage
The Samrómur corpus is divided into three splits: train, validation, and test. To load the whole dataset:
```python
from datasets import load_dataset
samromur_asr = load_dataset("language-and-voice-lab/samromur_asr")
```
To load a specific split (for example, the validation split), pass its name via the `split` argument:
```python
from datasets import load_dataset
samromur_asr = load_dataset("language-and-voice-lab/samromur_asr", split="validation")
```

### Supported Tasks
automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER).

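The word error rate is a word-level Levenshtein distance normalized by the reference length; toolkits such as `jiwer` compute it directly, but a minimal self-contained sketch looks like this (the hypothesis transcript below is invented for illustration):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# Reference taken from the data instance shown below; hypothesis is made up:
# one substitution + one deletion over four reference words -> 0.5.
print(word_error_rate("það skipti heldur engu", "það skiptir engu"))  # 0.5
```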
### Languages
The audio is in Icelandic.
The reading prompts were gathered from a variety of sources, mainly from the [Icelandic Gigaword Corpus](http://clarin.is/en/resources/gigaword). The corpus includes text from novels, news, plays, and a list of place names in Iceland. Prompts were also drawn from the [Icelandic Web of Science](https://www.visindavefur.is/).

## Dataset Structure

### Data Instances
```python
{
  'audio_id': '009123-0150695',
  'audio': {
    'path': '/home/david/.cache/HuggingFace/datasets/downloads/extracted/cb428a7f1e46b058d76641ef32f36b49d28b73aea38509983170495408035a10/dev/009123/009123-0150695.flac',
    'array': array([0., 0., 0., ..., 0., 0., 0.], dtype=float32),
    'sampling_rate': 16000
  },
  'speaker_id': '009123',
  'gender': 'female',
  'age': '18-19',
  'duration': 3.299999952316284,
  'normalized_text': 'það skipti heldur engu'
}
```

### Data Fields
* `audio_id` (string) - ID of the audio segment.
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of an audio file inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - ID of the speaker.
* `gender` (string) - gender of the speaker (male or female).
* `age` (string) - age range of the speaker.
* `duration` (float32) - duration of the audio file in seconds.
* `normalized_text` (string) - normalized transcription of the audio segment.

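The `duration` field is consistent with the decoded audio: duration ≈ len(array) / sampling_rate. A small sketch with synthetic values (the real array and sampling rate come from the `audio` field of an instance like the one above):

```python
# Hypothetical instance mirroring the fields above; the array is synthetic.
instance = {
    "audio": {"array": [0.0] * 52800, "sampling_rate": 16000},
    "duration": 3.3,
}

# 52800 samples at 16 kHz -> 3.3 seconds.
computed = len(instance["audio"]["array"]) / instance["audio"]["sampling_rate"]
print(round(computed, 1))  # 3.3
```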

### Data Splits
The corpus is split into train, validation, and test subsets with no speaker overlap. Each subset contains folders that correspond to speaker IDs, and the audio files inside use the naming convention {speaker_ID}-{utterance_ID}.flac. The lengths of the portions are: train = 114h34m, test = 15h51m, validation = 15h16m.

To load a specific portion, please see the "Example Usage" section above.

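Given the naming convention above, the speaker ID can be recovered directly from a file name; a minimal sketch (the helper name is ours, not part of the dataset):

```python
from pathlib import Path

def speaker_from_filename(path: str) -> str:
    # File names follow {speaker_ID}-{utterance_ID}.flac, so the speaker ID
    # is the stem's part before the first hyphen.
    return Path(path).stem.split("-", 1)[0]

print(speaker_from_filename("009123/009123-0150695.flac"))  # 009123
```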

## Dataset Creation

### Curation Rationale

* Recording started in October 2019 and continues to this day (May 2021).

* This release was authorized in May 2021.

* The aim is to create an open-source speech corpus to enable research and development for Icelandic language technology.

* The corpus contains audio recordings and a metadata file with the prompts the participants read.

* A Kaldi-based recipe using this data can be found on the Language and Voice Lab GitHub page: https://github.com/cadia-lvl/samromur-asr

### Source Data

#### Initial Data Collection and Normalization

* The utterances were recorded with a smartphone or the web app.

* The data was collected through the website https://samromur.is, the code for which is available at https://github.com/cadia-lvl/samromur.

* Each recording contains one sentence read from a script.

* The script contains 85,080 unique sentences and 90,838 unique tokens.

### Annotations

#### Annotation process

Prompts were pulled from these corpora if they met the criteria of containing only letters present in the Icelandic alphabet and being listed in the [DIM: Database of Icelandic Morphology](https://aclanthology.org/W19-6116.pdf).

There are also synthesised prompts consisting of a name followed by a question or a command, in order to simulate a dialogue with a smart device.

#### Who are the annotators?
The content of the audio files was manually verified against the prompts by one or more listeners (mainly summer students).

### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voice. By using this dataset, you agree not to attempt to determine the identity of the speakers.


## Considerations for Using the Data

### Social Impact of Dataset
This contribution describes an ongoing speech data collection project using the web application Samrómur, which is built upon Common Voice, the Mozilla Foundation's web platform for open-source voice collection. The goal of the project is to build a large-scale speech corpus for Automatic Speech Recognition (ASR) for Icelandic. Upon completion, Samrómur will be the largest open speech corpus for Icelandic collected from the public domain.

### Discussion of Biases

* The participants are aged between 18 and 90. Of the recordings, 59,782 are from female speakers and 40,218 from male speakers, recorded with a smartphone or the web app.

* Participants self-reported their age group, gender, and native language.

* The corpus contains 100,000 utterances from 8,392 speakers, totalling 145 hours.

### Other Known Limitations
"Samromur 21.05" by the Language and Voice Laboratory (LVL) at Reykjavik University is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


## Additional Information

### Dataset Curators

The corpus is the result of a crowd-sourcing effort run by the Language and Voice Lab at Reykjavik University, in cooperation with Almannarómur, Center for Language Technology.

### Licensing Information
[CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)

### Citation Information
```
@inproceedings{mollberg-etal-2020-samromur,
    title = "{S}amr{\'o}mur: Crowd-sourcing Data Collection for {I}celandic Speech Recognition",
    author = "Mollberg, David Erik  and
      J{\'o}nsson, {\'O}lafur Helgi  and
      {\TH}orsteinsd{\'o}ttir, Sunneva  and
      Steingr{\'\i}msson, Stein{\th}{\'o}r  and
      Magn{\'u}sd{\'o}ttir, Eyd{\'\i}s Huld  and
      Gudnason, Jon",
    booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2020.lrec-1.425",
    pages = "3463--3467",
    language = "English",
    ISBN = "979-10-95546-34-4",
}
```

### Contributions
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.

The verification of the dataset was funded by the Icelandic Directorate of Labour's Student Summer Job Program.

Special thanks to the summer students for all their hard work.
scripts/parallel_convert.sh ADDED
#!/bin/bash

# Convert the .flac files of one split to 16 kHz mono .wav in parallel.
# Run this script from the directory that contains the split folders
# (e.g. train/); converted files are written to e.g. train/wav_files/.
the_set=train

OUTPUT_DIR="${the_set}/wav_files"
mkdir -p "$OUTPUT_DIR"
# Exported so the child bash processes spawned by xargs can see it.
export OUTPUT_DIR

# Number of parallel jobs. Use a fixed number, or $(nproc) for one job
# per CPU core.
PARALLEL_JOBS=8

echo "Converting files in parallel using up to $PARALLEL_JOBS jobs..."

# Find all .flac files and pipe them to xargs for parallel processing.
# The -P option sets the maximum number of parallel processes. The file
# path is passed to bash as a positional argument ($1) rather than being
# substituted into the command string, which avoids quoting problems with
# unusual file names.
find "${the_set}/train_part_01" -type f -name "*.flac" | \
    xargs -P "$PARALLEL_JOBS" -I {} bash -c '
        flac_file="$1"
        # Base filename without the .flac extension.
        base_filename=$(basename "$flac_file" .flac)
        output_wav_file="$OUTPUT_DIR/${base_filename}.wav"

        # Run the ffmpeg conversion: mono (-ac 1), 16 kHz (-ar 16000);
        # -y overwrites the output file if it exists.
        ffmpeg -y -hide_banner -loglevel quiet -i "$flac_file" -ac 1 -ar 16000 "$output_wav_file"
    ' _ {}

echo "All conversions are complete. Check the '$OUTPUT_DIR' directory."
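To sanity-check the conversion, the `wave` module from the Python standard library can read a .wav header. A small sketch (it writes a dummy 16 kHz mono file first so it is self-contained; in practice you would open a file from the `wav_files` directory instead):

```python
import os
import tempfile
import wave

# Write a dummy file with the parameters the ffmpeg command targets.
path = os.path.join(tempfile.gettempdir(), "samromur_check.wav")
with wave.open(path, "wb") as w:
    w.setnchannels(1)      # mono, matching ffmpeg's -ac 1
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(16000)  # matching ffmpeg's -ar 16000
    w.writeframes(b"\x00\x00" * 160)

# Read the header back and verify channel count and sample rate.
with wave.open(path, "rb") as w:
    print(w.getnchannels(), w.getframerate())  # 1 16000
```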
scripts/tmp_prep.ipynb ADDED
The diff for this file is too large to render. See raw diff