---
license: cc-by-4.0
pretty_name: VoxLingua107
size_categories:
- 100M<n<1B
---

# VoxLingua107

VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.

VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours, although the actual amount per language varies considerably. There is also a separate development set containing 1609 speech segments from 33 languages, each validated by at least two volunteers as actually being in the given language.

For more information, see the paper [J&ouml;rgen Valk, Tanel Alum&auml;e. _VoxLingua107: a Dataset for Spoken Language Recognition_. Proc. SLT 2021](https://arxiv.org/abs/2011.12998).

### Why

VoxLingua107 can be used for training spoken language recognition models that work well on real-world, varied speech data.
You can try a demo system trained on this dataset [here](https://huggingface.co/speechbrain/lang-id-voxlingua107-ecapa).
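
For example, you can run the demo model on your own recording with SpeechBrain. The snippet below is a sketch based on SpeechBrain's generic `EncoderClassifier` interface (the audio file name and cache directory are placeholders); see the model card for the canonical usage.

```
# Sketch: spoken language identification with the demo model via
# SpeechBrain's EncoderClassifier; "sample.wav" and savedir are
# placeholders, see the model card for canonical usage.
from speechbrain.pretrained import EncoderClassifier

language_id = EncoderClassifier.from_hparams(
    source="speechbrain/lang-id-voxlingua107-ecapa",
    savedir="tmp_voxlingua107",
)

signal = language_id.load_audio("sample.wav")  # your own 16 kHz recording
out_prob, score, index, text_lab = language_id.classify_batch(signal)
print(text_lab)  # e.g. ['en']
```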

### How 

We extracted audio data from YouTube videos retrieved using language-specific search phrases (random phrases from the Wikipedia of each language).
If the language of the video title and description matched the language of the search phrase,
the audio in the video was deemed likely to be in that particular language. This allowed us to collect large amounts of somewhat noisy data relatively cheaply.
Speech/non-speech detection and speaker diarization were used to segment the videos into short sentence-like utterances.
A data-driven post-filtering step was applied to remove clips that were very different from the other clips in the same language's data, and thus likely not in the given language.
Due to the automatic data collection process, the dataset still contains clips that are not in the given language or contain non-speech (around 2% overall);
the proportion is higher for some languages (such as Welsh).
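
As a rough illustration (this is not the authors' actual pipeline), the title/description matching step can be sketched with an off-the-shelf text language identifier such as `langid`:

```
# Illustrative sketch only, not the actual collection pipeline:
# keep a video if both its title and description are identified as
# the language of the search phrase.
import langid  # generic off-the-shelf text language identifier

def keep_video(title: str, description: str, query_lang: str) -> bool:
    title_lang, _ = langid.classify(title)        # returns (lang, score)
    desc_lang, _ = langid.classify(description)
    return title_lang == query_lang and desc_lang == query_lang
```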


### License and copyright

The VoxLingua107 dataset is distributed under the Creative Commons Attribution 4.0 International License. The copyright remains with the original owners of the videos.

We also point out that the distribution of languages, accents, dialects, genders, races and societal factors in this dataset is not representative of the global population. Using this dataset for training and deploying models may thus introduce unintended biases.


### Notice and take down policy

Notice: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
- Send the request to [Tanel Alum&auml;e](mailto:tanel.alumae@taltech.ee).

Take down: We will comply with legitimate requests by removing the affected sources from the corpus.


### Languages and sizes

| Language Code | Language Name | Hours |
|---|---|---|
| ab | Abkhazian | 10 |
| af | Afrikaans | 108 |
| am | Amharic | 81 |
| ar | Arabic | 59 |
| as | Assamese | 155 |
| az | Azerbaijani | 58 |
| ba | Bashkir | 58 |
| be | Belarusian | 133 |
| bg | Bulgarian | 50 |
| bn | Bengali | 55 |
| bo | Tibetan | 101 |
| br | Breton | 44 |
| bs | Bosnian | 105 |
| ca | Catalan | 88 |
| ceb | Cebuano | 6 |
| cs | Czech | 67 |
| cy | Welsh | 76 |
| da | Danish | 28 |
| de | German | 39 |
| el | Greek | 66 |
| en | English | 49 |
| eo | Esperanto | 10 |
| es | Spanish | 39 |
| et | Estonian | 38 |
| eu | Basque | 29 |
| fa | Persian | 56 |
| fi | Finnish | 33 |
| fo | Faroese | 67 |
| fr | French | 67 |
| gl | Galician | 72 |
| gn | Guarani | 2 |
| gu | Gujarati | 46 |
| gv | Manx | 4 |
| ha | Hausa | 106 |
| haw | Hawaiian | 12 |
| hi | Hindi | 81 |
| hr | Croatian | 118 |
| ht | Haitian | 96 |
| hu | Hungarian | 73 |
| hy | Armenian | 69 |
| ia | Interlingua | 3 |
| id | Indonesian | 40 |
| is | Icelandic | 92 |
| it | Italian | 51 |
| iw | Hebrew | 96 |
| ja | Japanese | 56 |
| jw | Javanese | 53 |
| ka | Georgian | 98 |
| kk | Kazakh | 78 |
| km | Central Khmer | 41 |
| kn | Kannada | 46 |
| ko | Korean | 77 |
| la | Latin | 67 |
| lb | Luxembourgish | 75 |
| ln | Lingala | 90 |
| lo | Lao | 42 |
| lt | Lithuanian | 82 |
| lv | Latvian | 42 |
| mg | Malagasy | 109 |
| mi | Maori | 34 |
| mk | Macedonian | 112 |
| ml | Malayalam | 47 |
| mn | Mongolian | 71 |
| mr | Marathi | 85 |
| ms | Malay | 83 |
| mt | Maltese | 66 |
| my | Burmese | 41 |
| ne | Nepali | 72 |
| nl | Dutch | 40 |
| nn | Norwegian Nynorsk | 57 |
| no | Norwegian | 107 |
| oc | Occitan | 15 |
| pa | Panjabi | 54 |
| pl | Polish | 80 |
| ps | Pushto | 47 |
| pt | Portuguese | 64 |
| ro | Romanian | 65 |
| ru | Russian | 73 |
| sa | Sanskrit | 15 |
| sco | Scots | 3 |
| sd | Sindhi | 84 |
| si | Sinhala | 67 |
| sk | Slovak | 40 |
| sl | Slovenian | 121 |
| sn | Shona | 30 |
| so | Somali | 103 |
| sq | Albanian | 71 |
| sr | Serbian | 50 |
| su | Sundanese | 64 |
| sv | Swedish | 34 |
| sw | Swahili | 64 |
| ta | Tamil | 51 |
| te | Telugu | 77 |
| tg | Tajik | 64 |
| th | Thai | 61 |
| tk | Turkmen | 85 |
| tl | Tagalog | 93 |
| tr | Turkish | 59 |
| tt | Tatar | 103 |
| uk | Ukrainian | 52 |
| ur | Urdu | 42 |
| uz | Uzbek | 45 |
| vi | Vietnamese | 64 |
| war | Waray | 11 |
| yi | Yiddish | 46 |
| yo | Yoruba | 94 |
| zh | Mandarin Chinese | 44 |


### Usage

Although webdataset can be used in a streaming fashion, it is recommended to first make a local copy of the dataset using `git clone`:

    git lfs install
    git clone git@hf.co:datasets/TalTechNLP/voxlingua107_wds

Then you can use the `webdataset` Python library to create an iterable dataset from it:

    import glob
    import random

    import webdataset as wds

    train_files = glob.glob("voxlingua107_wds/train/**/*.tar", recursive=True)  # you can also limit the training data to selected languages
    dev_files = glob.glob("voxlingua107_wds/dev/*.tar")

    random.shuffle(train_files)

    def mapper(sample):
        # the "audio" field is a (waveform, sample_rate) tuple holding 16 kHz raw audio
        return {"audio": sample[0], "lang": sample[1]["lang"]}

    # Since each shard contains 500 samples from a single language, use a
    # reasonably large buffer size to get well-shuffled samples
    buffer_size = 100000
    dataset = wds.WebDataset(train_files, shardshuffle=2000).shuffle(buffer_size, initial=buffer_size).decode(wds.torch_audio).to_tuple("wav", "json").map(mapper)
    dev_dataset = wds.WebDataset(dev_files).decode(wds.torch_audio).to_tuple("wav", "json").map(mapper)
    
    train_iter = iter(dataset)
    print(next(train_iter))
    {'audio': (tensor([[-9.7656e-04, -8.5449e-04, -3.0518e-05,  ...,  2.7466e-03,
          3.7842e-03,  5.1880e-03]]), 16000), 'lang': 'kn'}
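
Since the result is a standard PyTorch iterable dataset, it can be fed to a `DataLoader`. The collate function below is a sketch rather than part of the dataset card; it zero-pads the variable-length waveforms in each batch, and the batch size is an arbitrary choice:

    import torch
    from torch.nn.utils.rnn import pad_sequence

    def collate(batch):
        # each sample: {"audio": (waveform, sample_rate), "lang": language code}
        waves = [s["audio"][0].squeeze(0) for s in batch]  # 1-D waveforms
        lengths = torch.tensor([w.shape[0] for w in waves])
        padded = pad_sequence(waves, batch_first=True)     # zero-pad to max length
        langs = [s["lang"] for s in batch]
        return padded, lengths, langs

    loader = torch.utils.data.DataLoader(dataset, batch_size=32, collate_fn=collate)
    waveforms, lengths, langs = next(iter(loader))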



### Citing

```
@inproceedings{valk2021slt,
  title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
  author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
  booktitle={Proc. IEEE SLT Workshop},
  year={2021},
}
```