Note: this is an exact copy of google/speech_commands adapted to be usable with recent 🤗 Datasets versions (no remote code).

Dataset Card for SpeechCommands

Dataset Summary

This is a set of one-second .wav audio files, each containing a single spoken English word or background noise. These words are from a small set of commands and are spoken by a variety of different speakers. This dataset is designed to help train simple machine learning models. It is covered in more detail at https://arxiv.org/abs/1804.03209.

Version 0.01 of the dataset (configuration "v0.01") was released on August 3rd, 2017 and contains 64,727 audio files.

Version 0.02 of the dataset (configuration "v0.02") was released on April 11th, 2018 and contains 105,829 audio files.

Supported Tasks and Leaderboards

  • keyword-spotting: the dataset can be used to train and evaluate keyword spotting systems. The task is to detect preregistered keywords by classifying utterances into a predefined set of words. The task is usually performed on-device for fast response times; thus, accuracy, model size, and inference time are all crucial.

Languages

The language data in SpeechCommands is in English (BCP-47 en).

Dataset Structure

Data Instances

Example of a core word ("label" is a word, "is_unknown" is False):

{
  "file": "no/7846fd85_nohash_0.wav", 
  "audio": {
    "path": "no/7846fd85_nohash_0.wav", 
    "array": array([ -0.00021362, -0.00027466, -0.00036621, ...,  0.00079346,
          0.00091553,  0.00079346]), 
    "sampling_rate": 16000
    },
  "label": 1,  # "no"
  "is_unknown": False,
  "speaker_id": "7846fd85",
  "utterance_id": 0
}

Example of an auxiliary word ("label" is a word, "is_unknown" is True):

{
  "file": "tree/8b775397_nohash_0.wav", 
  "audio": {
    "path": "tree/8b775397_nohash_0.wav", 
    "array": array([ -0.00854492, -0.01339722, -0.02026367, ...,  0.00274658,
          0.00335693,  0.0005188]), 
    "sampling_rate": 16000
    },
  "label": 28,  # "tree"
  "is_unknown": True,
  "speaker_id": "1b88bf70",
  "utterance_id": 0
}

Example of background noise (_silence_) class:

{
  "file": "_silence_/doing_the_dishes.wav", 
  "audio": {
    "path": "_silence_/doing_the_dishes.wav", 
    "array": array([ 0.        ,  0.        ,  0.        , ..., -0.00592041,
         -0.00405884, -0.00253296]), 
    "sampling_rate": 16000
    }, 
  "label": 30,  # "_silence_"
  "is_unknown": False,
  "speaker_id": "None",
  "utterance_id": 0  # doesn't make sense here
}

Data Fields

  • file: relative audio filename inside the original archive.
  • audio: a dictionary containing the relative audio filename, the decoded audio array, and the sampling rate. Note that when you access the audio column, e.g. dataset[0]["audio"], the audio is automatically decoded and resampled to dataset.features["audio"].sampling_rate. Decoding and resampling a large number of audio files can take a significant amount of time, so always query the sample index before the "audio" column: dataset[0]["audio"] should be preferred over dataset["audio"][0]. A short loading sketch follows this list.
  • label: the word pronounced in the audio sample, or the background noise (_silence_) class. Note that it is an integer value corresponding to the class name.
  • is_unknown: whether the word is auxiliary. False if the word is a core word or _silence_, True if it is an auxiliary word.
  • speaker_id: unique id of a speaker. None if the label is _silence_.
  • utterance_id: incremental id of a word utterance within the same speaker.
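
A minimal sketch of loading the dataset and decoding one sample with the 🤗 Datasets library (the repository id beeneptune/speech_commands is assumed from this page):

from datasets import load_dataset

# Load the v0.02 configuration (use "v0.01" for the earlier release).
dataset = load_dataset("beeneptune/speech_commands", "v0.02", split="train")

# Query the sample index first, then the "audio" column, so that only
# this one file is decoded and resampled.
sample = dataset[0]["audio"]
print(sample["sampling_rate"])  # 16000
print(sample["array"][:5])      # decoded waveform as a float array

# Map the integer label back to its class name.
print(dataset.features["label"].int2str(dataset[0]["label"]))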

Data Splits

The dataset has two versions (= configurations): "v0.01" and "v0.02". "v0.02" contains more words (see the Source Data section for details).

        train   validation   test
v0.01   51093   6799         3081
v0.02   84848   9982         4890

Note that in the train and validation splits, the examples of the _silence_ class are longer than 1 second. You can use the following function to sample random 1-second slices from these longer clips:

def sample_noise(example):
    # Extract a random 1-second slice of a longer _silence_ utterance,
    # e.g. inside `torch.utils.data.Dataset.__getitem__()`.
    from random import randint

    audio = example["audio"]
    sampling_rate = audio["sampling_rate"]
    if len(audio["array"]) > sampling_rate:  # only _silence_ clips are longer than 1 second
        random_offset = randint(0, len(audio["array"]) - sampling_rate - 1)
        audio["array"] = audio["array"][random_offset : random_offset + sampling_rate]

    return example
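
Applied eagerly with 🤗 Datasets this fixes one random slice per example; for a fresh slice every epoch, call the function inside your data loader's __getitem__ instead. A sketch of the eager variant (dataset is assumed to be loaded as in the example above):

noisy_dataset = dataset.map(sample_noise)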

Dataset Creation

Curation Rationale

The primary goal of the dataset is to provide a way to build and test small models that can detect a single word from a set of target words and differentiate it from background noise or unrelated speech with as few false positives as possible.

Source Data

Initial Data Collection and Normalization

The audio files were collected using crowdsourcing; see aiyprojects.withgoogle.com/open_speech_recording for some of the open source audio collection code that was used. The goal was to gather examples of people speaking single-word commands rather than conversational sentences, so speakers were prompted for individual words over the course of a five-minute session.

In version 0.01, thirty different words were recorded: "Yes", "No", "Up", "Down", "Left", "Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine", "Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow".

In version 0.02 more words were added: "Backward", "Forward", "Follow", "Learn", "Visual".

In both versions, ten of these words are used as commands by convention: "Yes", "No", "Up", "Down", "Left", "Right", "On", "Off", "Stop", "Go". The other words are considered auxiliary (in the current implementation this is marked by a True value of the "is_unknown" feature). Their function is to teach a model to distinguish core words from unrecognized ones; a one-line filter on this flag is sketched below.
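
For illustration, a sketch of dropping the auxiliary words with 🤗 Datasets (dataset is assumed to be loaded as in the example above; note that _silence_ also has is_unknown == False, so it is kept):

# Keep core words and _silence_; drop auxiliary words.
core_and_silence = dataset.filter(lambda example: not example["is_unknown"])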

The _silence_ label contains a set of longer audio clips that are either recordings or a mathematical simulation of noise.

Who are the source language producers?

The audio files were collected using crowdsourcing.

Annotations

Annotation process

Labels come from the list of words prepared in advance. Speakers were prompted for individual words over the course of a five-minute session.

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

The dataset consists of recordings from people who have donated their voices online. You agree not to attempt to determine the identity of speakers in this dataset.

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

Creative Commons Attribution 4.0 License (CC-BY-4.0, https://creativecommons.org/licenses/by/4.0/legalcode).

Citation Information

@article{speechcommandsv2,
   author = { {Warden}, P.},
    title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
  journal = {ArXiv e-prints},
  archivePrefix = "arXiv",
  eprint = {1804.03209},
  primaryClass = "cs.CL",
  keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
    year = 2018,
    month = apr,
    url = {https://arxiv.org/abs/1804.03209},
}

Contributions

Thanks to @polinaeterna for adding this dataset.
