Dataset preview (first rows of shard 000000.tar.gz; every row shares the same __url__, hf://datasets/einstein8612/mlc-mlsw-melspects@8e47f687e677f17a234b09652c74b98c80355060/000000.tar.gz; the pt column holds the zip-serialized PyTorch tensor bytes, truncated here):

| __key__ | cls | json | pt |
|---|---|---|---|
| 00000 | 15,717 | {"md5": "f5160820a3ec696a57a750db630ef66e", "pretty_cls": "heflin"} | PK… (tensor bytes) |
| 00001 | 13,400 | {"md5": "6997c1ae754c127d7d40bbbcf680533e", "pretty_cls": "forsyth"} | PK… (tensor bytes) |
| 00002 | 4,080 | {"md5": "826be1495914eb091f106c30b7f3b7a6", "pretty_cls": "boston"} | PK… (tensor bytes) |
| 00003 | 28,927 | {"md5": "8ebdf6de0b20a046f4950c32b9aa8ca8", "pretty_cls": "rigging"} | PK… (tensor bytes) |
| 00004 | 4,447 | {"md5": "61af3139ff9396a8020285aa358f1f63", "pretty_cls": "brkan"} | PK… (tensor bytes) |
| 00005 | 34,411 | {"md5": "bff0533abb593c3ea949e60ba53ee434", "pretty_cls": "thrombosis"} | PK… (tensor bytes) |
| 00006 | 29,868 | {"md5": "8a025675a6630808486c2da7772296ff", "pretty_cls": "scarred"} | PK… (tensor bytes) |
| 00007 | 36,104 | {"md5": "ab0912d4a0727bf92f03b7d19dcadb96", "pretty_cls": "usurper"} | PK… (tensor bytes) |
| 00008 | 668 | {"md5": "3a8f62ba0fb9971dc92a3481ad3124f0", "pretty_cls": "agnes"} | PK… (tensor bytes) |
| 00009 | 36,290 | {"md5": "eec75093b5e74057dc7278e663bb2853", "pretty_cls": "veda"} | PK… (tensor bytes) |
MLCommons Multilingual Spoken Words Mel-Spectrograms
This dataset contains all English words from the MLCommons Multilingual Spoken Words dataset (also available on Hugging Face). The audio files have been processed into Mel spectrograms for downstream use in DCNNs or similar models.
Dataset description
There are a total of 6,624,343 Mel-spectrogram samples covering 38,150 different words; cls is the index of the word in alphabetical order. Every entry in the dataset consists of the label (cls), the Mel spectrogram as a pickled PyTorch tensor in byte form (pt), and a metadata JSON (json) holding the MD5 sum of the tensor bytes and the actual word (pretty_cls). Each spectrogram has shape 128x32 (mel bins x time frames).
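The MD5 in the metadata can be used to sanity-check a pt payload before unpickling it. A minimal sketch using only the standard library; the byte string below is a stand-in, not a real sample, and the field names mirror the columns described above:

```python
import hashlib

# Stand-in for one dataset entry: `pt_bytes` plays the role of the pickled
# tensor bytes, `meta` the accompanying metadata JSON.
pt_bytes = b"PK\x03\x04 stand-in payload"
meta = {"md5": hashlib.md5(pt_bytes).hexdigest(), "pretty_cls": "heflin"}

def payload_is_intact(pt: bytes, meta: dict) -> bool:
    # Compare the MD5 of the raw tensor bytes against the metadata checksum.
    return hashlib.md5(pt).hexdigest() == meta["md5"]

print(payload_is_intact(pt_bytes, meta))          # True
print(payload_is_intact(b"corrupted", meta))      # False
```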
You can display the Mel spectrogram of a word with librosa.
Example
import io

import librosa
import matplotlib.pyplot as plt
import torch
import webdataset as wds

sr = 16000
shuffle_buffer = 1000

def bytes_to_tensor(x):
    return torch.load(io.BytesIO(x))

# urls: the shard URL(s), e.g. the hf:// path of 000000.tar.gz shown above
dataset = wds.WebDataset(urls).shuffle(shuffle_buffer).map_dict(pt=bytes_to_tensor, cls=int).to_tuple("pt", "cls")
spectrogram, label = next(iter(dataset))

librosa.display.specshow(spectrogram.numpy(), sr=sr, hop_length=512, y_axis='mel', fmax=8000, x_axis='time')
plt.title(f"Mel Spectrogram (y={label})")
plt.colorbar(format='%+2.0f dB')
plt.show()
Usage
As the dataset is in the WebDataset format, you can use it as-is like above, or through a PyTorch DataLoader (recommended).
Example (PyTorch dataloader)
import io

import torch
import webdataset as wds

def bytes_to_tensor(x):
    return torch.load(io.BytesIO(x))

shuffle_buffer = 1000
# urls: the shard URL(s), as in the example above
dataset = wds.WebDataset(urls).shuffle(shuffle_buffer).map_dict(pt=bytes_to_tensor, cls=int).to_tuple("pt", "cls")
loader = torch.utils.data.DataLoader(dataset, batch_size=256, num_workers=1)

for inputs, labels in loader:
    print(inputs.shape)  # torch.Size([256, 128, 32])
Process behind dataset
The spectrograms were generated with torchlibrosa, using the following relevant code.
import torch
import torch.nn as nn
import torchlibrosa as tl

sr = 16000

class PowerToDB(nn.Module):
    def __init__(self, amin=1e-10, top_db=80.0):
        super(PowerToDB, self).__init__()
        self.amin = torch.tensor(amin)
        self.top_db = top_db

    def forward(self, S: torch.Tensor):
        refs = torch.amax(S, dim=(2, 3))
        log_spec = 10.0 * torch.log10(self.amin.maximum(S)) - (10.0 * torch.log10(self.amin.maximum(refs))).unsqueeze(2).unsqueeze(3)
        log_spec = torch.maximum(log_spec, log_spec.max() - self.top_db)
        log_spec = log_spec[:, 0, :, :]  # Remove channel dimension, always 1
        log_spec = log_spec.transpose(1, 2)  # Swap mel bin and time axes
        return log_spec

feature_extractor = torch.nn.Sequential(
    tl.Spectrogram(
        pad_mode="constant",
        hop_length=512,
        win_length=None,
    ),
    tl.LogmelFilterBank(
        sr=sr,
        n_mels=128,
        is_log=False,
    ),
    PowerToDB(
        amin=1e-10,
        top_db=80.0,
    ),
)
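The math inside PowerToDB is the usual power-to-dB conversion: 10·log10(max(S, amin)), referenced to the spectrogram's maximum, then floored at top_db below the peak. A scalar sketch of the same formula in pure Python, for illustration only (not the batched tensor version above):

```python
import math

def power_to_db(values, amin=1e-10, top_db=80.0):
    # Reference is the maximum power in the spectrogram, as with ref=np.max.
    ref = max(max(v, amin) for v in values)
    db = [10.0 * math.log10(max(v, amin)) - 10.0 * math.log10(ref) for v in values]
    # Floor everything at top_db below the loudest bin.
    peak = max(db)
    return [max(d, peak - top_db) for d in db]

# 1.0 is the reference (0 dB), 0.1 is -10 dB, and 1e-20 gets floored at -80 dB.
print(power_to_db([1.0, 0.1, 1e-20]))
```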
This gives the same output as
import librosa
import numpy as np

sr = 16000
mel_spect = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512)
mel_spect = librosa.power_to_db(mel_spect, ref=np.max)
up to a (negligible) degree of numerical error from using PyTorch instead.
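The 128x32 shape follows from these parameters: 128 mel bins, and, assuming the roughly one-second clips of the source Spoken Words audio (an assumption here, not stated above), a centered STFT at sr = 16000 with hop_length = 512 yields 1 + 16000 // 512 frames. A quick check:

```python
sr = 16000
hop_length = 512
clip_seconds = 1  # assumed length of each source clip

# Centered STFT frame count: one frame per hop, plus the initial frame.
n_frames = 1 + (sr * clip_seconds) // hop_length
print(n_frames)  # 32
```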