angus924 committed · verified
Commit f39721f · 1 Parent(s): a38a432

Update README.md

Files changed (1):
1. README.md +18 -0
README.md CHANGED
```diff
@@ -1,9 +1,27 @@
 ---
 tags:
+- time series
+- time series classification
+- monster
 - audio
+license: mit
+pretty_name: AudioMNIST-DS
+size_categories:
+- 10K<n<100K
 ---
 Part of MONSTER: <https://arxiv.org/abs/2502.15122>.
 
+|AudioMNIST-DS||
+|-|-:|
+|Category|Audio|
+|Num. Examples|30,000|
+|Num. Channels|1|
+|Length|4,000|
+|Sampling Freq.|~4 kHz|
+|Num. Classes|10|
+|License|[MIT](https://opensource.org/license/mit)|
+|Citations|[1] [2]|
+
 ***AudioMNIST*** consists of audio recordings of 60 different speakers saying the digits 0 to 9, with 50 recordings per digit per speaker [1, 2]. The processed dataset contains 30,000 (univariate) time series, each of length 47,998 (approximately 1 second of data sampled at 44 kHz), with ten classes representing the digits 0 to 9. This version of the dataset has been split into cross-validation folds based on speaker (i.e., such that recordings for a given speaker do not appear in both the training and validation sets). ***AudioMNIST-DS*** is a variant of the same dataset downsampled to a length of 4,000.
 
 [1] Sören Becker, Johanna Vielhaben, Marcel Ackermann, Klaus-Robert Müller, Sebastian Lapuschkin, and Wojciech Samek. (2024). AudioMNIST: Exploring explainable artificial intelligence for audio analysis on a simple benchmark. *Journal of the Franklin Institute*, 361(1):418–428.
```
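
The downsampling step described above (full-length recordings of 47,998 samples reduced to 4,000 samples, i.e., roughly 44 kHz down to ~4 kHz) could be sketched as follows. This is a hypothetical illustration using simple linear interpolation; the resampling method actually used to produce AudioMNIST-DS is not specified here.

```python
import numpy as np

def downsample(x: np.ndarray, target_len: int = 4000) -> np.ndarray:
    """Resample a 1-D series to target_len points via linear interpolation.

    Hypothetical sketch only: the actual MONSTER preprocessing (e.g., an
    anti-aliased polyphase resampler) may differ.
    """
    old_grid = np.linspace(0.0, 1.0, num=len(x))
    new_grid = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(new_grid, old_grid, x)

# One full-length AudioMNIST recording: 47,998 samples (~1 s at 44.1 kHz).
recording = np.random.default_rng(0).standard_normal(47998)
ds_series = downsample(recording)
print(ds_series.shape)  # (4000,)
```

Applied per recording, this maps the 30,000 × 47,998 dataset to the 30,000 × 4,000 DS variant while keeping labels and the speaker-based fold assignments unchanged.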