Update README.md
---
tags:
- audio
- monster
- time series
license: mit
size_categories:
- 10K<n<100K
task_categories:
- text-classification
pretty_name: AudioMNIST
---
Part of MONSTER: <https://arxiv.org/abs/2502.15122>.

|AudioMNIST||
|-|-:|
|Category|Audio|
|Num. Examples|30,000|
|Num. Channels|1|
|Length|47,998|
|Sampling Freq.|44.1 kHz|
|Num. Classes|10|
|License|[MIT](https://opensource.org/license/mit)|
|Citations|[1] [2]|

***AudioMNIST*** consists of audio recordings of 60 different speakers saying the digits 0 to 9, with 50 recordings per digit per speaker [1, 2]. The processed dataset contains 30,000 (univariate) time series, each of length 47,998 (approximately 1 second of audio sampled at 44.1 kHz), with ten classes representing the digits 0 to 9. This version of the dataset has been split into cross-validation folds by speaker, such that recordings from a given speaker do not appear in both the training and validation sets. ***AudioMNIST-DS*** is a variant of the same dataset downsampled to a length of 4,000.
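The speaker-disjoint folds and the downsampling to length 4,000 can be sketched as follows. This is an illustrative assumption, not the authors' exact procedure: the helper names are made up, the round-robin speaker-to-fold assignment is one simple way to obtain speaker-disjoint splits, and linear interpolation stands in for whatever resampling method was actually used for AudioMNIST-DS.

```python
import numpy as np

def speaker_disjoint_folds(speaker_ids, n_folds=5, seed=0):
    """Assign each recording to a fold via its speaker, so that no
    speaker appears in both the training and validation split of a fold."""
    speakers = np.unique(speaker_ids)
    rng = np.random.default_rng(seed)
    rng.shuffle(speakers)
    # Round-robin: each speaker (and all of their recordings) gets one fold.
    fold_of_speaker = {s: i % n_folds for i, s in enumerate(speakers)}
    return np.array([fold_of_speaker[s] for s in speaker_ids])

def downsample(x, target_len=4000):
    """Linearly resample a 1-D series to target_len samples
    (a stand-in for the unspecified AudioMNIST-DS resampling)."""
    old_grid = np.linspace(0.0, 1.0, num=len(x))
    new_grid = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(new_grid, old_grid, x)

# Toy example: 6 speakers with 10 recordings each, series of length 47,998.
speaker_ids = np.repeat(np.arange(6), 10)
folds = speaker_disjoint_folds(speaker_ids, n_folds=3)
x = np.sin(np.linspace(0.0, 100.0, 47998))
x_ds = downsample(x)
```

Training on folds `!= k` and validating on fold `== k` then never shares a speaker across the two sets.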

[1] Sören Becker, Johanna Vielhaben, Marcel Ackermann, Klaus-Robert Müller, Sebastian Lapuschkin, and Wojciech Samek. (2024). AudioMNIST: Exploring explainable artificial intelligence for audio analysis on a simple benchmark. *Journal of the Franklin Institute*, 361(1):418–428.