---
license: apache-2.0
tags:
- Embeddings
- ACAV100M
- AE29H_float32
- nanowakeword
- noise
---

# AE29H_float32

The **Audio Embeddings ~29 hours (AE29H)** dataset contains **precomputed audio embeddings** for the **[NanoWakeWord](https://github.com/arcosoph/nanowakeword)** framework. The embeddings are intended as **general-purpose negative training data**, meaning the audio does **not contain any target wake word or phrase**.

Unlike raw audio datasets, the files in this dataset contain **low-dimensional audio embeddings** extracted from audio clips using a pre-trained [speech embedding](https://www.kaggle.com/models/google/speech-embedding) model. These embeddings can be directly used as input features when training wake-word detection models with NanoWakeWord.

The goal of this dataset is to provide **diverse background audio representations** (speech, environmental noise, music, etc.) that help wake-word models learn to avoid false activations.

---

# Dataset Source

The embeddings were generated from a subset of the **[ACAV100M](https://acav100m.github.io/)** dataset.

ACAV100M is a very large automatically curated audio-visual dataset created from millions of internet videos and designed for large-scale audio-visual learning. It contains diverse real-world audio such as speech, environmental sounds, music, and background noise.

For this dataset:

* A **20K subset (~2 days of audio)** was drawn from ACAV100M.
* The audio clips were processed and converted into embeddings suitable for wake-word training.

---

# Dataset Statistics

* **Shape:**

  ```
  (21115, 16, 96)
  ```

* **Total samples:**
  **21,115**

* **Feature dimensions:**

  * **Temporal steps:** 16
  * **Embedding size:** 96

Each sample represents approximately **1.28 seconds of audio**, where each temporal step corresponds to **~80 ms**.
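The statistics above can be sanity-checked in a few lines. This is a minimal sketch: instead of downloading the actual file (whose name is not specified here), it builds a synthetic array with the documented shape and dtype, then derives the per-sample duration from the 16 temporal steps at ~80 ms each.

```python
import numpy as np

# Synthetic stand-in for the dataset array; the real file would be
# loaded with np.load(...) under whatever name the dataset ships.
embeddings = np.zeros((21115, 16, 96), dtype=np.float32)

n_samples, n_steps, emb_size = embeddings.shape
step_ms = 80                          # each temporal step covers ~80 ms
duration_s = n_steps * step_ms / 1000 # 16 steps * 80 ms = 1.28 s per sample

print(n_samples, n_steps, emb_size, duration_s)
```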

---

# Data Type

```
dtype: float32
```

**Value range**

```
min: -77.23914
max: 95.59355
```

---

# Intended Use

This dataset is intended for:

* Training **NanoWakeWord wake-word detection models**
* Providing **negative training examples**
* Improving **false-positive robustness**
* Training models that operate directly on **audio embeddings**
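As a sketch of the negative-example use case, the snippet below combines this dataset's embeddings with positive (wake-word) embeddings produced elsewhere. The arrays `negatives` and `positives` are hypothetical stand-ins with the documented `(N, 16, 96)` shape; negatives get label 0, positives label 1, as a typical wake-word classifier would expect.

```python
import numpy as np

# Hypothetical stand-ins: `negatives` plays the role of this dataset,
# `positives` of wake-word embeddings generated by the user.
rng = np.random.default_rng(0)
negatives = rng.standard_normal((1000, 16, 96)).astype(np.float32)
positives = rng.standard_normal((200, 16, 96)).astype(np.float32)

# Label 1 = wake word present, 0 = background/negative.
X = np.concatenate([positives, negatives], axis=0)
y = np.concatenate([np.ones(len(positives)),
                    np.zeros(len(negatives))]).astype(np.float32)

# Shuffle so training batches mix positive and negative samples.
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]
```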