Update README.md
README.md
# AE29H_float32
The **Audio Embeddings ~29 hours** dataset contains **precomputed audio embeddings** designed for the **[NanoWakeWord](https://github.com/arcosoph/nanowakeword)** framework. The embeddings are intended to be used as **general-purpose negative training data**, meaning the audio does **not contain the target wake word or phrase**.
Unlike raw audio datasets, the files in this dataset contain **low-dimensional audio embeddings** extracted from audio clips using a pre-trained [speech embedding](https://www.kaggle.com/models/google/speech-embedding) model. These embeddings can be directly used as input features when training wake-word detection models with NanoWakeWord.
The goal of this dataset is to provide **diverse background audio representations** (speech, environmental noise, music, etc.) that help wake-word models learn to avoid false activations.
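Since this section does not specify the on-disk file layout, the sketch below uses a synthetic NumPy array as a stand-in for one batch of AE29H_float32 features; it only illustrates the shape, dtype, and all-zero labels you would expect when using the embeddings as general-purpose negative data:

```python
import numpy as np

# Synthetic stand-in for a batch of AE29H_float32 samples (assumption:
# real files hold float32 arrays shaped (num_samples, 16, 96)).
num_samples, temporal_steps, embedding_size = 4, 16, 96
batch = np.random.rand(num_samples, temporal_steps, embedding_size).astype(np.float32)

assert batch.dtype == np.float32   # matches the _float32 suffix in the dataset name
assert batch.shape[1:] == (temporal_steps, embedding_size)

# As negative training data, every sample is labeled 0 ("not the wake word").
labels = np.zeros(num_samples, dtype=np.float32)
print(batch.shape, labels.shape)   # (4, 16, 96) (4,)
```

The actual loading call depends on how the dataset files are packaged; only the array shape and dtype above are taken from this README.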
# Dataset Source
The embeddings were generated from a subset of the **[ACAV100M](https://acav100m.github.io/)** dataset.
ACAV100M is a large-scale, automatically curated audio-visual dataset built from millions of internet videos and designed for large-scale audio-visual learning. It contains diverse real-world audio such as speech, environmental sounds, music, and background noise.
For this dataset:

**21,115**
* **Feature dimensions:**
  * **Temporal steps:** 16
  * **Embedding size:** 96
Each sample represents approximately **1.28 seconds of audio**, where each temporal step corresponds to **~80 ms**.
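The per-sample timing is simple arithmetic and can be checked directly (the ~80 ms step duration is taken from the text above, not read from any dataset file):

```python
temporal_steps = 16   # steps per sample, as listed above
step_ms = 80          # approximate duration of one temporal step

clip_seconds = temporal_steps * step_ms / 1000
print(clip_seconds)   # -> 1.28 seconds of audio per sample
```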
---