---
license: apache-2.0
tags:
  - Embeddings
  - ACAV100M
  - AE29H_float32
  - nanowakeword
  - noise
---

# AE29H_float32

AE29H_float32 (Audio Embeddings, ~29 hours) contains precomputed audio embeddings designed for the NanoWakeWord framework. The embeddings are intended to be used as general-purpose negative training data, meaning the audio does not contain any target wake word or phrase.

Unlike raw audio datasets, the files in this dataset contain low-dimensional audio embeddings extracted from audio clips using a pre-trained speech embedding model. These embeddings can be directly used as input features when training wake-word detection models with NanoWakeWord.

The goal of this dataset is to provide diverse background audio representations (speech, environmental noise, music, etc.) that help wake-word models learn to avoid false activations.
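As a sketch of how such precomputed embeddings might be consumed, the snippet below uses a small zero-filled stand-in array with this dataset's per-sample layout; the real filename and file format are not specified here, so loading is left as a comment:

```python
import numpy as np

# Stand-in array with this dataset's per-sample layout. In practice you
# would load the published embeddings file instead, e.g. via np.load(...);
# the exact filename/format is an assumption, not documented here.
embeddings = np.zeros((4, 16, 96), dtype=np.float32)

# Each sample: 16 temporal steps (~80 ms each) of 96-dim embedding
# vectors, usable directly as model input features.
n_samples, steps, dims = embeddings.shape
print(n_samples, steps, dims)  # 4 16 96
```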


## Dataset Source

The embeddings were generated from a subset of the ACAV100M dataset.

ACAV100M is a very large automatically curated audio-visual dataset created from millions of internet videos and designed for large-scale audio-visual learning. It contains diverse real-world audio such as speech, environmental sounds, music, and background noise.

For this dataset:

  - A ~20K-clip subset (~2 days of audio) was drawn from ACAV100M.
  - Audio clips were processed and converted into embeddings suitable for wake-word training.

## Dataset Statistics

  - Shape: `(21115, 16, 96)`
  - Total samples: 21,115
  - Feature dimensions:
    - Temporal steps: 16
    - Embedding size: 96

Each sample represents approximately 1.28 seconds of audio, where each temporal step corresponds to ~80 ms.
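The per-sample timing can be checked directly from the stated figures: 16 temporal steps at ~80 ms each gives the ~1.28 s clip length.

```python
steps_per_sample = 16
step_ms = 80  # approximate duration of one temporal step, per the stats above

clip_seconds = steps_per_sample * step_ms / 1000
print(clip_seconds)  # 1.28
```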


## Data Type

dtype: `float32`

### Value range

  - min: -77.23914
  - max: 95.59355
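A quick sanity check along these lines could confirm the dtype and that values fall within the published range. A synthetic array stands in for the real dataset file, whose loading path is an assumption:

```python
import numpy as np

# Synthetic stand-in; in practice this would be the loaded dataset array.
data = np.random.uniform(-77.0, 95.0, size=(8, 16, 96)).astype(np.float32)

assert data.dtype == np.float32
assert data.min() >= -77.23914 - 1e-3  # published minimum
assert data.max() <= 95.59355 + 1e-3   # published maximum
print("dtype and range OK")
```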

## Intended Use

This dataset is intended for:

  - Training NanoWakeWord wake-word detection models
  - Providing negative training examples
  - Improving false-positive robustness
  - Training models that operate directly on audio embeddings
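One common way to use negative embeddings like these is to concatenate them with positive (wake-word) embeddings and label them 0 versus 1. The sketch below assumes hypothetical positive embeddings of the same `(N, 16, 96)` layout and uses random arrays as placeholders:

```python
import numpy as np

# Hypothetical positives from your own wake-word recordings; the
# negatives stand in for this dataset's embeddings.
positives = np.random.rand(10, 16, 96).astype(np.float32)
negatives = np.random.rand(40, 16, 96).astype(np.float32)

X = np.concatenate([positives, negatives])
y = np.concatenate([np.ones(len(positives)), np.zeros(len(negatives))])

# Shuffle features and labels together before training.
rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]
print(X.shape, y.shape)  # (50, 16, 96) (50,)
```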