
MISP-QEKS: A Large-Scale Tri-Modal Benchmark for Query-by-Example Keyword Spotting

Official dataset release for:

MISP-QEKS: A Large-Scale Dataset with Multimodal Cues for Query-by-Example Keyword Spotting
ACM MM 2025, Dublin, Ireland
DOI: https://doi.org/10.1145/3746027.3758268


Overview

MISP-QEKS is the first large-scale tri-modal benchmark for open-vocabulary Query-by-Example Keyword Spotting (QEKS).

Unlike traditional keyword spotting datasets that:

  • focus on fixed keyword sets
  • rely on clean audio-only recordings
  • lack OOV evaluation

MISP-QEKS provides:

  • Fully aligned Text–Audio–Visual keyword clips
  • Real-world noise simulation
  • In-Vocabulary (IV) and Out-of-Vocabulary (OOV) evaluation splits
  • 610,000 enrollment–query pairs
  • 9,830+ distinct keywords

This dataset enables robust, multimodal, open-vocabulary keyword spotting research under realistic acoustic conditions.


Task Definition

Tri-modal QEKS Framework

As shown in the figure, given:

  • An enrollment example (text/audio/video)
  • A query clip (audio/video)

The system predicts:

  • Whether both samples contain the same keyword
  • A probability score

This supports:

  • Open-vocabulary keyword spotting
  • Cross-modal matching
  • Robust detection under noise
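The decision step above can be sketched as a similarity threshold over enrollment and query embeddings. The snippet below is an illustrative stand-in, not the XEQ-Matcher baseline; the function names and the cosine-similarity choice are assumptions:

```python
import numpy as np

def qeks_score(enroll_emb: np.ndarray, query_emb: np.ndarray) -> float:
    """Cosine similarity mapped to [0, 1] as a match-probability proxy."""
    cos = float(enroll_emb @ query_emb /
                (np.linalg.norm(enroll_emb) * np.linalg.norm(query_emb) + 1e-8))
    return (cos + 1.0) / 2.0

def qeks_decision(enroll_emb: np.ndarray, query_emb: np.ndarray,
                  threshold: float = 0.5):
    """Return (same-keyword decision, probability score)."""
    score = qeks_score(enroll_emb, query_emb)
    return score >= threshold, score
```

In the actual tri-modal setting, the embeddings would come from modality-specific encoders (text, audio, or video) projected into a shared space.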

Dataset Construction Pipeline

Construction Pipeline

As shown in the figure, MISP-QEKS is constructed from sentence-level audio-visual-text data via:

  1. Phone-level forced alignment
  2. Word-level cropping
  3. Real-world noise simulation
  4. Enrollment–query pair construction

This pipeline yields large-scale, synchronized multimodal keyword samples suitable for open-vocabulary QEKS research.
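Step 4 (enrollment–query pair construction) with the dataset's 1:4 positive-to-negative ratio could look like the following sketch. The helper name and sampling strategy are illustrative assumptions, not the official construction code:

```python
import random

def build_pairs(clips_by_keyword, num_positive, neg_per_pos=4, seed=0):
    """Sample enrollment-query pairs: positives share a keyword,
    negatives pair clips of different keywords (1:neg_per_pos ratio)."""
    rng = random.Random(seed)
    # Positives need at least two clips of the same keyword.
    keywords = [k for k, clips in clips_by_keyword.items() if len(clips) >= 2]
    pairs = []
    for _ in range(num_positive):
        kw = rng.choice(keywords)
        enroll, query = rng.sample(clips_by_keyword[kw], 2)
        pairs.append((enroll, query, 1))
        for _ in range(neg_per_pos):
            other = rng.choice([k for k in clips_by_keyword if k != kw])
            pairs.append((enroll, rng.choice(clips_by_keyword[other]), 0))
    return pairs
```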

Dataset Statistics

  • Total duration: 193.606 hours
  • Keywords: 9,830
  • Enrollment–Query pairs: 610,000
    • 122,000 positive
    • 488,000 negative
  • Positive:Negative ratio = 1:4

Data Splits

| Split      | Duration (h) | Keywords | Pairs   | Positive | Negative |
|------------|--------------|----------|---------|----------|----------|
| Train      | 157.756      | 8,357    | 500,000 | 100,000  | 400,000  |
| Dev        | 3.245        | 2,247    | 10,000  | 2,000    | 8,000    |
| Eval-seen  | 15.300       | 2,174    | 50,000  | 10,000   | 40,000   |
| Eval-blind | 17.305       | 1,445    | 50,000  | 10,000   | 40,000   |

Evaluation protocol:

  • Eval-seen → In-Vocabulary (IV)
  • Eval-blind → Out-of-Vocabulary (OOV)
  • Speaker-independent split
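The IV/OOV distinction above amounts to partitioning evaluation keywords by whether they occur in the training vocabulary. A minimal sketch (function name assumed):

```python
def split_iv_oov(train_keywords, eval_keywords):
    """Partition evaluation keywords into In-Vocabulary (seen in training)
    and Out-of-Vocabulary (unseen) subsets."""
    train_set = set(train_keywords)
    iv = sorted(k for k in eval_keywords if k in train_set)
    oov = sorted(k for k in eval_keywords if k not in train_set)
    return iv, oov
```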

Keyword Frequency Distribution

Keyword Frequency Histogram

The keyword frequency distribution demonstrates:

  • Strong coverage across high-frequency and mid-frequency words.
  • Long-tail behavior suitable for evaluating generalization.

This design supports robust training while preserving realistic lexical imbalance.


Noise Characteristics and Quality Distribution

To emulate realistic acoustic environments, clean clips are mixed with real-world background noise at SNR levels {+5, 0, −5, −10} dB.
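Mixing at a target SNR follows the standard power-scaling recipe: scale the noise so that the speech-to-noise power ratio matches the requested level, then add. This is a generic sketch, not the dataset's official simulation code:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix noise into speech at the requested SNR (in dB)."""
    # Tile or crop the noise to the speech length.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # Solve for the noise gain that yields the target power ratio.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```

Applying this at each level in {+5, 0, −5, −10} dB reproduces the range of conditions described above.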

Speech Quality Distribution

PESQ and STOI Distribution

  • Most PESQ scores lie between 1.5 and 2.5.
  • Most STOI values fall between 65% and 85%.

This confirms that the dataset spans a broad spectrum of realistic noise conditions.


Repository Structure and File Description

The repository contains the following files:

Data Archives

  • train.zip
    Contains the training split (157.756 hours, 500,000 enrollment–query pairs across 8,357 keywords).

  • dev_seen.zip
    Development split for hyperparameter tuning on In-Vocabulary (IV) keywords.
    Keywords partially overlap with the training set.

  • dev_unseen.zip
    Development split for Out-of-Vocabulary (OOV) validation.
    Keywords do not appear in the training set.

  • eval_seen.zip
    In-Vocabulary (IV) evaluation split.
    Keywords appear in the training set and are used for standard evaluation.

  • eval_unseen.zip
    Out-of-Vocabulary (OOV) evaluation split.
    Keywords are not seen during training and are used to assess generalization.


Noise and Metadata

  • noise.zip
    Real-world background noise recordings used for acoustic simulation.

  • snr_map.zip
    Mapping file indicating the signal-to-noise ratio (SNR) assigned to each noisy sample.
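Assuming the archives listed above have been downloaded locally, a minimal extraction helper might look like this (the directory layout and helper name are hypothetical):

```python
import zipfile
from pathlib import Path

# Archive names as released in this repository.
ARCHIVES = ["train.zip", "dev_seen.zip", "dev_unseen.zip",
            "eval_seen.zip", "eval_unseen.zip", "noise.zip", "snr_map.zip"]

def extract_all(archive_dir: str, out_dir: str) -> None:
    """Extract each present archive into its own subdirectory of out_dir."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for name in ARCHIVES:
        path = Path(archive_dir) / name
        if path.exists():
            with zipfile.ZipFile(path) as zf:
                zf.extractall(out / path.stem)
```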


Baseline Checkpoint

  • train/model/

    Contains the official 10-epoch checkpoint of the XEQ-Matcher baseline described in the ACM MM 2025 paper.

    This checkpoint can be used directly for evaluation or reproduction of reported results.

    Official implementation: https://github.com/coalboss/MISP-QEKS


Pretrained Feature Extractors

  • model/

    Contains pretrained feature extraction models required by the baseline system.

    These models are used as frozen encoders for:

    • Audio feature extraction (e.g., Whisper-Tiny encoder)
    • Visual feature extraction (CNN-ResNet backbone)
    • Text processing (G2P model)

    These components are necessary to reproduce the reported baseline performance.
