---
license: mit
task_categories:
  - audio-classification
  - feature-extraction
tags:
  - spatial-audio
  - audio-encoder-training
  - room-acoustics
  - 3d-audio
  - binaural-sim
  - trajectography
language:
  - en
pretty_name: Spatial Audio Encoder Training Dataset (SAET)
size_categories:
  - 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: train
        path: metadata.jsonl
---

# Spatial Audio Encoder Training Dataset (SAET)

A high-fidelity synthetic dataset designed for training audio encoders to perceive and reason about 3D soundscapes. The dataset maps binaural/stereo audio cues to precise spatial trajectories and semantic labels.

## 🎧 Dataset Summary

This dataset contains 10-second stereo scenes (44.1 kHz) synthesized in a virtual 3D room. Each scene features 1–3 moving sound sources with ground-truth trajectory metadata sampled at 10 Hz.

## 📊 Dataset Generation Progress (Current State)

| Stage | Description | Progress | Details |
|-------|-------------|----------|---------|
| 1. Extraction | Mono event extraction from AudioSet-Strong | ✅ Complete | 224 events extracted from 70/216 segments. |
| 2. Synthesis | 3D spatial scene synthesis (target: 10k) | 🔄 ~75% | 7,500+ scenes generated. |
| 3. Reasoning | Q&A pair generation | ⏳ Pending | High-level reasoning tasks (7 categories). |

πŸ“ Spatial Metadata Specification

Each audio sample is accompanied by a dense JSON metadata file (in data/scene_metadata/) and a summary entry in metadata.jsonl.

### Coordinate System

- Origin: Bottom-left-front corner of the room, $[0, 0, 0]$.
- Room Dimensions: $10\,m \times 8\,m \times 3\,m$ (Length $\times$ Width $\times$ Height).
- Listener (Mic) Position: Fixed at $[5.0, 2.0, 1.6]$ (centered along the room's length, at a typical ear height of 1.6 m).
- Azimuth: $0^\circ$ is directly in front (+Y), $+90^\circ$ is right (+X), $-90^\circ$ is left (-X). Range: $[-180^\circ, 180^\circ]$.
- Distance: Euclidean distance from the microphone center, in meters.
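
Under this convention, azimuth and distance follow directly from a source position. The sketch below (illustrative helper names, not part of the dataset tooling) derives both relative to the fixed listener position from the specification:

```python
import math

LISTENER = (5.0, 2.0, 1.6)  # fixed mic position from the spec

def azimuth_distance(src):
    """Azimuth in degrees (0 = +Y front, +90 = +X right, range [-180, 180])
    and Euclidean distance in meters, relative to the listener."""
    dx = src[0] - LISTENER[0]
    dy = src[1] - LISTENER[1]
    dz = src[2] - LISTENER[2]
    # atan2(x, y), not (y, x): zero azimuth points along +Y
    azimuth = math.degrees(math.atan2(dx, dy))
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    return azimuth, distance

# A source 2 m directly to the right of the listener:
az, d = azimuth_distance((7.0, 2.0, 1.6))
# az is approximately 90.0 degrees, d is 2.0 m
```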

### Motion Dynamics

Sources follow one of five deterministic motion profiles:

- Static: Source remains at a fixed 3D point.
- Approach: Source moves linearly towards the listener.
- Recede: Source moves linearly away from the listener.
- Lateral: Source moves across the field of view (e.g., left to right).
- Arc: Source moves in a circular path around the listener, maintaining a relatively constant distance while its azimuth shifts.
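
To illustrate the trajectory format, this sketch samples an Arc profile at the 10 Hz metadata rate (the helper and its parameters are illustrative, not the dataset's actual generation code):

```python
import math

LISTENER = (5.0, 2.0, 1.6)  # fixed mic position from the spec

def arc_trajectory(radius, start_deg, end_deg, duration=10.0, rate=10):
    """Sample an arc around the listener in the horizontal plane:
    constant distance, azimuth sweeping linearly (0 deg = +Y, +90 = +X)."""
    n = int(duration * rate)  # 10 s at 10 Hz -> 100 trajectory samples
    points = []
    for i in range(n):
        az = math.radians(start_deg + (end_deg - start_deg) * i / (n - 1))
        x = LISTENER[0] + radius * math.sin(az)
        y = LISTENER[1] + radius * math.cos(az)
        points.append((x, y, LISTENER[2]))
    return points

# Left-to-right sweep at a constant 3 m distance:
traj = arc_trajectory(radius=3.0, start_deg=-90, end_deg=90)
# 100 samples; every point stays 3 m from the listener
```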

## 🧠 Reasoning Q&A Pairs (Stage 3)

A subset of scenes includes 7 question-answer pairs generated by an LLM (DeepSeek-R1-Distill-Qwen-7B) focusing on:

  1. Lateral Trajectory: Directional changes (Left-to-Right, Right-to-Left).
  2. Radial Change: Distance shifts (Approaching, Receding).
  3. Comparative: Which source is closer/farther?
  4. Temporal: Entry/Exit timings (Early, Middle, Late).
  5. Relative Motion: Inter-source spatial relationships.
  6. Natural Perception: Qualitative descriptions of sound movement.
  7. Choreography: Overall spatial pattern recognition.

## 🔊 Audio Simulation Details

- Engine: PyRoomAcoustics (Image Source Method).
- Reverberation: Second-order reflections simulated with a frequency-independent absorption coefficient of $0.25$.
- Source Events: 224 high-variety mono events extracted from 70/216 AudioSet-Strong segments, rigorously filtered for quality (duration $\geq 3.0$ s, CLAP semantic similarity score $\geq 0.45$).
- Format: 2-channel stereo, 16-bit PCM, 44.1 kHz.
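
These parameters imply a fairly live room. As a back-of-the-envelope check (an illustration, not part of the dataset pipeline), the classic Sabine formula $RT_{60} = 0.161\,V / (aS)$ estimates the reverberation time from the room geometry and the absorption coefficient above:

```python
# Sabine reverberation estimate for the 10 x 8 x 3 m room from the spec
L, W, H = 10.0, 8.0, 3.0
absorption = 0.25  # frequency-independent coefficient from the spec

volume = L * W * H                              # 240 m^3
surface = 2 * (L * W + L * H + W * H)           # 268 m^2 of boundary
rt60 = 0.161 * volume / (absorption * surface)  # roughly 0.58 s
```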

## 🛠️ Data Columns (`metadata.jsonl`)

| Column | Type | Description |
|--------|------|-------------|
| `audio` | Audio | Path to the stereo `.wav` file. |
| `scene_id` | int | Unique ID matching the filename. |
| `labels` | list | Semantic classes (e.g., Crowd, Siren, Engine). |
| `num_events` | int | Number of simultaneous sources in the scene. |
| `motion_types` | list | Motion profile for each source. |
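
A minimal sketch of parsing one `metadata.jsonl` record with the standard library (the sample line below is made up for the example; real field values will differ):

```python
import json

# One illustrative metadata.jsonl line (values invented for the example)
sample_line = json.dumps({
    "audio": "data/audio/scene_00042.wav",
    "scene_id": 42,
    "labels": ["Siren", "Engine"],
    "num_events": 2,
    "motion_types": ["approach", "static"],
})

record = json.loads(sample_line)
# one motion profile per simultaneous source
assert record["num_events"] == len(record["motion_types"])
print(record["scene_id"], record["labels"])
```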

## 🎯 Use Cases

  1. Spatial Audio Embedding: Training models like CLAP or Wav2Vec to create embeddings that cluster by spatial location or motion type.
  2. Trajectory Inference: Predicting the azimuth/distance change of a source over time.
  3. Source Separation: Decoupling multiple spatialized streams in a reverberant environment.

**Reference:** This dataset follows the methodology of "Spatial Audio Question Answering and Reasoning on Dynamic Source Movements" (2024).