---
license: cc-by-4.0
language:
  - en
tags:
  - video
  - multimodal
  - audio
  - audio-visual-localization
size_categories:
  - 1B<n<10B
pretty_name: AVATAR
---

# AVATAR: What’s Making That Sound Right Now? Video-centric Audio-Visual Localization

AVATAR stands for Audio-Visual localizAtion benchmark for a spatio-TemporAl peRspective in video.

AVATAR is a benchmark dataset designed to evaluate video-centric audio-visual localization (AVL) in complex and dynamic real-world scenarios.
Unlike previous benchmarks that rely on static image-level annotations and assume simplified conditions, AVATAR offers high-resolution temporal annotations over entire videos. It supports four challenging evaluation settings:
Single-sound, Mixed-sound, Multi-entity, and Off-screen.

- 📄 Paper (ICCV 2025)
- 🌐 Project Website
- 📁 Code & Data Viewer


## 📦 Dataset Structure

The dataset consists of the following files:

| File | Description |
|------|-------------|
| `video.zip` | ~3.8 GB of `.mp4` video clips |
| `metadata.zip` | ~1.6 GB of annotations (bounding boxes, segmentation masks, scenario tags) |
| `vggsound_10k.txt` | List of 10,000 training video IDs from VGGSound |
| `code/` | AVATAR benchmark evaluation code |
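
If the files are hosted on the Hugging Face Hub, they can be fetched programmatically. A minimal sketch, assuming the repo ID is `hahyeon610/AVATAR` (adjust to the actual repository):

```python
# Minimal download-and-extract sketch. The repo ID below is an assumption;
# point it at the actual AVATAR dataset repository if it differs.
import zipfile
from huggingface_hub import hf_hub_download

for archive in ("video.zip", "metadata.zip"):
    local_path = hf_hub_download(
        repo_id="hahyeon610/AVATAR",  # assumed repo ID
        filename=archive,
        repo_type="dataset",
    )
    with zipfile.ZipFile(local_path) as zf:
        zf.extractall("AVATAR/" + archive.removesuffix(".zip"))
```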

Each annotated frame includes the following (a short visualization sketch follows this list):

- Visual bounding boxes and segmentation masks for sound-emitting objects
- Audio-visual category labels aligned to the active sound source at each timestamp
- Instance-level scenario labels (e.g., Off-screen, Mixed-sound)
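
To illustrate how these annotations map onto frames, here is a hedged sketch that draws one frame's bounding boxes. The on-disk paths and file naming are assumptions; the annotation keys follow the example metadata format documented below.

```python
# Hedged visualization sketch. Paths and file naming are assumptions; the
# annotation keys follow the example metadata format shown later in this card.
import json
import cv2  # pip install opencv-python

record = json.load(open("AVATAR/metadata/example.json"))  # placeholder file
cap = cv2.VideoCapture(f"AVATAR/video/{record['video_id']}.mp4")  # assumed layout
cap.set(cv2.CAP_PROP_POS_FRAMES, record["frame_number"])  # seek to the annotated frame
ok, frame = cap.read()
for inst in record["annotations"]:
    l, t, w, h = map(int, inst["bbox"])  # (left, top, width, height)
    cv2.rectangle(frame, (l, t), (l + w, t + h), (0, 255, 0), 2)
cv2.imwrite("frame_with_boxes.png", frame)
```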

## 📊 Dataset Statistics

The tables below summarize AVATAR's scale and diversity.

| Type | Count |
|------|-------|
| Videos | 5,000 |
| Frames | 24,266 |
| Off-screen | 670 |

| Scenario Type | Instances |
|---------------|-----------|
| Total | 28,516 |
| Single-sound | 15,372 |
| Multi-entity | 9,322 |
| Mixed-sound | 3,822 |

## 🧪 Scenarios and Tasks

AVATAR supports fine-grained scenario-wise evaluation of AVL models:

1. Single-sound: One sound-emitting instance per frame
2. Mixed-sound: Multiple overlapping sound sources (same or different categories)
3. Multi-entity: One sounding instance among multiple visually similar ones
4. Off-screen: No visible sound source within the frame

🔍 You can evaluate your model using the following metrics (a rough computational sketch follows this list):

- Consensus IoU (CIoU)
- AUC
- Pixel-level TN% (for Off-screen)
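
The authoritative metric implementations ship with the benchmark under `code/`. As a rough, non-authoritative sketch of how CIoU and AUC are typically computed in the AVL literature (the 0.5 success threshold and the 21-point threshold grid are assumptions, not the benchmark's official settings):

```python
# Hedged metric sketch; NOT the official AVATAR evaluation (see code/).
# The 0.5 success threshold and 21-point threshold grid are assumptions.
import numpy as np

def frame_iou(pred_map: np.ndarray, gt_mask: np.ndarray, thr: float = 0.5) -> float:
    """IoU between a binarized localization map and a binary GT mask."""
    pred = pred_map >= thr
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum() / union) if union else 0.0

def ciou_and_auc(per_frame_ious: np.ndarray) -> tuple[float, float]:
    """CIoU: fraction of frames with IoU above 0.5.
    AUC: area under the success-rate curve over IoU thresholds in [0, 1]."""
    thresholds = np.linspace(0.0, 1.0, 21)
    success = np.array([(per_frame_ious >= t).mean() for t in thresholds])
    ciou = float((per_frame_ious >= 0.5).mean())
    step = thresholds[1] - thresholds[0]
    auc = float(np.sum((success[:-1] + success[1:]) / 2.0) * step)  # trapezoid rule
    return ciou, auc
```

For Off-screen frames, pixel-level TN% would instead measure the fraction of pixels correctly left unactivated when no source is visible; consult the bundled code for the exact definition.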

## 🧩 Audio-Visual Category Diversity

AVATAR spans 80 audio-visual categories covering a wide range of everyday domains, including:

- Human activities (e.g., talking, singing)
- Music performances (e.g., violin, drum, piano)
- Animal sounds (e.g., dog barking, bird chirping)
- Vehicles (e.g., car engine, helicopter)
- Tools and machines (e.g., chainsaw, blender)

Such diversity enables a comprehensive evaluation of model generalizability across varied audio-visual contexts.


## 📝 Example Metadata Format

```jsonc
{
  "video_id": str,
  "frame_number": int,
  "annotations": [
    { // instance 1 (e.g., man)
      "segmentation": [ // (x, y) annotated RLE format
        [float, float],
        ...
      ],
      "bbox": [float, float, float, float], // (l, t, w, h)
      "scenario": str, // "Single-sound", "Mixed-sound", "Multi-entity", "Off-screen"
      "audio_visual_category": str
    },
    { // instance 2 (e.g., piano)
      ...
    },
    ...
  ]
}
```
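
A minimal parsing sketch under an assumed layout (one JSON record per annotated frame extracted from `metadata.zip`; treat the glob pattern as a placeholder):

```python
# Hedged parsing sketch. The directory layout is an assumption; consult
# metadata.zip and the bundled code/ for the actual structure.
import json
from pathlib import Path
from collections import Counter

scenario_counts = Counter()
for ann_file in Path("AVATAR/metadata").glob("**/*.json"):  # assumed layout
    record = json.loads(ann_file.read_text())
    for inst in record["annotations"]:
        scenario_counts[inst["scenario"]] += 1
print(scenario_counts)  # per-scenario instance totals
```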

## 📚 Citation

```bibtex
@InProceedings{Choi_2025_ICCV,
    author    = {Choi, Hahyeon and Lee, Junhoo and Kwak, Nojun},
    title     = {What's Making That Sound Right Now? Video-centric Audio-Visual Localization},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {20095-20104}
}
```