---
license: cc-by-4.0
language:
- en
tags:
- video
- multimodal
- audio
- audio-visual-localization
size_categories:
- 10K<n<100K
pretty_name: AVATAR
---
# AVATAR: What’s Making That Sound Right Now? Video-centric Audio-Visual Localization
**AVATAR** stands for **A**udio-**V**isual localiz**A**tion benchmark for a spatio-**T**empor**A**l pe**R**spective in video.
AVATAR is a **benchmark dataset** designed to evaluate **video-centric audio-visual localization (AVL)** in **complex and dynamic real-world scenarios**.
Unlike previous benchmarks that rely on static image-level annotations and assume simplified conditions, AVATAR offers **high-resolution temporal annotations** over entire videos. It supports four challenging evaluation settings:
**Single-sound**, **Mixed-sound**, **Multi-entity**, and **Off-screen**.
📄 [Paper (ICCV 2025)](https://arxiv.org/abs/2507.04667)
🌐 [Project Website](https://hahyeon610.github.io/Video-centric_Audio_Visual_Localization/)
📁 [Code & Data Viewer](https://huggingface.co/datasets/mipal/AVATAR/tree/main)
---
## 📦 Dataset Structure
The dataset consists of the following files:
| File | Description |
|------|-------------|
| `video.zip` | ~3.8GB of `.mp4` video clips |
| `metadata.zip` | ~1.6GB of annotations (bounding boxes, segmentation masks, scenario tags) |
| `vggsound_10k.txt` | List of 10,000 training video IDs from [VGGSound](https://huggingface.co/datasets/Loie/VGGSound)|
| `code/` | AVATAR benchmark evaluation code |
Each annotated frame includes:
- Visual bounding boxes and segmentation masks for sound-emitting objects
- Audio-visual category labels aligned to the active sound source at each timestamp
- Instance-level scenario labels (e.g., Off-screen, Mixed-sound)
---
## 📊 Dataset Statistics
The tables below summarize AVATAR's scale and scenario distribution.
| Type | Count |
|------------|--------|
| Videos | 5,000 |
| Frames | 24,266 |
| Off-screen | 670 |

| Scenario Type | Instances |
|-----------------|-----------|
| Total | 28,516 |
| Single-sound | 15,372 |
| Multi-entity | 9,322 |
| Mixed-sound | 3,822 |
---
## 🧪 Scenarios and Tasks
AVATAR supports **fine-grained scenario-wise evaluation** of AVL models:
1. **Single-sound**: One sound-emitting instance per frame
2. **Mixed-sound**: Multiple overlapping sound sources (same or different categories)
3. **Multi-entity**: One sounding instance among multiple visually similar ones
4. **Off-screen**: No visible sound source within the frame
🔍 You can evaluate your model using:
- **Consensus IoU (CIoU)**
- **AUC**
- **Pixel-level TN% (for Off-screen)**
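The official evaluation protocol ships with the benchmark code in `code/`. As a rough illustration of the common AVL conventions behind these metrics, the sketch below computes a per-frame consensus IoU, an AUC over the CIoU success-rate curve, and a pixel-level true-negative rate for Off-screen frames. The binarization threshold and sweep granularity here are assumptions, not the official settings.

```python
import numpy as np

def ciou(pred_map, gt_mask, thresh=0.5):
    """Consensus IoU for one frame: binarize the predicted heatmap at
    `thresh`, then compute IoU against the ground-truth mask."""
    pred = pred_map >= thresh
    gt = gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union > 0 else 0.0

def auc(cious, steps=21):
    """Area under the success-rate curve: the fraction of frames whose
    CIoU exceeds tau, integrated over tau in [0, 1]."""
    taus = np.linspace(0.0, 1.0, steps)
    rates = [(np.asarray(cious) >= t).mean() for t in taus]
    area = 0.0
    for i in range(len(taus) - 1):  # trapezoidal integration
        area += 0.5 * (rates[i] + rates[i + 1]) * (taus[i + 1] - taus[i])
    return float(area)

def pixel_tn(pred_map, thresh=0.5):
    """Pixel-level TN% for Off-screen frames: the share of pixels the
    model correctly leaves unactivated when no source is visible."""
    return float((pred_map < thresh).mean())
```

For Off-screen frames there is no ground-truth mask, so `pixel_tn` rewards predictions that stay silent everywhere; see the bundled evaluation code for the exact thresholds used in the paper.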
---
## 🧩 Audio-Visual Category Diversity
AVATAR spans **80 audio-visual categories** covering a wide range of everyday domains, including:
- **Human activities** (e.g., talking, singing)
- **Music performances** (e.g., violin, drum, piano)
- **Animal sounds** (e.g., dog barking, bird chirping)
- **Vehicles** (e.g., car engine, helicopter)
- **Tools and machines** (e.g., chainsaw, blender)
Such diversity enables a **comprehensive evaluation** of model generalizability across varied audio-visual contexts.
---
## 📝 Example Metadata Format
```json
{
  "video_id": str,
  "frame_number": int,
  "annotations": [
    {  // instance 1 (e.g., man)
      "segmentation": [  // list of (x, y) annotation points
        [float, float],
        ...
      ],
      "bbox": [float, float, float, float],  // (l, t, w, h)
      "scenario": str,  // "Single-sound", "Mixed-sound", "Multi-entity", "Off-screen"
      "audio_visual_category": str
    },
    {  // instance 2 (e.g., piano)
      ...
    },
    ...
  ]
}
```
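A minimal sketch of reading a record in this schema, assuming one annotation file per frame; the field values below are illustrative placeholders, not taken from the actual dataset:

```python
import json

# An example record following the schema above (illustrative values only).
record = json.loads("""
{
  "video_id": "abc123",
  "frame_number": 42,
  "annotations": [
    {"bbox": [10.0, 20.0, 100.0, 50.0],
     "scenario": "Single-sound",
     "audio_visual_category": "playing piano"}
  ]
}
""")

def bbox_to_corners(bbox):
    """Convert an (l, t, w, h) box to (x1, y1, x2, y2) corners."""
    l, t, w, h = bbox
    return (l, t, l + w, t + h)

for ann in record["annotations"]:
    corners = bbox_to_corners(ann["bbox"])
    print(ann["scenario"], ann["audio_visual_category"], corners)
```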
---
## 📚 Citation
```bibtex
@InProceedings{Choi_2025_ICCV,
author = {Choi, Hahyeon and Lee, Junhoo and Kwak, Nojun},
title = {What's Making That Sound Right Now? Video-centric Audio-Visual Localization},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2025},
pages = {20095-20104}
}
```