---
license: cc-by-4.0
language:
- en
tags:
- video
- multimodal
- audio
- audio-visual-localization
size_categories:
- 1B<n<10B
pretty_name: AVATAR
---

# AVATAR: What’s Making That Sound Right Now? Video-centric Audio-Visual Localization

**AVATAR** stands for **A**udio-**V**isual localiz**A**tion benchmark for a spatio-**T**empor**A**l pe**R**spective in video.

AVATAR is a **benchmark dataset** designed to evaluate **video-centric audio-visual localization (AVL)** in **complex and dynamic real-world scenarios**.
Unlike previous benchmarks, which rely on static image-level annotations and assume simplified conditions, AVATAR provides **high-resolution temporal annotations** over entire videos. It supports four challenging evaluation settings:
**Single-sound**, **Mixed-sound**, **Multi-entity**, and **Off-screen**.

📄 [Paper (ICCV 2025)](https://hahyeon610.github.io/Video-centric_Audio_Visual_Localization/)
🌐 [Project Website](https://hahyeon610.github.io/Video-centric_Audio_Visual_Localization/)
📁 [Code & Data Viewer](https://huggingface.co/datasets/mipal/AVATAR/tree/main)

---

## 📦 Dataset Structure

The dataset consists of the following files:

| File | Description |
|------|-------------|
| `video.zip` | ~3.8 GB of `.mp4` video clips |
| `metadata.zip` | ~1.6 GB of annotations (bounding boxes, segmentation masks, scenario tags) |
| `vggsound_10k.txt` | List of 10,000 training video IDs from VGGSound |
| `code/` | AVATAR benchmark evaluation code |

Each annotated frame includes:
- Visual bounding boxes and segmentation masks for sound-emitting objects
- Audio-visual category labels aligned to the active sound source at each timestamp
- Instance-level scenario labels (e.g., Off-screen, Mixed-sound)

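As a quick-start sketch, the two archives in the table above can be unpacked with the Python standard library. The `extract_archives` helper and its output layout (`video/`, `metadata/` next to the zips) are illustrative assumptions, not part of the official tooling:

```python
import zipfile
from pathlib import Path


def extract_archives(root, names=("video.zip", "metadata.zip")):
    """Unpack the dataset archives found under `root`.

    Each `name.zip` is extracted into a sibling folder named after its
    stem (e.g. `video.zip` -> `root/video/`). Returns the output dirs.
    """
    root = Path(root)
    out_dirs = []
    for name in names:
        target = root / Path(name).stem
        with zipfile.ZipFile(root / name) as zf:
            zf.extractall(target)
        out_dirs.append(target)
    return out_dirs
```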
---

## 🧪 Scenarios and Tasks

AVATAR supports **fine-grained, scenario-wise evaluation** of AVL models:

1. **Single-sound**: One sound-emitting instance per frame
2. **Mixed-sound**: Multiple overlapping sound sources (same or different categories)
3. **Multi-entity**: One sounding instance among multiple visually similar ones
4. **Off-screen**: No visible sound source within the frame

🔍 You can evaluate your model using:
- **Consensus IoU (CIoU)**
- **AUC**
- **Pixel-level TN% (for Off-screen)**

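The authoritative metric definitions live in the benchmark code under `code/`. As a rough sketch of the usual AVL conventions (success rate at an IoU threshold for CIoU, area under the success-vs-threshold curve for AUC, and the fraction of correctly inactive pixels for the Off-screen TN%), one might compute, with these helper names chosen here for illustration:

```python
import numpy as np


def mask_iou(pred, gt):
    """IoU between two boolean masks of the same shape."""
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(pred, gt).sum() / union)


def ciou_and_auc(ious, threshold=0.5, n_steps=21):
    """Per-dataset scores from a list of per-sample IoU values.

    Returns (success rate at `threshold`, area under the
    success-vs-threshold curve sampled on a uniform grid).
    """
    ious = np.asarray(ious, dtype=float)
    thresholds = np.linspace(0.0, 1.0, n_steps)
    success = np.array([(ious >= t).mean() for t in thresholds])
    ciou = float((ious >= threshold).mean())
    auc = float(success.mean())  # uniform grid: mean approximates the area
    return ciou, auc


def offscreen_tn(pred):
    """Pixel-level TN%: fraction of pixels correctly left inactive
    on Off-screen frames, where no sound source is visible."""
    pred = np.asarray(pred, dtype=bool)
    return float(1.0 - pred.mean())
```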
---

## 📋 Sample Instance (metadata)
```json
{
  "video_id": str,
  "frame_number": int,
  "annotations": [
    { // instance 1 (e.g., man)
      "segmentation": [ // (x, y) coordinates, annotated in RLE format
        [float, float],
        ...
      ],
      "bbox": [float, float, float, float], // (l, t, w, h)
      "scenario": str, // "Single-Sound", "Mixed-Sound", "Multi-Entity", "Off-Screen"
      "audio_visual_category": str
    },
    { // instance 2 (e.g., piano)
      ...
    },
    ...
  ]
}
```
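As an illustrative sketch, one metadata record could be parsed as follows; the `load_frame_annotations` helper and the one-record-per-file layout are assumptions based on the schema above, not part of the release:

```python
import json


def load_frame_annotations(path):
    """Parse one metadata record shaped like the schema above.

    Returns a list of (scenario, category, bbox) tuples, with each
    bbox converted from (left, top, width, height) to corner form
    (x1, y1, x2, y2).
    """
    with open(path) as f:
        record = json.load(f)
    out = []
    for ann in record["annotations"]:
        l, t, w, h = ann["bbox"]
        out.append(
            (ann["scenario"], ann["audio_visual_category"], (l, t, l + w, t + h))
        )
    return out
```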