Update README.md

AVATAR is a **benchmark dataset** designed to evaluate **video-centric audio-visual localization (AVL)**.

Unlike previous benchmarks that rely on static image-level annotations and assume simplified conditions, AVATAR offers **high-resolution temporal annotations** over entire videos. It supports four challenging evaluation settings:
**Single-sound**, **Mixed-sound**, **Multi-entity**, and **Off-screen**.

📄 [Paper (ICCV 2025)](https://arxiv.org/abs/2507.04667)
🌐 [Project Website](https://hahyeon610.github.io/Video-centric_Audio_Visual_Localization/)
📁 [Code & Data Viewer](https://huggingface.co/datasets/mipal/AVATAR/tree/main)

The dataset consists of the following files:

| File | Description |
|------|-------------|
| `video.zip` | ~3.8GB of `.mp4` video clips |
| `metadata.zip` | ~1.6GB of annotations (bounding boxes, segmentation masks, scenario tags) |
| `vggsound_10k.txt` | List of 10,000 training video IDs from [VGGSound](https://huggingface.co/datasets/Loie/VGGSound) |
| `code/` | AVATAR benchmark evaluation code |
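
To fetch these files programmatically, here is a minimal sketch using the standard `huggingface_hub` client; the repo ID comes from the Code & Data Viewer link above, and picking `metadata.zip` is just an example:

```python
# Minimal download sketch; file names are taken from the table above.
# Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Fetch one archive from the dataset repo linked under "Code & Data Viewer".
local_path = hf_hub_download(
    repo_id="mipal/AVATAR",
    filename="metadata.zip",   # e.g., swap for "video.zip" or "vggsound_10k.txt"
    repo_type="dataset",
)
print(local_path)  # path to the locally cached download
```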

Each annotated frame includes:

---

## 📊 Dataset Statistics

AVATAR provides detailed quantitative statistics to help users understand its scale and diversity.

| Type | Count |
|------------|--------|
| Videos | 5,000 |
| Frames | 24,266 |
| Off-screen | 670 |

| Scenario Type | Instances |
|-----------------|-----------|
| Total | 28,516 |
| Single-sound | 15,372 |
| Multi-entity | 9,322 |
| Mixed-sound | 3,822 |
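
As a quick sanity check, the three scenario counts sum exactly to the 28,516 total; a small sketch deriving each scenario's share (numbers copied from the table above):

```python
# Scenario instance counts, copied from the statistics table above.
counts = {"Single-sound": 15_372, "Multi-entity": 9_322, "Mixed-sound": 3_822}
total = 28_516

assert sum(counts.values()) == total  # the three scenarios partition the total

for scenario, n in counts.items():
    print(f"{scenario}: {n / total:.1%}")
# -> Single-sound: 53.9%, Multi-entity: 32.7%, Mixed-sound: 13.4%
```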

---

## 🧪 Scenarios and Tasks

AVATAR supports **fine-grained scenario-wise evaluation** of AVL models:

---

## 🧩 Audio-Visual Category Diversity

AVATAR spans **80 audio-visual categories** covering a wide range of everyday domains, including:
- **Human activities** (e.g., talking, singing)
- **Music performances** (e.g., violin, drum, piano)
- **Animal sounds** (e.g., dog barking, bird chirping)
- **Vehicles** (e.g., car engine, helicopter)
- **Tools and machines** (e.g., chainsaw, blender)

Such diversity enables a **comprehensive evaluation** of model generalizability across varied audio-visual contexts.

---

## 📝 Example Metadata Format

```json
{
  "video_id": str,
  ...
  ],
  "bbox": [float, float, float, float], // (l, t, w, h)
  "scenario": str, // "Single-sound", "Mixed-sound", "Multi-entity", "Off-screen"
  "audio_visual_category": str,
},
{ // instance 2 (e.g., piano)