---
license: cc-by-nc-4.0
task_categories:
- image-segmentation
tags:
- video
- multimodal
- segmentation
- pointing
- spatio-temporal-grounding
- robotics
- autonomous-driving
- cell-tracking
- egocentric-vision
- gui-interaction
---

# VPoS-Bench: Video Pointing and Segmentation Benchmark

**VPoS-Bench** is a challenging out-of-distribution benchmark designed to evaluate the spatio-temporal pointing and reasoning capabilities of video-language models. It covers five diverse real-world application domains, with fine-grained point-level and segmentation annotations that enable robust evaluation of multimodal models under realistic, temporally complex scenarios.

> **Webpage**: [VideoMolmo](https://mbzuai-oryx.github.io/VideoMolmo/)

> **Paper**: [VideoMolmo: Spatio-Temporal Grounding meets Pointing](https://arxiv.org/pdf/2506.05336)

> **Model**: [VideoMolmo on Hugging Face](https://huggingface.co/ghazishazan/VideoMolmo)

> **Code**: [VideoMolmo on GitHub](https://github.com/mbzuai-oryx/VideoMolmo)

---

## 🌍 Benchmark Overview

VPoS-Bench tests the **generalization** of models across five diverse real-world scenarios:

1. **Cell Tracking**
   Track the trajectory of biological entities (e.g., nuclei or cells) across microscopy video frames.
   > Applications: developmental biology, disease modeling

2. **Egocentric Vision**
   Identify and follow objects or hands in first-person camera footage.
   > Applications: activity recognition, assistive technology

3. **Autonomous Driving**
   Point to traffic participants (pedestrians, vehicles, traffic lights) under varying conditions.
   > Applications: self-driving systems, urban scene understanding

4. **Video-GUI Interaction**
   Follow on-screen elements (e.g., cursors, buttons) across software interface recordings.
   > Applications: AI-assisted UI navigation, screen agents

5. **Robotics**
   Track manipulable objects or robotic end-effectors as they interact in structured environments.
   > Applications: robot learning, manipulation planning

---

## 📁 Dataset Structure

The dataset is organized by domain. Each domain folder contains three subdirectories:

- `frames/` – Extracted video frames.
- `masks/` – Segmentation masks corresponding to the frames.
- `annotations/` – JSON files containing text descriptions and point-level annotations.

```text
vpos-bench/
├── cell-tracking/
│   ├── frames/       # Extracted video frames (e.g., frame_0001.jpg, ...)
│   ├── masks/        # Segmentation masks per frame (optional)
│   └── annotations/  # Point coordinates + caption in JSON format
│
├── autonomous-driving/
└── ...
```
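
Assuming the layout sketched above (the exact folder and file names may differ in the actual release), frame/mask pairs for one domain can be enumerated with a short helper like this:

```python
from pathlib import Path

def list_frame_mask_pairs(root: str, domain: str):
    """Yield (frame, mask) path pairs for every video in a domain folder.

    Masks are optional, so the mask entry may be None. Naming follows the
    layout sketched above (frames as frame_0001.jpg, ..., masks as 0.png,
    1.png, ...) and may need adjusting to the actual release.
    """
    domain_dir = Path(root) / domain
    for video_dir in sorted((domain_dir / "frames").iterdir()):
        mask_dir = domain_dir / "masks" / video_dir.name
        for frame in sorted(video_dir.glob("*.jpg")):
            # Frames are 1-indexed while masks are 0-indexed in this sketch.
            idx = int(frame.stem.split("_")[-1]) - 1
            mask = mask_dir / f"{idx}.png"
            yield frame, mask if mask.exists() else None
```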

## 📁 Annotation Format

Each annotation file is keyed by a unique video ID. Each entry consists of a natural-language caption and a list of per-frame records:

```json
{
  "video_id": {
    "caption": "natural language instruction here",
    "frames": [
      {
        "frame_path": "domain/frames/video_id/frame_00001.jpg",
        "mask_path": "domain/masks/video_id/0.png",
        "points": [[x, y], ...]
      },
      {
        "frame_path": "domain/frames/video_id/frame_00002.jpg",
        "mask_path": "domain/masks/video_id/1.png",
        "points": [[x, y], ...]
      }
    ]
  }
}
```
``` |