---
language:
- en
license: cc-by-4.0
task_categories:
- video-text-to-text
tags:
- gaze
- multimodal
- video-understanding
- streaming-video
- temporal-reasoning
- proactive-understanding
- egocentric-vision
- visual-question-answering
---
# StreamGaze Dataset
[Paper](https://huggingface.co/papers/2512.01707) | [Project Page](https://streamgaze.github.io/) | [Code](https://github.com/daeunni/StreamGaze)
**StreamGaze** is a comprehensive streaming video benchmark for evaluating MLLMs on gaze-based QA tasks across past, present, and future contexts.
## πŸ“ Dataset Structure
```
streamgaze/
β”œβ”€β”€ metadata/
β”‚   β”œβ”€β”€ egtea.csv                          # EGTEA fixation metadata
β”‚   β”œβ”€β”€ egoexolearn.csv                    # EgoExoLearn fixation metadata
β”‚   └── holoassist.csv                     # HoloAssist fixation metadata
β”‚
β”œβ”€β”€ qa/
β”‚   β”œβ”€β”€ past_gaze_sequence_matching.json
β”‚   β”œβ”€β”€ past_non_fixated_object_identification.json
β”‚   β”œβ”€β”€ past_object_transition_prediction.json
β”‚   β”œβ”€β”€ past_scene_recall.json
β”‚   β”œβ”€β”€ present_future_action_prediction.json
β”‚   β”œβ”€β”€ present_object_attribute_recognition.json
β”‚   β”œβ”€β”€ present_object_identification_easy.json
β”‚   β”œβ”€β”€ present_object_identification_hard.json
β”‚   β”œβ”€β”€ proactive_gaze_triggered_alert.json
β”‚   └── proactive_object_appearance_alert.json
β”‚
└── videos/
    β”œβ”€β”€ videos_egtea_original.tar.gz       # EGTEA original videos
    β”œβ”€β”€ videos_egtea_viz.tar.gz            # EGTEA with gaze visualization
    β”œβ”€β”€ videos_egoexolearn_original.tar.gz # EgoExoLearn original videos
    β”œβ”€β”€ videos_egoexolearn_viz.tar.gz      # EgoExoLearn with gaze visualization
    β”œβ”€β”€ videos_holoassist_original.tar.gz  # HoloAssist original videos
    └── videos_holoassist_viz.tar.gz       # HoloAssist with gaze visualization
```
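To inspect this layout without downloading anything, you can list the repository contents first. A minimal sketch using `huggingface_hub` (the repo id `danaleee/StreamGaze` is the one noted in the Quick Start below):
```python
from huggingface_hub import list_repo_files

# List every file in the StreamGaze dataset repo without downloading it.
files = list_repo_files("danaleee/StreamGaze", repo_type="dataset")
for path in sorted(files):
    print(path)
```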
## 🎯 Task Categories
### **Past (Historical Context)**
- **Gaze Sequence Matching**: Match gaze patterns to action sequences
- **Non-Fixated Object Identification**: Identify objects outside gaze
- **Object Transition Prediction**: Predict object state changes
- **Scene Recall**: Recall scene details from memory
### **Present (Current Context)**
- **Object Identification (Easy/Hard)**: Identify objects in/outside FOV
- **Object Attribute Recognition**: Recognize object attributes
- **Future Action Prediction**: Predict upcoming actions
### **Proactive (Future-Oriented)**
- **Gaze-Triggered Alert**: Alert based on gaze patterns
- **Object Appearance Alert**: Alert on object appearance
## πŸš€ Quick Start
### Data Preparation
Download the dataset from the Hugging Face Hub (`danaleee/StreamGaze`) and extract the videos. After extraction, the directory layout should look like this:
```
StreamGaze/
β”œβ”€β”€ dataset/
β”‚   β”œβ”€β”€ videos/
β”‚   β”‚   β”œβ”€β”€ original_video/    # Original egocentric videos
β”‚   β”‚   └── gaze_viz_video/    # Videos with gaze overlay
β”‚   └── qa/
β”‚       β”œβ”€β”€ past_*.json        # Past task QA pairs
β”‚       β”œβ”€β”€ present_*.json     # Present task QA pairs
β”‚       └── proactive_*.json   # Proactive task QA pairs
```
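A minimal download sketch using `huggingface_hub` (the `local_dir` value is an arbitrary choice; point it wherever you keep the benchmark):
```python
from huggingface_hub import snapshot_download

# Download the full dataset repo: metadata CSVs, QA JSONs, and video archives.
# local_dir is an assumption matching the layout above; adjust as needed.
snapshot_download(
    repo_id="danaleee/StreamGaze",
    repo_type="dataset",
    local_dir="StreamGaze/dataset",
)
```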
#### Extract Videos
```bash
# Create the target directories first (tar -C fails if they do not exist)
mkdir -p videos/{egtea,egoexolearn,holoassist}/{original,viz}

# Extract EGTEA videos
tar -xzf videos_egtea_original.tar.gz -C videos/egtea/original/
tar -xzf videos_egtea_viz.tar.gz -C videos/egtea/viz/

# Extract EgoExoLearn videos
tar -xzf videos_egoexolearn_original.tar.gz -C videos/egoexolearn/original/
tar -xzf videos_egoexolearn_viz.tar.gz -C videos/egoexolearn/viz/

# Extract HoloAssist videos
tar -xzf videos_holoassist_original.tar.gz -C videos/holoassist/original/
tar -xzf videos_holoassist_viz.tar.gz -C videos/holoassist/viz/
```
### Running Evaluation
Quick evaluation of existing models, using the scripts from the [evaluation code repository](https://github.com/daeunni/StreamGaze):
```bash
# Evaluate ViSpeak (without gaze visualization)
bash scripts/vispeak.sh
# Evaluate ViSpeak (with gaze visualization)
bash scripts/vispeak.sh --use_gaze_instruction
# Evaluate GPT-4o
bash scripts/gpt4o.sh --use_gaze_instruction
# Evaluate Qwen2.5-VL
bash scripts/qwen25vl.sh --use_gaze_instruction
```
Results will be automatically computed and saved to:
```
results/
β”œβ”€β”€ ModelName/
β”‚   β”œβ”€β”€ results/                  # Without gaze visualization
β”‚   β”‚   β”œβ”€β”€ *_output.json
β”‚   β”‚   └── evaluation_summary.json
β”‚   └── results_viz/              # With gaze visualization
β”‚       β”œβ”€β”€ *_output.json
β”‚       └── evaluation_summary.json
```
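To compare models across both settings, you can sweep the `results/` tree. A hypothetical sketch: the inner structure of `evaluation_summary.json` is an assumption here (a flat dict of task name to score); adapt it to the actual schema produced by the scripts.
```python
import json
from pathlib import Path

# Collect every evaluation_summary.json under results/<ModelName>/<setting>/.
# ASSUMPTION: each summary is a flat {task_name: score} dict.
for summary_path in sorted(Path("results").glob("*/*/evaluation_summary.json")):
    model, setting = summary_path.parts[1], summary_path.parts[2]
    with open(summary_path) as f:
        summary = json.load(f)
    for task, score in summary.items():
        print(f"{model:20s} {setting:12s} {task:45s} {score}")
```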
## πŸ”‘ Metadata Format
Each metadata CSV contains:
- `video_source`: Video identifier
- `fixation_id`: Fixation segment ID
- `start_time_seconds` / `end_time_seconds`: Temporal boundaries
- `center_x` / `center_y`: Gaze center coordinates (normalized)
- `representative_object`: Primary object at gaze point
- `other_objects_in_cropped_area`: Objects within FOV
- `other_objects_outside_fov`: Objects outside FOV
- `scene_caption`: Scene description
- `action_caption`: Action description
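For example, to load the EGTEA metadata and inspect one fixation segment, a minimal sketch with `pandas` (the path assumes the repository layout shown above):
```python
import pandas as pd

# Load EGTEA fixation metadata (path per the dataset structure above).
df = pd.read_csv("streamgaze/metadata/egtea.csv")

# Each row is one fixation segment: temporal boundaries, the normalized
# gaze center, and the objects in / outside the field of view.
fix = df.iloc[0]
print(fix["video_source"], fix["fixation_id"])
print(fix["start_time_seconds"], fix["end_time_seconds"])
print(fix["center_x"], fix["center_y"], fix["representative_object"])
```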
## πŸ“ QA Format
Each QA JSON file contains:
```json
{
  "response_time": "[00:08 - 09:19]",
  "questions": [
    {
      "question": "Among {milk, spoon, pan, phone}, which did the user never gaze at?",
      "time_stamp": "03:14",
      "answer": "A",
      "options": [
        "A. milk",
        "B. spoon",
        "C. pan",
        "D. phone"
      ]
    }
  ],
  "video_path": "OP01-R03-BaconAndEggs.mp4"
}
```
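A short parsing sketch using only the standard library (the file name is taken from the QA listing above; the example shows a single entry, so the code normalizes to a list in case a file holds several such entries):
```python
import json

# Load one QA file and walk its multiple-choice questions.
with open("streamgaze/qa/past_non_fixated_object_identification.json") as f:
    data = json.load(f)

# ASSUMPTION: a file may hold one entry (as shown above) or a list of them.
entries = data if isinstance(data, list) else [data]

for entry in entries:
    print("video:", entry["video_path"], "| window:", entry["response_time"])
    for q in entry["questions"]:
        print(f"  [{q['time_stamp']}] {q['question']}")
        for opt in q["options"]:
            print("     ", opt)
        print("      answer:", q["answer"])
```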
## πŸ“„ License
This dataset is released under the **Creative Commons Attribution 4.0 International (CC BY 4.0)** license.
See the [full license text](https://creativecommons.org/licenses/by/4.0/).
## πŸ”— Links
- **Paper**: [https://huggingface.co/papers/2512.01707](https://huggingface.co/papers/2512.01707)
- **Evaluation code**: [https://github.com/daeunni/StreamGaze](https://github.com/daeunni/StreamGaze)
- **Project page**: [https://streamgaze.github.io/](https://streamgaze.github.io/)
## πŸ“§ Contact
For questions or issues, please contact: daeun@cs.unc.edu