---
language:
- en
license: cc-by-4.0
task_categories:
- video-text-to-text
tags:
- gaze
- multimodal
- video-understanding
- streaming-video
- temporal-reasoning
- proactive-understanding
- egocentric-vision
- visual-question-answering
---
# StreamGaze Dataset

[Paper](https://huggingface.co/papers/2512.01707) | [Project Page](https://streamgaze.github.io/) | [Code](https://github.com/daeunni/StreamGaze)

**StreamGaze** is a comprehensive streaming-video benchmark for evaluating multimodal large language models (MLLMs) on gaze-based QA tasks across past, present, and future contexts.
## Dataset Structure

```
streamgaze/
├── metadata/
│   ├── egtea.csv           # EGTEA fixation metadata
│   ├── egoexolearn.csv     # EgoExoLearn fixation metadata
│   └── holoassist.csv      # HoloAssist fixation metadata
│
├── qa/
│   ├── past_gaze_sequence_matching.json
│   ├── past_non_fixated_object_identification.json
│   ├── past_object_transition_prediction.json
│   ├── past_scene_recall.json
│   ├── present_future_action_prediction.json
│   ├── present_object_attribute_recognition.json
│   ├── present_object_identification_easy.json
│   ├── present_object_identification_hard.json
│   ├── proactive_gaze_triggered_alert.json
│   └── proactive_object_appearance_alert.json
│
└── videos/
    ├── videos_egtea_original.tar.gz         # EGTEA original videos
    ├── videos_egtea_viz.tar.gz              # EGTEA with gaze visualization
    ├── videos_egoexolearn_original.tar.gz   # EgoExoLearn original videos
    ├── videos_egoexolearn_viz.tar.gz        # EgoExoLearn with gaze visualization
    ├── videos_holoassist_original.tar.gz    # HoloAssist original videos
    └── videos_holoassist_viz.tar.gz         # HoloAssist with gaze visualization
```
## Task Categories

### Past (Historical Context)
- **Gaze Sequence Matching**: Match gaze patterns to action sequences
- **Non-Fixated Object Identification**: Identify objects outside the gaze
- **Object Transition Prediction**: Predict object state changes
- **Scene Recall**: Recall scene details from memory

### Present (Current Context)
- **Object Identification (Easy/Hard)**: Identify objects inside/outside the FOV
- **Object Attribute Recognition**: Recognize object attributes
- **Future Action Prediction**: Predict upcoming actions

### Proactive (Future-Oriented)
- **Gaze-Triggered Alert**: Alert based on gaze patterns
- **Object Appearance Alert**: Alert on object appearance
## Quick Start

### Data Preparation

Download the dataset from Hugging Face (the dataset lives at `danaleee/StreamGaze`) and extract the videos. After extraction, the directory layout should look like this:

```
StreamGaze/
└── dataset/
    ├── videos/
    │   ├── original_video/   # Original egocentric videos
    │   └── gaze_viz_video/   # Videos with gaze overlay
    └── qa/
        ├── past_*.json       # Past task QA pairs
        ├── present_*.json    # Present task QA pairs
        └── proactive_*.json  # Proactive task QA pairs
```
### Extract Videos

```shell
# Extract EGTEA videos (create the target directories first; tar -C requires them to exist)
mkdir -p videos/egtea/original videos/egtea/viz
tar -xzf videos_egtea_original.tar.gz -C videos/egtea/original/
tar -xzf videos_egtea_viz.tar.gz -C videos/egtea/viz/

# Extract EgoExoLearn videos
mkdir -p videos/egoexolearn/original videos/egoexolearn/viz
tar -xzf videos_egoexolearn_original.tar.gz -C videos/egoexolearn/original/
tar -xzf videos_egoexolearn_viz.tar.gz -C videos/egoexolearn/viz/

# Extract HoloAssist videos
mkdir -p videos/holoassist/original videos/holoassist/viz
tar -xzf videos_holoassist_original.tar.gz -C videos/holoassist/original/
tar -xzf videos_holoassist_viz.tar.gz -C videos/holoassist/viz/
```
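The same extraction can be scripted from Python. This is a minimal sketch, not part of the official tooling; the archive names and target directories simply mirror the shell commands above, and `extract_archive` is a helper name of our choosing:

```python
import tarfile
from pathlib import Path

def extract_archive(archive, target):
    """Extract a .tar.gz archive into target, creating the directory if needed."""
    dest = Path(target)
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=dest)

# Archive -> destination mapping mirroring the shell commands above.
ARCHIVES = {
    "videos_egtea_original.tar.gz": "videos/egtea/original",
    "videos_egtea_viz.tar.gz": "videos/egtea/viz",
    "videos_egoexolearn_original.tar.gz": "videos/egoexolearn/original",
    "videos_egoexolearn_viz.tar.gz": "videos/egoexolearn/viz",
    "videos_holoassist_original.tar.gz": "videos/holoassist/original",
    "videos_holoassist_viz.tar.gz": "videos/holoassist/viz",
}

for archive, target in ARCHIVES.items():
    if Path(archive).exists():  # skip archives that have not been downloaded yet
        extract_archive(archive, target)
```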
### Running Evaluation

Quick evaluation of existing models:

```shell
# Evaluate ViSpeak (without gaze visualization)
bash scripts/vispeak.sh

# Evaluate ViSpeak (with gaze visualization)
bash scripts/vispeak.sh --use_gaze_instruction

# Evaluate GPT-4o
bash scripts/gpt4o.sh --use_gaze_instruction

# Evaluate Qwen2.5-VL
bash scripts/qwen25vl.sh --use_gaze_instruction
```
Results are computed automatically and saved to:

```
results/
└── ModelName/
    ├── results/       # Without gaze visualization
    │   ├── *_output.json
    │   └── evaluation_summary.json
    └── results_viz/   # With gaze visualization
        ├── *_output.json
        └── evaluation_summary.json
```
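To compare runs across models, the `evaluation_summary.json` files can be gathered in one pass. A hedged sketch: `collect_summaries` is our own helper, and the contents of each summary file are treated as an opaque JSON object since its exact schema is not documented here:

```python
import json
from pathlib import Path

def collect_summaries(results_root="results"):
    """Map '<model>/<results|results_viz>' to the parsed evaluation summary."""
    summaries = {}
    # Matches results/<ModelName>/<results|results_viz>/evaluation_summary.json
    for path in Path(results_root).glob("*/*/evaluation_summary.json"):
        key = f"{path.parent.parent.name}/{path.parent.name}"
        summaries[key] = json.loads(path.read_text())
    return summaries

summaries = collect_summaries()  # returns {} if no results exist yet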
## Metadata Format

Each metadata CSV contains the following columns:

- `video_source`: Video identifier
- `fixation_id`: Fixation segment ID
- `start_time_seconds` / `end_time_seconds`: Temporal boundaries of the fixation
- `center_x` / `center_y`: Gaze center coordinates (normalized)
- `representative_object`: Primary object at the gaze point
- `other_objects_in_cropped_area`: Objects within the FOV
- `other_objects_outside_fov`: Objects outside the FOV
- `scene_caption`: Scene description
- `action_caption`: Action description
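The columns above can be read with the standard `csv` module. The sketch below is illustrative only: the sample row values are hypothetical (the real data lives in `metadata/*.csv`), and `load_fixations` is a helper name we introduce here:

```python
import csv
import io

# Hypothetical sample row using the documented column names.
SAMPLE = """video_source,fixation_id,start_time_seconds,end_time_seconds,center_x,center_y,representative_object,other_objects_in_cropped_area,other_objects_outside_fov,scene_caption,action_caption
OP01-R03-BaconAndEggs,0,12.4,13.1,0.52,0.61,pan,spoon;egg,phone,kitchen counter,cracking an egg
"""

def load_fixations(fh):
    """Parse fixation rows, converting numeric fields and adding a duration."""
    rows = []
    for row in csv.DictReader(fh):
        row["start_time_seconds"] = float(row["start_time_seconds"])
        row["end_time_seconds"] = float(row["end_time_seconds"])
        row["center_x"] = float(row["center_x"])
        row["center_y"] = float(row["center_y"])
        row["duration"] = row["end_time_seconds"] - row["start_time_seconds"]
        rows.append(row)
    return rows

fixations = load_fixations(io.StringIO(SAMPLE))
```

With a real file, replace the `StringIO` with `open("metadata/egtea.csv", newline="")`.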
## QA Format

Each QA JSON file contains entries of the form:

```json
{
  "response_time": "[00:08 - 09:19]",
  "questions": [
    {
      "question": "Among {milk, spoon, pan, phone}, which did the user never gaze at?",
      "time_stamp": "03:14",
      "answer": "A",
      "options": [
        "A. milk",
        "B. spoon",
        "C. pan",
        "D. phone"
      ]
    }
  ],
  "video_path": "OP01-R03-BaconAndEggs.mp4"
}
```
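A QA entry like the one above can be consumed with the standard `json` module. A minimal sketch, reusing the sample record from this README; `timestamp_to_seconds` is a helper we define here, assuming the documented `MM:SS` timestamp format:

```python
import json

# Sample QA entry copied from the schema above.
SAMPLE = """
{
  "response_time": "[00:08 - 09:19]",
  "questions": [
    {
      "question": "Among {milk, spoon, pan, phone}, which did the user never gaze at?",
      "time_stamp": "03:14",
      "answer": "A",
      "options": ["A. milk", "B. spoon", "C. pan", "D. phone"]
    }
  ],
  "video_path": "OP01-R03-BaconAndEggs.mp4"
}
"""

def timestamp_to_seconds(ts):
    """Convert an 'MM:SS' timestamp string to seconds."""
    minutes, seconds = ts.split(":")
    return int(minutes) * 60 + int(seconds)

entry = json.loads(SAMPLE)
for q in entry["questions"]:
    t = timestamp_to_seconds(q["time_stamp"])
    print(f"[{t}s] {q['question']} -> {q['answer']}")
```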
## License

This dataset is released under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.
## Links

- **Paper**: https://huggingface.co/papers/2512.01707
- **Evaluation code**: https://github.com/daeunni/StreamGaze
- **Project page**: https://streamgaze.github.io/
## Contact

For questions or issues, please contact daeun@cs.unc.edu.