nielsr (HF Staff) committed
Commit a47c569 · verified · 1 Parent(s): b30c649

Update task category, add paper/code links, and enhance sample usage

This PR significantly improves the dataset card for `danaleee/StreamGaze` by:

* **Updating Metadata**:
  * Changing the `task_categories` from `question-answering` to `video-text-to-text` to more accurately reflect the dataset's focus on gaze-guided temporal reasoning and proactive understanding in streaming videos.
  * Adding relevant `tags` such as `gaze`, `multimodal`, `video-understanding`, `streaming-video`, `temporal-reasoning`, `proactive-understanding`, `egocentric-vision`, and `visual-question-answering` to enhance discoverability.
* **Adding Prominent Links**: Including direct links to the paper ([`2512.01707`](https://huggingface.co/papers/2512.01707)), project page, and GitHub repository at the very top of the dataset card for easy access.
* **Enhancing Sample Usage**: Expanding the "Usage" section to incorporate detailed instructions for data preparation and running evaluations, including code snippets for quick evaluation on existing models, directly sourced from the project's GitHub README. This provides clear, actionable steps for users.
* **Updating Links Section**: Ensuring all relevant links (Paper, Evaluation code, Project page) are consistently listed in the `## 🔗 Links` section.
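As a quick sanity check of the extraction step documented in the enhanced usage section, here is a minimal, self-contained Python sketch of the `tar -xzf <archive> -C <dir>` workflow using only the standard library. The archive name mirrors the card; the tiny archive built here is fabricated purely for illustration:

```python
import io
import os
import tarfile
import tempfile

# Build a tiny stand-in for videos_egtea_original.tar.gz (illustrative only).
tmp = tempfile.mkdtemp()
archive = os.path.join(tmp, "videos_egtea_original.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    data = b"fake video bytes"
    info = tarfile.TarInfo(name="clip_0001.mp4")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Equivalent of: tar -xzf videos_egtea_original.tar.gz -C videos/egtea/original/
dest = os.path.join(tmp, "videos", "egtea", "original")
os.makedirs(dest, exist_ok=True)
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(path=dest)

print(sorted(os.listdir(dest)))  # ['clip_0001.mp4']
```

The same pattern applies to the HoloAssist archives; only the archive names and target directories change.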

Files changed (1)
1. README.md (+65 -5)
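Since the headline change in this diff is the front-matter metadata, a reviewer may want to verify the updated fields programmatically. Below is a minimal sketch using a hand-rolled parse (no YAML dependency), run against a trimmed stand-in of the new front matter; the helper `values_under` is hypothetical, not part of any tooling:

```python
# Trimmed stand-in for the updated README front matter.
readme = """---
license: cc-by-4.0
task_categories:
- video-text-to-text
tags:
- gaze
- multimodal
---
# StreamGaze Dataset
"""

# Split out the YAML block between the first two `---` fences.
_, front_matter, _ = readme.split("---", 2)

def values_under(key: str, block: str) -> list[str]:
    """Collect `- item` entries listed under `key:` in a flat YAML block."""
    items, active = [], False
    for line in block.splitlines():
        if line.rstrip() == f"{key}:":
            active = True
        elif active and line.startswith("- "):
            items.append(line[2:].strip())
        elif line and not line.startswith("- "):
            active = False
    return items

print(values_under("task_categories", front_matter))  # ['video-text-to-text']
print("gaze" in values_under("tags", front_matter))   # True
```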
README.md CHANGED

````diff
@@ -1,12 +1,24 @@
 ---
-license: cc-by-4.0
-task_categories:
-- question-answering
 language:
 - en
+license: cc-by-4.0
+task_categories:
+- video-text-to-text
+tags:
+- gaze
+- multimodal
+- video-understanding
+- streaming-video
+- temporal-reasoning
+- proactive-understanding
+- egocentric-vision
+- visual-question-answering
 ---
+
 # StreamGaze Dataset
 
+[Paper](https://huggingface.co/papers/2512.01707) | [Project Page](https://streamgaze.github.io/) | [Code](https://github.com/daeunni/StreamGaze)
+
 **StreamGaze** is a comprehensive streaming video benchmark for evaluating MLLMs on gaze-based QA tasks across past, present, and future contexts.
 
 ## 📁 Dataset Structure
@@ -56,9 +68,25 @@ streamgaze/
 - **Gaze-Triggered Alert**: Alert based on gaze patterns
 - **Object Appearance Alert**: Alert on object appearance
 
-## 📥 Usage
+## 🚀 Quick Start
 
-### Extract Videos
+### Data Preparation
+
+Download our dataset from HuggingFace (`danaleee/StreamGaze`) and extract videos. The dataset should be located as below:
+
+```
+StreamGaze/
+├── dataset/
+│   ├── videos/
+│   │   ├── original_video/    # Original egocentric videos
+│   │   └── gaze_viz_video/    # Videos with gaze overlay
+│   └── qa/
+│       ├── past_*.json        # Past task QA pairs
+│       ├── present_*.json     # Present task QA pairs
+│       └── proactive_*.json   # Proactive task QA pairs
+```
+
+#### Extract Videos
 
 ```bash
 # Extract EGTEA videos
@@ -74,6 +102,37 @@ tar -xzf videos_holoassist_original.tar.gz -C videos/holoassist/original/
 tar -xzf videos_holoassist_viz.tar.gz -C videos/holoassist/viz/
 ```
 
+### Running Evaluation
+
+Quick evaluation on existing models:
+
+```bash
+# Evaluate ViSpeak (without gaze visualization)
+bash scripts/vispeak.sh
+
+# Evaluate ViSpeak (with gaze visualization)
+bash scripts/vispeak.sh --use_gaze_instruction
+
+# Evaluate GPT-4o
+bash scripts/gpt4o.sh --use_gaze_instruction
+
+# Evaluate Qwen2.5-VL
+bash scripts/qwen25vl.sh --use_gaze_instruction
+```
+
+Results will be automatically computed and saved to:
+
+```
+results/
+├── ModelName/
+│   ├── results/        # Without gaze visualization
+│   │   ├── *_output.json
+│   │   └── evaluation_summary.json
+│   └── results_viz/    # With gaze visualization
+│       ├── *_output.json
+│       └── evaluation_summary.json
+```
+
 ## 🔑 Metadata Format
 
 Each metadata CSV contains:
@@ -118,6 +177,7 @@ See https://creativecommons.org/licenses/by/4.0/
 
 ## 🔗 Links
 
+- **Paper**: [https://huggingface.co/papers/2512.01707](https://huggingface.co/papers/2512.01707)
 - **Evaluation code**: [https://github.com/daeunni/StreamGaze](https://github.com/daeunni/StreamGaze)
 - **Project page**: [https://streamgaze.github.io/](https://streamgaze.github.io/)
 
````
183