# StreamGaze Dataset

**StreamGaze** is an egocentric-vision benchmark for evaluating multimodal large language models on temporal, gaze-based question answering across past, present, and future contexts.

## 📁 Dataset Structure

```
streamgaze/
├── metadata/
│   ├── egtea.csv                           # EGTEA fixation metadata
│   ├── egoexolearn.csv                     # EgoExoLearn fixation metadata
│   └── holoassist.csv                      # HoloAssist fixation metadata
│
├── qa/
│   ├── past_gaze_sequence_matching.json
│   ├── past_non_fixated_object_identification.json
│   ├── past_object_transition_prediction.json
│   ├── past_scene_recall.json
│   ├── present_future_action_prediction.json
│   ├── present_object_attribute_recognition.json
│   ├── present_object_identification_easy.json
│   ├── present_object_identification_hard.json
│   ├── proactive_gaze_triggered_alert.json
│   └── proactive_object_appearance_alert.json
│
└── videos/
    ├── videos_egtea_original.tar.gz        # EGTEA original videos
    ├── videos_egtea_viz.tar.gz             # EGTEA with gaze visualization
    ├── videos_egoexolearn_original.tar.gz  # EgoExoLearn original videos
    ├── videos_egoexolearn_viz.tar.gz       # EgoExoLearn with gaze visualization
    ├── videos_holoassist_original.tar.gz   # HoloAssist original videos
    └── videos_holoassist_viz.tar.gz        # HoloAssist with gaze visualization
```

## 🎯 Task Categories

### **Past (Historical Context)**
- **Gaze Sequence Matching**: Match gaze patterns to action sequences
- **Non-Fixated Object Identification**: Identify objects outside the gaze region
- **Object Transition Prediction**: Predict object state changes
- **Scene Recall**: Recall scene details from memory

### **Present (Current Context)**
- **Object Identification (Easy/Hard)**: Identify objects inside/outside the field of view
- **Object Attribute Recognition**: Recognize object attributes
- **Future Action Prediction**: Predict upcoming actions

### **Proactive (Future-Oriented)**
- **Gaze-Triggered Alert**: Alert based on gaze patterns
- **Object Appearance Alert**: Alert when a target object appears

## 📥 Usage

### Extract Videos

```bash
# Create the target directories first (tar -C requires them to exist)
mkdir -p videos/{egtea,egoexolearn,holoassist}/{original,viz}

# Extract EGTEA videos
tar -xzf videos_egtea_original.tar.gz -C videos/egtea/original/
tar -xzf videos_egtea_viz.tar.gz -C videos/egtea/viz/

# Extract EgoExoLearn videos
tar -xzf videos_egoexolearn_original.tar.gz -C videos/egoexolearn/original/
tar -xzf videos_egoexolearn_viz.tar.gz -C videos/egoexolearn/viz/

# Extract HoloAssist videos
tar -xzf videos_holoassist_original.tar.gz -C videos/holoassist/original/
tar -xzf videos_holoassist_viz.tar.gz -C videos/holoassist/viz/
```

### Load with Python

```python
import json

import pandas as pd
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
dataset = load_dataset("danaleee/streamgaze_private")

# Load fixation metadata
egtea_meta = pd.read_csv("metadata/egtea.csv")
egoexo_meta = pd.read_csv("metadata/egoexolearn.csv")
holoassist_meta = pd.read_csv("metadata/holoassist.csv")

# Load QA data for one task
with open("qa/present_object_identification_easy.json") as f:
    qa_data = json.load(f)
```

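A common next step is joining each QA pair to its fixation metadata row. A minimal sketch with inline sample rows mirroring the schemas documented below (the real data comes from the CSV and JSON files above; that `video_id` in the QA files corresponds to `video_source` in the metadata is our assumption):

```python
import pandas as pd

# Hypothetical sample row mirroring the metadata CSV schema.
meta = pd.DataFrame([
    {"video_source": "OP01-R01-PastaSalad", "fixation_id": 5,
     "start_time_seconds": 12.4, "end_time_seconds": 13.1,
     "representative_object": "pasta"},
])

# Hypothetical QA pair mirroring the JSON schema.
qa = pd.DataFrame([
    {"video_id": "OP01-R01-PastaSalad", "fixation_id": 5,
     "question": "What object is the camera wearer looking at?",
     "answer": "pasta"},
])

# Attach the fixation segment to each question
# (assumes video_id corresponds to video_source).
merged = qa.merge(
    meta,
    left_on=["video_id", "fixation_id"],
    right_on=["video_source", "fixation_id"],
    how="left",
)
print(merged[["question", "representative_object", "start_time_seconds"]])
```

The left join keeps every QA pair even if a metadata row is missing, which makes dropped fixations easy to spot as NaNs.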
## 🔑 Metadata Fields

Each metadata CSV contains:
- `video_source`: Video identifier
- `fixation_id`: Fixation segment ID
- `start_time_seconds` / `end_time_seconds`: Temporal boundaries of the fixation
- `center_x` / `center_y`: Gaze center coordinates (normalized)
- `representative_object`: Primary object at the gaze point
- `other_objects_in_cropped_area`: Objects within the field of view
- `other_objects_outside_fov`: Objects outside the field of view
- `scene_caption`: Scene description
- `action_caption`: Action description (EgoExoLearn only)

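From these fields one can derive per-fixation durations and pixel-space gaze centers. A small sketch with a hypothetical sample row; the 1280Γ—720 frame size is an assumption, and we assume the normalized coordinates lie in [0, 1]:

```python
# Hypothetical metadata row using the documented field names.
row = {
    "start_time_seconds": 12.4,
    "end_time_seconds": 13.1,
    "center_x": 0.52,  # normalized, assumed in [0, 1]
    "center_y": 0.47,
}

# Fixation duration from the temporal boundaries
duration = row["end_time_seconds"] - row["start_time_seconds"]

# Convert normalized gaze center to pixels (frame size is an assumption)
px = int(row["center_x"] * 1280)
py = int(row["center_y"] * 720)
print(f"duration={duration:.2f}s center=({px}, {py})")
```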
## 📝 QA Format

Each QA JSON file contains:

```json
{
  "qa_pairs": [
    {
      "video_id": "OP01-R01-PastaSalad",
      "fixation_id": 5,
      "question": "What object is the camera wearer looking at?",
      "choices": ["fork", "plate", "pasta", "bowl"],
      "answer": "pasta",
      "task_type": "present_object_identification_easy"
    }
  ]
}
```

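Given this format, scoring a model reduces to comparing its chosen option against `answer` for each pair. A minimal sketch with the example above inlined; `predict` is a placeholder we made up, standing in for a real multimodal model call:

```python
import json

# Inline example in the documented QA format.
qa_json = json.loads("""
{
  "qa_pairs": [
    {"video_id": "OP01-R01-PastaSalad", "fixation_id": 5,
     "question": "What object is the camera wearer looking at?",
     "choices": ["fork", "plate", "pasta", "bowl"],
     "answer": "pasta",
     "task_type": "present_object_identification_easy"}
  ]
}
""")

def predict(question: str, choices: list[str]) -> str:
    # Placeholder: a real evaluation would query a multimodal LLM here.
    return choices[0]

# Exact-match accuracy over the multiple-choice pairs
correct = sum(
    predict(q["question"], q["choices"]) == q["answer"]
    for q in qa_json["qa_pairs"]
)
accuracy = correct / len(qa_json["qa_pairs"])
print(f"accuracy={accuracy:.2%}")
```

Swapping `predict` for an actual model is the only change needed to run a full evaluation over a task's JSON file.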
## 📄 License

This dataset is for research purposes only. Please cite our paper if you use this dataset.

## 🔗 Links

- **GitHub**: [https://github.com/daeunlee_adobe/StreamGaze](https://github.com/daeunlee_adobe/StreamGaze)
- **Paper**: [Coming soon]

## 📧 Contact

For questions or issues, please contact: daeunlee@adobe.com