Improve dataset card: add paper link, GitHub link, and metadata

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +48 -2
@@ -1,5 +1,51 @@
- # Dataset sample

  VRGaze dataset sample:

- ![Sample](./vr_dataset_sample2.png)
+ ---
+ task_categories:
+ - other
+ tags:
+ - computer-vision
+ - gaze-estimation
+ - virtual-reality
+ ---
+
+ # VRGaze: A Large-scale Dataset for VR Gaze Estimation
+
+ [Paper](https://huggingface.co/papers/2603.07832) | [Code](https://github.com/gazeshift3/gazeshift)
+
+ VRGaze is the first large-scale off-axis gaze estimation dataset for Virtual Reality (VR), introduced in the paper "GazeShift: Unsupervised Gaze Estimation and Dataset for VR".
+
+ ## Dataset Summary
+
+ The dataset comprises **2.1 million near-eye infrared images** collected from 68 participants. It is designed to address data scarcity in VR gaze research, focusing on the off-axis camera configurations typical of modern headsets.
+
+ - **Images:** 2.1 million near-eye infrared images.
+ - **Participants:** 68 individuals.
+ - **Hardware:** Off-axis camera configurations common in modern VR headsets.
+ - **Purpose:** Unsupervised gaze representation learning and few-shot calibration.
+
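Since the card targets few-shot, per-person calibration, experiments typically need per-participant splits. A minimal sketch of grouping image paths by participant ID (the `P01/000001.png` layout here is an assumed example for illustration, not the dataset's documented structure):

```python
from collections import defaultdict
from pathlib import PurePosixPath

def group_by_participant(paths):
    """Group image paths by participant ID, assuming a hypothetical
    '<participant>/<frame>.png' layout (the real layout may differ)."""
    groups = defaultdict(list)
    for p in paths:
        # The first path component is taken as the participant ID.
        groups[PurePosixPath(p).parts[0]].append(p)
    return dict(groups)

sample = ["P01/000001.png", "P01/000002.png", "P02/000001.png"]
print(group_by_participant(sample))
# {'P01': ['P01/000001.png', 'P01/000002.png'], 'P02': ['P02/000001.png']}
```

Holding out a few frames per participant from such groups is one straightforward way to set up a few-shot calibration split.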
+ ## Dataset Sample
 
  VRGaze dataset sample:

+ ![Sample](./vr_dataset_sample2.png)
+
+ ## Usage
+
+ To use this dataset with the official GazeShift implementation, follow these steps:
+
+ ### Installation
+
+ ```bash
+ git clone https://github.com/gazeshift3/gazeshift
+ cd gazeshift
+ pip install -r requirements.txt
+ ```
+
+ ### Training
+
+ To reproduce the experiments on the VRGaze dataset, run the provided training script (update the dataset and output paths inside the script first):
+
+ ```bash
+ bash Train.sh
+ ```
+
+ The model is expected to reach a mean angular error of approximately 1.84° after per-person calibration (around 400K training steps).
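For context on the 1.84° figure: mean angular error is conventionally the average angle between predicted and ground-truth 3D gaze directions, treated as unit vectors. A minimal sketch of that metric (an illustrative implementation, not taken from the GazeShift codebase):

```python
import numpy as np

def mean_angular_error_deg(pred, gt):
    """Mean angle in degrees between rows of two (N, 3) gaze-vector arrays."""
    # Normalize both sets of 3D gaze vectors to unit length.
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=1, keepdims=True)
    # Clip dot products to [-1, 1] to avoid NaNs from floating-point drift.
    cos = np.clip(np.sum(pred * gt, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

pred = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
gt = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(round(mean_angular_error_deg(pred, gt), 1))  # 22.5 (0° and 45° averaged)
```

The clip before `arccos` matters in practice: dot products of normalized vectors can land just outside [-1, 1] numerically, which would otherwise produce NaNs.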