Access Request: GlimpseML Focus100 Gaze Dataset
This dataset contains anonymised image and gaze data (PII removed). To adhere
to GlimpseML's data sharing agreements and ethical standards, access is highly
restricted.
Access Requirements:
- Academic Use Only: Your Hugging Face account must be registered with a valid academic institution email address. We will verify the email associated with your account. Requests from personal or corporate emails will be denied.
- Non-Commercial License: Data is released under the CC BY-NC-SA 4.0 license. Commercial use in any form is expressly prohibited.
- Data Security: Users must store the data securely to prevent unauthorised access and agree not to attempt to re-identify any individuals in the dataset.
- Manual Review: All requests are subject to manual review by GlimpseML to ensure compliance with this framework.
Focus100 dataset
Project page | CVPR Paper
Updates
- 03/2026: Dataset released to Hugging Face.
- 06/2026: Paper published at CVPR 2026.
Contents
Focus100
Focus100 is a new dataset designed to facilitate research on dynamic human attention in driving scenarios, particularly for the development and evaluation of gaze estimation models. Unlike datasets that provide only aggregated saliency maps, Focus100 provides high-resolution, time-stamped gaze sequences from 30 participants viewing 100 egocentric driving videos. This rich data enables the study of fine-grained temporal attention patterns and scanpath dynamics, crucial for understanding human behaviour in complex driving environments.
Dataset overview
The Focus100 dataset comprises anonymised driving videos with associated viewer gaze. We also provide object detections extracted from the original videos (before anonymisation):
- Videos: 100 60-second videos, 1280 x 640 pixels at a 10 Hz frame rate, anonymised to remove personally identifiable information and stored as mp4 files.
- Gaze: 890 60-second gaze sequences sampled at 60 Hz, given as the mean of the left and right eye gaze positions in image space (1280 x 640), with at least 7 subjects per video. Each sample is synchronised and associated with a video frame. (Note: as the gaze is sampled at 60 Hz and the video at 10 Hz, 6 gaze samples fall within the duration of each video frame.)
- Detections: YOLOv8x detections for each frame of the videos.
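The 60 Hz gaze to 10 Hz frame association described above can be sketched as follows. This is a minimal illustration, assuming gaze timestamps are in seconds starting at 0; check the released CSVs for the actual timestamp convention.

```python
# Sketch: map 60 Hz gaze timestamps to 10 Hz video frame indices.
# Assumption: timestamps are in seconds, starting at 0.
GAZE_HZ = 60
VIDEO_HZ = 10

def frame_index_for(timestamp: float, video_hz: int = VIDEO_HZ) -> int:
    """Each frame spans 1/video_hz seconds, so 6 gaze samples map to it."""
    return int(timestamp * video_hz)

# The six gaze samples in the first 1/10 s all map to frame 0:
samples = [i / GAZE_HZ for i in range(6)]
frames = [frame_index_for(t) for t in samples]
print(frames)  # [0, 0, 0, 0, 0, 0]
```

In practice the released CSVs already provide a `frame_index` column, so this mapping is only needed if you work from raw timestamps.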
Data collection & experiment design
To capture natural gaze behaviour in response to realistic driving scenarios, we simulated a driving experience mirroring the hazard perception component of the UK driving test.
- Task: Participants were asked to actively monitor 1-minute egocentric driving clips for potential hazards. When they perceived a developing hazard, they responded by pressing the CTRL key. This ensured active engagement while providing insights into natural gaze patterns during dynamic driving situations.
- Participants: We recruited 30 frequent drivers (14 male, 16 female) aged 21 to 60 (M=36.9, SD=6.7). All participants had held a valid driver's license for at least three years, had normal vision, and had driven within the past week.
- Hardware setup: Participants were seated 57 cm from a 24-inch Dell P2423 monitor. Gaze data was recorded at 60 Hz using a Tobii Pro Nano eye tracker attached to the lower edge of the monitor. The eye tracker provides an accuracy of 0.3Β° and a precision of 0.10Β° RMS under optimal conditions. Responses were logged using a standard Logitech K120 keyboard.
- Procedure: Following eye-tracker calibration and a practice session, participants viewed 30 unique clips. Each of the 100 clips in the dataset was shown to 7β12 randomly assigned participants to ensure a balanced representation of viewing patterns.
(Note: Due to technical issues during gaze recording, 10 recordings were omitted from the final dataset, resulting in the final 890 1-minute gaze recordings across the 30 subjects).
Privacy & anonymization
To protect the privacy of individuals in driving videos, all personally identifiable information (PII) has been carefully removed. All detected faces and license plates in the videos were automatically blurred to ensure that individuals and vehicles could not be identified. The gaze data provided in the dataset has been processed to remove any information that could potentially identify individual participants. All personal identifiers associated with the gaze data, such as participant names or ID numbers, gender, age, recording locations, and times have been removed.
Gaze data preprocessing
To ensure the gaze signal is temporally continuous and spatially aligned with the video frames, the raw tracker data underwent a minimal preprocessing pipeline:
- Raw gaze data from the Tobii tracker (recorded at an irregular 60 Hz) was temporally resampled to exactly 60 Hz along the duration of the sequence.
- At each time point, the mean of the left and right eye gaze measurements was calculated.
- The resulting gaze ratios were projected onto the image frame (provided in the release scaled to the 1280 x 640 video resolution).
- To maintain temporal continuity, invalid samples such as blinks (as detected by the tracker) were initially set to NaN. These missing values were then linearly interpolated across the time series, independently for the x and y coordinates, using NumPy's interp function. Note that despite this interpolation, the original gaze validity indicator is retained in the final data, allowing users to easily distinguish between true tracked measurements and interpolated points.
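The interpolation step above can be sketched with np.interp. This is a minimal reconstruction of the described behaviour, not the exact release pipeline; the function name and toy values are illustrative.

```python
import numpy as np

# Sketch: set invalid (blink) samples to NaN, then linearly interpolate
# each coordinate independently with np.interp, as described above.
def interpolate_gaze(coord: np.ndarray, valid: np.ndarray) -> np.ndarray:
    """coord: 1-D gaze coordinate (x or y); valid: boolean validity mask."""
    out = coord.astype(float).copy()
    out[~valid] = np.nan
    idx = np.arange(len(out))
    good = ~np.isnan(out)
    out[~good] = np.interp(idx[~good], idx[good], out[good])
    return out

gaze_x = np.array([100.0, 0.0, 0.0, 130.0])    # zeros where tracking failed
valid = np.array([True, False, False, True])
print(interpolate_gaze(gaze_x, valid))          # [100. 110. 120. 130.]
```

Since the `gaze_valid` column is preserved in the release, the same mask can be used to undo or re-do this step.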
Directory structure
The directory structure of the dataset release is shown below:
DATASET_ROOT
├── data
│   ├── <sequence_name_1>
│   │   ├── detections.json
│   │   ├── gaze
│   │   │   ├── gaze_subject-<subject_id_1>.csv
│   │   │   ├── ...
│   │   │   └── gaze_subject-<subject_id_n>.csv
│   │   └── video.mp4
│   ├── <sequence_name_2>
│   │   ...
│   └── <sequence_name_100>
│       ├── detections.json
│       ├── gaze
│       │   ├── gaze_subject-<subject_id_1>.csv
│       │   ├── ...
│       │   └── gaze_subject-<subject_id_n>.csv
│       └── video.mp4
├── meta
│   └── split.csv
└── README.md
Videos
The videos are generated using anonymised frames for each of the 100 sequences in mp4 format. All detected faces and license plates in each frame have been blurred to ensure individuals and vehicles could not be identified. Each video is stored in the subdirectory of the corresponding sequence: data/<sequence_name>/video.mp4.
Gaze
The gaze .csv files stored in data/<sequence_name>/gaze/ contain the following columns:
| Column Name | Type | Description |
|---|---|---|
| `timestamp` | Float | Timestamp relative to the start of the sequence. |
| `frame_index` | Integer | Index of the corresponding video frame in the sequence. |
| `sequence_name` | String | Sequence unique identifier (12 alphanumeric characters). |
| `gaze_x` | Float | Measured gaze position on the frame, in pixels on the x-axis (up to 1280). |
| `gaze_y` | Float | Measured gaze position on the frame, in pixels on the y-axis (up to 640). |
| `gaze_valid` | Boolean | 1 if gaze is valid (i.e., non-blink, good measurement confidence), 0 otherwise. |
| `ctrl_press` | Boolean | 1 if a CTRL key press started at this timestamp, 0 otherwise. |
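A gaze CSV with these columns can be loaded with pandas and filtered to the truly tracked samples (excluding interpolated blink gaps). The inline CSV below is a tiny made-up stand-in for a real `data/<sequence_name>/gaze/gaze_subject-<subject_id>.csv` file.

```python
import io
import pandas as pd

# Sketch: load one subject's gaze CSV and keep only tracked samples.
# The inline CSV mimics the documented columns; values are illustrative.
csv_text = """timestamp,frame_index,sequence_name,gaze_x,gaze_y,gaze_valid,ctrl_press
0.000,0,5e507f01006b,612.4,301.7,1,0
0.017,0,5e507f01006b,613.1,302.0,0,0
0.033,0,5e507f01006b,614.0,302.8,1,1
"""
df = pd.read_csv(io.StringIO(csv_text))

# Drop interpolated samples (gaze_valid == 0) before any fixation analysis.
tracked = df[df["gaze_valid"] == 1]
print(len(tracked))  # 2
```

For real data, replace the StringIO buffer with the path to a gaze CSV under `data/<sequence_name>/gaze/`.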
Detections
The detections.json file in each sequence folder contains YOLOv8x outputs for every processed video frame. The detector used the original frames (before anonymisation) as input. In order to avoid including detections of reflected objects on the bonnet of the ego-car, any bounding box whose top edge fell within the bottom 15% of the image (y-coordinate >= 0.85 in normalised coordinates) was excluded.
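The bonnet-reflection filter described above can be reconstructed from the normalised `xywhn` boxes. This is a sketch of the stated rule, not the release code; note that image y increases downwards, so a box's top edge is at `center_y - height / 2`.

```python
# Sketch of the bonnet-reflection filter: a detection is dropped when the
# top edge of its box lies in the bottom 15% of the image (y >= 0.85,
# normalised coordinates, y increasing downwards).
def keep_detection(xywhn, threshold: float = 0.85) -> bool:
    cx, cy, w, h = xywhn
    top_edge_y = cy - h / 2
    return top_edge_y < threshold

print(keep_detection([0.45, 0.61, 0.12, 0.22]))  # True  (top edge at 0.50)
print(keep_detection([0.50, 0.95, 0.20, 0.08]))  # False (top edge at 0.91)
```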
The data is structured as a list of frames, where each frame contains a list of detected object dictionaries. Each dictionary contains the following keys:
| Key | Type | Description |
|---|---|---|
| `label_id` | Integer | The class ID output by the pre-trained YOLOv8 model. |
| `score` | Float | The confidence score of the detection, ranging from 0.0 to 1.0. |
| `label` | String | The human-readable name of the detected class (e.g., "car", "person", "traffic light"). |
| `xywh` | List[Float] | The bounding box [center_x, center_y, width, height] in absolute image pixels. |
| `xywhn` | List[Float] | The bounding box [center_x, center_y, width, height] normalised between 0.0 and 1.0 relative to the image dimensions. |
Example JSON structure:
// Example structure (comments here are illustrative only; the actual file is plain JSON).
// The outermost list represents the video sequence, each inner list represents a single frame, and each dictionary represents a single detection within that frame.
[
[
{
"label_id": 2,
"score": 0.9387431740760803,
"label": "car",
"xywh": [
571.3340454101562,
390.24945068359375,
152.9697265625,
138.1370849609375
],
"xywhn": [
0.4463547170162201,
0.6097647547721863,
0.11950759589672089,
0.2158391922712326
]
},
// ...
]
// ...
]
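Iterating this nested frames-then-detections structure is straightforward with the standard json module. The inline string below reuses the example above with abbreviated values; real files would be loaded from `data/<sequence_name>/detections.json`.

```python
import json

# Sketch: walk the detections.json structure (list of frames, each a list
# of detection dicts). Values here are shortened from the example above.
frames = json.loads("""
[
  [
    {"label_id": 2, "score": 0.94, "label": "car",
     "xywh": [571.3, 390.2, 153.0, 138.1],
     "xywhn": [0.446, 0.610, 0.120, 0.216]}
  ],
  []
]
""")
for frame_idx, detections in enumerate(frames):
    for det in detections:
        print(frame_idx, det["label"], det["score"])
```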
Dataset split
The meta/ directory contains the split.csv file, which defines the train/valid/test dataset splits by assigning each sequence_name to a split.
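Reading the split file and grouping sequences might look like the sketch below. The column names `sequence_name` and `split` are assumptions based on the description above; check the header of `meta/split.csv` for the actual names.

```python
import io
import pandas as pd

# Sketch: list the training sequences from split.csv.
# Assumed columns: sequence_name, split (verify against the real file).
csv_text = """sequence_name,split
5e507f01006b,train
66f31c6b2521,valid
"""
splits = pd.read_csv(io.StringIO(csv_text))
train_sequences = splits.loc[splits["split"] == "train", "sequence_name"].tolist()
print(train_sequences)  # ['5e507f01006b']
```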
Dataset download
Please note that this dataset is restricted to non-commercial academic and research use. See the License section for full details regarding data privacy regulations and usage limitations.
How to access:
- Complete the access request form at the top of this Hugging Face repository page. You must be logged in with a valid academic email address.
- Wait for your request to be manually reviewed and approved by the GlimpseML team.
- Once approved, you can download the dataset directly from the "Files and versions" tab, or programmatically using the huggingface_hub library:
from huggingface_hub import snapshot_download
# Make sure you have logged in via `huggingface-cli login` or pass your token
local_dir = snapshot_download(
repo_id="GlimpseML/Focus100",
repo_type="dataset",
local_dir="/path/to/DATASET_ROOT/" #e.g. ./focus100/
)
Code
An example script for visualising the data included in the dataset is provided at code/visualise_data.py.
It can be run as follows:
python visualise_data.py --data_root /path/to/DATASET_ROOT/data/ --sequences 5e507f01006b 66f31c6b2521 --generate_video --output_dir /path/to/output/
The script will generate 60 FPS videos for the given sequences, with the detection bounding boxes and recorded gaze positions overlaid on the anonymised video frames. Note that generating videos requires ffmpeg to be installed on the system. If ffmpeg is not available, run the command without the --generate_video flag, adding --keep_frames to keep the generated frames stored as .jpg files.
A minimal conda environment needed to run the visualisation script can be created by running:
conda create -n focus100 python=3.10 -y
conda activate focus100
conda install -c conda-forge opencv pandas tqdm ffmpeg -y
Citation
If you find the Focus100 dataset useful in your research, please cite our work:
@inproceedings{palmer_beyond_scanpaths_cvpr2026,
title = {Beyond Scanpaths: Graph-Based Gaze Simulation in Dynamic Scenes},
author = {Palmer, Luke and Palasek, Petar and Abdelkawy, Hazem},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2026}
}
License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The dataset is restricted to research or academic use only and requires institutional registration for access. Users of the dataset are expected to adhere to ethical research practices and comply with all relevant data privacy regulations, including GDPR. Commercial use is strictly prohibited.
Contact
For inquiries and suggestions please email us at: focus100@glimpse.ml