AmbientEye Dataset
AmbientEye is a large-scale eye-tracking dataset (n = 2,606,225 frames) collected exclusively outdoors under natural ambient sunlight, without any active infrared illuminator. It is designed to benchmark pupil segmentation methods under the varying, uncontrolled NIR irradiance conditions that arise in real-world outdoor use.
To obtain high-quality pupil annotations, a single point is manually placed within the pupil region of the first frame of each session and provided as a prompt to SAM2. In a second stage, human annotators review every frame of the resulting segmentation. When a predicted mask does not align well with the pupil boundary, annotators correct it by manually marking pupil boundary points and fitting an ellipse to them.
In total, pupil annotations were obtained for 2,518,693 out of 2,606,225 frames (96.6%).
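For illustration, the ellipse-fit correction step can be reproduced with OpenCV's `cv2.fitEllipse`, which needs at least five boundary points. This is a minimal sketch, not the annotators' exact tooling, and the point coordinates below are hypothetical:

```python
import numpy as np
import cv2

# Hypothetical pupil-boundary points marked by an annotator, (x, y) in pixels.
boundary_pts = np.array(
    [[190, 180], [215, 185], [228, 205], [220, 228],
     [195, 235], [175, 220], [172, 198]], dtype=np.float32
)

# cv2.fitEllipse returns ((center_x, center_y), (major_axis, minor_axis), angle_deg).
(cx, cy), (major, minor), angle = cv2.fitEllipse(boundary_pts)
print(f"pupil center=({cx:.1f}, {cy:.1f}), axes=({major:.1f}, {minor:.1f}), angle={angle:.1f} deg")
```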
Overview
| Property | Value |
|---|---|
| Participants | 35 |
| Sessions | 70 (2 per participant) |
| Total frames | 2,606,225 |
| Conditions | sunfacing, sunoccluded |
| Camera resolution | 400 × 400 px (monochrome IR) |
| Frame rate | 120 fps |
| Cameras per session | 2 (medial eye0, lateral eye1) |
| Calibration trials | 80 per session |
| IR illumination | None — ambient sunlight only |
| Recording device | Xreal Air 2 Glasses (with OV6211 IR cameras) |
| Recording period | April 2026 |
Conditions
Each participant completed two sessions in different lighting orientations:
| Condition | Description | IR irradiance, mean (range) |
|---|---|---|
| `sunfacing` | Participant faces toward the sun (higher NIR) | 220 µW/cm² (9.8 – 412.6) |
| `sunoccluded` | Participant faces away from the sun (lower NIR) | 66.5 µW/cm² (5.9 – 112.3) |
The large irradiance range across participants reflects real-world variability in sun angle, cloud cover, and participant orientation.
Hardware
Recording was performed using Xreal Air 2 Glasses equipped with two embedded OV6211 monochrome IR cameras:
- `eye0` — medial camera (nose side)
- `eye1` — lateral camera (ear side)
Both cameras operate at 400 × 400 px and 120 fps. No active IR LED is present; all NIR irradiance originates entirely from ambient sunlight.
Participants
35 participants (19 male, 16 female).
| Ethnicity | Count |
|---|---|
| Asian | 20 |
| White | 8 |
| Black or African American | 4 |
| Middle Eastern or North African | 1 |
| Hispanic, Latino, or Spanish | 1 |
| Eastern African | 1 |
Three participants wore makeup. Full demographics are in participant.csv.
Dataset Structure
```
AmbientEye/
├── video/                          # Raw IR video files
│   └── {PID}_{condition}/          # e.g. P1_sunfacing, P1_sunoccluded
│       ├── eye0.mp4                # Medial camera (nose side), 400×400 px, 120 fps
│       └── eye1.mp4                # Lateral camera (ear side), 400×400 px, 120 fps
├── frames/                         # Extracted JPEG frames — NOT included; generate with extract_frames.py
│   └── {PID}_{condition}/
│       ├── eye0/
│       │   ├── frame_00000.jpg
│       │   └── ...
│       └── eye1/
│           ├── frame_00000.jpg
│           └── ...
├── jsons/                          # Human-reviewed SAM2 segmentation contours
│   └── {PID}_{condition}/
│       ├── eye0_contours_reviewed.json
│       └── eye1_contours_reviewed.json
├── meta/                           # Session timing and calibration metadata
│   └── {PID}_{condition}/
│       ├── calibration_log.json    # 80-trial stimulus targets + timestamps
│       └── info.player.json        # Recording timing (duration, start times)
├── sample_contour/                 # Sample videos with contour overlays
│   └── {PID}_{condition}/
│       ├── eye0.mp4
│       └── eye1.mp4
├── participant.csv                 # Participant demographics (N=35)
├── IRdata.csv                      # Per-session NIR irradiance measurements
├── session_recording_info.csv      # Per-session date, time, weather, temperature
├── solar_position_reference.md     # Solar azimuth/altitude tables per recording date
└── extract_frames.py               # Script to extract frames from video into frames/
```
File Descriptions
video/{PID}_{condition}/eye0.mp4 / eye1.mp4
400×400 px monochrome IR video at 120 fps from two OV6211 cameras embedded in the Xreal Air 2 Glasses.
- `eye0` — medial camera (nose side)
- `eye1` — lateral camera (ear side)
Captured under natural ambient illumination only. No active IR LED is used.
frames/{PID}_{condition}/eye{0,1}/frame_XXXXX.jpg
Individual frames extracted from the corresponding eye0.mp4 / eye1.mp4 video, named with a zero-padded five-digit frame index. Provided for workflows that do not require video decoding.
This directory is not included. Run extract_frames.py to generate it (see Frame Extraction below).
jsons/{PID}_{condition}/eye0_contours_reviewed.json / eye1_contours_reviewed.json
SAM2 pupil segmentation contours, reviewed and corrected by human annotators. Each file contains:
```json
{
  "video": "eye0.mp4",
  "fps": 120.0,
  "width": 400,
  "height": 400,
  "frames": [
    {
      "frame": 0,
      "contours": [
        { "points": [[x, y], ...], "area": 1234.5 }
      ]
    },
    ...
  ]
}
```
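Each contour is a polygon in image coordinates, so it can be rasterized into a binary pupil mask. A minimal sketch with OpenCV, using the file layout above:

```python
import json
import numpy as np
import cv2

with open("jsons/P1_sunfacing/eye0_contours_reviewed.json") as f:
    data = json.load(f)

h, w = data["height"], data["width"]
frame_entry = data["frames"][0]

# Rasterize every contour of this frame into a single binary mask.
mask = np.zeros((h, w), dtype=np.uint8)
for c in frame_entry["contours"]:
    pts = np.asarray(c["points"], dtype=np.int32)  # (N, 2) polygon vertices
    cv2.fillPoly(mask, [pts], 255)

print("frame", frame_entry["frame"], "pupil area (px):", int((mask > 0).sum()))
```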
meta/{PID}_{condition}/calibration_log.json
80 stimulus trials per session. Each entry records the screen position (x, y) of the shrinking-circle stimulus and the Pupil-synchronized timestamp of the participant's keypress response. Used for aligning eye video frames to known viewing directions.
```json
{
  "trials": [
    { "trial": 1, "pos": [683, 606], "click_ts": 165097.33 },
    ...
  ]
}
```
meta/{PID}_{condition}/info.player.json
Recording timing fields only:
```json
{
  "duration_s": 171.79,
  "start_time_synced_s": 165082.55,
  "start_time_system_s": 1776402507.93
}
```
- `start_time_synced_s` — Pupil-synchronized clock at recording start
- `start_time_system_s` — Unix timestamp at recording start
- `duration_s` — total recording duration in seconds
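Together, calibration_log.json and info.player.json let you map each keypress to an approximate video frame. A minimal sketch, assuming frames are uniformly spaced at the nominal 120 fps from `start_time_synced_s` (clock drift or dropped frames would need finer handling):

```python
import json

session = "P1_sunfacing"

with open(f"meta/{session}/info.player.json") as f:
    info = json.load(f)
with open(f"meta/{session}/calibration_log.json") as f:
    log = json.load(f)

fps = 120.0
t0 = info["start_time_synced_s"]  # Pupil-synchronized clock at recording start

for trial in log["trials"]:
    # Frame index of the keypress, assuming uniform frame spacing from t0.
    frame_idx = round((trial["click_ts"] - t0) * fps)
    # Convert the synced timestamp to Unix time via the shared start instants.
    unix_ts = trial["click_ts"] - t0 + info["start_time_system_s"]
    print(trial["trial"], trial["pos"], frame_idx, f"{unix_ts:.2f}")
```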
participant.csv
Participant demographics: ID (P1–P35), gender, age, ethnicity, country, makeup.
IRdata.csv
Measured NIR irradiance (µW/cm²) per participant for each condition (sunfacing_ir, sunoccluded_ir), confirming ambient light levels at recording time.
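The per-condition irradiance summary in the Conditions table can be recomputed directly from this file. A minimal sketch with pandas, using the `sunfacing_ir` and `sunoccluded_ir` columns described above:

```python
import pandas as pd

ir = pd.read_csv("IRdata.csv")

# Mean and range across participants, per condition.
for col in ["sunfacing_ir", "sunoccluded_ir"]:
    s = ir[col]
    print(f"{col}: mean={s.mean():.1f} µW/cm², range=({s.min():.1f} – {s.max():.1f})")
```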
session_recording_info.csv
Per-session metadata: recording date, anonymized start/end times, duration (minutes), weather condition, and temperature (°C / °F).
solar_position_reference.md
Hourly solar azimuth and altitude tables for each recording date, with per-session estimates linearly interpolated to each session's start time.
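The per-session estimates follow from plain linear interpolation between the two bracketing hourly entries. A minimal sketch (the hourly altitude values below are hypothetical, not taken from the reference tables):

```python
# Hypothetical hourly solar altitude (degrees) for one recording date:
# hour-of-day -> altitude.
hourly = {10: 41.2, 11: 49.5}

def interp_altitude(hour_float: float) -> float:
    """Linearly interpolate altitude at a fractional hour between table entries."""
    h0 = int(hour_float)
    frac = hour_float - h0
    return hourly[h0] + frac * (hourly[h0 + 1] - hourly[h0])

# A session starting at 10:24 corresponds to hour 10.4.
print(f"{interp_altitude(10.4):.1f} deg")
```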
sample_contour/{PID}_{condition}/eye{0,1}.mp4
Short sample videos with SAM2 contour overlays rendered on the raw IR frames, provided for quick visual quality inspection.
extract_frames.py
Script to extract all video frames from video/ into frames/ as JPEG files.
Frame Extraction
The frames/ directory is not distributed with the dataset. To generate it:
```bash
pip install opencv-python
python extract_frames.py
```
The script reads from VIDEO_DIR and writes to FRAMES_DIR (imported from a config module). Create a config.py in the same directory with paths appropriate for your environment:
```python
# config.py
from pathlib import Path

VIDEO_DIR = Path("video")
FRAMES_DIR = Path("frames")
```
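For reference, the extraction amounts to a straightforward decode-and-write loop. This is a minimal sketch of the kind of loop extract_frames.py performs for one video (not the script itself), assuming the naming scheme above:

```python
import cv2

from config import VIDEO_DIR, FRAMES_DIR  # paths defined in your config.py

video_path = VIDEO_DIR / "P1_sunfacing" / "eye0.mp4"
out_dir = FRAMES_DIR / "P1_sunfacing" / "eye0"
out_dir.mkdir(parents=True, exist_ok=True)

cap = cv2.VideoCapture(str(video_path))
idx = 0
while True:
    ret, img = cap.read()
    if not ret:
        break
    # frame_00000.jpg, frame_00001.jpg, ... (zero-padded five-digit index)
    cv2.imwrite(str(out_dir / f"frame_{idx:05d}.jpg"), img)
    idx += 1
cap.release()
```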
Usage
```python
import json

import cv2

session = "P1_sunfacing"

# Load the reviewed pupil contours for the medial (eye0) camera.
with open(f"jsons/{session}/eye0_contours_reviewed.json") as f:
    contours = json.load(f)

# Open the matching raw IR video.
cap = cv2.VideoCapture(f"video/{session}/eye0.mp4")
for frame_data in contours["frames"]:
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_data["frame"])
    ret, img = cap.read()
    if not ret:
        break
    for c in frame_data["contours"]:
        pts = [[int(p[0]), int(p[1])] for p in c["points"]]
        # draw / process contour ...
cap.release()
```
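To visualize a contour on the decoded frame (similar to the sample_contour/ overlays), the points can be drawn directly with `cv2.polylines`. A minimal continuation of the inner loop above, reusing its `img`, `pts`, and `frame_data` variables:

```python
import numpy as np
import cv2

# Inside the loop above, after building `pts` for one contour:
poly = np.asarray(pts, dtype=np.int32).reshape(-1, 1, 2)
# cap.read() returns 3-channel frames, so drawing in color works directly.
overlay = img.copy()
cv2.polylines(overlay, [poly], isClosed=True, color=(0, 255, 0), thickness=1)
cv2.imwrite(f"overlay_{frame_data['frame']:05d}.png", overlay)
```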
License
This dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
You are free to share and adapt the material for non-commercial purposes, provided appropriate credit is given.
Citation
If you use AmbientEye in your research, please cite:
```bibtex
@dataset{ambienteye2026,
  title   = {AmbientEye: A Dataset for Pupil Segmentation under Natural Ambient Infrared Illumination},
  year    = {2026},
  url     = {https://huggingface.co/datasets/migHug/AmbientEye},
  license = {CC BY-NC 4.0}
}
```