# EyeAssist: Radiologist Eye-Tracking Datasets and Evaluation Protocols
EyeAssist is a collection of two multimodal medical imaging datasets paired with radiologist eye-tracking (gaze) data, together with reference evaluation code that demonstrates how the datasets are used in benchmarking studies.
The release contains two datasets:
- EyeAssist-PE — chest CT volumes for lung-cancer prognosis, paired with gaze recordings from 7 radiologists across 2 reading sessions.
- EyeAssist-Neo — neonatal chest X-rays paired with gaze recordings under multiple experimental conditions (expert vs. generalist; with vs. without clinical context).
Both are designed to support research on gaze-guided deep learning, explainability in medical AI, and modeling of radiologist visual behavior.
## Repository layout

```
EyeAssist/
├── Dataset/
│   ├── EyeAssist-PE/                  # chest CT + gaze (~4.9 GB)
│   │   ├── CT/                        # 40 CT volumes (NIfTI)
│   │   ├── Gaze/                      # gaze recordings, 7 readers × 2 sessions
│   │   ├── Saliency/                  # gaze-derived saliency maps + figures
│   │   ├── Clinical context.docx
│   │   ├── Expert Prognosis Decision.xlsx
│   │   └── README.md
│   └── EyeAssist-Neo/                 # neonatal X-ray + gaze (~462 MB)
│       ├── Xrays/                     # X-ray images (JPEG)
│       ├── Gaze&Saliency/             # gaze recordings under different conditions
│       └── clinical context.csv
└── Evaluation Protocol Code/
    ├── protocol1/                     # tabular feature baselines (per dataset)
    ├── protocol2/                     # deep saliency / transfer experiments
    └── protocol3/                     # gaze-weighted feature pooling (PE)
```
Each subdirectory contains its own README with dataset-specific schema and column definitions.
## Datasets at a glance

| | EyeAssist-PE | EyeAssist-Neo |
|---|---|---|
| Modality | Chest CT (NIfTI) | Chest X-ray (JPEG) |
| Cases | 40 (20 survival / 20 death; 20 central / 20 peripheral) | 100+ |
| Readers | 7 radiologists (R1–R7) | Experts, generalists, residents |
| Sessions | 2 reading sessions | Session 2 with multiple conditions |
| Conditions | Blind / Context | Expert vs Generalist; With vs Without Clinical Context |
| Gaze format | per-frame CSV (Trial.csv) | per-frame CSV (fixations.csv) |
| Labels | Survival outcome + expert prognosis | Diagnosis, gestational age, clinical context |
## Quick start

```python
import nibabel as nib
import pandas as pd

# EyeAssist-PE: load a CT volume + radiologist gaze
ct = nib.load("Dataset/EyeAssist-PE/CT/ca_42_1_diecentral.nii.gz").get_fdata()
gaze = pd.read_csv("Dataset/EyeAssist-PE/Gaze/Session 1/R1/Trial.csv")

# EyeAssist-Neo: load X-ray fixations
fix = pd.read_csv(
    "Dataset/EyeAssist-Neo/Gaze&Saliency/Session2 expert vs generalist/Expert/expert1/csv/fixations.csv"
)
```
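The Saliency/ folders hold gaze-derived saliency maps. Conceptually, such a map is a smoothed fixation-density image; a minimal sketch of how one could be built from raw fixation coordinates follows (the function name, column interpretation, and smoothing sigma are illustrative assumptions, not the dataset's actual pipeline — see each dataset's README for the real schema):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixations_to_saliency(xs, ys, shape, sigma=25.0):
    """Accumulate fixation points into a density map, then smooth it.

    xs, ys : fixation coordinates in pixels
    shape  : (H, W) of the target image
    sigma  : Gaussian smoothing width in pixels (illustrative default)
    """
    heat = np.zeros(shape, dtype=np.float64)
    for x, y in zip(xs, ys):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < shape[0] and 0 <= xi < shape[1]:
            heat[yi, xi] += 1.0
    heat = gaussian_filter(heat, sigma=sigma)
    if heat.max() > 0:
        heat /= heat.max()  # normalize to [0, 1]
    return heat
```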
## Evaluation Protocols
Three reference protocols are included under Evaluation Protocol Code/:
- Protocol 1 — tabular gaze-feature baselines: extracts summary gaze statistics (fixation count, dwell time, entropy, ROI revisit rate, etc.) and trains classical classifiers. Runs on both datasets.
- Protocol 2 — deep learning experiments: saliency-map prediction with the EML-NET backbone (vendored under protocol2/EyeAssist-PE/models/EML-NET-Saliency/), and transfer-learning experiments on the Neo dataset.
- Protocol 3 — gaze-weighted feature pooling for PE: extracts CT features with U-Net / nnU-Net / SwinUNETR backbones and pools them by per-reader gaze density, comparing no_gaze / blind / context conditions.
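The Protocol 1 feature-extraction step can be sketched roughly as below. The column names (`x`, `y`, `duration`) are assumptions for illustration only; the actual gaze schema and column definitions are documented in each dataset's README:

```python
import numpy as np
import pandas as pd

def gaze_summary_features(fix: pd.DataFrame) -> dict:
    """Summary gaze statistics over one reading trial.

    Assumed columns (hypothetical): x, y in pixels; duration in ms.
    """
    feats = {
        "fixation_count": len(fix),
        "total_dwell_ms": float(fix["duration"].sum()),
        "mean_fix_ms": float(fix["duration"].mean()),
    }
    # Spatial gaze entropy over a coarse grid of fixation positions
    hist, _, _ = np.histogram2d(fix["x"], fix["y"], bins=8)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    feats["gaze_entropy"] = float(-(p * np.log2(p)).sum())
    return feats
```

Rows of such feature dicts can then be assembled into a table and fed to classical classifiers, as Protocol 1 does.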
Each protocol directory has its own README with run instructions and config defaults.
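The gaze-weighted pooling in Protocol 3 amounts to a spatial average of backbone features weighted by where readers looked. A minimal sketch, assuming a precomputed per-volume gaze-density map (function name and shapes are hypothetical; see protocol3/ for the actual implementation):

```python
import numpy as np

def gaze_weighted_pool(features: np.ndarray, gaze_density: np.ndarray) -> np.ndarray:
    """Pool a CT feature map with a gaze-density map.

    features     : (C, D, H, W) backbone activations for one volume
    gaze_density : (D, H, W) nonnegative gaze saliency (e.g. smoothed fixations)
    Returns a (C,) descriptor. Uniform weights recover plain average
    pooling, i.e. the no_gaze condition.
    """
    w = gaze_density / gaze_density.sum()
    return (features * w[None]).reshape(features.shape[0], -1).sum(axis=1)
```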
## Setup

```bash
pip install numpy pandas nibabel scipy scikit-learn scikit-image matplotlib torch torchvision pillow monai
```
Backbone-specific models (Models Genesis pretrained weights, MONAI SwinUNETR weights) are not bundled — see Evaluation Protocol Code/protocol3/main.py for the expected paths and download links.
## File formats

| Type | Format | Tools |
|---|---|---|
| CT volumes | .nii, .nii.gz | nibabel, SimpleITK |
| X-ray images | .jpg, .jpeg | PIL, cv2 |
| Gaze data | .csv | pandas |
| Clinical context | .csv, .docx | pandas, python-docx |
| Prognosis labels | .xlsx | pandas (openpyxl) |
| Saliency maps | .png, .npy | PIL, numpy |
| Pretrained weights | .pt, .pth | torch |
## Reader anonymization
All radiologists, experts, and readers are referred to by anonymous identifiers (e.g. R1–R7, expert1–expert5, generalist1–generalist5, reader1–reader3). No personally identifying information is included.
## License
Released under CC-BY-4.0. See LICENSE for details.
## Acknowledgements
The Protocol 2 saliency experiments use a vendored copy of the EML-NET-Saliency codebase by Sen Jia and Neil D. B. Bruce; see the upstream repository for license and citation.