---
license: cc-by-nc-sa-4.0
extra_gated_fields:
  Name: text
  Email: text
  Country: country
  Organization or Affiliation: text
  What do you intend to use the model for?:
    type: select
    options:
      - Research
      - Personal use
      - Creative Professional
      - Startup
      - Enterprise
---

# Dataset Card: GazeIntent (RadSeq, RadExplore, RadHybrid)

**Dataset Name**: `phamtrongthang/GazeIntent`
**Repository**: [UARK-AICV/RadGazeIntent](https://github.com/UARK-AICV/RadGazeIntent)
**License**: CC BY-NC-SA 4.0

---
## 1. Dataset Summary

GazeIntent is the first intention-labeled eye-tracking dataset for radiological interpretation, capturing **radiologists' diagnostic intentions** during chest X-ray analysis. It includes:

- 3,562 chest X-ray samples with expert radiologist eye-tracking data
- Fine-grained intention labels for each fixation point
- Three distinct intention modeling paradigms representing different visual search behaviors
- Multi-label annotations for 13 radiological findings

This dataset supports research in intention interpretation, gaze-informed diagnosis, cognitive modeling, and explainable AI in medical imaging.

> 🏅 This work was **accepted at ACM MM 2025**, a top-tier international conference on multimedia research.

---
## 2. Dataset Structure

| Attribute             | Description |
|-----------------------|-------------|
| **Total Samples**     | 3,562 chest X-rays |
| **Sources**           | EGD (1,079) + REFLACX (2,483) |
| **Modality**          | Chest X-ray images |
| **Gaze Data**         | 2D coordinates + fixation duration + intention labels |
| **Intention Classes** | 13 radiological findings |
| **Radiologists**      | Multiple expert radiologists |
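As a concrete picture of the gaze data described above, the toy record below sketches one plausible per-sample layout. Every field name and value here is an illustrative assumption, not the dataset's actual schema.

```python
# Toy illustration of one gaze sample; every field name here is an
# assumption for illustration, not the dataset's actual schema.
sample = {
    "image_id": "example_cxr_001",  # hypothetical identifier
    "fixations": [
        # (x, y) in image coordinates, duration in seconds; "intention"
        # holds indices into the 13-finding label list
        {"x": 312, "y": 240, "duration": 0.35, "intention": [1]},
        {"x": 150, "y": 410, "duration": 0.22, "intention": [8]},
        {"x": 305, "y": 250, "duration": 0.41, "intention": [1, 3]},
    ],
}

# Derived quantities: total dwell time and per-finding fixation counts.
total_dwell = sum(f["duration"] for f in sample["fixations"])
per_finding = {}
for f in sample["fixations"]:
    for label in f["intention"]:
        per_finding[label] = per_finding.get(label, 0) + 1

print(round(total_dwell, 2))  # 0.98
print(per_finding)            # {1: 2, 8: 1, 3: 1}
```

The authoritative column names should be read from the dataset files themselves (e.g. via `datasets.load_dataset("phamtrongthang/GazeIntent")` once gated access is granted).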
---
## 3. Three Intention Paradigms

**RadSeq (Systematic Sequential Search)**
- Models radiologists following a structured diagnostic checklist
- One finding examined at a time, in sequential order
- Reflects systematic, methodical visual search patterns

**RadExplore (Uncertainty-driven Exploration)**
- Captures opportunistic visual search behavior
- Radiologists consider multiple findings simultaneously
- Represents exploratory, uncertainty-driven attention

**RadHybrid (Hybrid Pattern)**
- Combines initial broad scanning with focused examination
- Two-phase approach: overview → targeted search
- Reflects real-world diagnostic behavior patterns
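The contrast between the first two paradigms can be sketched as toy labeling rules. This is a deliberate simplification: the function names and phase logic below are hypothetical, not the actual RadGazeIntent labeling pipeline.

```python
# Toy labeling rules contrasting RadSeq and RadExplore assumptions.
# Illustrative sketch only; function names and phase logic are
# hypothetical, not the actual RadGazeIntent pipeline.

def radseq_labels(n_fixations, checklist):
    """Sequential assumption: fixations split into consecutive phases,
    exactly one checklist finding examined per phase."""
    phase_len = max(1, n_fixations // len(checklist))
    return [
        [checklist[min(i // phase_len, len(checklist) - 1)]]
        for i in range(n_fixations)
    ]

def radexplore_labels(n_fixations, candidates):
    """Exploratory assumption: every fixation may serve several
    candidate findings at once."""
    return [list(candidates) for _ in range(n_fixations)]

checklist = ["Cardiomegaly", "Pleural Effusion", "Pneumothorax"]
seq = radseq_labels(6, checklist)      # one finding at a time, in order
exp = radexplore_labels(2, checklist)  # all candidates at every fixation
```

RadHybrid, under the same toy view, would apply the exploratory rule to an initial overview phase and the sequential rule afterwards.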
---
## 4. Intended Uses

- Radiologist intention interpretation and prediction
- Gaze-informed medical diagnosis systems
- Cognitive modeling of expert visual reasoning
- Medical education and training assessment
- Explainable AI for radiology applications
- Human-AI collaboration in medical imaging

---
## 5. Tasks and Benchmarks

**Primary Task**: Fixation-based Intention Classification
- Baseline: **RadGazeIntent** (transformer-based architecture)
- Input: Fixation sequences + chest X-ray images
- Output: Intention confidence scores for 13 findings

**Evaluation Metrics:**
- **Classification**: Accuracy, F1-score, Precision, Recall
- **Multi-label**: Per-class and macro-averaged metrics

**Findings Covered:**
Atelectasis, Cardiomegaly, Consolidation, Edema, Enlarged Cardiomediastinum, Fracture, Lung Lesion, Lung Opacity, Pleural Effusion, Pleural Other, Pneumonia, Pneumothorax, Support Devices
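For reference, per-class and macro-averaged F1 over multi-hot labels can be computed as below. The inputs are toy data, not results from this benchmark; in practice the vectors would have 13 entries, one per finding.

```python
# Minimal per-class and macro-averaged F1 for multi-label predictions,
# shown on toy 3-class data (not results from this benchmark).

def f1_per_class(y_true, y_pred, n_classes):
    """y_true / y_pred: lists of multi-hot vectors of length n_classes."""
    scores = []
    for c in range(n_classes):
        tp = sum(t[c] and p[c] for t, p in zip(y_true, y_pred))
        fp = sum((not t[c]) and p[c] for t, p in zip(y_true, y_pred))
        fn = sum(t[c] and (not p[c]) for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return scores

y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 0], [1, 0, 0]]

per_class = f1_per_class(y_true, y_pred, 3)   # [1.0, 0.666..., 0.0]
macro_f1 = sum(per_class) / len(per_class)    # 0.555...
```

Macro averaging weights each finding equally, which matters here because the 13 findings are unlikely to be balanced across samples.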
---
## 6. Data Availability

The processed intention-labeled datasets are publicly available via Hugging Face under the CC BY-NC-SA 4.0 license.

**Access Requirements**: Users must agree to share contact information and accept the license terms to access the dataset files.

---
## 7. Technical Details

**Data Processing**: The three datasets are derived from existing eye-tracking sources (EGD, REFLACX) using different intention modeling assumptions:

- **Uncertainty Filtering**: Assigns labels based on temporal alignment between fixations and radiologist transcripts
- **Sequential Constraints**: Applies the GazeSearch methodology for systematic search modeling
- **Hybrid Integration**: Combines an initial scanning phase with focused examination periods
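The temporal-alignment idea behind the first bullet can be illustrated with a toy function: a fixation inherits every finding whose spoken mention in the transcript overlaps its timestamp. The intervals, labels, and function name are illustrative assumptions, not the actual processing code.

```python
# Toy sketch of temporal alignment: a fixation inherits every finding
# whose spoken-mention interval in the transcript contains its
# timestamp. Intervals and labels are illustrative, not real data.

def align_fixations(fixation_times, mentions):
    """mentions: list of (start_s, end_s, finding) transcript spans."""
    return [
        sorted(f for s, e, f in mentions if s <= t <= e)
        for t in fixation_times
    ]

mentions = [
    (0.0, 2.0, "Cardiomegaly"),
    (1.5, 4.0, "Pleural Effusion"),
]
fixation_labels = align_fixations([0.5, 1.8, 3.0, 5.0], mentions)
# [['Cardiomegaly'], ['Cardiomegaly', 'Pleural Effusion'],
#  ['Pleural Effusion'], []]
```

Note that overlapping mentions naturally yield multi-label fixations, and fixations outside any mention stay unlabeled, which is consistent with the multi-label setup described above.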
---
## 8. Citation

Please cite this dataset using the following BibTeX entry:

```bibtex
@article{pham2025interpreting,
  title={Interpreting Radiologist's Intention from Eye Movements in Chest X-ray Diagnosis},
  author={Pham, Trong-Thang and Nguyen, Anh and Deng, Zhigang and Wu, Carol C and Nguyen, Hien and Le, Ngan},
  journal={arXiv preprint arXiv:2507.12461},
  year={2025}
}
```

---
## 9. Acknowledgments

This work is supported by:
- National Science Foundation (NSF) Award No. OIA-1946391 and NSF 2223793 (EFRI BRAID)
- National Institutes of Health (NIH) 1R01CA277739-01

This dataset builds upon the EGD and REFLACX eye-tracking datasets.

**Contact**: Trong Thang Pham (tp030@uark.edu)