---
license: cc-by-nc-sa-4.0
gated: true
extra_gated_heading: "Acknowledge license to access the repository"
extra_gated_button_content: "Acknowledge license"
viewer: false
---
# DriverGaze360: Omnidirectional Driver Attention with Object-Level Guidance
<p align="center">
<a href="https://dfki-av.github.io/drivergaze360/" target="_blank"><img src="https://img.shields.io/badge/Project-Page-blue"></a>
<a href="https://arxiv.org/abs/2512.14266" target="_blank"><img src="https://img.shields.io/badge/arXiv-2512.14266-b31b1b"></a>
<a href="https://huggingface.co/datasets/dfki-av/drivergaze360" target="_blank"><img src="https://img.shields.io/badge/Hugging%20Face-Dataset-FFD21E"></a>
<a href="https://github.com/dfki-av/drivergaze360" target="_blank"><img src="https://img.shields.io/badge/GitHub-%23121011.svg?logo=github&logoColor=white"></a>
<a href="https://cvpr.thecvf.com/" target="_blank"><img src="https://img.shields.io/badge/Conference-CVPR%202026-4b44ce"></a>
<a><img src="https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-blue"></a>
</p>
DriverGaze360 is a large-scale, 360-degree field-of-view driver attention dataset containing approximately 1 million gaze-labeled frames collected from 19 human drivers, enabling comprehensive omnidirectional modeling of driver gaze behavior.
<video src="https://github.com/dfki-av/drivergaze360/blob/gh-pages/static/videos/supplementary_video.mp4?raw=true" controls preload muted height="720"></video>
## Dataset structure
Example folder structure:
```
C011                        # Participant
└── 001                     # Recording session
    └── 001                 # Iteration number
        ├── is.tar          # Instance segmentation
        ├── dt.mp4          # Depth maps
        ├── rgb.mp4         # RGB output
        ├── saliency.mp4    # Saliency maps
        └── sim_gaze_df.csv # Simulator data on ego-car and eye-gaze positions
```
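As a sketch of how this layout can be navigated programmatically (the helper name and the dataset root below are our own illustration; only the file names come from the documented structure):

```python
from pathlib import Path

def iteration_files(root: str, participant: str, session: str, iteration: str) -> dict:
    """Return the expected file paths for one recording iteration.

    Only the file names follow the documented layout; decoding the videos
    (e.g. with OpenCV) and parsing the CSV are left to the user's tooling.
    """
    base = Path(root) / participant / session / iteration
    return {
        "instance_segmentation": base / "is.tar",      # instance segmentation archive
        "depth": base / "dt.mp4",                      # depth maps
        "rgb": base / "rgb.mp4",                       # RGB output
        "saliency": base / "saliency.mp4",             # saliency maps
        "sim_gaze": base / "sim_gaze_df.csv",          # ego-car and eye-gaze data
    }
```

For example, `iteration_files("drivergaze360", "C011", "001", "001")["rgb"]` yields the path to the RGB video of the first iteration shown above.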
## Dataset timings
All videos are saved at 30 FPS, and all streams within an iteration contain exactly the same number of frames.
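Because every stream runs at 30 FPS with the same frame count, a single frame index addresses corresponding frames across `rgb.mp4`, `dt.mp4`, and `saliency.mp4`, and maps directly to a timestamp. A minimal sketch (the helper names are our own):

```python
FPS = 30  # all videos in the dataset are saved at 30 FPS

def frame_to_seconds(frame_idx: int) -> float:
    # 0-based frame k is displayed at k / FPS seconds into the recording.
    return frame_idx / FPS

def seconds_to_frame(t: float) -> int:
    # Nearest frame index for a timestamp t (in seconds).
    return round(t * FPS)
```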
## Known Issues
- Some recordings contain pedestrians rendered under the ground or floating in the air (an issue with CARLA replay).
- Ghost pedestrians: most such occurrences have been addressed; if you find any, please report them to us.
## Citation
If you find this work useful in your research, please consider citing:
```bibtex
@article{govil_2025_cvpr,
  title         = {DriverGaze360: Omnidirectional Driver Attention with Object-Level Guidance},
  author        = {Shreedhar Govil and Didier Stricker and Jason Rambach},
  year          = {2025},
  eprint        = {2512.14266},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV},
  url           = {https://arxiv.org/abs/2512.14266}
}
```
## Acknowledgments
This work was partially funded by the European Union's Horizon Europe Research and Innovation Programme under Grant Agreement No. 101076360 (BERTHA) and by the German Federal Ministry of Research, Technology and Space under Grant Agreement No. 16IW24009 (COPPER). The authors would like to express their sincere appreciation to Prateek Kumar Sharma for his support with data collection and the implementation of driving scenarios. We also gratefully acknowledge Ruben Abad, Alex Levy, and Prof. Antonio M. López from the Computer Vision Center (CVC) for their methodological guidance and for providing the code used to implement the goal-directed navigation routes applied in collecting part of the dataset presented in this study. Finally, we sincerely thank all the participants who contributed to the dataset collection, as well as our colleagues at DFKI for their valuable feedback and support throughout this project.
![](https://dfki-av.github.io/drivergaze360/static/images/funding_logo.png)
The views and opinions expressed in this publication are solely those of the author(s) and do not necessarily reflect those of the European Union or the European Climate, Infrastructure and Environment Executive Agency (CINEA). Neither the European Union nor the granting authority can be held responsible for them.