---
license: apache-2.0
task_categories:
- visual-question-answering
tags:
- autonomous-driving
- vision-language
- multimodal
- benchmark
multimodal: true
pretty_name: ScenePilot-Bench
---

# **ScenePilot-Bench: A Large-Scale First-Person Dataset and Benchmark for Evaluation of Vision-Language Models in Autonomous Driving**

<div align="center">
<img src="assets/The overall structure.png" width="800px">
<p><em>Figure 1: Overview of the ScenePilot-Bench dataset and evaluation metrics.</em></p>
</div>

<p align="center">
<a href="https://github.com/yjwangtj/ScenePilot-Bench">
<img src="https://img.shields.io/badge/Project-Website-blue?style=flat-square">
</a>
<a href="https://huggingface.co/datasets/larswangtj/ScenePilot-4K/tree/main">
<img src="https://img.shields.io/badge/Dataset-Download-green?style=flat-square">
</a>
<a href="https://arxiv.org/abs/2601.19582">
<img src="https://img.shields.io/badge/Paper-Arxiv-red?style=flat-square">
</a>
</p>

---

## 📦 Contents Overview

The dataset files in this repository can be grouped into the following categories.

---
## 1. Model Weight Files

- **ScenePilot_2.5_3b_200k_merged.zip**
- **ScenePilot_2_2b_200k_merged.zip**

These two archives contain pretrained model weights obtained by training on the **200k-scale VQA training set** constructed in this work.

- The former corresponds to **Qwen2.5-VL-3B**
- The latter corresponds to **Qwen2-VL-2B**

Both models are trained with the same dataset and a unified training pipeline, and are used in the main experiments and comparison studies.

---

## 2. Spatial Perception and Annotation Data

- **VGGT.zip**

  Contains annotation data for spatial perception tasks, including:

  - Ego-vehicle trajectory information
  - Depth-related information

  These annotations support experiments involving trajectory prediction and spatial understanding.

- **YOLO.zip**

  Provides 2D object detection results for the major traffic participants. All detections are generated by a single unified detection model and serve as perception inputs for the downstream VQA and risk assessment tasks.

- **scene_description.zip**

  Contains scene descriptions generated from the original data, including:

  - Weather conditions
  - Road types
  - Other environmental and semantic attributes

  These descriptions are used for scene understanding and for constructing balanced dataset splits.

---

## 3. Dataset Split Definition

- **split_train_test_val.zip**

This file contains the **original video-level dataset split**, including:

- Training set
- Validation set
- Test set

All VQA datasets of different scales are constructed **strictly based on this video-level split** to avoid scene-level information leakage.
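
Because every VQA set is derived from this split, a quick sanity check that the three splits share no videos can be sketched as follows. The split names and video IDs below are illustrative assumptions; adapt them to the files in `split_train_test_val.zip`:

```python
from itertools import combinations

def check_disjoint(splits):
    """splits: dict mapping split name -> iterable of video IDs.
    Returns, for each pair of splits, the set of IDs they share."""
    sets = {name: set(ids) for name, ids in splits.items()}
    overlaps = {}
    for a, b in combinations(sets, 2):
        common = sets[a] & sets[b]
        if common:
            overlaps[(a, b)] = common
    return overlaps

# Hypothetical video IDs; the real IDs come from split_train_test_val.zip.
splits = {
    "train": ["vid_0001", "vid_0002"],
    "val":   ["vid_0003"],
    "test":  ["vid_0004"],
}
print(check_disjoint(splits))  # {} -> no video appears in two splits
```

An empty result confirms the video-level separation that the VQA construction relies on.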
---

## 4. VQA Datasets

### 4.1 All-VQA

- **All-VQA.zip**

This archive contains all VQA data in JSON format, organized by training, validation, and test split.

Examples include:

- `Deleted_2D_train_vqa_add_new.json`
- `Deleted_2D_train_vqa_new.json`

These files together form the complete training VQA dataset; the remaining files correspond to the validation and test data.
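
The JSON files can be inspected with the Python standard library. The helper below is a minimal sketch; the record fields mentioned in the comments are assumptions, since the exact schema is defined by the files themselves:

```python
import json

def load_vqa(path):
    """Load a VQA JSON file and report how many samples it holds."""
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)  # assumed: a list of VQA records
    print(f"{path}: {len(data)} samples")
    return data

# Hypothetical usage after extracting All-VQA.zip; print one record
# to see the actual fields (e.g. image path, question, answer):
# samples = load_vqa("Deleted_2D_train_vqa_new.json")
# print(samples[0])
```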
---

### 4.2 Test-VQA

- **Test-VQA.zip**

This archive contains the **100k-scale VQA test sets** used in the experiments.

- `Deleted_2D_test_selected_vqa_100k_final.json`

  Used as the main test set in the primary experiments.

Additional test sets are provided for generalization studies:

- Files ending with `europe`, `japan-and-korea`, `us`, and `other` correspond to the geographic generalization experiments.
- Files ending with `left` correspond to the left-hand-traffic country experiments.

Each test set contains **100k VQA samples**.

---

### 4.3 Train-VQA

- **Train-VQA.zip**

This archive contains training datasets at two scales:

- **200k VQA**
- **2000k VQA**

Additional subsets include:

- Files ending with `china`, used for the geographic generalization experiments.
- Files ending with `right`, used for the right-hand-traffic country experiments.

---

## 5. Video Index and Download Information

- **video_name_all.xlsx**

This file lists all videos used in the dataset along with their corresponding download links. It is provided to support dataset reproduction and access to the original video resources.

---

## 🔧 Data Processing Utility

- **clip.py**

This repository provides a utility script for extracting image frames from the raw videos. The script performs the following operations:

- Trims a fixed duration from the beginning and end of each video
- Samples frames at a fixed rate
- Organizes the extracted frames into structured folders
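
The trim-and-sample steps above reduce to choosing which frame indices to keep. A minimal sketch of that logic follows; the trim duration and sampling rate are placeholder values, not `clip.py`'s actual settings:

```python
# Sketch of the frame-extraction logic: trim a fixed duration from both
# ends of a video, then sample the remainder at a fixed rate.
# trim_s and sample_fps are placeholders, not clip.py's real parameters.

def frame_indices(total_frames, video_fps, trim_s=2.0, sample_fps=1.0):
    """Return the frame indices to extract after trimming trim_s seconds
    from each end and sampling sample_fps frames per second."""
    start = int(trim_s * video_fps)
    end = total_frames - int(trim_s * video_fps)
    step = max(1, round(video_fps / sample_fps))
    return list(range(start, end, step))

# Example: a 10 s clip at 30 fps, trimming 2 s per end and sampling at
# 1 fps, keeps one frame per remaining second starting at frame 60.
print(frame_indices(300, 30))  # [60, 90, 120, 150, 180, 210]
```

The selected indices would then be read with a video decoder (e.g. OpenCV) and written into per-video folders, matching the structure the script produces.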
---

## 📚 Citation

```bibtex
@misc{wang2026scenepilotbenchlargescaledatasetbenchmark,
  title={ScenePilot-Bench: A Large-Scale Dataset and Benchmark for Evaluation of Vision-Language Models in Autonomous Driving},
  author={Yujin Wang and Yutong Zheng and Wenxian Fan and Tianyi Wang and Hongqing Chu and Daxin Tian and Bingzhao Gao and Jianqiang Wang and Hong Chen},
  year={2026},
  eprint={2601.19582},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.19582},
}
```

## License

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.