---
pretty_name: AutoDriDM
license: apache-2.0
language:
- en
task_categories:
- question-answering
tags:
- autonomous-driving
- vision-language-models
- vlm
- benchmark
- explainability
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: test
    path:
    - Object-1.json
    - Object-2.json
    - Scene-1.json
    - Scene-2.json
    - Decision-1.json
    - Decision-2.json
---

<div align="center">

# AutoDriDM: An Explainable Benchmark for Decision-Making of Vision-Language Models in Autonomous Driving

**Paper (arXiv):** https://arxiv.org/abs/2601.14702

**Hugging Face Dataset:** https://huggingface.co/datasets/ColamentosZJU/AutoDriDM

</div>

AutoDriDM is a **decision-centric**, progressive benchmark for evaluating the **perception-to-decision** capability boundary of Vision-Language Models (VLMs) in autonomous driving.

> **This release provides annotations only.**
> Please obtain the original images from the official sources (**nuScenes / KITTI / BDD100K**) and align them locally if you want to run image-based evaluation.

---

## ✨ Overview

### Key Facts

- **Protocol:** 3 progressive levels — **Object → Scene → Decision**
- **Tasks:** 6 tasks (two per level)
- **Scale:** **6,650** QA items built from **1,295** front-facing images
- **Risk-aware evaluation:** each item includes a 5-level risk label `danger_score ∈ {1,2,3,4,5}`
  - A scenario can be treated as **high-risk** when its average `danger_score` is **≥ 4.0** (see the sketch below)

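As a concrete reading of the high-risk threshold, the sketch below groups QA items by image and averages their `danger_score`. The per-image grouping and the choice of `Decision-1.json` are assumptions for illustration only:

```python
import json
from collections import defaultdict

# Illustrative sketch: group QA items by image, average their danger_score,
# and keep images whose mean score meets the high-risk threshold (>= 4.0).
with open("Decision-1.json", "r", encoding="utf-8") as f:
    items = json.load(f)

scores = defaultdict(list)
for item in items:
    scores[item["image_name"]].append(int(item["danger_score"]))

high_risk = {img for img, s in scores.items() if sum(s) / len(s) >= 4.0}
print(f"{len(high_risk)} high-risk images out of {len(scores)}")
```
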
---

## 🧩 Benchmark Structure

AutoDriDM follows a **progressive evaluation** protocol:

- **Object Level:** identify key objects and recognize their states
- **Scene Level:** understand global context (weather/illumination, special factors)
- **Decision Level:** choose driving actions and assess risk levels

---

## 📦 Task List (6 JSON Files)

The dataset contains **six tasks**, each provided as a JSON file:

### Object Level (single-choice)

- **Object-1 (`Object-1.json`)**: Identify the **key object** that most influences the driving decision.
- **Object-2 (`Object-2.json`)**: Determine the **state** of a designated key object (e.g., traffic light state).

### Scene Level (multiple-choice)

- **Scene-1 (`Scene-1.json`)**: Recognize **weather / illumination** (e.g., daytime, nighttime, rain, snow, heavy fog).
- **Scene-2 (`Scene-2.json`)**: Identify **special scene factors** that potentially affect driving decisions (e.g., accident scene, construction zone).

### Decision Level (single-choice)

- **Decision-1 (`Decision-1.json`)**: Select the **optimal driving action** for the ego vehicle.
- **Decision-2 (`Decision-2.json`)**: Evaluate the **risk level** of a specified (potentially suboptimal) action.

---

## 🧾 Data Format (JSON)

Each file is a JSON array. Each element is an object with the following fields:

- `image_name` (string): image identifier/path
  - In this release, we provide annotations only; `image_name` is intended to be mapped to your local image storage.
- `taskX_q` (string): question text for task X
- `taskX_o` (string): option list as a single string (e.g., `"A....; B....; C...."`)
- `taskX_a` (string): answer letters
  - **Single-choice tasks:** one letter (e.g., `"C"`)
  - **Multiple-choice tasks:** comma-separated letters (e.g., `"A,C"`)
- `danger_score` (int or string): scenario risk label on a 5-level scale (**1 = minimal**, **5 = severe**)

### Example (JSON)

```json
{
  "image_name": "images/xxxx.jpg",
  "task1_q": "...",
  "task1_o": "A....; B....; C....",
  "task1_a": "C",
  "danger_score": "2"
}
```
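
Since `taskX_o` packs all options into one string and `taskX_a` may hold several letters, a little parsing is needed before scoring. The helpers below are an illustrative sketch (not part of the release), assuming options are separated by `;` and answer letters by `,` as in the format above:

```python
def parse_options(option_str: str) -> dict:
    """Split an option string like 'A. stop; B. yield' into {letter: text}."""
    options = {}
    for part in option_str.split(";"):
        part = part.strip()
        if part:
            letter, _, text = part.partition(".")
            options[letter.strip()] = text.strip()
    return options


def parse_answers(answer_str: str) -> set:
    """Split 'A,C' (multiple-choice) or 'C' (single-choice) into letters."""
    return {a.strip() for a in answer_str.split(",") if a.strip()}


print(parse_options("A. stop; B. yield; C. proceed"))
print(parse_answers("A,C") == {"A", "C"})  # True
```
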
---

## 🚀 How to Use

### 1) Download Annotations

Download the six JSON files from the Hugging Face dataset page:

- https://huggingface.co/datasets/ColamentosZJU/AutoDriDM

### 2) Load Annotations in Python
|
|
|
|
|
|
```python
|
|
|
import json
|
|
|
|
|
|
with open("Object-1.json", "r", encoding="utf-8") as f:
|
|
|
data = json.load(f)
|
|
|
|
|
|
print(len(data), list(data[0].keys()))
|
|
|
```
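
Alternatively, the files can be read with the 🤗 `datasets` library's generic JSON loader. This is a sketch assuming the file sits in your working directory; loading one task file at a time sidesteps any schema differences between tasks:

```python
from datasets import load_dataset

# Generic JSON loader; one task file at a time, since different task
# files may carry different field names (task1_*, task2_*, ...).
ds = load_dataset("json", data_files={"test": "Object-1.json"})["test"]
print(ds.column_names)
```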

### 3) Local Image Alignment (for image-based evaluation)

To evaluate with images, you must:

1. Download the source datasets from the official providers:
   - nuScenes
   - KITTI
   - BDD100K
2. Prepare a local folder (example):
   - `./images/`
3. Map each `image_name` in the JSON files to an existing local file path in your environment; a minimal resolution sketch follows this list.
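
A mapping helper might look like the sketch below; `IMAGE_ROOT` and the file-name fallback search are illustrative assumptions, since the layout of your local nuScenes / KITTI / BDD100K copies is up to you:

```python
from pathlib import Path

IMAGE_ROOT = Path("./images")  # hypothetical local root; adjust to your setup


def resolve_image(image_name: str) -> Path:
    """Map an annotation's image_name to an existing local file, or raise."""
    candidate = IMAGE_ROOT / image_name
    if candidate.is_file():
        return candidate
    # Fallback: search by bare file name in case the stored path differs
    # from your local directory structure.
    matches = list(IMAGE_ROOT.rglob(Path(image_name).name))
    if matches:
        return matches[0]
    raise FileNotFoundError(f"No local image found for {image_name!r}")
```
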
---

## 📌 Citation

If you use AutoDriDM in your research, please cite:

```bibtex
@article{tang2026autodridm,
  title={AutoDriDM: An Explainable Benchmark for Decision-Making of Vision-Language Models in Autonomous Driving},
  author={Tang, Zecong and Wang, Zixu and Wang, Yifei and Lian, Weitong and Gao, Tianjian and Li, Haoran and Ru, Tengju and Meng, Lingyi and Cui, Zhejun and Zhu, Yichen and others},
  journal={arXiv preprint arXiv:2601.14702},
  year={2026}
}
```
---

## ⚖️ License

This project is released under the **Apache License 2.0**.
Some components or third-party implementations may be distributed under different licenses.

---

## 🙏 Acknowledgments

We thank the open-source community and the dataset providers (**nuScenes, KITTI, BDD100K**) that make this benchmark possible.