---

pretty_name: AutoDriDM
license: apache-2.0
language:
  - en
task_categories:
  - question-answering
tags:
  - autonomous-driving
  - vision-language-models
  - vlm
  - benchmark
  - explainability
size_categories:
  - 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: test
        path:
          - Object-1.json
          - Object-2.json
          - Scene-1.json
          - Scene-2.json
          - Decision-1.json
          - Decision-2.json
---


<div align="center">

# AutoDriDM: An Explainable Benchmark for Decision-Making of Vision-Language Models in Autonomous Driving

**Paper (arXiv):** https://arxiv.org/abs/2601.14702  
**Hugging Face Dataset:** https://huggingface.co/datasets/ColamentosZJU/AutoDriDM

</div>

AutoDriDM is a **decision-centric**, progressive benchmark for evaluating the **perception-to-decision** capability boundary of Vision-Language Models (VLMs) in autonomous driving.

> **This release provides annotations only.**  
> Please obtain the original images from the official sources (**nuScenes / KITTI / BDD100K**) and align them locally if you want to run image-based evaluation.

---

## ✨ Overview

### Key Facts

- **Protocol:** 3 progressive levels — **Object → Scene → Decision**
- **Tasks:** 6 tasks (two per level)
- **Scale:** **6,650** QA items built from **1,295** front-facing images
- **Risk-aware evaluation:** each item includes a 5-level risk label `danger_score ∈ {1,2,3,4,5}`  
  - **High-risk** can be defined as `average danger_score ≥ 4.0`
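
Under that definition, a minimal sketch for filtering high-risk items (assuming each JSON item carries one numeric `danger_score`; the helper name `high_risk_items` is illustrative, not part of the release):

```python
import json


def high_risk_items(path, threshold=4.0):
    """Return items whose danger_score meets the high-risk threshold.

    Assumes each item carries a single danger_score (int or numeric string);
    "average" applies when scores are aggregated across tasks or annotators.
    """
    with open(path, "r", encoding="utf-8") as f:
        items = json.load(f)
    return [item for item in items if float(item["danger_score"]) >= threshold]
```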

---

## 🧩 Benchmark Structure

AutoDriDM follows a **progressive evaluation** protocol:

- **Object Level:** identify key objects and recognize their states
- **Scene Level:** understand global context (weather/illumination, special factors)
- **Decision Level:** choose driving actions and assess risk levels

---

## 📦 Task List (6 JSON Files)

The dataset contains **six tasks**, each provided as a JSON file:

### Object Level (single-choice)

- **Object-1 (`Object-1.json`)**: Identify the **key object** that most influences the driving decision.
- **Object-2 (`Object-2.json`)**: Determine the **state** of a designated key object (e.g., traffic light state).

### Scene Level (multiple-choice)

- **Scene-1 (`Scene-1.json`)**: Recognize **weather / illumination** (e.g., daytime, nighttime, rain, snow, heavy fog).
- **Scene-2 (`Scene-2.json`)**: Identify **special scene factors** that potentially affect driving decisions (e.g., accident scene, construction zone).

### Decision Level (single-choice)

- **Decision-1 (`Decision-1.json`)**: Select the **optimal driving action** for the ego vehicle.
- **Decision-2 (`Decision-2.json`)**: Evaluate the **risk level** of a specified (potentially suboptimal) action.

---

## 🧾 Data Format (JSON)

Each file is a JSON array. Each element is an object with the following fields:

- `image_name` (string): image identifier/path  
  - In this release, we provide annotations only; `image_name` is intended to be mapped to your local image storage.
- `taskX_q` (string): question text for task X
- `taskX_o` (string): option list as a single string (e.g., `"A....; B....; C...."`)
- `taskX_a` (string): answer letters  
  - **Single-choice tasks:** one letter (e.g., `"C"`)  
  - **Multiple-choice tasks:** comma-separated letters (e.g., `"A,C"`)
- `danger_score` (int or string): scenario risk label on a 5-level scale (**1=minimal**, **5=severe**)

### Example (JSON)

```json
{
  "image_name": "images/xxxx.jpg",
  "task1_q": "...",
  "task1_o": "A....; B....; C....",
  "task1_a": "C",
  "danger_score": "2"
}
```
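
Because single-choice answers are one letter and multiple-choice answers are comma-separated, scoring code should compare answer *sets* rather than raw strings. A minimal sketch (the helper names `parse_options` and `answers_match` are illustrative, not part of the release):

```python
def parse_options(option_str):
    """Split an option string like 'A. Car; B. Truck' into {letter: text}."""
    options = {}
    for part in option_str.split(";"):
        part = part.strip()
        if len(part) >= 2 and part[0].isalpha() and part[1] == ".":
            options[part[0].upper()] = part[2:].lstrip(".").strip()
    return options


def answers_match(pred, gold):
    """Order-insensitive comparison of answer letters ('C,A' matches 'A,C')."""
    norm = lambda s: {x.strip().upper() for x in s.split(",") if x.strip()}
    return norm(pred) == norm(gold)
```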

---

## 🚀 How to Use

### 1) Download Annotations

Download the six JSON files from the Hugging Face dataset page:

- https://huggingface.co/datasets/ColamentosZJU/AutoDriDM

### 2) Load Annotations in Python

```python
import json

with open("Object-1.json", "r", encoding="utf-8") as f:
    data = json.load(f)

print(len(data), list(data[0].keys()))
```

### 3) Local Image Alignment (for image-based evaluation)

To evaluate with images, you must:

1. Download the source datasets from the official providers:
   - nuScenes
   - KITTI
   - BDD100K
2. Prepare a local folder (example):
   - `./images/`
3. Map each `image_name` in JSON to an existing local file path in your environment.
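
One way to carry out step 3, sketched under the assumption that image basenames are unique across your merged local sources (verify this before evaluation; `build_image_index` and `resolve_image` are illustrative names):

```python
from pathlib import Path


def build_image_index(root="./images"):
    """Index local image files by basename for image_name lookup.

    Assumes basenames are unique across the merged sources (nuScenes /
    KITTI / BDD100K); check for collisions before relying on this.
    """
    index = {}
    for p in Path(root).rglob("*"):
        if p.suffix.lower() in {".jpg", ".jpeg", ".png"}:
            index[p.name] = p
    return index


def resolve_image(image_name, index):
    """Map a JSON image_name (possibly prefixed, e.g. 'images/xxxx.jpg')
    to a local file path, or None if the image is missing locally."""
    return index.get(Path(image_name).name)
```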

---

## 📌 Citation

If you use AutoDriDM in your research, please cite:

```bibtex
@article{tang2026autodridm,
  title={AutoDriDM: An Explainable Benchmark for Decision-Making of Vision-Language Models in Autonomous Driving},
  author={Tang, Zecong and Wang, Zixu and Wang, Yifei and Lian, Weitong and Gao, Tianjian and Li, Haoran and Ru, Tengju and Meng, Lingyi and Cui, Zhejun and Zhu, Yichen and others},
  journal={arXiv preprint arXiv:2601.14702},
  year={2026}
}
```

---

## ⚖️ License

This project is released under the **Apache License 2.0**.  
Some components or third-party implementations may be distributed under different licenses.

---

## 🙏 Acknowledgments

We thank the open-source community and dataset providers (**nuScenes, KITTI, BDD100K**) that make this benchmark possible.