---
license: apache-2.0
language:
- en
task_categories:
- visual-question-answering
- robotics
tags:
- DriveFusion
- Robotics
- VLA
- VLM
- MultiModal
- AutonomousDriving
---
# DriveFusion-Data

<div align="center">
  <img src="drivefusion_logo.png" alt="DriveFusion Logo" width="300"/>
  <h1>DriveFusionQA</h1>
  <p><strong>An Autonomous Driving Vision-Language Model for Scenario Understanding & Decision Reasoning.</strong></p>

  [![Model License](https://img.shields.io/badge/License-Apache%202.0-green.svg)](https://opensource.org/licenses/Apache-2.0)
  [![Status](https://img.shields.io/badge/Status-Active-success.svg)]()
</div>

---

**DriveFusion-Data** is a large-scale multimodal autonomous driving dataset collected in the CARLA simulator using a privileged rule-based expert policy (PDM-Lite). The dataset contains rich sensor data, vehicle measurements, and language annotations for training vision-language-action (VLA) models.

This dataset is part of the **DriveFusion** project.

---

## Dataset Overview

DriveFusion-Data provides a comprehensive multimodal dataset for autonomous driving research, including:

- RGB camera images from **360° multi-camera coverage** (front, front-left, front-right, back-left, back-right)  
- LiDAR point clouds  
- Semantic segmentation maps  
- Depth maps  
- Bounding boxes  
- Vehicle and simulator measurements  
- Natural language annotations (VQA, commentary, instruction following)

The dataset is generated using a CARLA-based data collection framework with multi-town, multi-scenario, and multi-sensor configurations.

---

## Data Collection Framework

The data was collected using the **DriveFusion CARLA Data Collection Framework**, which provides:

- Rule-based expert driving using **PDM-Lite**  
- Multi-camera **360° sensor recording** and LiDAR  
- Weather and lighting augmentation  
- Scenario-based route execution  
- Automated batch data generation on clusters (SLURM)  
- Format conversion and dataset validation tools  

**Collection code repository:**  
[https://github.com/DriveFusion/carla-data-collection](https://github.com/DriveFusion/carla-data-collection)

---

## Dataset Sources and Attribution

DriveFusion-Data builds upon several open-source frameworks and datasets:

**Core Simulation:**

- [CARLA Simulator](https://github.com/carla-simulator/carla)  
- [CARLA Leaderboard 2.0](https://github.com/carla-simulator/leaderboard)  
- [Scenario Runner](https://github.com/carla-simulator/scenario_runner)  

**Reference Methods:**

- [DriveLM](https://github.com/OpenDriveLab/DriveLM) (PDM-Lite autopilot and VQA generation)  

**Language Dataset Reference:**

- [SimLingo Dataset](https://huggingface.co/datasets/RenzKa/simlingo)  

Users must comply with the licenses of all referenced frameworks and datasets.

---

## Dataset Format

Two main formats are provided:

**Pre-DriveFusion Format**  
- Raw sensor data and measurements stored in compressed JSON and sensor files.

**DriveFusion Format**  
- Standardized multimodal structure for end-to-end VLA training.  
- Includes aligned sensor data and language annotations.
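Since Pre-DriveFusion measurements are stored as compressed JSON, a single frame can be read with Python's standard `gzip` and `json` modules. The sketch below is illustrative only: the field names (`speed`, `steer`, `throttle`, `frame`) and the file name are hypothetical, not the actual DriveFusion schema.

```python
import gzip
import json
from pathlib import Path

# Hypothetical measurement record -- field names are illustrative,
# not the actual DriveFusion schema.
sample = {"speed": 4.2, "steer": -0.05, "throttle": 0.6, "frame": 120}

# Write a demo file in the compressed-JSON style described above.
path = Path("measurement_0120.json.gz")
with gzip.open(path, "wt", encoding="utf-8") as f:
    json.dump(sample, f)

# Reading one measurement frame back:
with gzip.open(path, "rt", encoding="utf-8") as f:
    frame = json.load(f)

print(frame["speed"], frame["frame"])
path.unlink()  # remove the demo file
```

The same pattern extends to iterating over a directory of per-frame files when assembling training samples.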

---

## Intended Use

This dataset is designed for:

- Vision-Language-Action (VLA) model training  
- Autonomous driving research and benchmarking  
- Multimodal perception and planning research  
- Language grounding in driving environments  
- Embodied AI and robotics research

---

## License and Attribution

This dataset is derived from simulation and public frameworks. Users must comply with:

- CARLA license  
- CARLA Leaderboard and Scenario Runner licenses (MIT)  
- DriveLM license  
- SimLingo license  

The DriveFusion framework code is released under **Apache 2.0**. Language annotations and third-party components may have additional license restrictions.

---

## Citation

If you use DriveFusion-Data, please cite:

```bibtex
@misc{drivefusiondata2026,
  title={DriveFusion-Data: A Large-Scale Multimodal Dataset for Autonomous Driving},
  author={Samir, Omar and DriveFusion Team},
  year={2026},
  url={https://huggingface.co/datasets/DriveFusion/DriveFusion-Data}
}
```