---
license: apache-2.0
language:
- en
task_categories:
- visual-question-answering
- robotics
tags:
- DriveFusion
- Robotics
- VLA
- VLM
- MultiModal
- AutonomousDriving
---

# DriveFusion-Data

DriveFusionQA

An Autonomous Driving Vision-Language Model for Scenario Understanding & Decision Reasoning.

[![Model License](https://img.shields.io/badge/License-Apache%202.0-green.svg)](https://opensource.org/licenses/Apache-2.0) [![Status](https://img.shields.io/badge/Status-Active-success.svg)]()
---

**DriveFusion-Data** is a large-scale multimodal autonomous driving dataset collected in the CARLA simulator using a privileged, rule-based expert policy (PDM-Lite). The dataset contains rich sensor data, vehicle measurements, and language annotations for training vision-language-action (VLA) models. This dataset is part of the **DriveFusion** project.

---

## Dataset Overview

DriveFusion-Data provides a comprehensive multimodal dataset for autonomous driving research, including:

- RGB camera images with **360° multi-camera coverage** (front, front-left, front-right, back-left, back-right)
- LiDAR point clouds
- Semantic segmentation maps
- Depth maps
- Bounding boxes
- Vehicle and simulator measurements
- Natural language annotations (VQA, commentary, instruction following)

The dataset is generated with a CARLA-based data collection framework using multi-town, multi-scenario, and multi-sensor configurations.

---

## Data Collection Framework

The data was collected using the **DriveFusion CARLA Data Collection Framework**, which provides:

- Rule-based expert driving using **PDM-Lite**
- Multi-camera **360° sensor recording** and LiDAR
- Weather and lighting augmentation
- Scenario-based route execution
- Automated batch data generation on clusters (SLURM)
- Format conversion and dataset validation tools

**Collection code repository:** [https://github.com/DriveFusion/carla-data-collection](https://github.com/DriveFusion/carla-data-collection)

---

## Dataset Sources and Attribution

DriveFusion-Data builds upon several open-source frameworks and datasets:

**Core Simulation:**
- [CARLA Simulator](https://github.com/carla-simulator/carla)
- [CARLA Leaderboard 2.0](https://github.com/carla-simulator/leaderboard)
- [Scenario Runner](https://github.com/carla-simulator/scenario_runner)

**Reference Methods:**
- [DriveLM](https://github.com/OpenDriveLab/DriveLM) (PDM-Lite autopilot and VQA generation)

**Language Dataset Reference:**
- [SimLingo Dataset](https://huggingface.co/datasets/RenzKa/simlingo)

Users must comply with the licenses of all referenced frameworks and datasets.

---

## Dataset Format

Two main formats are provided:

**Pre-DriveFusion Format**
- Raw sensor data and measurements stored as compressed JSON and sensor files.

**DriveFusion Format**
- Standardized multimodal structure for end-to-end VLA training.
- Includes aligned sensor data and language annotations.

---

## Intended Use

This dataset is designed for:

- Vision-language-action (VLA) model training
- Autonomous driving research and benchmarking
- Multimodal perception and planning research
- Language grounding in driving environments
- Embodied AI and robotics research

---

## License and Attribution

This dataset is derived from simulation and public frameworks. Users must comply with:

- the CARLA license
- the CARLA Leaderboard and Scenario Runner licenses (MIT)
- the DriveLM license
- the SimLingo license

The DriveFusion framework code is released under **Apache 2.0**. Language annotations and third-party components may carry additional license restrictions.

---

## Citation

If you use DriveFusion-Data, please cite:

```bibtex
@misc{drivefusiondata2026,
  title={DriveFusion-Data: A Large-Scale Multimodal Dataset for Autonomous Driving},
  author={Samir, Omar and DriveFusion Team},
  year={2026},
  url={https://huggingface.co/datasets/DriveFusion/DriveFusion-Data}
}
```
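
---

## Example: Reading a Measurement Record

To illustrate how per-frame records in the Pre-DriveFusion format (compressed JSON plus sensor files) might be consumed, here is a minimal sketch. The field names (`frame`, `speed`, `steer`, `command`) and the `rgb/<camera>/<frame>.jpg` directory layout are illustrative assumptions, not the actual DriveFusion schema; adapt them to the files you download.

```python
import gzip
import json
from pathlib import Path

# The five cameras of the 360-degree rig, as listed in the dataset overview.
CAMERAS = ["front", "front_left", "front_right", "back_left", "back_right"]


def load_measurement(path):
    """Read one gzip-compressed JSON measurement file (hypothetical layout)."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return json.load(f)


def frame_paths(root, frame_id):
    """Map a frame id to the five RGB images of the multi-camera rig.

    Assumes an rgb/<camera>/<frame:06d>.jpg layout, which is a guess.
    """
    return {cam: Path(root) / "rgb" / cam / f"{frame_id:06d}.jpg" for cam in CAMERAS}


if __name__ == "__main__":
    import tempfile

    # Round-trip a toy record to demonstrate the compressed-JSON format.
    with tempfile.TemporaryDirectory() as tmp:
        sample = {"frame": 42, "speed": 5.1, "steer": -0.02, "command": "follow lane"}
        p = Path(tmp) / "000042.json.gz"
        with gzip.open(p, "wt", encoding="utf-8") as f:
            json.dump(sample, f)

        rec = load_measurement(p)
        imgs = frame_paths(tmp, rec["frame"])
        print(rec["command"], len(imgs))  # one path per camera view
```

The same pairing logic extends naturally to LiDAR, depth, and segmentation files once their actual directory names are known.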