
Terms and conditions:

The KITScenes dataset is provided to you under a Creative Commons Attribution-NonCommercial 4.0 International Public License (CC BY-NC 4.0), with the additional terms included herein. When you download or use the dataset, you are agreeing to comply with the terms of CC BY-NC 4.0 as applicable, and also agreeing to the dataset terms (listed below). Where these dataset terms conflict with the terms of CC BY-NC 4.0, these dataset terms shall prevail.

Dataset terms:

  • If you use the dataset in your research papers, please cite our respective publication. If the dataset is used in media, please include a link to our website (kitscenes.com).
  • We take steps to protect the privacy of individuals by anonymizing faces and license plates using state-of-the-art anonymization software from BrighterAI. If you would like to request the removal of specific images/data frames from the dataset, please contact info@mrt.kit.edu.
  • We reserve all rights that are not explicitly granted to you. The dataset is provided "as is", and you assume full responsibility for any risks arising from its use.


The KITScenes Multimodal Dataset

Early pre-release. This is an early pre-release version of KITScenes Multimodal; it may contain minor labeling errors, and its format may change. We recommend waiting for the upcoming full release if dataset stability is important for your use case.

KITScenes Multimodal is a high-fidelity autonomous driving dataset designed for research toward production-grade urban driving. It focuses on complex European city environments and combines high-resolution synchronized cameras, long-range lidar, 4D imaging radar, GNSS/INS localization, and production-grade Lanelet2 HD maps, which we believe are the most complete HD maps released with any sensor dataset to date.

Highlights

  • European urban focus: recordings from Karlsruhe, Frankfurt, and Sindelfingen.
  • High-fidelity sensor suite: up to 72.5 MP of synchronized global-shutter camera imagery, seven lidars, three 4D imaging radars, and redundant GNSS/INS.
  • Long-range sensing: effective lidar range beyond 400 m with substantially higher return density than common public driving datasets.
  • HD maps in Lanelet2: production-grade maps with lane topology and regulatory elements, covering 29 road-feature classes and 220 traffic-sign classes, plus 3D traffic lights, signs, and poles, all localized to reprojection accuracy.
  • Research benchmarks: designed to support online HD map construction, long-range monocular depth estimation, novel view synthesis, and end-to-end driving and world-model research.

About this pre-release

This repository currently provides an early preview of the dataset and release structure. During this stage, files, annotations, split definitions, and documentation may be refined without notice. If you need a stable benchmark release, please wait for the full public release.

Intended use

KITScenes is intended for academic research on autonomous driving perception, mapping, spatial learning, neural rendering, and embodied AI. In its current pre-release form, it is best suited for early exploration, pipeline integration, and preview experiments rather than final benchmark reporting.

Access and license

Access is gated. By requesting access, you acknowledge the dataset terms listed above and agree to use the data under CC BY-NC 4.0 together with the additional KITScenes terms.

Citation

If you use KITScenes Multimodal in research, please cite the associated KITScenes Multimodal publication. A full citation entry and paper will be added together with the full release.
