Terms and conditions:
The KITScenes dataset is provided to you under a Creative Commons Attribution-NonCommercial 4.0 International Public License (CC BY-NC 4.0), with the additional terms included herein. When you download or use the dataset, you are agreeing to comply with the terms of CC BY-NC 4.0 as applicable, and also agreeing to the dataset terms (listed below). Where these dataset terms conflict with the terms of CC BY-NC 4.0, these dataset terms shall prevail.
Dataset terms:
- If you use the dataset in your research papers, cite at least one of our publications listed below. If the dataset is used in media, include a link to our website (kitscenes.com).
- We take steps to protect the privacy of individuals by anonymizing faces and license plates using state-of-the-art anonymization software from BrighterAI. If you would like to request the removal of specific images/data frames from the dataset, please contact info@mrt.kit.edu.
- We reserve all rights that are not explicitly granted to you. The dataset is provided "as is", and you assume full responsibility for any risk arising from its use.
Publications:
- Wagner et al.: LongTail Driving Scenarios with Reasoning: The KITScenes LongTail Dataset. In arXiv, 2026
KITScenes LongTail Dataset
We collected our data over the course of two years, beginning in late 2023. Our recordings include urban and suburban environments as well as highways (the main locations are Karlsruhe, Heidelberg, Mannheim, and the Black Forest). We adjusted our routes to include many construction zones and intersections. In particular, we filtered for rare events such as adverse weather conditions (heavy rain, snow, fog), road closures, and accidents. Consequently, our dataset encompasses scenarios that diverge from nominal data distributions (i.e., long-tail scenarios). Overall, our dataset contains 1,000 nine-second scenarios divided into three splits: train (500), test (400), and validation (100).
In addition to specifically selected challenging scenarios, adverse weather, and construction zones, we use the Pareto principle to determine further long-tail data. Specifically, we use the well-established nuScenes dataset (Caesar et al., 2020) as a reference and rank-frequency plots with an 80% cumulative frequency threshold to define long-tail data. In nuScenes, approx. 88% of the scenarios are recorded during the day; thus, nighttime scenarios are long-tail data. For maneuver types, driving straight and regular turns account for approx. 90% of nuScenes, so overtaking and lane changing are part of the remaining long tail. As an exception, we also include nominal driving at intersections to better evaluate instruction following, since there are more viable trajectories there than in most long-tail scenarios.
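The rank-frequency rule above can be sketched in a few lines: rank categories by frequency, keep the head that covers the cumulative threshold, and treat everything else as long tail. This is a minimal illustration, not the released tooling; the maneuver labels and counts below are hypothetical.

```python
from collections import Counter

def long_tail_categories(labels, threshold=0.8):
    """Return categories outside the cumulative-frequency threshold.

    Categories are ranked by frequency; those needed to reach the
    threshold form the 'head', the remainder is the long tail.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    cumulative = 0.0
    head = []
    for category, count in counts.most_common():
        if cumulative >= threshold:
            break
        cumulative += count / total
        head.append(category)
    return [c for c in counts if c not in head]

# Hypothetical maneuver labels; counts are illustrative only.
labels = ["straight"] * 70 + ["turn"] * 20 + ["overtake"] * 6 + ["lane_change"] * 4
print(long_tail_categories(labels))  # → ['overtake', 'lane_change']
```

With these illustrative counts, driving straight and turning cover 90% of the samples, so overtaking and lane changing fall into the long tail, mirroring the nuScenes maneuver statistics quoted above.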
Our dataset contains multi-view video data with a 360° horizontal field of view (FoV) and six viewing angles (see (a) to (f) in the figure below). Furthermore, we perform frame-wise image stitching (see (g)). Our stitching method applies gradual image warping to generate 360° views.
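To give a rough intuition for the blending step of such a stitching pipeline, the toy sketch below linearly cross-fades two neighboring camera views over a shared overlap region. This is only an illustration of the blending idea, not the dataset's actual method, which additionally performs the gradual image warping described above; the image sizes and overlap width are made up.

```python
import numpy as np

def crossfade_stitch(left, right, overlap):
    """Toy panoramic stitch: linearly cross-fade two same-height images
    over an `overlap`-pixel-wide shared region.

    Real stitching pipelines also warp the images before blending;
    this sketch only shows the cross-fade.
    """
    # Alpha ramps from 1 (fully left image) to 0 (fully right image).
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
    blended = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    return np.concatenate(
        [left[:, :-overlap], blended, right[:, overlap:]], axis=1
    ).astype(left.dtype)

# Two hypothetical 4x8 RGB frames with a 4-pixel overlap.
a = np.full((4, 8, 3), 200, dtype=np.float64)
b = np.full((4, 8, 3), 100, dtype=np.float64)
pano = crossfade_stitch(a, b, overlap=4)
print(pano.shape)  # (4, 12, 3)
```

The overlap columns fade smoothly from the left view's intensity to the right view's, which avoids the hard seam a plain concatenation would produce.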
Changelog
- Mar 19, 2026: Preview version; we release the test split and 3 training samples for few-shot evaluations. We will release the val and train splits with reasoning traces, raw images with a higher dynamic range, and stitched images in later versions.

