---
license: mit
size_categories:
- 1M<n<10M
pretty_name: LEAD Carla Leaderboard 2.0
task_categories:
- robotics
tags:
- autonomous-driving
- imitation-learning
- carla
- transfuser
---

# LEAD: Minimizing Learner–Expert Asymmetry in End-to-End Driving

[**Project Page**](https://ln2697.github.io/lead) | [**Paper**](https://huggingface.co/papers/2512.20563) | [**Code**](https://github.com/autonomousvision/lead)

This is the official CARLA dataset accompanying our paper *LEAD: Minimizing Learner–Expert Asymmetry in End-to-End Driving*.

> We release the complete pipeline required to achieve state-of-the-art closed-loop performance on the Bench2Drive benchmark. Built around the CARLA simulator, the stack features a data-centric design with:
>
> - An extensive visualization suite and runtime type validation for easier debugging.
> - An optimized storage format that packs 72 hours of driving into ~200 GB.
> - Native support for NAVSIM and the Waymo Vision-based End-to-End benchmark, extending both through closed-loop simulation and synthetic data for additional supervision during training.

Find more information at [https://github.com/autonomousvision/lead](https://github.com/autonomousvision/lead).

## Format

Each route is stored as a sequence of synchronized frames. All sensor modalities are ego-centric and time-aligned.
In addition to the nominal sensor suite, we provide a second, perturbed sensor stack corresponding to a counterfactual ego state, used for recovery supervision.

```text
├── bboxes/                 # Per-frame 3D bounding boxes for all actors
├── depth/                  # Compressed depth maps (intended for auxiliary supervision only)
├── depth_perturbated/      # Depth maps from the perturbed ego state
├── hdmap/                  # Ego-centric rasterized HD map
├── hdmap_perturbated/      # HD map aligned to the perturbed ego pose
├── lidar/                  # LiDAR point clouds
├── metas/                  # Per-frame metadata and ego state
├── radar/                  # Radar detections
├── radar_perturbated/      # Radar detections from the perturbed ego state
├── rgb/                    # Front-facing RGB images
├── rgb_perturbated/        # RGB images from the perturbed ego state
├── semantics/              # Semantic segmentation maps
├── semantics_perturbated/  # Semantics from the perturbed ego state
└── results.json            # Route-level summary and evaluation metadata
```
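
For orientation, here is a minimal Python sketch for inspecting one extracted route. The route path below is a placeholder, and the assumption that sorted per-modality listings align frame-by-frame follows from the synchronized-frame layout described above; adjust names to your local setup.

```python
# Minimal route inspection sketch. `route` is a hypothetical path; replace it
# with the directory of an extracted route from this dataset.
import json
from pathlib import Path

route = Path("data/carla_leaderboard2/routes/route_0000")

# Route-level summary and evaluation metadata.
with open(route / "results.json") as f:
    results = json.load(f)
print(sorted(results.keys()))

# Modalities are time-aligned, so sorted per-frame listings should line up.
rgb_frames = sorted((route / "rgb").iterdir())
meta_frames = sorted((route / "metas").iterdir())
assert len(rgb_frames) == len(meta_frames), "expected one metadata file per frame"
print(f"{len(rgb_frames)} synchronized frames; first RGB file: {rgb_frames[0].name}")
```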

## Download

You can either download a **single route** (useful for quick inspection and debugging) or **clone the full dataset** via Git LFS and unzip all routes.

**Note:** Download the dataset after setting up the [lead repository](https://github.com/autonomousvision/lead).

### Option 1: Download a single route

```bash
bash scripts/download_one_route.sh
```

### Option 2: Download all routes (Git LFS)

Clone the dataset repository directly into the expected directory:

```bash
git lfs install
git clone https://huggingface.co/datasets/ln2697/lead_carla data/carla_leaderboard2/zip
```
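
If you prefer not to use Git LFS, the same files can be fetched with the `huggingface_hub` Python client; this sketch assumes the `huggingface_hub` package is installed and writes to the directory the lead repository expects:

```python
# Download all dataset files into the directory expected by the lead scripts.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="ln2697/lead_carla",
    repo_type="dataset",
    local_dir="data/carla_leaderboard2/zip",
)
```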

### Unzip routes

After downloading, extract all routes:

```bash
bash scripts/unzip_routes.sh
```
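
If the helper script does not fit your setup, the equivalent extraction can be done directly in Python. The archive layout and output directory below are assumptions based on the clone path above; `scripts/unzip_routes.sh` remains the authoritative reference:

```python
# Sketch: extract every route archive found under the cloned zip directory.
import zipfile
from pathlib import Path

zip_dir = Path("data/carla_leaderboard2/zip")
out_dir = Path("data/carla_leaderboard2/routes")  # hypothetical output layout

for archive in sorted(zip_dir.rglob("*.zip")):
    target = out_dir / archive.stem
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)
    print(f"Extracted {archive.name} -> {target}")
```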

## Citation

If you find this work useful, please cite:

```bibtex
@inproceedings{Nguyen2026CVPR,
  author = {Long Nguyen and Micha Fauth and Bernhard Jaeger and Daniel Dauner and Maximilian Igl and Andreas Geiger and Kashyap Chitta},
  title = {LEAD: Minimizing Learner-Expert Asymmetry in End-to-End Driving},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2026},
}
```

## License

This project is released under the [MIT License](LICENSE).