---
license: apache-2.0
tags:
- CARLA
- NAVSIM
- Imitation-Learning
- Closed-Loop-Driving
---
# LEAD: Minimizing Learner–Expert Asymmetry in End-to-End Driving
[**Project Page**](https://ln2697.github.io/lead) | [**Paper**](https://huggingface.co/papers/2512.20563) | [**Code**](https://github.com/autonomousvision/lead)
Official model weights for **Latent TransFuser v6 (LTFv6)**, a NAVSIM checkpoint accompanying our paper *LEAD: Minimizing Learner–Expert Asymmetry in End-to-End Driving*.
> We release the complete pipeline required to achieve state-of-the-art closed-loop performance on the Bench2Drive benchmark. Built around the CARLA simulator, the stack features a data-centric design with:
>
> - An extensive visualization suite and runtime type validation for easier debugging.
> - An optimized storage format that packs 72 hours of driving into ~200 GB.
> - Native support for the NAVSIM and Waymo vision-based E2E benchmarks, extending them through closed-loop simulation and synthetic data for additional supervision during training.
Find more information on [https://github.com/autonomousvision/lead](https://github.com/autonomousvision/lead).
<p align="center">
<img src="https://ln2697.github.io/lead/static/images/tfv6.png" alt="TFv6 Architecture" width="80%" >
</p>
## Usage
Install the dependencies:
```bash
pip install torch timm numpy opencv-python jaxtyping beartype omegaconf huggingface_hub
```
See [example.ipynb](https://huggingface.co/ln2697/tfv6_navsim/blob/main/example.ipynb) to inspect data format and example inference.
## Data Format
We also provide an example NAVSIM cache [here](https://huggingface.co/ln2697/tfv6_navsim/tree/main/data).
**Input:**
- RGB image: shape (256, 1920, 3), values in [0, 255]
- Command: one-hot over [left, straight, right, unknown], e.g. [0, 1, 0, 0] for straight
- Speed: m/s
- Acceleration: m/s²
**Output:**
- `waypoints`: (N, 2) predicted positions
- `headings`: (N,) predicted angles
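A minimal sketch of how the inputs and outputs above fit together, using dummy NumPy arrays. The helper `check_io`, the variable names, and the horizon `N = 8` are illustrative assumptions for this sketch, not part of the checkpoint's actual API:

```python
import numpy as np

# Illustrative dummy inputs matching the documented format.
rgb = np.zeros((256, 1920, 3), dtype=np.uint8)      # RGB image, values in [0, 255]
command = np.array([0, 1, 0, 0], dtype=np.float32)  # one-hot: [left, straight, right, unknown]
speed = 5.0                                         # ego speed in m/s
acceleration = 0.3                                  # ego acceleration in m/s^2

def check_io(rgb, command, waypoints, headings):
    """Validate shapes against the documented input/output format."""
    assert rgb.shape == (256, 1920, 3) and rgb.dtype == np.uint8
    assert command.shape == (4,) and command.sum() == 1  # exactly one command active
    assert waypoints.ndim == 2 and waypoints.shape[1] == 2  # (N, 2) predicted positions
    assert headings.shape == (waypoints.shape[0],)          # (N,) predicted angles
    return True

# Dummy model outputs with an assumed planning horizon of N = 8.
waypoints = np.zeros((8, 2), dtype=np.float32)
headings = np.zeros((8,), dtype=np.float32)

check_io(rgb, command, waypoints, headings)
```

See the linked [example.ipynb](https://huggingface.co/ln2697/tfv6_navsim/blob/main/example.ipynb) for the actual inference code and field names.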
## Citation
If you find this work useful, please cite:
```bibtex
@article{Nguyen2025ARXIV,
title={LEAD: Minimizing Learner-Expert Asymmetry in End-to-End Driving},
author={Nguyen, Long and Fauth, Micha and Jaeger, Bernhard and Dauner, Daniel and Igl, Maximilian and Geiger, Andreas and Chitta, Kashyap},
journal={arXiv preprint arXiv:2512.20563},
year={2025}
}
```