ln2697 committed · Commit 04bbf83 · verified · Parent: d2a017d

Create README.md

Files changed (1): README.md (+65, -0)

README.md ADDED
---
license: mit
pipeline_tag: robotics
tags:
- autonomous-driving
- imitation-learning
- carla
- transfuser
pretty_name: LEAD Carla Leaderboard 2.0
size_categories:
- 1M<n<10M
---

# LEAD: Minimizing Learner–Expert Asymmetry in End-to-End Driving

[**Project Page**](https://ln2697.github.io/lead) | [**Paper**](https://huggingface.co/papers/2512.20563) | [**Code**](https://github.com/autonomousvision/lead)

This is the official CARLA dataset accompanying our paper LEAD: Minimizing Learner–Expert Asymmetry in End-to-End Driving.

> We release the complete pipeline (covering scenario descriptions, expert driver, data preprocessing scripts, training code, and evaluation infrastructure) required to achieve **state-of-the-art closed-loop performance on the Bench2Drive** benchmark. Built around the CARLA simulator, the stack features a data-centric design with:
>
> - An extensive visualization suite and runtime type validation for easier debugging.
> - An optimized storage format that packs 72 hours of driving into ~200 GB.
> - Native support for NAVSIM and Waymo Vision-based E2E, with LEAD extending these benchmarks through closed-loop simulation and synthetic data for additional supervision during training.

Find more information at [https://github.com/autonomousvision/lead](https://github.com/autonomousvision/lead).
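
As a convenience, the snippet below is a minimal sketch of one way to fetch the files with the `huggingface_hub` client. The repository id shown is a placeholder, not the confirmed one; take the exact id and any post-download extraction steps from the project page.

```python
# Minimal download sketch using the huggingface_hub client.
# NOTE: "ln2697/lead-carla-leaderboard-2" is a placeholder repo id, not the confirmed one.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="ln2697/lead-carla-leaderboard-2",  # placeholder: replace with the actual dataset id
    repo_type="dataset",
    local_dir="data/lead",                      # where to place the files locally
)
print("Dataset downloaded to:", local_path)
```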

## Format

Each route is stored as a sequence of synchronized frames. All sensor modalities are ego-centric and time-aligned.
In addition to the nominal sensor suite, we provide a second, perturbed sensor stack corresponding to a counterfactual ego state used for recovery supervision.

```text
├── bboxes/                  # Per-frame 3D bounding boxes for all actors
├── depth/                   # Compressed depth maps (should be used for auxiliary supervision only)
├── depth_perturbated/       # Depth from a perturbated ego state
├── hdmap/                   # Ego-centric rasterized HD map
├── hdmap_perturbated/       # HD map aligned to perturbated ego pose
├── lidar/                   # LiDAR point clouds
├── metas/                   # Per-frame metadata and ego state
├── radar/                   # Radar detections
├── radar_perturbated/       # Radar detections from perturbated ego state
├── rgb/                     # Front-facing RGB images
├── rgb_perturbated/         # RGB images from perturbated ego state
├── semantics/               # Semantic segmentation maps
├── semantics_perturbated/   # Semantics from perturbated ego state
└── results.json             # Route-level summary and evaluation metadata
```
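
As a rough illustration, the sketch below shows one way to iterate over the frames of a single route and pair each RGB image with its per-frame metadata. The frame naming scheme, file extensions, and use of JSON for the metadata are assumptions made only for this example; consult the preprocessing scripts in the code repository for the actual loaders.

```python
# Minimal sketch (not the official loader): walk one route directory and pair
# each RGB frame with its metadata record. File extensions (.jpg, .json) and
# zero-padded frame indices are assumptions for illustration only.
import json
from pathlib import Path


def iter_frames(route_dir: str):
    route = Path(route_dir)
    for rgb_path in sorted((route / "rgb").glob("*")):
        frame_id = rgb_path.stem                           # e.g. "0001" (assumed naming)
        meta_path = route / "metas" / f"{frame_id}.json"   # assumed JSON metadata
        meta = json.loads(meta_path.read_text()) if meta_path.exists() else None
        yield frame_id, rgb_path, meta


if __name__ == "__main__":
    # "routes/route_000" is a hypothetical route folder following the layout above.
    for frame_id, rgb_path, meta in iter_frames("routes/route_000"):
        print(frame_id, rgb_path.name, None if meta is None else list(meta)[:5])
```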

## Citation

If you find this work useful, please cite:

```bibtex
@article{Nguyen2025ARXIV,
  title={LEAD: Minimizing Learner-Expert Asymmetry in End-to-End Driving},
  author={Nguyen, Long and Fauth, Micha and Jaeger, Bernhard and Dauner, Daniel and Igl, Maximilian and Geiger, Andreas and Chitta, Kashyap},
  journal={arXiv preprint arXiv:2512.20563},
  year={2025}
}
```

## License

This project is released under the [MIT License](LICENSE).