Add comprehensive model card for RAP: 3D Rasterization Augmented End-to-End Planning
This PR adds a comprehensive model card for the RAP model, described in the paper [RAP: 3D Rasterization Augmented End-to-End Planning](https://huggingface.co/papers/2510.04333).
It includes:
- An overview of the model and its key features/achievements.
- The full abstract of the paper.
- Links to the project page and the GitHub repository for code and further details.
- The `pipeline_tag: robotics` for better discoverability.
- The `license: apache-2.0`.
- News updates and a table of available checkpoints.
- The official BibTeX citation.
Please review and merge this PR.
README.md
ADDED
---
license: apache-2.0
pipeline_tag: robotics
---

# RAP: 3D Rasterization Augmented End-to-End Planning

This repository contains the implementation and checkpoints for **RAP (Rasterization Augmented Planning)**, a scalable data augmentation pipeline for end-to-end autonomous driving, as presented in the paper [RAP: 3D Rasterization Augmented End-to-End Planning](https://huggingface.co/papers/2510.04333).

RAP leverages lightweight **3D rasterization** to generate counterfactual recovery maneuvers and cross-agent views, and **Raster-to-Real feature alignment** to bridge the sim-to-real gap in feature space, achieving **state-of-the-art performance** on multiple benchmarks.

🏆 **1st Place** – [Waymo Open Dataset Vision-based E2E Driving Challenge](https://waymo.com/open/challenges/) (UniPlan entry)

🏆 **#1 on Leaderboards** – [Waymo Open Dataset Vision-based E2E Driving](https://waymo.com/open/challenges/2025/e2e-driving/) & [NAVSIM v1/v2](https://huggingface.co/spaces/AGC2024-P/e2e-driving-navtest) (RAP entry)

🏆 **State-of-the-art** – [Bench2Drive](https://thinklab-sjtu.github.io/Bench2Drive/) benchmark

Find more details on the [Project Page](https://alan-lanfeng.github.io/RAP/) and in the [GitHub Repository](https://github.com/vita-epfl/RAP).

<div align="center">
<img src="https://alan-lanfeng.github.io/RAP/assets/demo_vis.gif" alt="RAP Demo GIF" width="80%">
</div>

---

## Abstract

Imitation learning for end-to-end driving trains policies only on expert demonstrations. Once deployed in a closed loop, such policies lack recovery data: small mistakes cannot be corrected and quickly compound into failures. A promising direction is to generate alternative viewpoints and trajectories beyond the logged path. Prior work explores photorealistic digital twins via neural rendering or game engines, but these methods are prohibitively slow and costly, and thus mainly used for evaluation. In this work, we argue that photorealism is unnecessary for training end-to-end planners. What matters is semantic fidelity and scalability: driving depends on geometry and dynamics, not textures or lighting. Motivated by this, we propose 3D Rasterization, which replaces costly rendering with lightweight rasterization of annotated primitives, enabling augmentations such as counterfactual recovery maneuvers and cross-agent view synthesis. To transfer these synthetic views effectively to real-world deployment, we introduce a Raster-to-Real feature-space alignment that bridges the sim-to-real gap. Together, these components form Rasterization Augmented Planning (RAP), a scalable data augmentation pipeline for planning. RAP achieves state-of-the-art closed-loop robustness and long-tail generalization, ranking first on four major benchmarks: NAVSIM v1/v2, Waymo Open Dataset Vision-based E2E Driving, and Bench2Drive. Our results show that lightweight rasterization with feature alignment suffices to scale E2E training, offering a practical alternative to photorealistic rendering.

---

## News

* **`Oct. 6th, 2025`:** Code released! 🔥

---

## Getting Started

For detailed environment setup, data processing, training, and evaluation instructions, please refer to the [GitHub repository](https://github.com/vita-epfl/RAP).

---

## Checkpoints

### Results on NAVSIM

| Method | Model Size | Backbone | PDMS | Weight Download |
| :---: | :---: | :---: | :---: | :---: |
| RAP-DINO | 888M | DINOv3-h16+ | 93.8 | [Hugging Face](https://huggingface.co/Lanl11/RAP_ckpts/tree/main) |

### Results on Waymo

| Method | Model Size | Backbone | RFS | Weight Download |
| :---: | :---: | :---: | :---: | :---: |
| RAP-DINO | 888M | DINOv3-h16+ | 8.04 | [Hugging Face](https://huggingface.co/Lanl11/RAP_ckpts/tree/main) |
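
Both tables link to the same Hugging Face checkpoint repo. As a convenience, one way to fetch the weights locally is via the `huggingface_hub` Python library; this is a minimal sketch, not part of the official RAP codebase — the repo id comes from the links above, while the function name and local directory are arbitrary choices:

```python
# Sketch: download the RAP checkpoints from the Hugging Face Hub.
# Requires `pip install huggingface_hub`. The repo id matches the
# "Weight Download" links above; the local directory is an arbitrary choice.
from huggingface_hub import snapshot_download

REPO_ID = "Lanl11/RAP_ckpts"

def fetch_rap_checkpoints(local_dir: str = "ckpts") -> str:
    """Download every file in the checkpoint repo; returns the local path."""
    return snapshot_download(repo_id=REPO_ID, local_dir=local_dir)

if __name__ == "__main__":
    print(f"Checkpoints downloaded to: {fetch_rap_checkpoints()}")
```

`snapshot_download` caches files and skips ones it already has, so re-running the script after an interrupted download is cheap.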

---

## Citation

```bibtex
@misc{feng2025rap3drasterizationaugmented,
      title={RAP: 3D Rasterization Augmented End-to-End Planning},
      author={Lan Feng and Yang Gao and Eloi Zablocki and Quanyi Li and Wuyang Li and Sichao Liu and Matthieu Cord and Alexandre Alahi},
      year={2025},
      eprint={2510.04333},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.04333},
}
```