---
license: apache-2.0
task_categories:
- reinforcement-learning
- robotics
- computer-vision
tags:
- autonomous-driving
- carla
- imitation-learning
- vlm
- found-rl
size_categories:
- 10G-100G
---

# Found-RL Dataset: Demonstration Data for VLM Fine-tuning

## 🚗 Dataset Overview

This dataset contains large-scale demonstration data collected from the **CARLA simulator**, designed for fine-tuning Vision-Language Models (VLMs) on autonomous driving tasks. It serves as the data foundation for the paper **"Found-RL: Foundation Model-Enhanced Reinforcement Learning for Autonomous Driving"**.

The dataset comprises approximately **1.374 million state-action transitions** collected across three diverse benchmarks using expert policies.

- **📄 Paper:** [Found-RL: Foundation Model-Enhanced Reinforcement Learning for Autonomous Driving](https://arxiv.org/abs/2602.10458)
- **💻 Code:** [https://github.com/ys-qu/found-rl](https://github.com/ys-qu/found-rl)
- **📦 Format:** Compressed `.tar.gz` archive

## 📊 Dataset Statistics & Composition

We collected demonstration data on three primary benchmarks to ensure diversity in driving scenarios. The total dataset consists of **~1.374M transitions**.

| Benchmark | Expert Policy | Episodes | State-Action Transitions |
| :--- | :--- | :--- | :--- |
| **CARLA Leaderboard** | Roach PPO Expert (Zhang et al., 2021) | 160 | ~457k |
| **NoCrash Benchmark** | Autopilot Roaming Expert | 80 | ~235k |
| **CARLA Challenge** | Autopilot Roaming Expert | 240 | ~682k |
| **Total** | - | **480** | **~1.374 Million** |
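
As a quick sanity check, the per-benchmark counts in the table above sum to the stated totals:

```python
# Per-benchmark counts from the table above (approximate transition counts).
transitions = {"Leaderboard": 457_000, "NoCrash": 235_000, "Challenge": 682_000}
episodes = {"Leaderboard": 160, "NoCrash": 80, "Challenge": 240}

print(sum(transitions.values()))  # 1374000  (~1.374M)
print(sum(episodes.values()))     # 480
```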

## 🛠 Data Collection Methodology

### 1. Expert Policies
- **Leaderboard Benchmark:** Data was collected using the **Roach PPO expert policy** (Zhang et al., 2021).
- **NoCrash & Challenge Benchmarks:** Data was collected using the **Autopilot roaming expert policy**.

### 2. Constraints & Filtering
To ensure high-quality training data for VLM fine-tuning, the following constraints were applied during collection:
- **Maximum Duration:** The maximum duration of each episode was set to **300 seconds**.
- **Collision Filtering:** A terminal-step filtering rule was applied: the short segment of steps immediately preceding a collision event was discarded, so the dataset contains only the valid, safe portion of each episode.
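
The terminal-step filtering rule can be sketched as follows. This is a minimal illustration only: the discard-window size `k`, the step-record layout, and the `collided` field name are assumptions for clarity, not taken from the paper.

```python
def filter_pre_collision_steps(episode, k=10):
    """Drop the terminal collision step plus the k steps preceding it,
    so only the safe portion of the episode is kept.

    `episode` is a list of step dicts; the `collided` flag on the final
    step is a hypothetical field name used for illustration."""
    if episode and episode[-1].get("collided"):
        # Discard the collision step and the short segment before it.
        return episode[: max(0, len(episode) - 1 - k)]
    return episode  # No collision: keep the full episode.

# Example: a 300-step episode ending in a collision keeps the first 289 steps.
steps = [{"collided": False} for _ in range(299)] + [{"collided": True}]
print(len(filter_pre_collision_steps(steps)))  # 289
```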
50
+
51
+ ### 3. Usage
52
+ This data is intended to be used with open-source frameworks (e.g., *open_clip*, *LLaVA* codebases) to fine-tune VLMs, providing them with expert-level driving understanding.
53
+
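
As an illustration of how a state-action transition might be converted into a VLM fine-tuning sample, here is a minimal sketch in a LLaVA-style instruction format. All field names (`image`, `steer`, `throttle`, `brake`) and the prompt template are assumptions for illustration; the actual conversion depends on the transition schema in the archive and the fine-tuning framework you use.

```python
def transition_to_sample(transition):
    """Convert one state-action transition into a LLaVA-style
    instruction-tuning record. All field names are hypothetical."""
    action = transition["action"]
    answer = (f"steer={action['steer']:.2f}, "
              f"throttle={action['throttle']:.2f}, "
              f"brake={action['brake']:.2f}")
    return {
        "image": transition["image"],  # path to the camera frame
        "conversations": [
            {"from": "human",
             "value": "<image>\nWhat control should the ego vehicle apply?"},
            {"from": "gpt", "value": answer},
        ],
    }

sample = transition_to_sample({
    "image": "frames/000001.png",
    "action": {"steer": -0.25, "throttle": 0.5, "brake": 0.0},
})
print(sample["conversations"][1]["value"])  # steer=-0.25, throttle=0.50, brake=0.00
```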

If you use this dataset in your research, please cite our paper:
```bibtex
@misc{qu2026foundrl,
  title={Found-RL: foundation model-enhanced reinforcement learning for autonomous driving},
  author={Yansong Qu and Zihao Sheng and Zilin Huang and Jiancong Chen and Yuhao Luo and Tianyi Wang and Yiheng Feng and Samuel Labi and Sikai Chen},
  year={2026},
  eprint={2602.10458},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2602.10458},
}
```