Update README and LICENSE
- LICENSE.txt: +15 −0
- README.md: +317 −3
LICENSE.txt
ADDED
CarDreamer is the proprietary property of The Regents of the University of California ("The Regents") and is copyright © 2024 The Regents of the University of California, Davis campus. All Rights Reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted by nonprofit educational or research institutions for noncommercial use only, provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* The name or other trademarks of The Regents may not be used to endorse or promote products derived from this software without specific prior written permission.

The end-user understands that the program was developed for research purposes and is advised not to rely exclusively on the program for any reason.

THE SOFTWARE PROVIDED IS ON AN "AS IS" BASIS, AND THE REGENTS HAVE NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. THE REGENTS SPECIFICALLY DISCLAIM ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE REGENTS BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES, INCLUDING BUT NOT LIMITED TO PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES, LOSS OF USE, DATA OR PROFITS, OR BUSINESS INTERRUPTION, HOWEVER CAUSED AND UNDER ANY THEORY OF LIABILITY WHETHER IN CONTRACT, STRICT LIABILITY OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

If you do not agree to these terms, do not download or use the software. This license may be modified only in a writing signed by an authorized signatory of both parties.

For license information, please contact jazh@ucdavis.edu.
README.md
CHANGED
# 🌍 Learn to Drive in "Dreams": CarDreamer 🚗

<div align="center">
  <a href="https://huggingface.co/ucd-dare/CarDreamer/tree/main">
    <img src="https://img.icons8.com/?size=32&id=sop9ROXku5bb" alt="HuggingFace Checkpoints" />
    HuggingFace Checkpoints
  </a>

  <a href="https://car-dreamer.readthedocs.io/en/latest/">
    <img src="https://img.icons8.com/nolan/32/api.png" alt="CarDreamer API Documents" />
    CarDreamer API Documents
  </a>

  <a href="https://arxiv.org/abs/2405.09111">
    <img src="https://img.icons8.com/?size=32&id=48326&format=png" alt="ArXiv Pre-print" />
    ArXiv Pre-print
  </a>
</div>

---
Unleash the power of **imagination** and **generalization** of world models for self-driving cars.

> [!NOTE]
> - **July 2024:** Created a stop-sign task and a traffic-light task!
> - **July 2024:** Uploaded all the task checkpoints to [HuggingFace](https://huggingface.co/ucd-dare/CarDreamer/tree/main)
## **Can world models imagine traffic dynamics for training autonomous driving agents? The answer is YES!**

By integrating the high-fidelity CARLA simulator with world models, we are able to train a world model that not only learns complex environment dynamics but also lets an agent interact with the neural-network "simulator" to learn to drive.

Simply put, in CarDreamer the agent can learn to drive in a "dream world" from scratch, mastering maneuvers like overtaking and right turns, and avoiding collisions in heavy traffic—all within an imagined world!

Dive into our demos to see the agent skillfully navigating challenges and ensuring safe and efficient travel.
## 📚 Open-Source World Model-Based Autonomous Driving Platform

**Explore** world model based autonomous driving with CarDreamer, an open-source platform designed for the **development** and **evaluation** of **world model** based autonomous driving.

* 🏙️ **Built-in Urban Driving Tasks**: flexible and customizable observation modality and observability; optimized rewards
* 🔧 **Task Development Suite**: create your own urban driving tasks with ease
* 🌍 **Model Backbones**: integrated state-of-the-art world models

**Documentation:** [CarDreamer API Documents](https://car-dreamer.readthedocs.io/en/latest/).

**Looking for more technical details? Check out our report: [Paper link](https://arxiv.org/abs/2405.09111)**
## :sun_with_face: Built-in Task Demos

> [!TIP]
> A world model is learned to model traffic dynamics; a driving agent is then trained on the world model's imagination! The driving agent masters diverse driving skills, from lane merging, left turns, and right turns to random roaming, purely **from scratch**.

We train DreamerV3 agents on our built-in tasks with a single 4090 GPU. Depending on the observation space, the memory overhead ranges from 10 GB to 20 GB, along with 3 GB reserved for CARLA.

| Right turn hard | Roundabout | Left turn hard | Lane merge | Overtake |
| :-------------: | :--------: | :------------: | :--------: | :------: |
|  |  |  |  |  |

| Right turn hard | Roundabout | Left turn hard | Lane merge | Overtake |
| :-------------: | :--------: | :------------: | :--------: | :---------------: |
|  |  |  |  |  |
## :blossom: The Power of Intention Sharing

> [!TIP]
> **Human drivers use turn signals to inform their intentions** of turning left or right. **Autonomous vehicles can do the same!**

Let's see how CarDreamer agents communicate and leverage intentions. Our experiments have demonstrated that sharing intentions makes policy learning much easier! Specifically, a policy that does not know other agents' intentions can be overly conservative in our crossroad tasks, while intention sharing allows the agents to find the proper timing to cut into the traffic flow.

<!-- Table 1: Sharing waypoints vs. Without sharing waypoints -->
| Sharing waypoints vs. Without sharing waypoints | Sharing waypoints vs. Without sharing waypoints |
| :---------------------------------------------: | :---------------------------------------------: |
| **Right turn hard** | **Left turn hard** |
|  | <img src="./.assets/left turn raw.gif" style="width: 100%"> |

<!-- Table 2: Full observability vs. Partial observability -->
| Full observability vs. Partial observability |
| :------------------------------------------: |
| **Right turn hard** |
|  |
## 📋 Prerequisites

Clone the repository:

```bash
git clone https://github.com/ucd-dare/CarDreamer
cd CarDreamer
```

Download the [CARLA release](https://github.com/carla-simulator/carla/releases) of version ``0.9.15``, the version we experimented with. Set the following environment variables:

```bash
export CARLA_ROOT="</path/to/carla>"
export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla":${PYTHONPATH}
```
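As a quick sanity check, you can verify that the directory added to `PYTHONPATH` actually exists. This is a hypothetical helper, not part of CarDreamer:

```python
import os

def carla_api_path(carla_root: str) -> str:
    """Build the PythonAPI path that PYTHONPATH should contain."""
    return os.path.join(carla_root, "PythonAPI", "carla")

root = os.environ.get("CARLA_ROOT", "")
if root and os.path.isdir(carla_api_path(root)):
    print("CARLA Python API found:", carla_api_path(root))
else:
    print("Set CARLA_ROOT to your CARLA 0.9.15 directory first")
```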

Install the package using flit. The ``--symlink`` flag creates a symlink to the package in the Python environment, so changes to the package are immediately available without reinstallation. (``--pth-file`` also works as an alternative to ``--symlink``.)

```bash
conda create python=3.10 --name cardreamer
conda activate cardreamer
pip install flit
flit install --symlink
```
## :gear: Quick Start

### :mechanical_arm: Training

Find ``README.md`` in the corresponding directory of the algorithm you want to use and follow the instructions. For example, to train DreamerV3 agents, use

```bash
bash train_dm3.sh 2000 0 --task carla_four_lane --dreamerv3.logdir ./logdir/carla_four_lane
```

The command launches CARLA on port 2000, loads a built-in task named `carla_four_lane`, and starts the visualization tool on port 9000 (2000 + 7000), which can be accessed at `http://localhost:9000/`. You can append flags to the command to override yaml configurations.
### Creating Tasks

This section explains how to create CarDreamer tasks in a standalone mode, without loading our integrated models. This can be helpful if you want to train and evaluate your own models, rather than our integrated DreamerV2 and DreamerV3, on CarDreamer tasks.

CarDreamer offers a range of built-in task classes, which you can explore in the [CarDreamer Docs: Tasks and Configurations](https://car-dreamer.readthedocs.io/en/latest/tasks.html#tasks-and-environments).

Each task class can be instantiated with various configurations. For instance, the right-turn task can be set up with simple, medium, or hard settings. These settings are defined in YAML blocks within [tasks.yaml](https://github.com/ucd-dare/CarDreamer/blob/master/car_dreamer/configs/tasks.yaml). The task creation API retrieves the given identifier (e.g., `carla_four_lane_hard`) from these YAML task blocks and injects the settings into the task class to create a gym task instance.

```python
# Create a gym environment with default task configurations
import car_dreamer
task, task_configs = car_dreamer.create_task('carla_four_lane_hard')

# Or load default environment configurations without instantiation
task_configs = car_dreamer.load_task_configs('carla_right_turn_hard')
```
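The retrieval-and-injection pattern described above can be sketched in plain Python. The names below (`TASK_BLOCKS`, `REGISTRY`, the env classes) are illustrative only; the real logic lives inside `car_dreamer`:

```python
# Hypothetical sketch of the identifier -> YAML block -> task instance flow.
TASK_BLOCKS = {
    "carla_four_lane_hard": {"class": "FourLaneEnv", "env": {"traffic_density": 0.8}},
    "carla_right_turn_hard": {"class": "RightTurnEnv", "env": {"traffic_density": 0.9}},
}

class FourLaneEnv:
    def __init__(self, config):
        self.config = config  # the injected settings

class RightTurnEnv(FourLaneEnv):
    pass

REGISTRY = {"FourLaneEnv": FourLaneEnv, "RightTurnEnv": RightTurnEnv}

def create_task(identifier):
    block = TASK_BLOCKS[identifier]                 # retrieve the YAML block
    task = REGISTRY[block["class"]](block["env"])   # inject settings into the task class
    return task, block["env"]

task, configs = create_task("carla_four_lane_hard")
print(type(task).__name__, configs["traffic_density"])  # FourLaneEnv 0.8
```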

To create your own driving tasks using the development suite, refer to [CarDreamer Docs: Customization](https://car-dreamer.readthedocs.io/en/latest/customization.html).

### Observation Customization

`CarDreamer` employs an `Observer-Handler` architecture to manage complex **multi-modal** observation spaces. Each handler defines its own observation space and lifecycle for stepping, resetting, or fetching information, similar to a gym environment. The agent communicates with the environment through an observer that manages these handlers.

Users can enable built-in observation handlers such as BEV, camera, LiDAR, and spectator in task configurations. Check out [common.yaml](https://github.com/ucd-dare/CarDreamer/blob/master/car_dreamer/configs/common.yaml) for all available built-in handlers. Additionally, users can customize observation handlers and settings to suit their specific needs.
#### Handler Implementation

To implement new handlers for different observation sources and modalities (e.g., text, velocity, locations, or even more complex data), `CarDreamer` provides two methods:

1. Register a callback as a [SimpleHandler](https://github.com/ucd-dare/CarDreamer/blob/master/car_dreamer/toolkit/observer/handlers/simple_handler.py) to fetch data at each step.
2. For observations requiring complex workflows that cannot be expressed with a `SimpleHandler`, create a handler that maintains the full lifecycle of that observation, similar to our built-in message, BEV, and spectator handlers.

For more details on defining new observation sources, see [CarDreamer Docs: Defining a new observation source](https://car-dreamer.readthedocs.io/en/latest/customization.html#defining-a-new-observation-source).
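The observer/handler relationship can be mocked in a few lines of plain Python. The classes below are hypothetical stand-ins, not CarDreamer's actual `Observer` and `SimpleHandler` APIs:

```python
# Minimal sketch of the Observer-Handler pattern (illustrative classes only).
class SimpleHandler:
    """Wraps a callback that fetches one observation per step."""
    def __init__(self, key, callback):
        self.key = key
        self._callback = callback

    def get_observation(self, env_state):
        return {self.key: self._callback(env_state)}

class Observer:
    """Aggregates the observations of all enabled handlers into one dict."""
    def __init__(self, handlers):
        self._handlers = handlers

    def get_observation(self, env_state):
        obs = {}
        for handler in self._handlers:
            obs.update(handler.get_observation(env_state))
        return obs

observer = Observer([
    SimpleHandler("speed", lambda s: s["velocity"]),
    SimpleHandler("location", lambda s: s["transform"]),
])
print(observer.get_observation({"velocity": 5.0, "transform": (10.0, 2.0)}))
```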

#### Observation Handler Configurations

Each handler can access yaml configurations for further customization. For example, a BEV handler setting can be defined as:

```yaml
birdeye_view:
  # Specify the handler name used to produce the `birdeye_view` observation
  handler: birdeye
  # The observation key
  key: birdeye_view
  # Define what to render in the birdeye view
  entities: [roadmap, waypoints, background_waypoints, fov_lines, ego_vehicle, background_vehicles]
  # ... other settings used by the BEV handler
```
The `handler` field specifies which handler implementation is used to manage that observation key. Users can then simply enable this observation in the task settings:

```yaml
your_task_name:
  env:
    observation.enabled: [camera, collision, spectator, birdeye_view]
```
#### Environment \& Observer Communications

One might need to transfer information from the environment to a handler to compute its observations; e.g., a BEV handler might need a location to render the destination spot. This environment information can be accessed either through the [WorldManager](https://car-dreamer.readthedocs.io/en/latest/api/toolkit.html#car_dreamer.toolkit.WorldManager) APIs or through environment state management.

A `WorldManager` instance is passed to the handler during its initialization. The environment states are defined by an environment's `get_state()` API and passed as parameters to the handler's `get_observation()`.

```python
from typing import Dict, Tuple

class MyHandler(BaseHandler):
    def __init__(self, world: WorldManager, config):
        super().__init__(world, config)
        self._world = world

    def get_observation(self, env_state: Dict) -> Tuple[Dict, Dict]:
        # Get the waypoints through environment states
        waypoints = env_state.get("waypoints")
        # Get actors through the world manager API
        actors = self._world.actors
        # ...

class MyEnv(CarlaBaseEnv):
    # ...
    def get_state(self):
        return {
            # Expose the waypoints through get_state()
            'waypoints': self.waypoints,
        }
```
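The `get_state()` → `get_observation()` data flow above can be exercised with plain-Python stand-ins (the `FakeEnv`/`FakeHandler` names are hypothetical; CarDreamer's real classes require a running CARLA server):

```python
# Hypothetical stand-ins illustrating how environment state reaches a handler.
class FakeEnv:
    def __init__(self):
        self.waypoints = [(0.0, 0.0), (1.0, 0.5)]

    def get_state(self):
        # Expose environment internals to handlers
        return {"waypoints": self.waypoints}

class FakeHandler:
    def get_observation(self, env_state):
        # Read the exposed state; a real handler would render or encode it
        return {"num_waypoints": len(env_state.get("waypoints", []))}

env = FakeEnv()
handler = FakeHandler()
print(handler.get_observation(env.get_state()))  # {'num_waypoints': 2}
```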

## :computer: Visualization Tool

We stream observations, rewards, terminal conditions, and custom metrics to a web browser for each training session in real time, making it easier to engineer rewards and debug.

<table style="margin-left: auto; margin-right: auto;">
  <tr>
    <td class="center-text">Visualization Server</td>
  </tr>
  <tr>
    <td><img src="./.assets/visualization.png" style="width: 100%"></td>
  </tr>
</table>
## :hammer: System

To easily customize your own driving tasks, observation spaces, and more, please refer to our [CarDreamer API Documents](https://car-dreamer.readthedocs.io/en/latest/).

![CarDreamer system]
# :star2: Citation

If you find this repository useful, please cite this paper: **[Paper link](https://arxiv.org/abs/2405.09111)**

```bibtex
@article{CarDreamer2024,
  title   = {{CarDreamer: Open-Source Learning Platform for World Model based Autonomous Driving}},
  author  = {Dechen Gao and Shuangyu Cai and Hanchu Zhou and Hang Wang and Iman Soltani and Junshan Zhang},
  journal = {arXiv preprint arXiv:2405.09111},
  year    = {2024},
  month   = {May}
}
```
# Supplementary Material

## World model imagination

<p align="center">Birdeye view training</p>
<img src="./.assets/right_turn_hard_pre_bev.gif">

<p align="center">Camera view training</p>
<img src="./.assets/right_turn_hard_pre_camera.gif">

<p align="center">LiDAR view training</p>
<img src="./.assets/right_turn_hard_pre_lidar.gif">
# 👥 Contributors

### Credits

`CarDreamer` builds on several projects within the autonomous driving and machine learning communities.

- [gym-carla](https://github.com/cjy1992/gym-carla)
- [DreamerV2](https://github.com/danijar/director)
- [DreamerV3](https://github.com/danijar/dreamerv3)
- [CuriousReplay](https://github.com/AutonomousAgentsLab/curiousreplay)

<!-- readme: contributors -start -->
<table>
  <tbody>
    <tr>
      <td align="center">
        <a href="https://github.com/tonycaisy">
          <img src="https://avatars.githubusercontent.com/u/92793139?v=4" width="100;" alt="tonycaisy"/>
          <br />
          <sub><b>Shuangyu Cai</b></sub>
        </a>
      </td>
      <td align="center">
        <a href="https://github.com/HanchuZhou">
          <img src="https://avatars.githubusercontent.com/u/99316745?v=4" width="100;" alt="HanchuZhou"/>
          <br />
          <sub><b>Hanchu Zhou</b></sub>
        </a>
      </td>
      <td align="center">
        <a href="https://github.com/gaodechen">
          <img src="https://avatars.githubusercontent.com/u/2103562?v=4" width="100;" alt="gaodechen"/>
          <br />
          <sub><b>GaoDechen</b></sub>
        </a>
      </td>
      <td align="center">
        <a href="https://github.com/junshanzhangJZ2080">
          <img src="https://avatars.githubusercontent.com/u/111560343?v=4" width="100;" alt="junshanzhangJZ2080"/>
          <br />
          <sub><b>junshanzhangJZ2080</b></sub>
        </a>
      </td>
      <td align="center">
        <a href="https://github.com/ustcmike">
          <img src="https://avatars.githubusercontent.com/u/32145615?v=4" width="100;" alt="ustcmike"/>
          <br />
          <sub><b>ucdmike</b></sub>
        </a>
      </td>
    </tr>
  </tbody>
</table>
<!-- readme: contributors -end -->