---
license: mit
library_name: stable-baselines3
tags:
- reinforcement-learning
- robotics
- autonomous-navigation
- ros2
- gazebo
- sac
- lidar
- camera
- multi-input
pipeline_tag: reinforcement-learning
---

# RC Car Autonomous Navigation — SAC (Camera + LiDAR)

A **Soft Actor-Critic (SAC)** agent trained to autonomously navigate an RC car in a simulated Gazebo environment, using both **camera images** and **LiDAR sensor data** as observations. The agent learns to reach target positions while avoiding obstacles.

---

## Model Description

This model uses a **MultiInputPolicy** with a hybrid perception backbone:

- **Visual stream** — RGB camera frames processed by a CNN (NatureCNN)
- **Sensor stream** — LiDAR scan ranges + navigation state processed by an MLP

Both streams are fused and fed into the SAC actor and critic networks for end-to-end policy learning.

| Property | Value |
|---|---|
| Algorithm | Soft Actor-Critic (SAC) |
| Policy | `MultiInputPolicy` |
| Observation | `Dict` — image `(64×64×3)` + sensor vector `(184,)` |
| Action Space | `Box([-1, -1], [1, 1])` — speed & steering |
| Simulator | Gazebo (Ignition/Harmonic) via ROS 2 |
| Framework | Stable-Baselines3 |

---

## Environments

Two training environments are available:

### `RcCarTargetEnv`
The robot spawns at a random position and must navigate to a randomly placed target (red sphere marker). No dynamic obstacles.

### `RcCarComplexEnv`
The same goal-reaching task, but with **6 randomly placed box obstacles** that are reshuffled every episode, requiring active collision avoidance.

---

## Observation Space

```python
import numpy as np
from gymnasium import spaces

spaces.Dict({
    "image": spaces.Box(low=0, high=255, shape=(64, 64, 3), dtype=np.uint8),
    "sensor": spaces.Box(low=0.0, high=1.0, shape=(184,), dtype=np.float32),
})
```

The `sensor` vector contains:
- **[0:180]** — Normalised LiDAR ranges (180 beams, max range 10 m)
- **[180]** — Normalised linear speed
- **[181]** — Normalised steering angle
- **[182]** — Normalised distance to target (clipped at 10 m)
- **[183]** — Normalised relative angle to target

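For illustration, the 184-dimensional `sensor` vector described above could be assembled as follows. This is a sketch: the helper name `pack_sensor` and the exact normalisation used for steering and relative angle are assumptions, not taken from the environment code.

```python
# Sketch: packing LiDAR beams and navigation state into the flat 184-dim
# "sensor" observation. Index layout follows the table above; the exact
# normalisation of steering/angle is a hypothetical choice.
MAX_RANGE = 10.0   # LiDAR max range (m)
MAX_SPEED = 1.0    # max linear speed (m/s)
MAX_STEER = 0.6    # max steering command (rad/s)
PI = 3.141592653589793

def pack_sensor(lidar_ranges, speed, steer, dist_to_target, rel_angle):
    assert len(lidar_ranges) == 180, "expected 180 LiDAR beams"
    clip01 = lambda x: min(max(x, 0.0), 1.0)
    vec = [clip01(r / MAX_RANGE) for r in lidar_ranges]   # [0:180]
    vec.append(clip01(speed / MAX_SPEED))                 # [180]
    vec.append(clip01((steer / MAX_STEER + 1.0) / 2.0))   # [181]: [-max, max] -> [0, 1]
    vec.append(clip01(dist_to_target / MAX_RANGE))        # [182]: clipped at 10 m
    vec.append(clip01((rel_angle / PI + 1.0) / 2.0))      # [183]: [-pi, pi] -> [0, 1]
    return vec
```

Every entry stays in `[0, 1]`, matching the `spaces.Box(low=0.0, high=1.0, shape=(184,))` declaration.
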
---

## Action Space

```python
import numpy as np
from gymnasium import spaces

spaces.Box(low=np.array([-1.0, -1.0]), high=np.array([1.0, 1.0]), dtype=np.float32)
```

| Index | Meaning | Scale |
|---|---|---|
| `action[0]` | Linear speed | × 1.0 m/s |
| `action[1]` | Steering angle | × 0.6 rad/s |

Steering is smoothed with a low-pass filter: `steer = 0.6 × prev + 0.4 × target`.

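Combining the scale factors with the smoothing rule, the command computation can be sketched like this (a hypothetical helper, assuming only the coefficients quoted above):

```python
MAX_SPEED = 1.0   # m/s, scale for action[0]
MAX_STEER = 0.6   # rad/s, scale for action[1]

def apply_action(action, prev_steer):
    """Map a normalised [-1, 1] action to commands, low-pass filtering steering."""
    speed = action[0] * MAX_SPEED
    target = action[1] * MAX_STEER
    steer = 0.6 * prev_steer + 0.4 * target  # low-pass: 0.6 x prev + 0.4 x target
    return speed, steer
```

The filter means a full-lock steering command is reached gradually over several steps rather than instantly.
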
---

## Reward Function

### `RcCarTargetEnv`
| Event | Reward |
|---|---|
| Progress toward target | `Δdistance × 40.0` |
| Reached target (< 0.6 m) | `+100.0` |
| Collision (LiDAR < 0.22 m) | `−50.0` |
| Per-step penalty | `−0.05` |

### `RcCarComplexEnv`
| Event | Reward |
|---|---|
| Progress toward target | `Δdistance × 40.0` |
| Forward speed bonus (on progress) | `+speed × 0.5` |
| Proximity warning (LiDAR < 0.5 m) | `−0.5` |
| Collision | `−50.0` |
| Reached target | `+100.0` |
| Per-step penalty | `−0.1` |

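Putting the `RcCarTargetEnv` table together, a per-step reward could be computed roughly as follows. This is an illustrative sketch of the table's logic, not the environment's actual code:

```python
def target_env_reward(prev_dist, dist, min_lidar):
    """Sketch of the RcCarTargetEnv per-step reward from the table above."""
    reward = (prev_dist - dist) * 40.0  # progress toward target
    reward -= 0.05                      # per-step time penalty
    done = False
    if dist < 0.6:                      # reached target
        reward += 100.0
        done = True
    elif min_lidar < 0.22:              # collision with nearest obstacle
        reward -= 50.0
        done = True
    return reward, done
```

The `RcCarComplexEnv` variant follows the same shape, with the extra speed bonus and proximity-warning terms added.
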
---

## Training Setup

```python
from stable_baselines3 import SAC

model = SAC(
    "MultiInputPolicy",
    env,
    learning_rate=3e-4,
    buffer_size=50000,
    policy_kwargs=dict(
        net_arch=dict(pi=[256, 256], qf=[256, 256])
    ),
    device="auto",
)
```

- **Action repeat:** 4 simulator steps per agent decision
- **Frame stacking:** configurable via the Hydra config (`n_stack`)
- **Vectorised env:** `DummyVecEnv` + `VecFrameStack` (`channels_order="last"`)
- **Experiment tracking:** Weights & Biases (W&B) with the SB3 callback

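The action-repeat setting means each policy output is held for 4 consecutive simulator steps. A minimal, dependency-free sketch of such a wrapper (hypothetical; not the project's actual implementation, which lives inside the environment):

```python
class ActionRepeat:
    """Hold each agent action for n_repeat simulator steps, summing rewards."""

    def __init__(self, env, n_repeat=4):
        self.env = env
        self.n_repeat = n_repeat

    def step(self, action):
        total_reward = 0.0
        for _ in range(self.n_repeat):
            obs, reward, terminated, truncated, info = self.env.step(action)
            total_reward += reward
            if terminated or truncated:  # stop repeating at episode end
                break
        return obs, total_reward, terminated, truncated, info
```

Repeating actions shortens the effective decision horizon and gives the SAC agent a more stable control frequency relative to the physics rate.
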
---

## Hardware & Software Requirements

| Component | Requirement |
|---|---|
| ROS 2 | Humble or newer |
| Gazebo | Ignition Fortress / Harmonic |
| Python | 3.10+ |
| PyTorch | 2.0+ |
| stable-baselines3 | ≥ 2.0 |
| gymnasium | ≥ 0.29 |
| opencv-python | any recent release |
| cv_bridge | ROS 2 package |

---

## How to Use

### 1. Install dependencies
```bash
pip install stable-baselines3 wandb hydra-core gymnasium opencv-python
```

### 2. Launch the simulator
```bash
ros2 launch my_bot_pkg sim.launch.py
```

### 3. Run training
```bash
python train.py experiment.mode=target experiment.total_timesteps=500000
```

### 4. Load and run inference
```python
from stable_baselines3 import SAC
from rc_car_envs_camera import RcCarTargetEnv

env = RcCarTargetEnv()
model = SAC.load("sac_target_camera_final", env=env)

obs, _ = env.reset()
while True:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```

---

## Project Structure

```
├── rc_car_envs_camera.py   # Gym environments (Base, Target, Complex)
├── train.py                # Hydra-based training entry point
├── configs/
│   └── config.yaml         # Hydra config (mode, timesteps, wandb, etc.)
└── models/                 # Saved checkpoints (W&B)
```

---

## Limitations & Known Issues

- Training requires a live ROS 2 + Gazebo session; there is currently no offline/headless mode.
- `DummyVecEnv` runs a single environment; parallelisation would require `SubprocVecEnv` with careful ROS node naming.
- Camera latency under heavy load may cause the `scan_received` / `cam_received` wait loop to time out, potentially delivering stale observations.
- The collision threshold (0.22 m) is tuned for this specific robot mesh; adjust it for different URDF geometries.

---

## Citation

If you use this environment or training code in your research, please cite:

```bibtex
@misc{rccar_sac_nav,
  title = {RC Car Autonomous Navigation with SAC (Camera + LiDAR)},
  year  = {2025},
  url   = {https://huggingface.co/Hajorda/SAC_Complex_Camera}
}
```

---

## License

MIT License