---
tags:
- reinforcement-learning
- robotics
- mujoco
- locomotion
- unitree
- go2
- quadruped
- sac
- stable-baselines3
- strands-robots
library_name: stable-baselines3
model-index:
- name: SAC-Unitree-Go2-MuJoCo
results:
- task:
type: reinforcement-learning
name: Quadruped Locomotion
dataset:
type: custom
name: MuJoCo LocomotionEnv
metrics:
- type: mean_reward
value: 4912
name: Best Mean Reward
- type: mean_distance
value: 21.0
name: Mean Forward Distance (m)
---
# SAC Unitree Go2 — MuJoCo Locomotion Policy
A **Soft Actor-Critic (SAC)** policy trained to make the Unitree Go2 quadruped **walk forward** in MuJoCo simulation.
Trained entirely on a MacBook (CPU, no GPU, no Isaac Gym) using [strands-robots](https://github.com/cagataycali/strands-gtc-nvidia).
## Results
| Metric | Value |
|--------|-------|
| Algorithm | SAC (Soft Actor-Critic) |
| Training steps | 1.74M |
| Training time | ~40 min (MacBook M-series, CPU) |
| Parallel envs | 8 |
| Network | MLP [256, 256] |
| Best reward | **4,912** |
| Mean distance | **21 meters** per episode |
| Forward velocity | ~1 m/s |
| Episode length | 1,000/1,000 (full episodes) |
## Demo Video
<video src="https://huggingface.co/cagataydev/sac-unitree-go2-mujoco/resolve/main/go2_walking.mp4" controls autoplay loop muted></video>
## Usage
```python
from stable_baselines3 import SAC
model = SAC.load("best/best_model")

# In a MuJoCo Go2 environment (Gymnasium API):
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```
## Reward Function
```
reward = forward_vel × 5.0 # primary: move forward
+ alive_bonus × 1.0 # stay upright
+ upright_reward × 0.3 # orientation bonus
- ctrl_cost × 0.001 # minimize energy
- lateral_penalty × 0.3 # don't drift sideways
- smoothness × 0.0001 # discourage jerky motion
```
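The terms above can be sketched as a single Python function. Only the coefficients come from this card; the exact definition of each term (control cost as squared torque, smoothness as squared action delta, lateral penalty as absolute sideways velocity) is an illustrative assumption:

```python
import numpy as np

def compute_reward(forward_vel, upright_cos, lateral_vel, action, prev_action):
    """Sketch of the reward terms above; coefficients match the card,
    term definitions are assumptions."""
    reward = 5.0 * forward_vel                     # primary: move forward
    reward += 1.0                                  # alive bonus (while upright)
    reward += 0.3 * upright_cos                    # orientation bonus
    reward -= 0.001 * float(np.sum(np.square(action)))   # control cost
    reward -= 0.3 * abs(lateral_vel)               # don't drift sideways
    reward -= 0.0001 * float(np.sum(np.square(action - prev_action)))  # smoothness
    return reward
```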
## Why SAC > PPO
| Algorithm | Steps | Reward | Distance |
|-----------|-------|--------|----------|
| PPO | 500K | 615 | 0.02 m (learned to stand still) |
| SAC | 1.74M | **4,912** | **21 m** |

SAC's off-policy replay and entropy regularization explore continuous action spaces far more effectively than PPO's on-policy updates.
## Files
- `best/best_model.zip` — Best checkpoint (highest eval reward)
- `checkpoints/` — All 100K-step checkpoints
- `logs/evaluations.npz` — Evaluation metrics over training
- `go2_walking.mp4` — Demo video
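Stable-Baselines3's `EvalCallback` writes `evaluations.npz` with arrays named `timesteps`, `results`, and `ep_lengths`. A small helper to pull the best mean eval reward out of the log:

```python
import numpy as np

def best_eval(path="logs/evaluations.npz"):
    """Return (best mean eval reward, timestep it occurred at)."""
    data = np.load(path)
    mean_rewards = data["results"].mean(axis=1)  # mean over eval episodes
    i = int(mean_rewards.argmax())
    return float(mean_rewards[i]), int(data["timesteps"][i])
```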
## Environment
- **Simulator**: MuJoCo (official `mujoco` Python bindings)
- **Robot**: Unitree Go2 (12 DOF) from MuJoCo Menagerie
- **Observation**: joint positions, velocities, torso orientation, height (37-dim)
- **Action**: joint torques (12-dim, continuous)
## License
Apache-2.0