This dataset was generated with the MIRROR IDE by webXOS.

- Final learning rate: **5.40e-4**
## Network

- Architecture: `[256 → 128 → 64 → 32]` (MLP policy/value heads)
- Exported: 2026-01-17 03:32 UTC
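The exported network can be sketched as a plain NumPy forward pass, assuming `[256, 128, 64, 32]` are the trunk widths with separate linear policy/value heads; the observation and action dimensions below are placeholders, not taken from the dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM = 16, 4          # hypothetical observation/action sizes
SIZES = [OBS_DIM, 256, 128, 64, 32]

# Randomly initialized weights stand in for the exported parameters.
weights = [rng.standard_normal((i, o)) * np.sqrt(2.0 / i)
           for i, o in zip(SIZES[:-1], SIZES[1:])]
biases = [np.zeros(o) for o in SIZES[1:]]
policy_head = rng.standard_normal((32, ACT_DIM)) * 0.01
value_head = rng.standard_normal(32) * 0.01

def forward(obs):
    """Shared tanh trunk, then policy logits and a scalar state value."""
    h = obs
    for w, b in zip(weights, biases):
        h = np.tanh(h @ w + b)
    policy_logits = h @ policy_head    # shape (ACT_DIM,)
    value = float(h @ value_head)      # scalar value estimate
    return policy_logits, value

logits, v = forward(rng.standard_normal(OBS_DIM))
```

The shared-trunk-plus-two-heads layout is one common reading of "MLP policy/value heads"; fully separate policy and value networks would also match the bullet above.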
| File | Contents | Size |
| --- | --- | --- |
| `training_experiences.jsonl` | PPO-style transitions (state, action, reward, next) | ~1.2 MB |
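A minimal reader for the transitions file might look like the sketch below. The field names (`state`, `action`, `reward`, `next`) are assumed from the table's description, not from an inspected record.

```python
import io
import json

def load_transitions(fp):
    """Parse one JSON object per line, skipping blank lines."""
    return [json.loads(line) for line in fp if line.strip()]

# Tiny in-memory example in the assumed record format:
sample = io.StringIO(
    '{"state": [0.0, 1.0], "action": [0.5], "reward": -0.1, "next": [0.1, 1.0]}\n'
)
transitions = load_transitions(sample)
total_reward = sum(t["reward"] for t in transitions)
```

In practice you would open `training_experiences.jsonl` instead of the `StringIO` stand-in.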
## Environment

- Room: 60 units
- Difficulty: 1
- Obstacles: 15 static + 12 floating (0.2–0.5 speed, bounce energy 0.8)
- Reward: mostly distance-based + small shaping
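One way to read "mostly distance-based + small shaping" is a dominant negative-distance term plus a small progress bonus; the coefficients and the shaping term below are assumptions for illustration, since the dataset's true reward function is not specified beyond the bullet above.

```python
import math

def reward(pos, goal, prev_dist, step_penalty=0.01):
    """Hypothetical reward: -distance dominates, shaping rewards progress."""
    dist = math.dist(pos, goal)
    distance_term = -dist                          # closer to goal is better
    shaping = (prev_dist - dist) - step_penalty    # small bonus for progress
    return distance_term + 0.1 * shaping, dist

r, d = reward((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), prev_dist=1.2)
```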
## Intended Use

- Analyze early-stage PPO behavior on 3D continuous control
- Study exploration vs exploitation trade-off (ε still ~78% at end)
- Visualize drone trajectories in Three.js / Unity / similar
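For the visualization use case, a trajectory can be extracted from the transitions if the first three components of each `state` are the drone's (x, y, z) position — an assumption, since the state layout is not documented here.

```python
import json

def trajectory_points(transitions):
    """Take the assumed leading (x, y, z) triple from each state."""
    return [t["state"][:3] for t in transitions]

# Two hand-written transitions in the assumed format:
transitions = [
    {"state": [0.0, 1.0, 0.0, 0.3], "action": [0.1], "reward": -0.5, "next": []},
    {"state": [0.2, 1.1, 0.0, 0.3], "action": [0.0], "reward": -0.4, "next": []},
]
points = trajectory_points(transitions)
as_json = json.dumps(points)  # flat list of [x, y, z] points for a viewer
```

The resulting JSON array of points can be fed to, e.g., a Three.js line geometry or a Unity line renderer.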