webxos committed · Commit ed0fe64 · verified · 1 parent: a437602

Create README.md

---
license: mit
task_categories:
- reinforcement-learning
- robotics
language:
- en
tags:
- drone-navigation
- rl-dataset
- threejs
- ppo
- telemetry
- pathfinding
---

# Drone RL Training Run – Pattern: NW→SE→NE→SW→CENTER

Single training run (1 epoch, 4 iterations, 198 steps) of a drone navigating a 60×60 room with 15 static + 12 floating obstacles.

This dataset was generated with the MIRROR IDE by webXOS. Download the app in the /mirror/ folder to generate your own similar datasets.

**Final performance (after 2456 frames):**
- Best time: **43.821 s**
- Success rate: **0.0%** (reached the SE corner in the best run but did not complete the full pattern)
- Collisions: **0** in the final recorded path
- Avg reward: **0.0732**
- Cumulative reward: **49.24**
- Final exploration rate: **0.784**
- Final learning rate: **5.40e-4**

## Network
- Architecture: `[256 → 128 → 64 → 32]` (MLP policy/value heads)
- Exported: 2026-01-17 03:32 UTC

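The exported stack can be illustrated with a plain-Python forward pass. This is a minimal sketch of the documented `[256 → 128 → 64 → 32]` architecture; the observation width (16) and the weight layout are assumptions, and random placeholder matrices stand in for the real weights in `enhanced_network.json`, so treat it as a data-flow illustration only.

```python
# Sketch: forward pass through an MLP with the documented layer widths.
# Weight layout and observation size are assumptions, not read from
# enhanced_network.json -- random placeholders show the shapes only.
import random

LAYER_SIZES = [256, 128, 64, 32]  # widths listed in the Network section

def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, w, b):
    # w: (out x in) matrix as nested lists, b: out-length bias vector
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

def forward(obs, weights):
    h = obs
    for w, b in weights:
        h = relu(dense(h, w, b))
    return h

# Build random placeholder weights with the documented shapes.
rng = random.Random(0)
weights = []
in_dim = 16  # observation size is NOT documented; 16 is a placeholder
for out_dim in LAYER_SIZES:
    w = [[rng.uniform(-0.1, 0.1) for _ in range(in_dim)] for _ in range(out_dim)]
    b = [0.0] * out_dim
    weights.append((w, b))
    in_dim = out_dim

features = forward([0.5] * 16, weights)
print(len(features))  # final feature width feeding the policy/value heads
```

The 32-wide output would then feed the separate policy and value heads mentioned above.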
## Files

| File | Description | Size |
|------|-------------|------|
| `enhanced_network.json` | Final policy weights + shapes + LR | small |
| `metadata.json` | Training summary & config | small |
| `successful_paths.json` | Best 3 partial successes (times, paths) | small |
| `enhanced_telemetry.jsonl` | Full per-frame telemetry (2456 lines) | ~2.4 MB |
| `enhanced_telemetry.csv` | Same data in CSV format | ~1.8 MB |
| `training_experiences.jsonl` | PPO-style transitions (state, action, reward, next) | ~1.2 MB |

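A minimal sketch of consuming `training_experiences.jsonl`, assuming the field names match the table row above (`state`, `action`, `reward`, `next`); the inline sample record stands in for a real line of the file, so inspect one actual line before relying on these keys.

```python
# Sketch: reading PPO-style transitions from training_experiences.jsonl.
# Field names are assumed from the Files table; the sample record below
# stands in for the real file.
import io
import json

sample_jsonl = io.StringIO(
    '{"state": [0.1, 0.2], "action": [0.0, 1.0], "reward": 0.05, "next": [0.1, 0.3]}\n'
)

transitions = [json.loads(line) for line in sample_jsonl if line.strip()]
total_reward = sum(t["reward"] for t in transitions)
print(len(transitions), total_reward)
```

To read the real file, swap `sample_jsonl` for `open("training_experiences.jsonl")`.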
## Environment
- Room: 60 × 60 units
- Difficulty: 1
- Obstacles: 15 static + 12 floating (speed 0.2–0.5, bounce energy 0.8)
- Pattern targets: NW → SE → NE → SW → CENTER
- Reward: mostly distance-based + small shaping

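A distance-based reward of the kind described above could be sketched as follows; the shaping constant and collision penalty are illustrative assumptions, not the values MIRROR IDE actually uses.

```python
# Sketch: "mostly distance-based + small shaping" reward.
# Coefficients are illustrative assumptions only.
import math

def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def step_reward(prev_pos, pos, target, collided):
    # Positive when the drone closes distance to the current pattern target.
    progress = distance(prev_pos, target) - distance(pos, target)
    shaping = 0.01                       # small per-step shaping term (assumed)
    penalty = -1.0 if collided else 0.0  # collision penalty (assumed)
    return progress + shaping + penalty

# Drone moves one step toward the NW corner target from the room center.
r = step_reward((30.0, 30.0), (29.0, 29.0), (0.0, 0.0), collided=False)
print(round(r, 3))
```

A reward dominated by the progress term matches the small average reward (0.0732) reported above for mostly-exploratory steps.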
## Intended Use
- Analyze early-stage PPO behavior on 3D continuous control
- Study the exploration vs. exploitation trade-off (ε still ~78% at the end)
- Visualize drone trajectories in Three.js / Unity / similar
- Baseline for future drone racing / obstacle-avoidance models
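For the Three.js use case, per-frame positions can be flattened into the `[x, y, z, …]` layout a `BufferGeometry` position attribute expects. The telemetry field name (`pos`) and the sample frames here are assumptions, so check one line of `enhanced_telemetry.jsonl` before adapting this.

```python
# Sketch: flatten per-frame drone positions for a Three.js position
# attribute. The "pos" key is an assumed telemetry field name.
import json

frames = [
    {"frame": 0, "pos": [0.0, 1.0, 0.0]},
    {"frame": 1, "pos": [0.5, 1.0, 0.2]},
    {"frame": 2, "pos": [1.0, 1.1, 0.4]},
]

flat = [coord for f in frames for coord in f["pos"]]
print(json.dumps(flat))  # feed into new Float32Array(...) on the JS side
```

On the JavaScript side the array would go into `new THREE.Float32BufferAttribute(flat, 3)` to draw the trajectory as a line.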