---
license: mit
task_categories:
  - reinforcement-learning
  - robotics
language:
  - en
tags:
  - drone-navigation
  - rl-dataset
  - threejs
  - ppo
  - telemetry
  - pathfinding
---

[![Website](https://img.shields.io/badge/webXOS.netlify.app-Explore_Apps-00d4aa?style=for-the-badge&logo=netlify&logoColor=white)](https://webxos.netlify.app)
[![GitHub](https://img.shields.io/badge/GitHub-webxos/webxos-181717?style=for-the-badge&logo=github&logoColor=white)](https://github.com/webxos/webxos)
[![Hugging Face](https://img.shields.io/badge/Hugging_Face-🤗_webxos-FFD21E?style=for-the-badge&logo=huggingface&logoColor=white)](https://huggingface.co/webxos)
[![Follow on X](https://img.shields.io/badge/Follow_@webxos-1DA1F2?style=for-the-badge&logo=x&logoColor=white)](https://x.com/webxos)

# DRONE FSD DATASET

A single training run (1 epoch, 4 iterations, 198 steps) of drone navigation in a 60×60 room with 15 static and 12 floating obstacles.

This dataset was generated with the MIRROR IDE by webXOS. The app is included in the /mirror/ folder if you want to train similar datasets of your own.

**Final performance (after 2456 frames):**
- Best time: **43.821 s**
- Success rate: **0.0%** (reached SE corner in best run but did not complete full pattern)
- Collisions: **0** in final recorded path
- Avg reward: **0.0732**
- Cumulative reward: **49.24**
- Final exploration rate: **0.784**
- Final learning rate: **5.40e-4**

## Network

- Architecture: `[256 → 128 → 64 → 32]` (MLP policy/value heads)
- Exported: 2026-01-17 03:32 UTC
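
To get a feel for the network shape, here is a minimal sketch of a forward pass through an MLP with the listed layer sizes. The input dimension, activation function, and random weights are assumptions for illustration only; the real weights and their JSON schema live in `enhanced_network.json`.

```python
import numpy as np

# Sketch only: layer sizes follow the card ([256, 128, 64, 32]);
# the observation size and ReLU activation are assumed, not documented.
rng = np.random.default_rng(0)
layer_sizes = [256, 128, 64, 32]
state_dim = 16  # assumed observation size; replace with the real one

weights, biases = [], []
in_dim = state_dim
for out_dim in layer_sizes:
    weights.append(rng.normal(0.0, 0.1, size=(in_dim, out_dim)))
    biases.append(np.zeros(out_dim))
    in_dim = out_dim

def forward(state):
    """ReLU MLP forward pass over the listed layer sizes."""
    x = state
    for w, b in zip(weights, biases):
        x = np.maximum(x @ w + b, 0.0)
    return x

features = forward(rng.normal(size=state_dim))
print(features.shape)  # (32,)
```

In an actual replay, you would load the weight matrices from `enhanced_network.json` instead of sampling them.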

## Files

| File                         | Description                                         | Size    |
|------------------------------|-----------------------------------------------------|---------|
| `enhanced_network.json`      | Final policy weights + shapes + LR                  | ~small  |
| `metadata.json`              | Training summary & config                           | ~small  |
| `successful_paths.json`      | Best 3 partial successes (times, paths)             | ~small  |
| `enhanced_telemetry.jsonl`   | Full per-frame telemetry (2,456 lines)              | ~2.4 MB |
| `enhanced_telemetry.csv`     | Same data in CSV format                             | ~1.8 MB |
| `training_experiences.jsonl` | PPO-style transitions (state, action, reward, next) | ~1.2 MB |
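
The JSONL files are one JSON object per line, so they can be streamed without loading 2.4 MB at once. The field names in the sketch below (`frame`, `position`, `reward`) are assumptions, not the dataset's documented schema; inspect a real line first.

```python
import json

# Hypothetical telemetry line; real records may use different keys.
sample_line = '{"frame": 1, "position": [3.0, 1.5, -2.0], "reward": 0.05}'

record = json.loads(sample_line)
x, y, z = record["position"]
print(record["frame"], record["reward"], (x, y, z))
```

For the full file, iterate line by line, e.g. `for line in open("enhanced_telemetry.jsonl"): record = json.loads(line)`.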

## Environment

- Room: 60 × 60 units
- Difficulty: 1
- Obstacles: 15 static + 12 floating (0.2–0.5 speed, bounce energy 0.8)
- Pattern targets: NW → SE → NE → SW → CENTER
- Reward: mostly distance-based + small shaping
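
"Mostly distance-based + small shaping" could look like the sketch below: reward for progress toward the current pattern target, plus small shaping terms. This matches the card's description in spirit only; the actual reward function used by MIRROR IDE is not published here, and the penalty constants are invented.

```python
import math

def shaped_reward(pos, target, prev_dist, collision):
    """Illustrative distance-progress reward with assumed shaping terms."""
    dist = math.dist(pos, target)      # Euclidean distance to current target
    reward = prev_dist - dist          # positive when the drone moves closer
    reward -= 0.001                    # assumed small per-step time penalty
    if collision:
        reward -= 1.0                  # assumed collision penalty
    return reward, dist

r, d = shaped_reward((0, 0, 0), (3, 4, 0), prev_dist=6.0, collision=False)
print(round(r, 3), d)  # 0.999 5.0
```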

## Intended Use

- Analyze early-stage PPO behavior on 3D continuous control
- Study exploration vs exploitation trade-off (ε still ~78% at end)
- Visualize drone trajectories in Three.js / Unity / similar
- Baseline for future drone racing / obstacle avoidance models