---
license: mit
task_categories:
  - reinforcement-learning
  - robotics
language:
  - en
tags:
  - drone-navigation
  - rl-dataset
  - threejs
  - ppo
  - telemetry
  - pathfinding
---


# DRONE FSD DATASET

Single training run (1 epoch, 4 iterations, 198 steps) of a drone navigating a 60×60 room with 15 static + 12 floating obstacles.

This dataset was generated with the MIRROR IDE by webXOS. Download the app from the /mirror/ folder to generate similar datasets of your own.

Final performance (after 2456 frames):

- Best time: 43.821 s
- Success rate: 0.0% (reached the SE corner in the best run but did not complete the full pattern)
- Collisions: 0 in the final recorded path
- Avg reward: 0.0732
- Cumulative reward: 49.24
- Final exploration rate: 0.784
- Final learning rate: 5.40e-4

## Network

- Architecture: [256 → 128 → 64 → 32] (MLP policy/value heads)
- Exported: 2026-01-17 03:32 UTC
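The hidden-layer widths above can be sketched as a plain NumPy forward pass. Note that the input and output sizes (`STATE_DIM`, `ACTION_DIM`) are hypothetical here; the actual shapes are stored in `enhanced_network.json`.

```python
import numpy as np

HIDDEN = [256, 128, 64, 32]  # layer widths from the Network section
STATE_DIM = 12   # hypothetical: the real input size is in enhanced_network.json
ACTION_DIM = 4   # hypothetical: the real output size is in enhanced_network.json

def init_mlp(sizes, rng):
    """He-initialized weight/bias pairs for each consecutive layer pair."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """ReLU on hidden layers, linear output (policy logits / value)."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

rng = np.random.default_rng(0)
params = init_mlp([STATE_DIM] + HIDDEN + [ACTION_DIM], rng)
logits = forward(params, rng.standard_normal(STATE_DIM))
print(logits.shape)  # (4,)
```

To reproduce the exported policy exactly, load the weights from `enhanced_network.json` instead of initializing randomly.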

## Files

| File | Description | Size |
|---|---|---|
| enhanced_network.json | Final policy weights + shapes + LR | ~small |
| metadata.json | Training summary & config | ~small |
| successful_paths.json | Best 3 partial successes (times, paths) | ~small |
| enhanced_telemetry.jsonl | Full per-frame telemetry (2456 lines) | ~2.4 MB |
| enhanced_telemetry.csv | Same data in CSV format | ~1.8 MB |
| training_experiences.jsonl | PPO-style transitions (state, action, reward, next) | ~1.2 MB |
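The two `.jsonl` files are JSON Lines: one JSON object per line. A minimal loader, with a hypothetical record schema for illustration (the real field names are whatever appears in `enhanced_telemetry.jsonl` and `training_experiences.jsonl`):

```python
import json

def load_jsonl(path):
    """Stream one parsed JSON object per non-empty line of a .jsonl file."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Inline sample with hypothetical field names, standing in for real telemetry lines.
sample = ('{"frame": 0, "pos": [1.0, 2.0, 3.0], "reward": 0.01}\n'
          '{"frame": 1, "pos": [1.1, 2.0, 3.0], "reward": 0.02}')
records = [json.loads(line) for line in sample.splitlines()]
total_reward = sum(r["reward"] for r in records)
```

Usage against the dataset would be e.g. `records = list(load_jsonl("enhanced_telemetry.jsonl"))`, which should yield 2456 records for the full run.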

## Environment

- Room: 60 × 60 units
- Difficulty: 1
- Obstacles: 15 static + 12 floating (speed 0.2–0.5, bounce energy 0.8)
- Pattern targets: NW → SE → NE → SW → CENTER
- Reward: mostly distance-based, plus small shaping terms
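A distance-based reward of this kind can be sketched as progress toward the current pattern target minus small shaping penalties. Everything here is hypothetical — the coefficients and penalty terms actually used by the MIRROR IDE are not published in this card:

```python
import math

def reward(prev_pos, pos, target, collided,
           progress_scale=1.0, step_penalty=0.01, collision_penalty=1.0):
    """Hypothetical distance-based reward with small shaping terms.

    Positive when the drone moves closer to the current target
    (NW -> SE -> NE -> SW -> CENTER), minus a per-step cost, minus a
    penalty on collision. All coefficients are illustrative assumptions.
    """
    progress = math.dist(prev_pos, target) - math.dist(pos, target)
    r = progress_scale * progress - step_penalty
    if collided:
        r -= collision_penalty
    return r

# Moving one unit straight toward a target 5 units away, no collision:
r = reward((0, 0, 0), (1, 0, 0), (5, 0, 0), collided=False)  # 0.99
```

The small average reward in this run (0.0732 per step) is consistent with shaping of roughly this magnitude, but the exact function would have to be read out of the MIRROR IDE itself.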

## Intended Use

- Analyze early-stage PPO behavior on 3D continuous control
- Study the exploration vs. exploitation trade-off (ε still ~78% at the end of the run)
- Visualize drone trajectories in Three.js / Unity / similar
- Baseline for future drone-racing / obstacle-avoidance models
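For the Three.js use case, a recorded path can be flattened into the flat `[x, y, z, x, y, z, ...]` array that `THREE.BufferAttribute` expects for a line geometry. The path schema below is a guess for illustration; check `successful_paths.json` for the actual field names:

```python
import json

# Hypothetical schema standing in for successful_paths.json.
sample = {"paths": [{"time": 43.821,
                     "points": [[0, 1, 0], [1, 1, 0], [2, 1, 1]]}]}

def to_threejs_positions(path):
    """Flatten a list of [x, y, z] points into one flat coordinate array,
    ready to feed a Float32Array / THREE.BufferAttribute on the JS side."""
    return [coord for point in path["points"] for coord in point]

positions = to_threejs_positions(sample["paths"][0])
print(json.dumps(positions))  # [0, 1, 0, 1, 1, 0, 2, 1, 1]
```

On the JavaScript side this maps onto `new THREE.BufferAttribute(new Float32Array(positions), 3)` attached to a `BufferGeometry` rendered as a `Line`.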