lakminaG/reactive-VLN-PPO-Controller
Expert navigation episodes collected from the HM3D minival dataset
using GreedyGeodesicFollower inside Facebook Habitat-sim 0.3.1.
Intended for behaviour-cloning pre-training and PPO fine-tuning of a
discrete-action indoor navigation controller.
| Split | Episodes | Scenes | Total steps |
|---|---|---|---|
| train | 160 | 10 | 9,955 |
| eval | 40 | 10 | 2,510 |
| Array | Shape | dtype | Description |
|---|---|---|---|
| observations | (T, 256, 256, 3) | uint8 | RGB frames |
| actions | (T,) | int32 | Discrete action at each step |
| positions | (T, 3) | float32 | Agent XYZ position |
| rotations | (T, 4) | float32 | Agent quaternion (x, y, z, w) |
| episode_starts | (E,) | int32 | Start index of each episode in T |
| episode_lengths | (E,) | int32 | Step count per episode |
| scene_id | scalar | str | e.g. "00800-TEEsavR23oF" |
| split | scalar | str | "train" or "eval" |
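Each scene is packaged as one .npz file per split (e.g. train/00800-TEEsavR23oF_train.npz), with all of that scene's episodes concatenated along the step axis T; episode_starts and episode_lengths (length E = number of episodes) slice the individual trajectories back out: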
```python
import numpy as np

data = np.load("train/00800-TEEsavR23oF_train.npz", allow_pickle=True)
starts = data["episode_starts"]
lengths = data["episode_lengths"]

for i, (s, l) in enumerate(zip(starts, lengths)):
    obs = data["observations"][s : s + l]   # (l, 256, 256, 3) RGB frames for this episode
    acts = data["actions"][s : s + l]       # (l,) expert actions for this episode
    print(f"Episode {i}: {l} steps")
```
Scenes (HM3D minival):
00800-TEEsavR23oF, 00801-HaxA7YrQdEC, 00802-wcojb4TFT35, 00803-k1cupFYWXJ6, 00804-BHXhpBwSMLh, 00805-SUHsP6z2gcJ, 00806-tQ5s4ShP627, 00807-rsggHU7g7dh, 00808-y9hTuugGdiq, 00809-Qpor2mEya8F

Collection configuration:

```json
{
  "total_episodes": 200,
  "train_ratio": 0.8,
  "image_size": 256,
  "min_geodesic_dist": 6.0,
  "max_geodesic_dist": 15.0,
  "max_steps": 200,
  "success_dist": 0.2,
  "forward_step": 0.25,
  "turn_angle": 30.0,
  "seed": 42
}
```
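As a rough illustration of how these parameters map onto Habitat-sim, the sketch below builds a 256x256 RGB agent with matching 0.25 m / 30-degree discrete actions and rolls a GreedyGeodesicFollower expert out to a sampled goal. It is a minimal sketch, not the original collection script: the scene path is hypothetical, the goal is sampled uniformly rather than filtered by the geodesic-distance bounds above, and the bookkeeping for positions, rotations, and episode boundaries is omitted.

```python
import habitat_sim
from habitat_sim.agent import ActionSpec, ActuationSpec, AgentConfiguration

# RGB camera matching the dataset's 256x256 frames.
rgb_spec = habitat_sim.CameraSensorSpec()
rgb_spec.uuid = "color_sensor"
rgb_spec.sensor_type = habitat_sim.SensorType.COLOR
rgb_spec.resolution = [256, 256]

# Discrete action space matching the collection config:
# 0.25 m forward steps and 30-degree turns.
agent_cfg = AgentConfiguration()
agent_cfg.sensor_specifications = [rgb_spec]
agent_cfg.action_space = {
    "move_forward": ActionSpec("move_forward", ActuationSpec(amount=0.25)),
    "turn_left": ActionSpec("turn_left", ActuationSpec(amount=30.0)),
    "turn_right": ActionSpec("turn_right", ActuationSpec(amount=30.0)),
}

sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.scene_id = "path/to/hm3d/00800-TEEsavR23oF/TEEsavR23oF.basis.glb"  # hypothetical path
sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))

# Shortest-path expert: greedily follows the geodesic towards the goal and
# returns None once the agent is within goal_radius (= success_dist) of it.
follower = habitat_sim.nav.GreedyGeodesicFollower(
    sim.pathfinder, sim.get_agent(0), goal_radius=0.2
)

goal = sim.pathfinder.get_random_navigable_point()  # illustrative goal sampling
frames, actions = [], []
obs = sim.get_sensor_observations()
for _ in range(200):  # max_steps
    action = follower.next_action_along(goal)
    if action is None:  # agent reached the goal
        break
    frames.append(obs["color_sensor"][..., :3])  # keep RGB, drop alpha
    actions.append(action)
    obs = sim.step(action)
```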