# Physical AI: Unreal Engine - Isaac Sim Navigation Dataset
Demonstration datasets for the Unitree G1 humanoid robot performing vision-based object navigation in procedurally generated indoor environments.
Physics simulation is performed in Isaac Sim, while rendering is handled by Unreal Engine.

We assume each dataset is placed inside the `demo_data` directory of the GR00T repository.
## Task
The robot receives a front-facing camera image and a language instruction specifying a target object in the scene. It must navigate to the target object using a combination of high-level velocity commands (HLC) and low-level joint position actions (LLC).
Example instruction:
> Scene contains: Plant in a Pot, Towel, Aerosol Spray Can, Light Blue Bucket, Old Clock. Navigate to the Old Clock.
## Dataset Variants
| Dataset | Episodes | Description |
|---|---|---|
| `g1_procedural_room_navigation_20260206_062009` | 100 | 5 objects per scene |
| `g1_procedural_room_navigation_20260206_080307` | 100 | 1 object per scene |
| `g1_procedural_room_navigation_20260206_095145` | 100 | 3 objects per scene |
## Dataset Format
Each dataset follows the LeRobot v2 format:
```
g1_procedural_room_navigation_*/
├── meta/
│   ├── info.json        # Schema, features, robot config, processing params
│   ├── episodes.jsonl   # Per-episode metadata (index, length, task instruction)
│   ├── tasks.jsonl      # Task index definitions
│   ├── modality.json    # Modality-to-column mapping with slice indices
│   └── stats.json       # Per-feature statistics (see Generating Statistics section below)
├── data/
│   └── chunk-{NNN}/
│       └── {episode_index:06d}.parquet
└── videos/
    └── chunk-{NNN}/
        └── observation.images.front/
            └── episode_{episode_index:06d}.mp4
```
## Features
| Feature | Type | Shape | Description |
|---|---|---|---|
| `observation.images.front` | video | (480, 640, 3) | Front camera RGB at 50 fps |
| `observation.state.joint_pos` | float32 | (29,) | Joint positions (rad) |
| `observation.state.joint_vel` | float32 | (29,) | Joint velocities (rad/s) |
| `observation.state.root_pos_w` | float32 | (3,) | Root position in world frame |
| `observation.state.root_quat_w` | float32 | (4,) | Root orientation quaternion (w, x, y, z) |
| `observation.state.root_lin_vel_b` | float32 | (3,) | Root linear velocity in body frame |
| `observation.state.root_ang_vel_b` | float32 | (3,) | Root angular velocity in body frame |
| `action.hlc_raw` | float32 | (3,) | Raw high-level command (vx, vy, omega_z) |
| `action.hlc_processed` | float32 | (3,) | Processed HLC (scaled, shifted, clipped) |
| `action.llc_raw` | float32 | (29,) | Raw low-level joint position targets |
| `action.llc_processed` | float32 | (29,) | Processed LLC (scaled around default pose) |
| `timestamp` | float64 | (1,) | Time in seconds from episode start |
| `episode_id` | int64 | (1,) | Episode index |
| `frame_id` | int64 | (1,) | Frame index within episode |
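The descriptions of the `*_processed` columns suggest a simple affine transform of the raw actions. A hedged sketch of what that processing might look like; every scale, offset, clip, and default-pose value below is a placeholder, since the actual parameters are recorded in `meta/info.json`:

```python
import numpy as np

# Placeholder processing constants; the real values live in meta/info.json
# ("processing params") and are not reproduced here.
HLC_SCALE = np.array([1.0, 1.0, 1.0], dtype=np.float32)
HLC_OFFSET = np.zeros(3, dtype=np.float32)
HLC_CLIP = 1.0
LLC_SCALE = 0.25  # illustrative scale applied around the default pose
DEFAULT_POSE = np.zeros(29, dtype=np.float32)  # placeholder default joint pose


def process_hlc(raw: np.ndarray) -> np.ndarray:
    """Scale, shift, and clip a raw (vx, vy, omega_z) velocity command."""
    return np.clip(raw * HLC_SCALE + HLC_OFFSET, -HLC_CLIP, HLC_CLIP)


def process_llc(raw: np.ndarray) -> np.ndarray:
    """Express joint position targets as scaled offsets around the default pose."""
    return DEFAULT_POSE + LLC_SCALE * raw
```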
## Robot
- Model: Unitree G1
- Joints: 29 DoF (legs, waist, arms, wrists)
- Joint order: IsaacLab convention
- FPS: 50
## Combining Datasets
To merge multiple collection sessions into a single dataset, edit `SOURCE_DATASETS` and `OUTPUT_DATASET` in the script, then run:

```shell
python demo_data/scripts/combine_datasets.py
```
This will:
- Re-index episodes continuously (0, 1, ..., N-1) across all sources
- Copy parquet files with updated `episode_id` columns
- Symlink video files to the originals (no duplication)
- Merge `episodes.jsonl` with new indices
- Create `meta/origin.yaml` tracking which source datasets were combined
- Correctly bucket episodes into `chunk-NNN/` directories when total episodes exceed `chunks_size`
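The chunk bucketing in the last step can be sketched as follows. This is a minimal illustration, not the actual `combine_datasets.py` logic, and `CHUNKS_SIZE` is a placeholder for the value stored in `meta/info.json`:

```python
from pathlib import Path

CHUNKS_SIZE = 1000  # illustrative; the real value comes from meta/info.json


def episode_paths(out_root: Path, global_index: int) -> tuple[Path, Path]:
    """Return (parquet path, video path) for a re-indexed episode.

    Episodes are bucketed into chunk-NNN/ directories of CHUNKS_SIZE each,
    mirroring the layout shown in the Dataset Format section.
    """
    chunk = global_index // CHUNKS_SIZE
    parquet = out_root / "data" / f"chunk-{chunk:03d}" / f"{global_index:06d}.parquet"
    video = (
        out_root
        / "videos"
        / f"chunk-{chunk:03d}"
        / "observation.images.front"
        / f"episode_{global_index:06d}.mp4"
    )
    return parquet, video
```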
## Generating Statistics
After combining (or for any new dataset), generate `stats.json` using:

```shell
python gr00t/data/stats.py <dataset_path> --embodiment-tag <embodiment-tag>
```

For example:

```shell
python gr00t/data/stats.py demo_data/g1_procedural_room_navigation_combined --embodiment-tag unitree_g1_navigation_vel
```
This computes per-feature statistics (mean, std, min, max, q01, q99) across all parquet files and writes them to `meta/stats.json`. It also generates `meta/relative_stats.json` for relative action representations if configured in the embodiment config.

Note: the embodiment's modality configuration must be defined in `gr00t/configs/data/embodiment_configs.py`, and the tag must be added in `gr00t/data/embodiment_tags.py`, before running this script.
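For reference, the per-feature statistics named above can be sketched with NumPy. This illustrates the computed fields only; it is not the actual `gr00t/data/stats.py` implementation:

```python
import numpy as np


def feature_stats(values: np.ndarray) -> dict[str, list[float]]:
    """Per-dimension statistics for one feature.

    `values` stacks a single feature across all frames, shape (num_frames, dim).
    Returns the same six fields written to meta/stats.json.
    """
    return {
        "mean": np.mean(values, axis=0).tolist(),
        "std": np.std(values, axis=0).tolist(),
        "min": np.min(values, axis=0).tolist(),
        "max": np.max(values, axis=0).tolist(),
        "q01": np.quantile(values, 0.01, axis=0).tolist(),
        "q99": np.quantile(values, 0.99, axis=0).tolist(),
    }
```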