---
license: cc0-1.0
task_categories:
- reinforcement-learning
- robotics
- image-to-video
- image-text-to-video
- image-to-3d
language:
- en
tags:
- world-model
- reinforcement-learning
- human-in-the-loop
- agent
pretty_name: No Man's Sky High-Fidelity Human-in-the-loop World Model Dataset
size_categories:
- 100K<n<1M
---

# No Man's Sky High-Fidelity Human-in-the-loop World Model Dataset

## Overview

This dataset is designed for **world model training** using real human gameplay data from *No Man’s Sky*. It captures **high-fidelity human–computer interaction** by recording both the game video and time-aligned input actions, preserving the realistic latency characteristics of a human-in-the-loop system.

Compared with datasets that expose only internal game state, this dataset retains the **physical interaction chain** (input → game/render → screen → capture), making it well suited for training models that must operate under real-world latency and sensory constraints.

## Dataset Structure

Each recording session is stored in a directory named by its UUID. A typical session contains:

```
<UUID>/
  recording.mp4
  actions.jsonl
  events.jsonl
  metadata.json
  actions_resampled.jsonl
```

### 1) `recording.mp4`

The recorded gameplay video.

### 2) `actions.jsonl` (per-frame input state)

One JSON object per video frame. Each entry contains the input state sampled at frame time.

**Schema:**

- `frame` (int): frame index
- `timestamp_ms` (int): wall-clock timestamp in milliseconds
- `frame_pts_ms` (float): frame time in milliseconds (PTS-based)
- `capture_ns` (int): OBS compositor timestamp in nanoseconds
- `key` (string[]): list of pressed keys at this frame
- `mouse` (object):
  - `dx` (int): accumulated mouse delta X during the frame
  - `dy` (int): accumulated mouse delta Y during the frame
  - `x` (int): absolute mouse X position
  - `y` (int): absolute mouse Y position
  - `scroll_dy` (int): scroll delta during the frame
  - `button` (string[]): pressed mouse buttons (e.g., `LeftButton`, `Button4`)
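
As a minimal parsing sketch, one record following this schema can be read like so (the sample line and all its values are illustrative, not taken from the dataset):

```python
import json

# One illustrative actions.jsonl record (values are made up).
sample_line = (
    '{"frame": 120, "timestamp_ms": 1712000004000, "frame_pts_ms": 2000.0, '
    '"capture_ns": 1712000004000000000, "key": ["w", "shift"], '
    '"mouse": {"dx": 14, "dy": -3, "x": 960, "y": 540, '
    '"scroll_dy": 0, "button": ["LeftButton"]}}'
)

record = json.loads(sample_line)

# Per-frame input state: held keys, accumulated mouse motion, held buttons.
keys_held = set(record["key"])
mouse_delta = (record["mouse"]["dx"], record["mouse"]["dy"])
buttons_held = set(record["mouse"]["button"])
```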

### 3) `events.jsonl` (raw sub-frame input events)

Raw input events with microsecond timing, captured from the OS event stream.

**Schema:**

- `type` (string): event type
  - `key_down`, `key_up`, `flags_changed`
  - `mouse_move`, `mouse_button_down`, `mouse_button_up`
  - `scroll`
- `timestamp_ms` (int): wall-clock timestamp in milliseconds
- `session_offset_us` (int): microsecond offset from session start
- `key` (string): key name for key events
- `button` (string): mouse button name
- `dx`, `dy`, `x`, `y` (int): mouse movement deltas and absolute position
- `scroll_dy` (int): scroll delta
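
A sketch of binning these sub-frame events into per-frame mouse deltas, assuming a fixed frame rate for simplicity (real sessions should align against `frame_pts_ms` from `actions.jsonl`; the event values below are made up):

```python
# Assumed fixed frame rate; real alignment should use per-frame PTS.
fps = 60.0
frame_us = 1_000_000 / fps

# Illustrative events following the events.jsonl schema (values made up).
events = [
    {"type": "mouse_move", "session_offset_us": 2_000, "dx": 5, "dy": -1},
    {"type": "mouse_move", "session_offset_us": 9_000, "dx": 3, "dy": 0},
    {"type": "mouse_move", "session_offset_us": 20_000, "dx": -2, "dy": 4},
]

# Accumulate mouse motion per frame index.
per_frame = {}
for ev in events:
    if ev["type"] != "mouse_move":
        continue
    frame = int(ev["session_offset_us"] // frame_us)
    dx, dy = per_frame.get(frame, (0, 0))
    per_frame[frame] = (dx + ev["dx"], dy + ev["dy"])
```

Here the first two events fall within frame 0 (< ~16.7 ms) and the third within frame 1.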

### 4) `metadata.json`

Session-level metadata and video info.

**Schema:**

- `stream_name` (string): session UUID
- `game_name` (string): game name
- `platform` (string): `mac` / `windows` / `linux`
- `video_meta` (object):
  - `width` (int)
  - `height` (int)
  - `fps` (float)
  - `total_frames` (int)
  - `duration_ms` (int)
- `input_latency_bias_ms` (number): recommended latency bias for alignment
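
One way to use the latency bias, sketched under the assumption that it is subtracted from frame PTS to estimate when the causing input occurred (the metadata values below are illustrative):

```python
import json

# Illustrative metadata.json content following the schema above.
metadata = json.loads("""
{
  "stream_name": "00000000-0000-0000-0000-000000000000",
  "game_name": "No Man's Sky",
  "platform": "mac",
  "video_meta": {"width": 1920, "height": 1080, "fps": 60.0,
                 "total_frames": 36000, "duration_ms": 600000},
  "input_latency_bias_ms": 50
}
""")

fps = metadata["video_meta"]["fps"]
bias_ms = metadata["input_latency_bias_ms"]

def frame_to_input_time_ms(frame: int) -> float:
    """Estimate when the input that produced a given frame occurred,
    by shifting the frame's PTS back by the recommended latency bias."""
    return frame * 1000.0 / fps - bias_ms
```

For example, frame 120 at 60 fps has a PTS of 2000 ms, so its input time estimate is 1950 ms.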

### 5) `actions_resampled.jsonl`

High-precision resampled per-frame actions, reconstructed from `events.jsonl` using latency compensation. This is the recommended aligned input stream for training.

---

## Suggested Usage

- For **world model training**, use `recording.mp4` + `actions_resampled.jsonl`.
- For **analysis or recalibration**, use `events.jsonl` and `metadata.json`.
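
For the training path, a minimal loading sketch is to index the aligned action stream by frame so each decoded video frame (using any video reader, e.g. PyAV or OpenCV) can be paired with its input state. The two JSONL lines below are illustrative:

```python
import io
import json

# Illustrative actions_resampled.jsonl content (values made up).
jsonl = io.StringIO(
    '{"frame": 0, "key": [], "mouse": {"dx": 0, "dy": 0}}\n'
    '{"frame": 1, "key": ["w"], "mouse": {"dx": 4, "dy": -2}}\n'
)

# Index the aligned action stream by frame number; when iterating decoded
# video frames, look up actions_by_frame[i] to get frame i's input state.
actions_by_frame = {}
for line in jsonl:
    rec = json.loads(line)
    actions_by_frame[rec["frame"]] = rec
```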

---

## Notes

- The dataset captures realistic system latency; alignment is provided but does **not** remove physical pipeline delay.
- This design targets **high-fidelity human-in-the-loop interaction** for robust world-model learning.