---
license: cc0-1.0
task_categories:
- reinforcement-learning
- robotics
- image-to-video
- image-text-to-video
- image-to-3d
language:
- en
tags:
- world-model
- reinforcement-learning
- human-in-the-loop
- agent
pretty_name: No Man's Sky High-Fidelity Human-in-the-loop World Model Dataset
size_categories:
- 100K<n<1M
---

# No Man's Sky High-Fidelity Human-in-the-loop World Model Dataset

## Overview

This dataset is designed for **world model training** using real human gameplay data from *No Man’s Sky*.
It captures **high-fidelity human–computer interaction** by recording both the game video and time-aligned input actions, preserving the realistic latency characteristics of a human-in-the-loop system.

Compared with “internal game state” datasets, this dataset retains the **physical interaction chain** (input → game/render → screen → capture), making it well suited for training models that must operate under real-world latency and sensory constraints.

## Dataset Structure

Each recording session is stored in a UUID-named directory. A typical session contains:

    <UUID>/
        recording.mp4
        actions.jsonl
        events.jsonl
        metadata.json
        actions_resampled.jsonl

### 1) `recording.mp4`

The recorded gameplay video.

### 2) `actions.jsonl` (per-frame input state)

One JSON object per video frame. Each entry contains the input state sampled at frame time.

**Schema:**

- `frame` (int): frame index
- `timestamp_ms` (int): wall-clock timestamp in milliseconds
- `frame_pts_ms` (float): frame time in milliseconds (PTS-based)
- `capture_ns` (int): OBS compositor timestamp in nanoseconds
- `key` (string[]): list of pressed keys at this frame
- `mouse` (object):
  - `dx` (int): accumulated mouse delta X during the frame
  - `dy` (int): accumulated mouse delta Y during the frame
  - `x` (int): absolute mouse X position
  - `y` (int): absolute mouse Y position
  - `scroll_dy` (int): scroll delta during the frame
  - `button` (string[]): pressed mouse buttons (e.g., `LeftButton`, `Button4`)
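Because each line of `actions.jsonl` is an independent JSON object, it can be loaded with the standard library alone. A minimal sketch (the helper name `load_actions` and the example path are illustrative, not part of the dataset):

```python
import json

def load_actions(path):
    """Load per-frame input states from an actions.jsonl file."""
    records = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip any blank lines defensively
                records.append(json.loads(line))
    return records

# Illustrative usage (path is hypothetical):
# actions = load_actions("<UUID>/actions.jsonl")
# held = sum(1 for a in actions if "LeftButton" in a["mouse"]["button"])
```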

### 3) `events.jsonl` (raw sub-frame input events)

Raw input events with microsecond timing, captured from the OS event stream.

**Schema:**

- `type` (string): event type
  - `key_down`, `key_up`, `flags_changed`
  - `mouse_move`, `mouse_button_down`, `mouse_button_up`
  - `scroll`
- `timestamp_ms` (int): wall-clock timestamp in milliseconds
- `session_offset_us` (int): microsecond offset from session start
- `key` (string): key name for key events
- `button` (string): mouse button name
- `dx`, `dy`, `x`, `y` (int): mouse movement and position
- `scroll_dy` (int): scroll delta
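A common use of `events.jsonl` is re-binning the sub-frame events into per-frame buckets at the video frame rate. A sketch under the assumption that `session_offset_us == 0` coincides with the first frame (`input_latency_bias_ms` from `metadata.json` should additionally be applied for real alignment; the function name is made up for illustration):

```python
def events_to_frames(events, fps, total_frames):
    """Bucket raw sub-frame events into per-frame lists.

    Assumes session_offset_us == 0 aligns with the first frame; apply the
    metadata's input_latency_bias_ms on top of this for real alignment.
    """
    frame_us = 1_000_000 / fps  # frame duration in microseconds
    buckets = [[] for _ in range(total_frames)]
    for ev in events:
        idx = int(ev["session_offset_us"] // frame_us)
        if 0 <= idx < total_frames:
            buckets[idx].append(ev)
    return buckets
```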

### 4) `metadata.json`

Session-level metadata and video info.

**Schema:**

- `stream_name` (string): session UUID
- `game_name` (string): game name
- `platform` (string): `mac` / `windows` / `linux`
- `video_meta` (object):
  - `width` (int)
  - `height` (int)
  - `fps` (float)
  - `total_frames` (int)
  - `duration_ms` (int)
- `input_latency_bias_ms` (number): recommended latency bias for alignment
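Under a constant-frame-rate assumption, nominal frame times can be derived from `video_meta` and shifted by the recommended bias. A sketch (the helper name is invented, and the sign convention for the bias is an assumption worth verifying against `actions_resampled.jsonl`):

```python
import json

def nominal_frame_times_ms(meta_path):
    """Approximate latency-adjusted frame times from metadata.json.

    Assumes a constant frame rate, and that input_latency_bias_ms is
    added to nominal frame times; verify this convention against
    actions_resampled.jsonl before relying on it.
    """
    with open(meta_path, "r", encoding="utf-8") as f:
        meta = json.load(f)
    vm = meta["video_meta"]
    bias = meta.get("input_latency_bias_ms", 0)
    step = 1000.0 / vm["fps"]
    return [i * step + bias for i in range(vm["total_frames"])]
```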

### 5) `actions_resampled.jsonl`

High-precision resampled per-frame actions reconstructed from `events.jsonl` using latency compensation.
This is the recommended aligned input stream for training.

---

## Suggested Usage

- For **world model training**, use `recording.mp4` + `actions_resampled.jsonl`.
- For **analysis or recalibration**, use `events.jsonl` and `metadata.json`.

---

## Notes

- The dataset captures realistic system latency; the provided alignment compensates timestamps but does **not** remove the physical pipeline delay itself.
- This design targets **high-fidelity human-in-the-loop interaction** for robust world-model learning.