EasonUwU committed on
Commit 651546d · verified · 1 Parent(s): fa8534c

Update README.md

Files changed (1): README.md +293 -1
<p align="center">
  <img src="assets/Logo.png" alt="logo" width="240"/>
</p>
<h2 align="center"><span style="color:#1F82C0">M</span><span style="color:#1CBF91">I</span><span style="color:#39C46E">N</span><span style="color:#149C7E">D</span>: Benchmarking Memory Consistency and Action Control in World Models</h2>
<h5 align="center"><span style="color:red">TL;DR:</span> The first open-domain closed-loop revisited benchmark for evaluating memory consistency and action control in world models</h5>
<div align="center">

🌐 [Homepage](https://csu-jpg.github.io/MIND.github.io/) | 👉 [Dataset](https://csu-jpg.github.io/MIND.github.io/) | 📄 [Paper](https://csu-jpg.github.io/MIND.github.io/) | 🏆 [Leaderboard (coming soon)](https://csu-jpg.github.io/MIND.github.io/)

</div>

## 📢 Updates

- **[2026-2-9]**: **MIND** is online 🎉 🎉 🎉

## 📝 TODO
- [ ] Open-source all **MIND-World (1.3B)** training and inference code, with a detailed code tutorial
- [ ] Release the weights of all **MIND-World (1.3B)** stages, including the frame-wise student model
- [ ] Build the leaderboard
- [ ] Build the Awesome-Interactive-World-Model repo

## 📑 Table of Contents
- [📜 Abstract](#-abstract)
- [🌟 Project Overview](#-project-overview)
- [📊 Dataset Overview](#-dataset-overview)
- [🚀 Setup](#-setup)
- [🗂 Dataset Format](#-dataset-format)
- [🏆 LeaderBoard (coming soon)](#-leaderboard)
- [🎓 BibTex](#-bibtex)
- [📧 Contact](#-contact)
- [🙏 Acknowledgements](#-acknowledgements)
## 📜 Abstract
World models aim to understand, remember, and predict dynamic visual environments, yet a unified benchmark for evaluating their fundamental abilities remains lacking. To address this gap, we introduce **<span style="color:#1F82C0">M</span><span style="color:#1CBF91">I</span><span style="color:#39C46E">N</span><span style="color:#149C7E">D</span>**, the first open-domain closed-loop revisited benchmark for evaluating **<span style="color:#1F82C0">M</span>**emory cons**<span style="color:#1CBF91">I</span>**stency and action co**<span style="color:#39C46E">N</span>**trol in worl**<span style="color:#149C7E">D</span>** models. **<span style="color:#1F82C0">M</span><span style="color:#1CBF91">I</span><span style="color:#39C46E">N</span><span style="color:#149C7E">D</span>** contains 250 high-quality videos at 1080p and 24 FPS, including 100 first-person and 100 third-person video clips under a shared action space, plus 25 + 25 clips across varied action spaces covering eight diverse scenes. We design an efficient evaluation framework to measure two core abilities, memory consistency and action control, capturing temporal stability and contextual coherence across viewpoints. Furthermore, we design various action spaces, including different character movement speeds and camera rotation angles, to evaluate action generalization across action spaces under shared scenes. To facilitate future performance benchmarking on **<span style="color:#1F82C0">M</span><span style="color:#1CBF91">I</span><span style="color:#39C46E">N</span><span style="color:#149C7E">D</span>**, we introduce **MIND-World**, a novel interactive Video-to-World baseline. Extensive experiments demonstrate the completeness of **<span style="color:#1F82C0">M</span><span style="color:#1CBF91">I</span><span style="color:#39C46E">N</span><span style="color:#149C7E">D</span>** and reveal key challenges in current world models, including the difficulty of maintaining long-term memory consistency and of generalizing across action spaces.

## 🌟 Project Overview

<p align="center">
  <img src="assets/Overview.jpg" alt="overview" width="100%" />
</p>

<b>Fig 1. Overview of <span style="color:#1F82C0">M</span><span style="color:#1CBF91">I</span><span style="color:#39C46E">N</span><span style="color:#149C7E">D</span>. We build and collect the first open-domain benchmark using Unreal Engine 5, supporting both first-person and third-person perspectives at 1080p resolution and 24 FPS.</b>

## 📊 Dataset Overview
<p align="center">
  <img src="assets/Dataset.jpg" alt="dataset" width="100%" />
</p>
<b>Fig 2. Distribution of scene categories and action spaces in the <span style="color:#1F82C0">M</span><span style="color:#1CBF91">I</span><span style="color:#39C46E">N</span><span style="color:#149C7E">D</span> dataset. <span style="color:#1F82C0">M</span><span style="color:#1CBF91">I</span><span style="color:#39C46E">N</span><span style="color:#149C7E">D</span> supports open-domain scenarios with diverse and well-balanced action spaces.</b>

## 🚀 Setup

##### 1. Environment setup

- Follow [ViPE's](https://github.com/nv-tlabs/vipe) instructions to build the conda environment, until the `ViPE` command is available

- Install our requirements in the same conda environment:

```bash
pip install -r requirements.txt
```
##### 2. Command Line Arguments

- `--gt_root`: Ground-truth data root directory (required)
- `--test_root`: Test data root directory (required)
- `--dino_path`: DINOv3 model weights directory (default: `./dinov3_vitb16`)
- `--num_gpus`: Number of GPUs to use for parallel processing (default: 1)
- `--video_max_time`: Maximum number of video frames to process (default: `None` = use all frames)
- `--output`: Output JSON file path (default: `result_{test_root}_{timestamp}.json`)
- `--metrics`: Comma-separated list of metrics to compute (default: `lcm,visual,dino,action`)
##### 3. Multi-GPU Support

The metrics computation supports multi-GPU parallel processing for faster evaluation.

```bash
python src/process.py --gt_root /path/to/MIND-Data --test_root /path/to/test/videos --num_gpus 8 --metrics lcm,visual,action
```
**How Multi-GPU Works**
- Videos are placed in a task queue.
- Each GPU process takes a task from the queue whenever it is idle.
- If a task fails, it is put back into the queue.
- Progress bars show the accumulated progress across all workers.
- The result file is updated every time a task finishes, so you can read intermediate results while a run is in progress.
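The scheduling described above can be sketched with a shared task queue whose workers re-queue failures. This is an illustrative sketch, not the actual `src/process.py` implementation: it uses threads for portability (the real evaluator runs one process per GPU), and `evaluate_video` and the retry cap are placeholders.

```python
import queue
import threading

def evaluate_video(task):
    # Placeholder for the real per-video metric computation.
    return {"path": task}

def worker(tasks, results, retries, max_retries=3):
    # Take a task whenever idle; put a failed task back on the queue.
    while True:
        task = tasks.get()
        if task is None:            # sentinel: no more work
            break
        try:
            results.append(evaluate_video(task))
        except Exception:
            # A real implementation would re-queue ahead of the sentinels.
            if retries.get(task, 0) < max_retries:
                retries[task] = retries.get(task, 0) + 1
                tasks.put(task)

def run(videos, num_workers=2):
    tasks = queue.Queue()
    results, retries = [], {}
    for v in videos:
        tasks.put(v)
    for _ in range(num_workers):
        tasks.put(None)             # one stop sentinel per worker
    threads = [threading.Thread(target=worker, args=(tasks, results, retries))
               for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```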
##### 4. How to order your test files
```
{model_name}
├── 1st_data
│   ├── action_space_test
│   │   ├── {corresponding data name}
│   │   │   └── video.mp4
│   │   └── ...
│   └── mem_test
│       ├── {corresponding data name}
│       │   └── video.mp4
│       └── ...
└── 3rd_data
    ├── action_space_test
    │   ├── {corresponding data name}
    │   │   └── video.mp4
    │   └── ...
    └── mem_test
        ├── {corresponding data name}
        │   └── video.mp4
        └── ...
```
- `{model_name}`: your model's name (any string you choose)

- `{corresponding data name}`: the name of the corresponding ground-truth data directory

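The layout above can also be created programmatically before you copy your rollouts in. A minimal sketch, assuming you supply the model name and the clip-name mapping yourself (the names below are placeholders; use the ground-truth data names from MIND-Data):

```python
import tempfile
from pathlib import Path

def make_result_skeleton(root, model_name, clips):
    """Create the directory layout the evaluator expects.

    clips maps (perspective, test_type) -> list of ground-truth data names;
    your generated rollout then goes into each leaf as video.mp4.
    """
    for (perspective, test_type), names in clips.items():
        for name in names:
            leaf = Path(root) / model_name / perspective / test_type / name
            leaf.mkdir(parents=True, exist_ok=True)

# Example with placeholder clip names:
root = tempfile.mkdtemp()
make_result_skeleton(root, "my_model", {
    ("1st_data", "mem_test"): ["data-26"],
    ("3rd_data", "action_space_test"): ["data-126"],
})
```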
##### 5. The detailed information of the output **<span style="color:red">`Result.json`</span>**

```
{
  "video_max_time": [int] the video_max_time given on the command line; max frames of the sampled video used to compute metrics (except action accuracy).
  "data": [
    {
      "path": [string] the directory name of the video data.
      "perspective": [string] 1st_data/3rd_data, the perspective of the video data.
      "test_type": [string] mem_test/action_space_test, the test set of the video data.
      "error": [string] the error that occurred while computing metrics, if any.
      "mark_time": [int] the divider between memory context and expected prediction; the start frame index of the expected prediction.
      "total_time": [int] the total number of frames in the ground-truth video.
      "sample_frames": [int] the total number of frames in the video under test.
      "lcm": { the long-context memory metric result.
        "mse": [list[float]] the per-frame mean squared error.
        "avg_mse": [float] the average of mse.
        "lpips": [list[float]] the per-frame Learned Perceptual Image Patch Similarity.
        "avg_lpips": [float] the average of lpips.
        "ssim": [list[float]] the per-frame Structural Similarity Index Measure.
        "avg_ssim": [float] the average of ssim.
        "psnr": [list[float]] the per-frame Peak Signal-to-Noise Ratio.
        "avg_psnr": [float] the average of psnr.
      },
      "visual_quality": { the visual quality metric result.
        "imaging": [list[float]] the per-frame imaging quality.
        "avg_imaging": [float] the average imaging quality.
        "aesthetic": [list[float]] the per-frame aesthetic quality.
        "avg_aesthetic": [float] the average aesthetic quality.
      },
      "action": { the action accuracy metric result, computed by ViPE pose estimation and trajectory alignment.
        "__overall__": { the overall statistics of all valid frames after outlier filtering.
          "count": [int] number of valid samples used for statistics.
          "rpe_trans_mean": [float] mean Relative Pose Error for translation (in meters).
          "rpe_trans_median": [float] median RPE translation.
          "rpe_rot_mean_deg": [float] mean RPE rotation in degrees.
          "rpe_rot_median_deg": [float] median RPE rotation.
        },
        "translation": { the statistics for pure translation actions (forward/backward/left/right).
          "count": [int] number of valid samples for translation actions.
          "rpe_trans_mean": [float] mean RPE translation for translation actions.
          "rpe_trans_median": [float] median RPE translation for translation actions.
          "rpe_rot_mean_deg": [float] mean RPE rotation for translation actions.
          "rpe_rot_median_deg": [float] median RPE rotation for translation actions.
        },
        "rotation": { the statistics for pure rotation actions (cam_left/cam_right/cam_up/cam_down).
          "count": [int] number of valid samples for rotation actions.
          ...
        },
        "other": { the statistics for combined actions (e.g., forward+look_right).
          "count": [int] number of valid samples for other actions.
          ...
        },
        "act:forward": { the statistics for the specific action "forward".
          "count": [int] number of valid samples for this action.
          "rpe_trans_mean": [float] mean RPE translation.
          "rpe_trans_median": [float] median RPE translation.
          "rpe_rot_mean_deg": [float] mean RPE rotation.
          "rpe_rot_median_deg": [float] median RPE rotation.
        },
        "act:look_right": { the statistics for the specific action "look_right".
          ...
        },
        ...
      },
      "dino": { the DINO MSE metric result.
        "dino_mse": [list[float]] the per-frame MSE of DINO features.
        "avg_dino_mse": [float] the average of dino_mse.
      }
    },
    ...
  ]
}
```
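Once a run finishes, the per-video entries can be aggregated into headline numbers with a few lines of Python. A sketch assuming the key names documented above; the path argument stands in for your `--output` file:

```python
import json

def summarize(result_path):
    """Average headline metrics over all videos that evaluated without error."""
    with open(result_path) as f:
        report = json.load(f)
    ok = [d for d in report["data"] if not d.get("error")]

    def mean(values):
        return sum(values) / len(values) if values else float("nan")

    return {
        "videos": len(ok),
        "avg_psnr": mean([d["lcm"]["avg_psnr"] for d in ok if "lcm" in d]),
        "avg_lpips": mean([d["lcm"]["avg_lpips"] for d in ok if "lcm" in d]),
        "avg_dino_mse": mean([d["dino"]["avg_dino_mse"] for d in ok if "dino" in d]),
    }
```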

## 🗂 Dataset Format

#### <span style="color:#1F82C0">M</span><span style="color:#1CBF91">I</span><span style="color:#39C46E">N</span><span style="color:#149C7E">D</span> is available [here](https://huggingface.co/datasets) ! ! !

##### 1. The structure of **<span style="color:#1F82C0">M</span><span style="color:#1CBF91">I</span><span style="color:#39C46E">N</span><span style="color:#149C7E">D</span>** ground-truth videos **(both for training and for testing)**

```bash
MIND-Data
├── 1st_data
│   ├── test
│   │   ├── action_space_test
│   │   │   ├── data-1
│   │   │   │   ├── action.json
│   │   │   │   └── video.mp4
│   │   │   └── ...
│   │   └── mem_test
│   │       ├── data-26
│   │       │   ├── action.json
│   │       │   └── video.mp4
│   │       └── ...
│   └── train
│       ├── data-76
│       │   ├── action.json
│       │   └── video.mp4
│       └── ...
└── 3rd_data
    ├── test
    │   ├── action_space_test
    │   │   ├── data-126
    │   │   │   ├── action.json
    │   │   │   └── video.mp4
    │   │   └── ...
    │   └── mem_test
    │       ├── data-151
    │       │   ├── action.json
    │       │   └── video.mp4
    │       └── ...
    └── train
        ├── data-251
        │   ├── action.json
        │   └── video.mp4
        └── ...
```
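Once downloaded, the clips can be enumerated by pairing each `action.json` with its sibling `video.mp4`. A sketch under the layout above; the root path is whatever you downloaded MIND-Data to:

```python
from pathlib import Path

def iter_clips(root):
    """Yield (perspective, subpath, clip_dir) for clips that have both files."""
    root = Path(root)
    for action_file in sorted(root.rglob("action.json")):
        clip = action_file.parent
        if (clip / "video.mp4").exists():
            parts = clip.relative_to(root).parts
            # e.g. parts == ("1st_data", "test", "mem_test", "data-26")
            yield parts[0], "/".join(parts[1:-1]), clip
```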
##### 2. The detailed information of <span style="color:red">`Action.json`</span>

```
{
  "mark_time": [int] the divider between memory context and expected prediction; the start frame index of the expected prediction
  "total_time": [int] the total number of frames in the ground-truth video
  "caption": [text] the text description of the ground-truth video
  "data": [
    {
      "time": [int] frame index
      "ws": [int] 0: move forward, 1: move backward
      "ad": [int] 0: move left, 1: move right
      "ud": [int] 0: look up, 1: look down
      "lr": [int] 0: look left, 1: look right
      "actor_pos": {
        "x": [float] the x-coordinate of the character
        "y": [float] the y-coordinate of the character
        "z": [float] the z-coordinate of the character
      },
      "actor_rpy": {
        "x": [float] the roll angle of the character (Euler angles)
        "y": [float] the pitch angle of the character
        "z": [float] the yaw angle of the character
      },
      "camera_pos": {
        # only exists in 3rd-person mode
        "x": [float] the x-coordinate of the camera
        "y": [float] the y-coordinate of the camera
        "z": [float] the z-coordinate of the camera
      },
      "camera_rpy": {
        # only exists in 3rd-person mode
        "x": [float] the roll angle of the camera (Euler angles)
        "y": [float] the pitch angle of the camera
        "z": [float] the yaw angle of the camera
      }
    },
    ...
  ]
}
```
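For quick inspection, the binary action fields can be decoded into readable labels. A sketch assuming the encoding documented above; treating an absent key as "no input on that axis" is an assumption, not something the schema states:

```python
# Mapping from action.json keys to (value == 0 label, value == 1 label).
ACTION_NAMES = {
    "ws": ("forward", "backward"),
    "ad": ("left", "right"),
    "ud": ("look_up", "look_down"),
    "lr": ("look_left", "look_right"),
}

def decode_frame(entry):
    """Return the active action labels for one keyframe entry."""
    labels = []
    for key, (zero, one) in ACTION_NAMES.items():
        if key in entry:   # absent key = no input on that axis (assumption)
            labels.append(zero if entry[key] == 0 else one)
    return labels
```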
## 🏆 LeaderBoard
The leaderboard is coming soon...
## 🎓 BibTex

If you find our work helpful, we would appreciate a citation and a star:

```bibtex
@misc{ye2026mind,
  title={MIND: Benchmarking Memory Consistency and Action Control in World Models},
  author={Yixuan Ye and Xuanyu Lu and Yuxin Jiang and Yuchao Gu and Rui Zhao and Qiwei Liang and Jiachun Pan and Fengda Zhang and Weijia Wu and Alex Jinpeng Wang},
  year={2026},
  eprint={xxx},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/xxx},
}
```
## 📧 Contact
Please email **yixuanye12@gmail.com** if you have any questions.

## 🙏 Acknowledgements
We would like to thank [ViPE](https://github.com/nv-tlabs/vipe) and [SkyReels-V2](https://github.com/SkyworkAI/SkyReels-V2) for their great work.