---
license: mit
task_categories:
- image-text-to-text
language:
- en
tags:
- vlm
- spatial-reasoning
- multi-view
- vqa
- cognition
---

# ReMindView-Bench Dataset

[Paper](https://huggingface.co/papers/2512.02340) | [Code](https://github.com/pittisl/ReMindView-Bench)

## Introduction
ReMindView-Bench is a cognitively grounded benchmark for evaluating how Vision-Language Models (VLMs) construct, align, and maintain spatial mental models across complementary viewpoints. Current VLMs struggle to preserve geometric coherence and cross-view consistency when reasoning about space in multi-view settings; ReMindView-Bench isolates multi-view reasoning with fine-grained tasks that probe exactly these abilities.

## Reconstructing the dataset

The dataset archive is split into 45 GB parts to stay under the per-file size limit. After downloading all parts, rebuild the original tar:

```bash
cat ReMindView-Bench.tar.part-* > ReMindView-Bench.tar
```
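
Equivalently, the parts can be reassembled in Python. This is a small sketch that concatenates the parts in sorted name order (the `ReMindView-Bench.tar.part-*` naming is taken from the command above):

```python
import glob

def reassemble(pattern: str, out_path: str, chunk_size: int = 1 << 20) -> int:
    """Concatenate split archive parts (sorted by filename) into one tar file.

    Returns the total number of bytes written.
    """
    total = 0
    with open(out_path, "wb") as out:
        # Sorted order matters: part-00 must precede part-01, etc.
        for part in sorted(glob.glob(pattern)):
            with open(part, "rb") as f:
                while chunk := f.read(chunk_size):
                    out.write(chunk)
                    total += len(chunk)
    return total

# Example:
# reassemble("ReMindView-Bench.tar.part-*", "ReMindView-Bench.tar")
```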

## Dataset Generation

To generate scenes, renders, and QA CSVs for the benchmark, follow these steps from the GitHub repository:

1.  **Install Blender and Python Dependencies**:
    You need Blender (a headless build is fine) plus the Python dependencies used by Infinigen and a few common packages. Run the scripts with Blender's bundled Python, or via `blender --background --python <script> -- --flags`, so that `bpy` is available.

2.  **Generate Scenes and Renders**:
    From the repo root, generate scenes and renders:
    ```bash
    bash scene_generation.sh
    ```
    This script sweeps seeds 0–9 across five room types and writes scenes to `outputs/indoors/<ROOM>_<SEED>`, object-centric frames to `object_centric_view_frame_outputs/<ROOM>/<ROOM>_<SEED>`, and view-centric frames to `view_centric_view_frame_outputs/<ROOM>/<ROOM>_<SEED>`.

3.  **Clean Empty/Invalid Views**:
    ```bash
    python clean_visual_data.py --dir_path object_centric_view_frame_outputs
    ```
    Run the same command with `--dir_path view_centric_view_frame_outputs`.

4.  **Produce QA CSVs**:
    Choose one of `view_view`, `view_object`, `object_object`. For example:
    ```bash
    python ground_truth_generation.py --image_folder object_centric_view_frame_outputs --qa_type object_object
    ```
    The output CSV will be saved beside the image folder (e.g., `object_centric_view_frame_outputs/object_object_qa.csv`).
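
The four steps above can be chained in a small driver. This sketch only assembles the command lines described in the steps (it does not run them), assuming the repo-root layout used there:

```python
import shlex

def build_pipeline_commands(qa_types=("view_view", "view_object", "object_object")):
    """Return the shell commands for scene generation, cleaning, and QA export."""
    # Step 2: sweep seeds 0-9 across five room types.
    cmds = ["bash scene_generation.sh"]
    # Step 3: drop empty/invalid views from both render folders.
    for folder in ("object_centric_view_frame_outputs", "view_centric_view_frame_outputs"):
        cmds.append(f"python clean_visual_data.py --dir_path {folder}")
    # Step 4: one QA CSV per query type.
    for qa in qa_types:
        cmds.append(
            "python ground_truth_generation.py "
            f"--image_folder object_centric_view_frame_outputs --qa_type {shlex.quote(qa)}"
        )
    return cmds

for cmd in build_pipeline_commands():
    print(cmd)
```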

## Dataset content

VQA samples are stored in CSV files with the following columns: `folder_path` (scene/view folder), `query_type` (query relationship type), `query`, `ground_truth`, `choices`, `cross_frame` (whether cross-frame reasoning is necessary), `perspective_changing` (whether a perspective change is required), and `object_num` (the number of objects across all frames).

Example row:
- `folder_path`: `dense_view_centric_view_frame_outputs_processed/Bedroom/Bedroom_1/MattressFactory(7143095).spawn_asset(3158442)/level_20`
- `query_type`: `object-object|relative_distance|non_perspective_changing|0` 
- `query`: “Which object is the closest to the shell?”
- `choices`: `A.pillow, B.toy animal, C.shell`
- `ground_truth`: `B.toy animal`
- `cross_frame`: `True`
- `perspective_changing`: `False`
- `object_num`: `18` 
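
The columns above can be read with Python's `csv` module. In the sketch below, splitting `query_type` into a relation class, relation, perspective flag, and trailing index is an assumption inferred from the example row, not a documented schema:

```python
import csv
import io

# A one-row CSV mimicking the layout above, using values from the example row.
SAMPLE = """folder_path,query_type,query,ground_truth,choices,cross_frame,perspective_changing,object_num
dense_view_centric_view_frame_outputs_processed/Bedroom/Bedroom_1/MattressFactory(7143095).spawn_asset(3158442)/level_20,object-object|relative_distance|non_perspective_changing|0,Which object is the closest to the shell?,B.toy animal,"A.pillow, B.toy animal, C.shell",True,False,18
"""

def load_qa_rows(csv_text: str):
    """Parse QA rows, splitting query_type and coercing flag/number columns."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        relation_class, relation, perspective, idx = row["query_type"].split("|")
        rows.append({
            **row,
            "relation_class": relation_class,            # e.g. object-object
            "relation": relation,                        # e.g. relative_distance
            "perspective": perspective,                  # e.g. non_perspective_changing
            "query_index": int(idx),                     # meaning of last field assumed
            "cross_frame": row["cross_frame"] == "True",
            "perspective_changing": row["perspective_changing"] == "True",
            "object_num": int(row["object_num"]),
        })
    return rows
```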

## Sample scenes

Below are several example renders from ReMindView-Bench showing indoor layouts and object detail captured in the benchmark.
Example 1:
![](figures/figure1.png)

- Query: If you are positioned where the white sofa is, facing the same direction as the white sofa, what is the spatial relationship of the white TV stand to the shelf trinket?  
- Choice: A. front-right, B. left, C. back, D. back-right  
- Answer: B. left
---
Example 2:
![](figures/figure2.png)

- Query: From the perspective of frame3, which object is the closest to you?
- Choice: A. white cabinet, B. beverage fridge, C. desk lamp, D. glass jar  
- Answer: C. glass jar
---
Example 3:
![](figures/figure3.png)

- Query: If you are positioned where the lamp is, which object is the closest to you?
- Choice: A. vertical bookstack, B. wall art, C. window, D. green bottle  
- Answer: D. green bottle
---
Example 4:
![](figures/figure4.png)

- Query: If you are positioned where the black microwave is, facing the same direction as the black microwave, what is the direction of the window relative to you?
- Choice: A. front, B. left, C. back-left, D. back  
- Answer: A. front
---
Example 5:
![](figures/figure5.png)

- Query: Which frame's taken position is farther from frame1's taken position?
- Choice: A. frame2, B. frame4, C. frame3
- Answer: C. frame3
---
Example 6:
![](figures/figure6.png)

- Query: How did you likely move from the taken position of frame2 to the taken position of frame3?
- Choice: A. go opposite, B. go left and go forward, C. go right and go forward
- Answer: B. go left and go forward
---
Example 7:
![](figures/figure7.png)

- Query: Which object is the closest to the yellow sofa?
- Choice: A. bookstack, B. white lamp, C. wall art, D. shelf trinket
- Answer: C. wall art
---
Example 8:
![](figures/figure8.png)

- Query: If you are positioned where the white small kitchen cabinet is, facing the same direction as the white small kitchen cabinet, and then turn left, which object would be in front of the dining table from this view direction?
- Choice: A. wineglass, B. pot, C. chair
- Answer: C. chair