---
license: mit
task_categories:
- image-text-to-text
language:
- en
tags:
- vlm
- spatial-reasoning
- multi-view
- vqa
- cognition
---
# ReMindView-Bench Dataset

[Paper](https://huggingface.co/papers/2512.02340) | [Code](https://github.com/pittisl/ReMindView-Bench)
## Introduction

ReMindView-Bench is a cognitively grounded benchmark for evaluating how Vision-Language Models (VLMs) construct, align, and maintain spatial mental models across complementary viewpoints. Current VLMs struggle to preserve geometric coherence and cross-view consistency when reasoning spatially over multiple views; ReMindView-Bench targets this gap with a fine-grained benchmark that isolates multi-view reasoning.
## Reconstructing the dataset

The dataset archive is split into 45 GB parts to comply with the per-file size limit. After downloading all parts, rebuild the original tar:

```bash
cat ReMindView-Bench.tar.part-* > ReMindView-Bench.tar
```
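The `cat` command above can also be expressed as a small Python sketch, which may be handy on platforms without coreutils. It assumes the part files sort lexicographically in split order (as the `part-*` naming suggests); the function name is illustrative, not part of the repository.

```python
from pathlib import Path

def reassemble(parts_dir: str, pattern: str, out_name: str) -> Path:
    """Concatenate split archive parts (sorted by name) into one file."""
    parts = sorted(Path(parts_dir).glob(pattern))
    if not parts:
        raise FileNotFoundError(f"no parts matching {pattern} in {parts_dir}")
    out_path = Path(parts_dir) / out_name
    with out_path.open("wb") as out:
        for part in parts:
            # Stream each part's bytes into the output in order.
            out.write(part.read_bytes())
    return out_path
```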
## Dataset Generation

To generate scenes, renders, and QA CSVs for the benchmark, follow these steps from the GitHub repository:

1. **Install Blender and Python dependencies**:
   Install Blender (headless is fine) along with the Python dependencies used by Infinigen plus common packages. Run the scripts with Blender's bundled Python, or via `blender --background --python <script> -- --flags`, so that `bpy` is available.
2. **Generate scenes and renders**:
   From the repo root:
   ```bash
   bash scene_generation.sh
   ```
   This script sweeps seeds 0–9 across five room types and writes scenes to `outputs/indoors/<ROOM>_<SEED>`, object-centric frames to `object_centric_view_frame_outputs/<ROOM>/<ROOM>_<SEED>`, and view-centric frames to `view_centric_view_frame_outputs/<ROOM>/<ROOM>_<SEED>`.
3. **Clean empty/invalid views**:
   ```bash
   python clean_visual_data.py --dir_path object_centric_view_frame_outputs
   python clean_visual_data.py --dir_path view_centric_view_frame_outputs
   ```
4. **Produce QA CSVs**:
   Choose one of `view_view`, `view_object`, or `object_object`. For example:
   ```bash
   python ground_truth_generation.py --image_folder object_centric_view_frame_outputs --qa_type object_object
   ```
   The output CSV is saved beside the image folder (e.g., `object_centric_view_frame_outputs/object_object_qa.csv`).
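The exact cleaning criteria used in step 3 live in `clean_visual_data.py`; as a minimal sketch of the idea, the helper below (a hypothetical name, not from the repository) walks an output folder and drops zero-byte image files, which is only one assumed notion of an "empty/invalid" view.

```python
from pathlib import Path

def clean_empty_views(dir_path: str, exts=(".png", ".jpg", ".jpeg")) -> list:
    """Delete zero-byte image files under dir_path; return the removed paths."""
    removed = []
    for f in Path(dir_path).rglob("*"):
        if f.is_file() and f.suffix.lower() in exts and f.stat().st_size == 0:
            f.unlink()  # remove the empty render
            removed.append(str(f))
    return removed
```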
## Dataset content

VQA samples are stored in CSV files with the following columns:

- `folder_path`: the scene/view folder
- `query_type`: the query relationship type
- `query`: the question text
- `ground_truth`: the correct answer
- `choices`: the candidate answers
- `cross_frame`: whether cross-frame reasoning is required
- `perspective_changing`: whether a perspective change is required
- `object_num`: the number of objects across all frames
Example row:

- `folder_path`: `dense_view_centric_view_frame_outputs_processed/Bedroom/Bedroom_1/MattressFactory(7143095).spawn_asset(3158442)/level_20`
- `query_type`: `object-object|relative_distance|non_perspective_changing|0`
- `query`: "Which object is the closest to the shell?"
- `choices`: `A.pillow, B.toy animal, C.shell`
- `ground_truth`: `B.toy animal`
- `cross_frame`: `True`
- `perspective_changing`: `False`
- `object_num`: `18`
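The `query_type` column is pipe-delimited, as the example row shows. A sketch of splitting it into named parts follows; the helper and the field names (`relation`, `category`, `perspective`, `index`) are assumptions inferred from the example, not identifiers from the repository.

```python
def parse_query_type(query_type: str) -> dict:
    """Split a pipe-delimited query_type value into its four parts."""
    relation, category, perspective, index = query_type.split("|")
    return {
        "relation": relation,        # e.g. "object-object"
        "category": category,        # e.g. "relative_distance"
        "perspective": perspective,  # e.g. "non_perspective_changing"
        "index": int(index),         # trailing numeric tag
    }
```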
## Sample scenes

Below are several example renders from ReMindView-Bench showing the indoor layouts and object detail captured in the benchmark.

Example 1:

*(scene render omitted)*

- Query: If you are positioned where the white sofa is, facing the same direction of the white sofa, what is the spatial relationship of the white TV stand to shelf trinket?
- Choice: A. front-right, B. left, C. back, D. back-right
- Answer: B. left

---

Example 2:

*(scene render omitted)*

- Query: From the perspective of frame3, which object is the closest to you?
- Choice: A. white cabinet, B. beverage fridge, C. desk lamp, D. glass jar
- Answer: C. glass jar

---

Example 3:

*(scene render omitted)*

- Query: If you are positioned where the lamp is, which object is the closest to you?
- Choice: A. vertical bookstack, B. wall art, C. window, D. green bottle
- Answer: D. green bottle

---

Example 4:

*(scene render omitted)*

- Query: If you are positioned where the black microwave is and facing the same direction of the black microwave, what is the direction of the window to you?
- Choice: A. front, B. left, C. back-left, D. back
- Answer: A. front

---
Example 5:

*(scene render omitted)*

- Query: Which frame taken position has a further distance to frame1 taken position?
- Choice: A. frame2, B. frame4, C. frame3
- Answer: C. frame3
---

Example 6:

*(scene render omitted)*

- Query: How did you likely move from the taken position of frame2 to the taken position of frame3?
- Choice: A. go opposite, B. go left and go forward, C. go right and go forward
- Answer: B. go left and go forward

---

Example 7:

*(scene render omitted)*

- Query: Which object is the closest to the yellow sofa?
- Choice: A. bookstack, B. white lamp, C. wall art, D. shelf trinket
- Answer: C. wall art

---

Example 8:

*(scene render omitted)*

- Query: If you are positioned where the white small kitchen cabinet is, facing the same direction of the white small kitchen cabinet and then turn left, which object would be in the front of the dining table from this view direction?
- Choice: A. wineglass, B. pot, C. chair
- Answer: C. chair