---
license: mit
task_categories:
- image-text-to-text
language:
- en
tags:
- vlm
- spatial-reasoning
- multi-view
- vqa
- cognition
---

# ReMindView-Bench Dataset

[Paper](https://huggingface.co/papers/2512.02340) | [Code](https://github.com/pittisl/ReMindView-Bench)

## Introduction

ReMindView-Bench is a cognitively grounded benchmark for evaluating how Vision-Language Models (VLMs) construct, align, and maintain spatial mental models across complementary viewpoints. Current VLMs struggle to maintain geometric coherence and cross-view consistency when reasoning about space in multi-view settings; ReMindView-Bench addresses this gap with a fine-grained benchmark that isolates multi-view reasoning.

## Reconstructing the dataset

The dataset archive is split into 45GB parts to comply with the per-file size limit. To rebuild the original tar after downloading all parts:

```bash
cat ReMindView-Bench.tar.part-* > ReMindView-Bench.tar
```

## Dataset Generation

To generate scenes, renders, and QA CSVs for the benchmark, follow these steps from the GitHub repository:

1. **Install Blender and Python dependencies**: Install Blender (headless is fine) along with the Python dependencies used by Infinigen and other common packages. Run the scripts with Blender's bundled Python or `blender --background --python
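As a concrete sketch of the reassembly step described under "Reconstructing the dataset": the shell expands the `part-*` glob in lexical order, so simple concatenation restores the original archive. The demo below uses tiny placeholder files in place of the real ~45GB parts, and the size check at the end is an illustrative addition, not part of the official instructions.

```shell
set -e

# Demo setup: tiny stand-ins for the real ~45GB archive parts.
printf 'AAAA' > ReMindView-Bench.tar.part-00
printf 'BBBB' > ReMindView-Bench.tar.part-01

# The glob expands in lexical order, so the parts are concatenated
# back in the same order they were split.
cat ReMindView-Bench.tar.part-* > ReMindView-Bench.tar

# Sanity check: the rebuilt file's size equals the sum of the part sizes
# (8 bytes for this demo; roughly N * 45GB for the real archive).
wc -c < ReMindView-Bench.tar
```

After reassembly, the archive can be unpacked as usual with `tar -xf ReMindView-Bench.tar`.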