---
license: cc-by-4.0
task_categories:
- visual-question-answering
tags:
- VPT
pretty_name: BlenderGaze
size_categories:
- 1K<n<10K
---
# BlenderGaze
The **BlenderGaze** dataset was created to further investigate Visual Perspective Taking (VPT) in Vision Language Models (VLMs). It extends the previously introduced [Isle dataset](https://huggingface.co/datasets/Gracjan/Isle), expanding primarily in size rather than in diversity.
BlenderGaze consists of two complementary subsets:
- **BlenderGaze-Isolated:** Contains isolated scenes featuring a humanoid figure and a red box. These scenes are explicitly designed to isolate and test the basic VPT capabilities of models (see Figure 1).
- **BlenderGaze-Texture:** Includes more complex and realistic scenes **with 10 distinct backgrounds**, each featuring 256 unique scenarios involving interactions between humanoids and robots. This subset aims to explore VPT within *human-robot* interaction scenarios.
In both subsets, objects are randomly placed, and camera viewpoints vary. Visibility is determined by a *120-degree field-of-view* cone originating from the humanoid figure's eyes, representing human peripheral vision. Each scene is provided in two versions:
- **With visual cone:** The visibility cone is explicitly shown, serving as a clear indicator of the humanoid figure's perspective (ground truth).
- **Without visual cone:** Simulates a natural viewing experience without explicit indicators of visual perspective.
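Visibility as defined above reduces to a cone test: a target is visible when the angle between the gaze direction and the eye-to-target vector is at most half the field of view (60 degrees for the dataset's 120-degree cone). A minimal sketch of that check follows; the function and argument names are illustrative, not the dataset's actual rendering code:

```python
import math

def is_visible(eye, gaze, target, fov_deg=120.0):
    """Return True if `target` lies inside the viewer's field-of-view cone.

    `eye` is the cone apex, `gaze` the viewing direction, and the cone's
    half-angle is fov_deg / 2 (60 degrees for a 120-degree cone).
    """
    to_target = [t - e for t, e in zip(target, eye)]
    gaze_len = math.sqrt(sum(g * g for g in gaze))
    target_len = math.sqrt(sum(v * v for v in to_target))
    if target_len == 0.0:
        return True  # target coincides with the eye position
    cos_angle = sum(g * v for g, v in zip(gaze, to_target)) / (gaze_len * target_len)
    # Inside the cone iff the angle to the target is <= the half-angle,
    # i.e. its cosine is >= the cosine of the half-angle.
    return cos_angle >= math.cos(math.radians(fov_deg / 2.0))
```

Note that this covers only the angular criterion; in the rendered scenes, line-of-sight occlusion by other objects also affects ground-truth visibility.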
Due to the rendering process, minor errors in the ground-truth annotations may occur, though such **inaccuracies are expected to remain under 10%**.
## Scene and Background Textures
The following environment textures from [Poly Haven](https://polyhaven.com/) were used to generate diverse and realistic backgrounds:
- [Belfast Sunset](https://polyhaven.com/a/belfast_sunset)
- [Christmas Photo Studio 02](https://polyhaven.com/a/christmas_photo_studio_02)
- [Moonlit Golf](https://polyhaven.com/a/moonlit_golf)
- [Park Parking](https://polyhaven.com/a/park_parking)
- [Paul Lobe Haus](https://polyhaven.com/a/paul_lobe_haus)
- [Simon's Town Road](https://polyhaven.com/a/simons_town_road)
- [Studio Country Hall](https://polyhaven.com/a/studio_country_hall)
- [Floral Tent](https://polyhaven.com/a/floral_tent)
- [Orlando Stadium](https://polyhaven.com/a/orlando_stadium)
- [Courtyard Night](https://polyhaven.com/a/courtyard_night)
### 3D Models
- **Robot Model:** [Free3D - Basic Robot](https://free3d.com/3d-model/robot_basic-166128.html)
- **Humanoid Model:** [Sketchfab - Rigged T-Pose Human Male](https://sketchfab.com/3d-models/rigged-t-pose-human-male-w-50-face-blendshapes-cc7e4596bcd145208a6992c757854c07)
This dataset supports researchers and psychologists studying visual perspective taking, spatial reasoning, and scene understanding in both humans and VLMs.
For experiments conducted on BlenderGaze, see the accompanying [blog post](https://medium.com/@gracjan.highlander/building-trust-with-invisible-robots-3aecd6180fa5).
Figure 1. An example of an isolated scene featuring a humanoid figure and a red box: left, with the visibility cone; right, without.
## Citation
```bibtex
@dataset{blendergaze2025,
  title={BlenderGaze},
  author={G{\'o}ral, Gracjan and Thomas, Chris and Budzianowski, Pawe{\l}},
  year={2025},
  note={Extension of the Isle dataset: https://github.com/GracjanGoral/ISLE}
}
```