---
license: mit
configs:
- config_name: Isle-Brick-V2
  data_files:
  - split: test
    path: Isle-Brick-V2/*
  features:
  - name: image
    dtype: image
  - name: Q1
    dtype: int64
  - name: Q2
    dtype: int64
  - name: Q3
    dtype: string
  - name: Q4
    sequence: string
  - name: Q5
    sequence: string
  - name: Q6
    dtype: string
  - name: Q7
    sequence: string
- config_name: Isle-Brick-V1
  data_files: Isle-Brick-V1/
  features:
  - name: image
    dtype: image
---
Dataset Details
The LEGO Visual Tasks Dataset is designed for research in visual perspective taking (VPT), scene understanding, and spatial reasoning. It contains 144 visual tasks inspired by human VPT tests, such as those detailed in O'Grady et al. (2020) and Lukosiunaite et al. (2024). Each task involves a minifigure-object pair created using LEGO components and systematically photographed in varying spatial arrangements and orientations.
This dataset can be utilized for evaluating models’ abilities in object recognition, spatial reasoning, and perspective-taking, making it a valuable resource for studies in artificial intelligence, cognitive science, and computer vision.
Dataset Sources
- Repository: https://github.com/GracjanGoral/ISLE
- Paper: https://arxiv.org/abs/2409.12969
Direct Use
The dataset is suitable for:
- Training and evaluating models in visual perspective-taking tasks.
- Studying scene understanding, spatial reasoning, and object recognition.
- Testing models' ability to answer diagnostic questions about the dataset's scenarios.
Out-of-Scope Use
The dataset should not be used for:
- Malicious purposes such as creating misleading or biased AI applications.
- Tasks unrelated to the intended goals of visual reasoning and perspective-taking.
Dataset Structure
The dataset consists of:
Images: Photographs of nine unique LEGO minifigure-object pairs systematically varied by:
- Spatial Position: Object positioned to the left of, to the right of, behind, or in front of the minifigure.
- Minifigure Orientation: Facing toward or away from the object.
- Camera Viewpoints: Bird’s-eye view and surface-level view.
- Image Resolution: Each image is 4000 × 3000 pixels.
Metadata File: A structured file containing:
- image_id: Unique identifier for each image.
- question_1 to question_7: Questions testing scene understanding, spatial reasoning, and visual perspective-taking.
- gold_answer: The correct answer for each question.
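A hypothetical metadata record sketched in Python, assuming the field names above; the identifier format and question text are illustrative, not taken from the actual files:

```python
# Illustrative metadata record: field names follow the card, values are hypothetical.
record = {
    "image_id": "pair_03_left_facing_birdseye",            # hypothetical identifier format
    "question_1": "Is the minifigure facing the object?",  # example question text
    "gold_answer": "yes",                                  # majority-vote gold answer
}

# Fields question_1 through question_7 would each appear in a real record.
print(sorted(record))
```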
Curation Rationale
This dataset was created to systematically test visual reasoning models on tasks inspired by human cognitive processes. It aims to bridge the gap between machine learning performance and human abilities in tasks requiring scene understanding, spatial reasoning, and perspective-taking.
Data Collection and Processing
The images were collected by:
- Designing nine unique minifigure-object pairs using LEGO pieces.
- Systematically varying spatial positions, orientations, and camera angles for each pair.
- Manually photographing each arrangement under consistent lighting and surface conditions, at a resolution of 4000 × 3000 pixels.
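The systematic variation described above combines multiplicatively; a small sketch (using the factor values listed in the card) confirms the 144-task count:

```python
from itertools import product

# Factors from the card: 9 pairs x 4 positions x 2 orientations x 2 viewpoints.
pairs = [f"pair_{i}" for i in range(1, 10)]          # nine minifigure-object pairs
positions = ["left", "right", "behind", "in front"]  # object relative to minifigure
orientations = ["toward", "away"]                    # minifigure facing the object or not
viewpoints = ["bird's-eye", "surface-level"]         # camera viewpoint

arrangements = list(product(pairs, positions, orientations, viewpoints))
print(len(arrangements))  # 9 * 4 * 2 * 2 = 144
```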
Who are the source data producers?
The source data producer is the dataset creator, Gracjan Goral, who designed the LEGO minifigure-object pairs and performed the systematic photography.
Annotation Process
- Three annotators labeled the data, and the gold answers were determined by majority vote among the annotators.
- The labeling process achieved over 99% agreement among annotators.
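A minimal sketch of majority-vote aggregation over three annotator labels; the function name and example labels are illustrative, not the project's actual tooling:

```python
from collections import Counter

def majority_vote(labels):
    """Return the label chosen by the most annotators."""
    winner, _count = Counter(labels).most_common(1)[0]
    return winner

# Three hypothetical annotator labels for one question.
print(majority_vote(["left", "left", "right"]))  # -> left
```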
Personal and Sensitive Information
The dataset does not contain any personal, sensitive, or private information. All data consists of LEGO objects and metadata derived from the tasks.
Citation
BibTeX:
@dataset{LEGO_Visual_Tasks,
  author       = {Gracjan Goral},
  title        = {LEGO Visual Tasks Dataset},
  year         = {2025},
  publisher    = {Hugging Face},
  note         = {https://huggingface.co/datasets/lego_visual_tasks},
  howpublished = {Used in article: https://arxiv.org/abs/2409.12969}
}
APA: Goral, G. (2025). LEGO Visual Tasks Dataset. Available at https://huggingface.co/datasets/lego_visual_tasks. Used in article: https://arxiv.org/abs/2409.12969.