---
language:
  - en
license: mit
size_categories:
  - 100K<n<1M
task_categories:
  - other
library_name: datasets
tags:
  - 3d
  - spatial-reasoning
  - segmentation
  - vision-language
  - scannet
  - embodied-ai
---

# SURPRISE3D: A Dataset for Spatial Understanding and Reasoning in Complex 3D Scenes

This repository contains the dataset for the paper [SURPRISE3D: A Dataset for Spatial Understanding and Reasoning in Complex 3D Scenes](https://huggingface.co/papers/2507.07781).

Codebase: https://github.com/hhllzz/surprise-3d

## Overview

We introduce Surprise3D, a novel dataset designed to evaluate language-guided spatial reasoning segmentation in complex 3D scenes. The integration of language and 3D perception is critical for embodied AI and robotic systems to perceive, understand, and interact with the physical world. Spatial reasoning, a key capability for understanding spatial relationships between objects, remains underexplored in current 3D vision-language research.

Existing datasets often mix semantic cues (e.g., object names) with spatial context, leading models to rely on superficial shortcuts rather than genuinely interpreting spatial relationships. To address this gap, Surprise3D consists of more than 200k vision-language pairs across 900+ detailed indoor scenes from ScanNet++ v2, covering more than 2.8k unique object classes. The dataset contains 89k+ human-annotated spatial queries deliberately crafted without object names, thereby mitigating shortcut biases in spatial understanding.

These queries comprehensively cover various spatial reasoning skills, such as:

- **Relative position** (e.g., "Find the object behind the chair.")
- **Narrative perspective** (e.g., "Locate the object visible from the sofa.")
- **Parametric perspective** (e.g., "Select the object 2 meters to the left of the table.")
- **Absolute distance reasoning** (e.g., "Identify the object exactly 3 meters in front of you.")
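As an illustration, each spatial query ultimately pairs a scene with a point-level target mask. The sketch below shows what one such record might look like; the field names and values are hypothetical assumptions for illustration, not the dataset's actual schema:

```python
# Hypothetical sketch of one Surprise3D-style annotation record.
# Field names ("scene_id", "query", etc.) are illustrative assumptions,
# not the dataset's real schema.
record = {
    "scene_id": "scannetpp_scene_0001",           # ScanNet++ v2 scene identifier (made up)
    "query": "Find the object behind the chair.",  # spatial query: no name for the target object
    "question_type": "relative_position",          # one of the reasoning skills listed above
    "target_mask": [0, 0, 1, 1, 0],                # per-point binary segmentation mask (toy length)
}

# Note the query names the anchor ("chair") but never the target object,
# so a model must resolve the spatial relation rather than match semantics.
print(record["question_type"])  # -> relative_position
```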

Initial benchmarks demonstrate significant challenges for current state-of-the-art expert 3D visual grounding methods and 3D-LLMs, underscoring the necessity of our dataset and the accompanying 3D Spatial Reasoning Segmentation (3D-SRS) benchmark suite. Surprise3D and 3D-SRS aim to facilitate advancements in spatially aware AI, paving the way for effective embodied interaction and robotic planning.


## 🔍 Data Analysis

We provide a detailed analysis of the dataset:

  1. Augmentation for Low-Frequency Objects: Boosting the number of questions targeting rarely occurring objects to improve model robustness.
  2. Object Frequency (%) by Question Type (Top 15 Objects): Examining how frequently the top 15 objects are referenced across different question types.
  3. Distribution of Question Types: Visualizing the proportion of questions across various reasoning categories.

Our dataset ensures a balanced distribution of reasoning types and incorporates augmentation techniques to reduce biases caused by object frequency disparities. This analysis supports the development of models that generalize better across diverse reasoning tasks.
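A statistic like "object frequency (%) by question type" can be reproduced from the annotations with a simple tally. The snippet below is a generic sketch over toy records, not the released analysis script; the `object` and `question_type` field names are assumptions:

```python
from collections import Counter, defaultdict

# Toy annotation records; field names are assumed for illustration.
records = [
    {"object": "chair", "question_type": "relative_position"},
    {"object": "table", "question_type": "relative_position"},
    {"object": "chair", "question_type": "narrative_perspective"},
    {"object": "chair", "question_type": "relative_position"},
]

# Count object mentions per question type.
by_type = defaultdict(Counter)
for r in records:
    by_type[r["question_type"]][r["object"]] += 1

# Convert raw counts to percentages within each question type.
freq_pct = {
    qtype: {obj: 100.0 * n / sum(counts.values()) for obj, n in counts.items()}
    for qtype, counts in by_type.items()
}

print(round(freq_pct["relative_position"]["chair"], 1))  # -> 66.7
```

The same tally, restricted to the 15 most frequent objects, yields the top-15 breakdown described above.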


## ⚙️ Train and Evaluation

We have modified parts of the Reason3D codebase to support training and testing on our Surprise3D dataset. These modifications enable the preprocessing of ScanNet++ data and the use of Reason3D for segmentation tasks on Surprise3D.

Please refer to the Models/reason3d directory within the codebase repository for scripts to preprocess the ScanNet++ data required for the Surprise3D dataset and for training and evaluation using Reason3D.

These updates let us leverage Reason3D's capabilities while remaining compatible with the unique structure and annotations of Surprise3D.
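Referring-segmentation benchmarks of this kind are typically scored with point-level intersection-over-union (IoU) between the predicted and ground-truth masks. The function below is a minimal sketch of that metric; the exact evaluation protocol of the 3D-SRS suite may differ:

```python
def point_iou(pred, gt):
    """Intersection-over-union between two binary per-point masks of equal length."""
    assert len(pred) == len(gt), "masks must cover the same point cloud"
    inter = sum(1 for p, g in zip(pred, gt) if p and g)  # points in both masks
    union = sum(1 for p, g in zip(pred, gt) if p or g)   # points in either mask
    return inter / union if union else 1.0  # two empty masks count as a perfect match

pred = [1, 1, 0, 0, 1]
gt   = [1, 0, 0, 1, 1]
print(point_iou(pred, gt))  # -> 0.5 (2 points in both / 4 points in either)
```

Averaging this score over all queries gives a mean-IoU figure comparable across methods.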


## Citation

If you find our dataset or work useful for your research, please consider citing the paper:

```bibtex
@inproceedings{huang2024surprise3d,
  title={SURPRISE3D: A Dataset for Spatial Understanding and Reasoning in Complex 3D Scenes},
  author={Jiaxin Huang and Ziwen Li and Hanlue Zhang},
  booktitle={The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2024},
  url={https://huggingface.co/papers/2507.07781}
}
```