---
language:
  - en
license: cc-by-4.0
size_categories:
  - 1K<n<10K
task_categories:
  - image-text-to-text
  - question-answering
  - visual-question-answering
pretty_name: Real-3DQA
tags:
  - 3D
  - spatial-reasoning
  - point-cloud
  - ScanNet
  - SQA3D
  - benchmark
  - debiasing
  - viewpoint-rotation
---

# Real-3DQA

**Do 3D Large Language Models Really Understand 3D Spatial Relationships?**

🌐 Project Page · 📄 Paper · 💻 GitHub

## Overview

Real-3DQA is a debiased 3D spatial QA benchmark with viewpoint rotation consistency evaluation. It addresses two key shortcomings of existing benchmarks:

1. **Language Shortcut Filtering**: questions answerable through linguistic priors alone are removed by comparing 3D-LLMs against blind, text-only counterparts.
2. **Viewpoint Rotation Score (VRS)**: each question is augmented with 4 viewpoints (0°, 90°, 180°, 270°) to test whether models produce consistent answers regardless of the observer's orientation.

## Dataset Structure

```
data/
└── test/
    ├── debiased_test.jsonl    # 1,485 debiased questions (Stage 1)
    ├── rotation_0.jsonl       # 750 questions at the original (0°) viewpoint (Stage 2)
    ├── rotation_90.jsonl      # same questions rotated 90° counter-clockwise
    ├── rotation_180.jsonl     # same questions rotated 180°
    └── rotation_270.jsonl     # same questions rotated 270° counter-clockwise
```

```
point_clouds/
├── scene0025_00.pth      # pre-processed ScanNet point clouds (33 scenes)
├── scene0046_00.pth
└── ...
```

### Two-Stage Structure

- **Stage 1: Debiased Test Set** (`debiased_test.jsonl`): 1,485 questions across 67 scenes and 11 question types. These are the questions that survive language shortcut filtering; blind, text-only models cannot solve them.
- **Stage 2: Rotation Test Set** (`rotation_*.jsonl`): 750 questions × 4 viewpoints (0°, 90°, 180°, 270°). A subset of Stage 1 covering 4 spatial question types (distance, direction, counting, existence), augmented with viewpoint rotations.

## Data Format

Each JSONL file contains one question per line.
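Reading a split needs only the standard library; a minimal sketch (the path below is illustrative):

```python
import json


def load_questions(path):
    """Load one question dict per line from a JSONL file."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


# e.g. questions = load_questions("data/test/rotation_0.jsonl")
```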

Rotation files (`rotation_*.jsonl`) have these fields:

| Field | Type | Description |
|-------|------|-------------|
| `question_id` | string | Unique identifier; rotated versions append a `01`/`02`/`03` suffix |
| `scene_id` | string | ScanNet scene identifier (e.g., `scene0300_00`) |
| `question_type` | string | One of: `distance`, `direction`, `counting`, `existence` |
| `situation` | string | Natural-language description of the observer's position and orientation |
| `question` | string | The spatial question |
| `answer` | string | Ground-truth answer |
| `position` | object | Observer position `{x, y, z}` in ScanNet coordinates (`z = 0`, ground plane) |
| `rotation` | object | Observer orientation as a quaternion `{_x, _y, _z, _w}` (rotation around the Z-axis) |

### Example

```json
{
  "question_id": "220602000035",
  "scene_id": "scene0300_00",
  "question_type": "distance",
  "situation": "I am facing the divider with the bookshelf on my right side.",
  "question": "What is the first object closest to my left?",
  "answer": "window",
  "position": {"x": 1.032, "y": 0.734, "z": 0},
  "rotation": {"_x": 0, "_y": 0, "_z": 0.996, "_w": 0.087}
}
```
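Because the quaternion has only a Z component, the observer's yaw can be recovered directly. A minimal sketch, assuming the standard unit-quaternion convention where `_z = sin(θ/2)` and `_w = cos(θ/2)`:

```python
import math


def quaternion_to_yaw(rot):
    """Yaw angle in radians for a rotation about the Z-axis,
    given a quaternion dict {_x, _y, _z, _w} with _x == _y == 0."""
    return 2.0 * math.atan2(rot["_z"], rot["_w"])


# For the example record above: roughly 2.97 rad (about 170 degrees).
yaw = quaternion_to_yaw({"_x": 0, "_y": 0, "_z": 0.996, "_w": 0.087})
```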

The debiased test set (`debiased_test.jsonl`) shares the same core fields, plus:

| Field | Type | Description |
|-------|------|-------------|
| `alternative_situation` | list of strings | Alternative situation descriptions for the same position/orientation |

## Construction Pipeline

*(Figure: Real-3DQA construction pipeline)*

## Question Type Distribution

**Stage 1, Debiased Test Set (1,485 questions, 11 types):**

| Type | Count |
|------|-------|
| Spatial Relation | 386 |
| Number | 297 |
| Navigation | 247 |
| Object | 179 |
| Color | 138 |
| Visibility | 76 |
| State | 73 |
| Shape | 57 |
| Reasoning | 22 |
| Measurement | 5 |
| Other | 5 |

**Stage 2, Rotation Test Set (750 questions, 4 types):**

| Type | Count |
|------|-------|
| Distance | 594 |
| Direction | 113 |
| Counting | 30 |
| Existence | 13 |

## Point Clouds

The `point_clouds/` directory contains 33 pre-processed ScanNet v2 point clouds in `.pth` format (produced by the SQA3D processing pipeline). These cover all scenes referenced in the benchmark.

## Evaluation

**Exact Match (EM)**: standard QA accuracy at each rotation angle.

**Viewpoint Rotation Score (VRS)**: a question is scored as correct only if the model answers all 4 rotations correctly. This measures genuine 3D understanding rather than lucky guesses.
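The two metrics can be sketched in a few lines of Python. This is an illustrative scorer assuming exact string match between prediction and ground truth, not the official evaluation script:

```python
def exact_match(preds, golds):
    """Fraction of questions where the prediction equals the ground truth."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)


def vrs(results_by_angle):
    """Viewpoint Rotation Score: a base question counts as correct only
    if the model is correct at all four viewpoints.

    results_by_angle maps each angle to {base_question_id: bool}.
    """
    # Only score base questions present at every angle.
    base_ids = set.intersection(*(set(r) for r in results_by_angle.values()))
    consistent = sum(
        all(results_by_angle[angle][qid] for angle in results_by_angle)
        for qid in base_ids
    )
    return consistent / len(base_ids)
```

In practice the per-angle results would come from matching each rotated `question_id` (with its `01`/`02`/`03` suffix) back to its base identifier.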

## Contributing & Submitting Corrections

We care deeply about the quality of this benchmark. That said, annotations involving 3D spatial reasoning are inherently challenging, and there may still be inaccuracies or oversights in the data.

We warmly welcome everyone to use Real-3DQA. If you spot any issue, however small, please let us know: we will respond promptly and merge valid corrections as quickly as possible. Your contributions directly help the community build more reliable 3D understanding systems.

### How to Submit a PR

1. Go to the **Community** tab on this dataset page.
2. Click **"New Pull Request"** → **"Create a Pull Request"**.
3. Edit the relevant JSONL file directly on Hugging Face:
   - Navigate to the file (e.g., `data/test/rotation_0.jsonl`)
   - Click the ✏️ edit button
   - Find the line with the `question_id` you want to fix and make your correction
   - Write a commit message and submit
4. In the PR description, please include:
   - The `question_id`(s) of the affected entries
   - A clear description of what is wrong
   - Evidence if possible (e.g., a screenshot from the 3D scene)

### Example PR Description

```
Fix incorrect answer for question_id 220602000035

- File: data/test/rotation_0.jsonl
- question_id: 220602000035
- Current answer: "window"
- Corrected answer: "bookshelf"
- Reason: The bookshelf is closer to the observer's left based on
  the scene geometry in scene0300_00.
```

### Alternatively: Open a Discussion

If you are unsure whether something is a bug, or have questions about the data, feel free to open a Discussion in the **Community** tab instead.

## Citation

```bibtex
@inproceedings{ma2026real3dqa,
  title={Do 3D Large Language Models Really Understand 3D Spatial Relationships?},
  author={Xianzheng Ma and Tao Sun and Shuai Chen and Yash Bhalgat and Jindong Gu and Angel X Chang and Iro Armeni and Iro Laina and Songyou Peng and Victor Adrian Prisacariu},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=3vlMiJwo8b}
}
```

## License

This dataset is released under the Creative Commons Attribution 4.0 International License.