---
pretty_name: PinpointQA
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- video-text-to-text
tags:
- benchmark
- spatial-understanding
- small-object
- indoor-scenes
configs:
- config_name: default
  data_files:
  - split: train
    path: train.jsonl
  - split: validation
    path: validation.jsonl
  - split: test
    path: test.jsonl
---
# PinpointQA: A Dataset and Benchmark for Small Object-Centric Spatial Understanding in Indoor Videos

> **Important:** This repository releases benchmark annotations and grounded intermediate spatial representations only. It does not redistribute the original scene assets or converted video files.
## Overview
PinpointQA focuses on a practical question: given a known small object such as a phone, charger, remote, or bottle, can a model determine whether it appears, localize it through nearby references, describe its position precisely, and provide an output that is directly useful for downstream systems?
In addition to benchmark annotations, this repository also releases grounded intermediate spatial representations constructed during scene curation. These files preserve the target-centered local spatial context used to generate the released QA pairs and can support further analysis or the construction of additional grounded tasks.
## Task Overview
PinpointQA is organized as a progressive four-stage benchmark:
| Task | Name | Goal | Output Format |
|---|---|---|---|
| TPV | Target Presence Verification | Determine whether a queried small object appears in the scene | Yes / No |
| NRI | Nearest Reference Identification | Identify the nearest reference object to the target, excluding the support surface | Multiple choice |
| FSD | Fine-Grained Spatial Description | Describe the target location with support surface, nearby references, and centimeter-level distances | Natural language |
| SSP | Structured Spatial Prediction | Output the same grounded spatial information in structured form | JSON |
## Key Statistics
- Scenes: 1,024
- QA pairs: 10,094
- Canonical target categories: 102
- Source datasets: ScanNet++, ScanNet200
- Task distribution over all released QA pairs: TPV 26.47%, NRI 23.10%, FSD 25.08%, SSP 25.34%
- Source distribution over all released QA pairs: ScanNet++ 73.2%, ScanNet200 26.8%
- Released splits: train 6,121 / validation 1,954 / test 2,019
## Category Naming Note
PinpointQA contains 102 canonical target categories at the benchmark-definition level.
You may notice that the dataset viewer reports more distinct string values in the `target` column. This is expected: some semantically equivalent or near-equivalent names are preserved as surface forms in the released text fields for readability and for compatibility with source annotations or task phrasing. Examples include naming variants such as *mobile phone* and *phone*.
When reporting benchmark statistics in the paper and project page, we count categories at the canonical category level rather than the raw string-surface level.
## Quick Start

### Install dependencies

```bash
pip install datasets
```

### Load the dataset

```python
from datasets import load_dataset

dataset = load_dataset("RainChow/PinpointQA")
print(dataset)
print(dataset["train"][0])
```

### Access a specific split

```python
train_set = dataset["train"]
val_set = dataset["validation"]
test_set = dataset["test"]
```

### Save the dataset locally

```python
from datasets import load_dataset

dataset = load_dataset("RainChow/PinpointQA")
dataset.save_to_disk("./PinpointQA_hf")
```
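Because each split is a plain JSON Lines file, you can also read the data with the standard library alone, without the `datasets` dependency. A minimal sketch (the helper name is ours, not part of any official tooling):

```python
import json

def read_jsonl(path):
    """Yield one record dict per line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():  # skip blank lines defensively
                yield json.loads(line)

# Usage (illustrative):
#   records = list(read_jsonl("train.jsonl"))
#   tasks   = {r["task"] for r in records}   # e.g. {"TPV", "NRI", "FSD", "SSP"}
```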
## Dataset Organization

```
PinpointQA/
├── train.jsonl
├── validation.jsonl
├── test.jsonl
├── intermediate_spatial_representations/
│   ├── scene_xxx.json
│   ├── scene_yyy.json
│   └── ...
└── README.md
```
### Released Fields

- `id`: globally unique sample identifier
- `scene_id`: scene identifier
- `source_dataset`: `scannetpp` or `scannet200`
- `local_sample_id`: scene-local sample index
- `task`: short task label (`TPV`, `NRI`, `FSD`, `SSP`)
- `question_type`: original long-form task name
- `instruction`: task instruction
- `question`: user-facing question text
- `choices`: candidate options for NRI, otherwise `null`
- `answer`: ground-truth answer
- `target`: queried small object name used in the released sample text
- `split`: split name
### Example Record

```json
{
  "id": "scene0000_00_0",
  "scene_id": "scene0000_00",
  "source_dataset": "scannet200",
  "local_sample_id": "0",
  "task": "TPV",
  "question_type": "target presence verification",
  "instruction": "Answer only with exactly one word: Yes or No. Do not add any explanation.",
  "question": "In the entire scene, did the coffee kettle appear?",
  "choices": null,
  "answer": "No",
  "target": "coffee kettle",
  "split": "train"
}
```
### Field Notes by Task

- **TPV**: `answer` is `Yes` or `No`
- **NRI**: `choices` contains four candidate objects; `answer` is the correct option text
- **FSD**: `answer` is a natural-language location description
- **SSP**: `answer` is a JSON-formatted string representing structured spatial grounding
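The per-task answer formats above can be checked mechanically. A minimal validation sketch (the function is ours, not part of the official tooling; it assumes the released field names shown above):

```python
import json

def check_answer(record: dict) -> bool:
    """Validate the `answer` field of a PinpointQA record against its task."""
    task, answer = record["task"], record["answer"]
    if task == "TPV":  # binary presence verification
        return answer in ("Yes", "No")
    if task == "NRI":  # answer must be one of the four candidate options
        choices = record.get("choices") or []
        return len(choices) == 4 and answer in choices
    if task == "FSD":  # free-form description: require non-empty text
        return isinstance(answer, str) and len(answer) > 0
    if task == "SSP":  # structured output must parse as a JSON object
        try:
            return isinstance(json.loads(answer), dict)
        except (TypeError, json.JSONDecodeError):
            return False
    return False

# The TPV example record shown above passes the check:
print(check_answer({"task": "TPV", "answer": "No", "choices": None}))  # True
```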
### Intermediate Spatial Representations

The `intermediate_spatial_representations/` folder stores the grounded scene-level representations used to instantiate TPV, NRI, FSD, and SSP.

- Each file corresponds to a scene and is aligned with `scene_id`.
- These files preserve the target-centered local spatial context used for QA construction.
- The released content includes grounded information such as target objects, support surfaces, nearby references, and local spatial relations/distances.

For example, a file such as `scene0000_00.json` corresponds to `scene_id = "scene0000_00"` and provides the grounded scene context from which the released QA samples for that scene are derived.
## Spatial Semantics

### Support Surface vs. Reference Objects
The support surface is the surface that directly supports the target object in the final grounded representation.
- In NRI, the support surface is excluded from candidate reference options.
- In FSD and SSP, the support surface is retained as a distinct field because it is often a necessary localization anchor.
- Nearby references are additional local objects used to describe or structure the final location of the target.
Depending on scene semantics and released wording, a surface-like object may appear in text fields as a location anchor, but the benchmark definition still treats the support surface and reference objects as functionally distinct roles.
### Distances
Distances in FSD and SSP are derived from grounded scene geometry and expressed in centimeters in the released benchmark outputs.
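As an illustration of the units, a distance computed from scene geometry in meters converts to the released centimeter values as follows. This is only a sketch: the centroids are hypothetical, and the benchmark's actual distance definition (e.g. centroid-to-centroid vs. surface-to-surface) may differ.

```python
import math

def centroid_distance_cm(a, b):
    """Euclidean distance between two 3D centroids (in meters), in centimeters."""
    d_m = math.dist(a, b)       # scene geometry is typically expressed in meters
    return round(d_m * 100, 1)  # released benchmark outputs report centimeters

# Hypothetical centroids: a phone on a desk and a nearby lamp.
phone = (1.20, 0.85, 0.74)
lamp = (1.45, 0.85, 0.80)
print(centroid_distance_cm(phone, lamp))  # → 25.7
```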
## Source Data Preparation
This repository releases benchmark annotations and intermediate spatial representations only. It does not redistribute the original scene assets or converted videos.
To reproduce video-based experiments, users should first obtain the original assets from the official sources of ScanNet++ and ScanNet v2 / ScanNet200, subject to their respective licenses and access requirements. Note that ScanNet200 shares the same underlying source data as ScanNet v2 and mainly differs in annotation parsing and label space, so the video assets used here still come from the ScanNet v2 RGB-D data.
### ScanNet++

- Official website: ScanNet++
- Obtain access through the official ScanNet++ release.
- Download the scenes required by your target split or evaluation subset.
- Match local assets to the released `scene_id` values.
### ScanNet v2 / ScanNet200

- Official ScanNet website: ScanNet
- ScanNet200 benchmark documentation: ScanNet200 Benchmark Documentation
- Obtain access to the original data and prepare the scenes required by your pipeline.
- Match local assets to the released `scene_id` values used in this benchmark.
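Matching released `scene_id` values to local assets can be as simple as indexing the raw data directory. A sketch assuming each scene lives in a subdirectory named after its `scene_id` (the layout and helper name are illustrative, not prescribed by the benchmark):

```python
from pathlib import Path

def index_local_scenes(raw_root: str) -> dict:
    """Map scene_id -> scene directory for locally downloaded assets."""
    root = Path(raw_root)
    # Assumes each immediate subdirectory is named after its scene_id.
    return {p.name: p for p in sorted(root.iterdir()) if p.is_dir()}

# Usage (illustrative):
#   index = index_local_scenes("raw_data/scannet200")
#   missing = [s for s in benchmark_scene_ids if s not in index]
```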
### Video Conversion Tools

The source assets from ScanNet++ and ScanNet v2 / ScanNet200 are not distributed as ready-to-use MP4 videos. If your pipeline expects standard video files, we provide conversion scripts in the `tools/` folder of the project GitHub repository:

- `tools/convert_mkv_to_mp4.py`
- `tools/convert_sens_to_mp4.py`
### Recommended Local Organization

```
workspace/
├── PinpointQA/
│   ├── train.jsonl
│   ├── validation.jsonl
│   ├── test.jsonl
│   └── intermediate_spatial_representations/
├── raw_data/
│   ├── scannetpp/
│   └── scannet200/
└── videos/
    ├── scene_or_video_1.mp4
    ├── scene_or_video_2.mp4
    └── ...
```

Users may organize local files differently depending on their own training or inference pipeline.
## Intended Use
PinpointQA is intended for:
- benchmarking multimodal models on small object-centric spatial understanding in indoor videos
- instruction tuning or supervised fine-tuning for grounded spatial QA tasks
- studying progressive capability breakdown from target presence to structured spatial output
- analyzing reference-based localization and spatial grounding behavior in multimodal systems
## Out-of-Scope Use
PinpointQA is not intended as:
- a general-purpose benchmark for all video understanding abilities
- a substitute for open-world object tracking or dense video captioning benchmarks
- a benchmark for outdoor scenes, unconstrained robotics, or dynamic multi-agent interaction
- a standalone source of original scene assets or video files
## Limitations and Biases
Users should be aware of the following limitations:
- The benchmark is restricted to indoor scenes.
- It focuses specifically on small object-centric localization and spatial expression, rather than full-scene understanding.
- Released QA pairs are constructed from grounded scene geometry and benchmark logic, so some answer styles may be more regular than unconstrained human language.
- Some target names are preserved as different released surface forms even when they map to the same canonical category.
- The repository does not redistribute original videos or raw scene assets, so reproduction requires separate access to the source datasets.
## Quality Assurance
We use a combination of automatic filtering and manual review to improve dataset accuracy and consistency.
- Invalid labels and background or structural objects are filtered out.
- Only target instances satisfying the predefined small-object vocabulary are retained.
- Questions are generated only for target instances with unique labels within a scene.
- NRI samples contain four distinct candidate options.
- FSD answers are constrained to be human-readable and localization-oriented.
- SSP outputs are required to contain parsable key fields.
- Iterative manual spot-checking is applied to refine templates and QA logic.
## License and Upstream Data Notice

The Apache-2.0 license applies to the benchmark annotations and intermediate spatial representations released in this repository.
The original scene assets remain subject to the official terms, licenses, and access conditions of ScanNet++ and ScanNet v2 / ScanNet200. Users are responsible for obtaining and using upstream source data in compliance with the corresponding original terms.
## Performance Snapshot
The table below shows a representative subset of overall benchmark results. We report averaged scores across TPV, NRI, FSD, and SSP, where Avg Micro is the arithmetic mean of task-level micro scores and Avg Macro is the arithmetic mean of task-level macro scores.
| Rank | Model | Avg Micro | Avg Macro |
|---|---|---|---|
| 1 | Qwen3-VL-8B-Instruct-SFT | 0.48 | 0.49 |
| 2 | InternVL3.5-8B-Instruct-SFT | 0.45 | 0.45 |
| 3 | Kimi K2.5 | 0.42 | 0.44 |
| 4 | Qwen3-VL-8B-Instruct | 0.39 | 0.40 |
| 5 | GPT-5.4 | 0.38 | 0.40 |
For full evaluation details, please refer to the paper and project page.
## Resources
- Project Page: PinpointQA Project Page
- GitHub Repository: https://github.com/rainchowz/PinpointQA
- Discussions: Hugging Face Discussions
- Contact: zhouzy1622@mails.jlu.edu.cn
## Citation

If you use PinpointQA, please cite:

```bibtex
@article{zhou2026pinpointqa,
  author  = {Zhiyu Zhou and Peilin Liu and Ruoxuan Zhang and Luyang Zhang and Cheng Zhang and Hongxia Xie and Wen-Huang Cheng},
  title   = {PinpointQA: A Dataset and Benchmark for Small Object-Centric Spatial Understanding in Indoor Videos},
  journal = {arXiv preprint arXiv:2604.08991},
  year    = {2026}
}
```