Add paper, project page and code links to dataset card (#3), opened by nielsr (HF Staff)

README.md CHANGED

---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- video-text-to-text
pretty_name: PinpointQA
tags:
- benchmark
- spatial-understanding
- small-object
- indoor-scenes
configs:
- config_name: default
  data_files:
  - split: train
    path: train.jsonl
  - split: validation
    path: validation.jsonl
  - split: test
    path: test.jsonl
---

# PinpointQA: A Dataset and Benchmark for Small Object-Centric Spatial Understanding in Indoor Videos

[**Project Page**](https://rainchowz.github.io/PinpointQA) | [**Paper**](https://huggingface.co/papers/2604.08991) | [**Code**](https://github.com/rainchowz/PinpointQA)

> **Important:** This repository releases **benchmark annotations** and **grounded intermediate spatial representations** only. It does **not** redistribute the original scene assets or converted video files.

## Overview

PinpointQA focuses on a practical question: given a known small object such as a phone, charger, remote, or bottle, can a model determine whether it appears, localize it through nearby references, describe its position precisely, and provide an output that is directly useful for downstream systems?

In addition to benchmark annotations, this repository also releases grounded **intermediate spatial representations** constructed during scene curation. These files preserve the target-centered local spatial context used to generate the released QA pairs and can support further analysis or the construction of additional grounded tasks.

## Task Overview

PinpointQA is organized as a progressive four-stage benchmark:

| Task | Name | Goal | Output Format |
|---|---|---|---|
| TPV | Target Presence Verification | Determine whether a queried small object appears in the scene | `Yes` / `No` |
| NRI | Nearest Reference Identification | Identify the nearest reference object to the target, excluding the support surface | Multiple choice |
| FSD | Fine-Grained Spatial Description | Describe the target location with support surface, nearby references, and centimeter-level distances | Natural language |
| SSP | Structured Spatial Prediction | Output the same grounded spatial information in structured form | JSON |
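
Each record carries the instruction, question, and (for NRI) candidate options needed to build a task-specific prompt. As a minimal sketch, assuming the field layout documented under Released Fields below and that `choices` holds four option strings, one way to format an NRI sample:

```python
from datasets import load_dataset

# Keep only Nearest Reference Identification samples from the test split.
dataset = load_dataset("RainChow/PinpointQA")
nri_test = dataset["test"].filter(lambda ex: ex["task"] == "NRI")

def format_nri_prompt(example):
    # Letter the four candidate options and append them to the question.
    options = "\n".join(
        f"{letter}. {choice}" for letter, choice in zip("ABCD", example["choices"])
    )
    return f"{example['instruction']}\n{example['question']}\n{options}"

print(format_nri_prompt(nri_test[0]))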

## Key Statistics

- **Scenes:** 1,024
- **QA pairs:** 10,094
- **Canonical target categories:** 102
- **Source datasets:** ScanNet++, ScanNet200
- **Task distribution over all released QA pairs:** TPV 26.47%, NRI 23.10%, FSD 25.08%, SSP 25.34%
- **Source distribution over all released QA pairs:** ScanNet++ 73.2%, ScanNet200 26.8%
- **Released splits:** train 6,121 / validation 1,954 / test 2,019
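
These counts can be re-derived from the released splits as a sanity check. A small sketch using the loading pattern from the Quick Start section below:

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("RainChow/PinpointQA")

# Split sizes should match the released counts above.
for name, split in dataset.items():
    print(name, len(split))

# Task and source distributions over all released QA pairs.
tasks, sources = Counter(), Counter()
for split in dataset.values():
    tasks.update(split["task"])
    sources.update(split["source_dataset"])

total = sum(tasks.values())
for task, count in sorted(tasks.items()):
    print(f"{task}: {count / total:.2%}")
print(dict(sources))
```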

## Category Naming Note

PinpointQA contains **102 canonical target categories** at the benchmark-definition level.

You may notice that the dataset viewer reports **more distinct string values** in the target column. This is expected: some semantically equivalent or near-equivalent names are preserved as **surface forms** in released text fields for readability and compatibility with source annotations or task phrasing. Examples include naming variants such as **`mobile phone`** and **`phone`**.

When reporting benchmark statistics in the paper and project page, we count categories at the **canonical category** level rather than at the raw surface-string level.
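
To count at the canonical level yourself, you can collapse surface forms with a small normalization map before aggregating. The map below is a hypothetical illustration: only the `mobile phone`/`phone` pair comes from this card, and a real analysis would extend it with whatever variants it encounters:

```python
from datasets import load_dataset

# Hypothetical alias map; only the `mobile phone` -> `phone` pair is
# taken from this card, the remaining entries are up to your analysis.
CANONICAL_ALIASES = {
    "mobile phone": "phone",
}

def canonical_target(name: str) -> str:
    # Fall back to the normalized surface form when no alias is registered.
    key = name.strip().lower()
    return CANONICAL_ALIASES.get(key, key)

dataset = load_dataset("RainChow/PinpointQA")
canonical = {canonical_target(t) for t in dataset["train"]["target"]}
print(len(canonical))  # distinct canonical names seen in the train split
```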

## Quick Start

### Install dependencies

```bash
pip install datasets
```

### Load the dataset

```python
from datasets import load_dataset

dataset = load_dataset("RainChow/PinpointQA")

print(dataset)
print(dataset["train"][0])
```

### Access a specific split

```python
train_set = dataset["train"]
val_set = dataset["validation"]
test_set = dataset["test"]
```

### Save the dataset locally

```python
from datasets import load_dataset

dataset = load_dataset("RainChow/PinpointQA")
dataset.save_to_disk("./PinpointQA_hf")
```

## Dataset Organization

```text
PinpointQA/
├── train.jsonl
├── validation.jsonl
├── test.jsonl
├── intermediate_spatial_representations/
│   ├── scene_xxx.json
│   ├── scene_yyy.json
│   └── ...
└── README.md
```

### Released Fields

- `id`: globally unique sample identifier
- `scene_id`: scene identifier
- `source_dataset`: `scannetpp` or `scannet200`
- `local_sample_id`: scene-local sample index
- `task`: short task label (`TPV`, `NRI`, `FSD`, `SSP`)
- `question_type`: original long-form task name
- `instruction`: task instruction
- `question`: user-facing question text
- `choices`: candidate options for NRI, otherwise `null`
- `answer`: ground-truth answer
- `target`: queried small object name used in the released sample text
- `split`: split name

### Example Record

```json
{
  "id": "scene0000_00_0",
  "scene_id": "scene0000_00",
  "source_dataset": "scannet200",
  "local_sample_id": "0",
  "task": "TPV",
  "question_type": "target presence verification",
  "instruction": "Answer only with exactly one word: Yes or No. Do not add any explanation.",
  "question": "In the entire scene, did the coffee kettle appear?",
  "choices": null,
  "answer": "No",
  "target": "coffee kettle",
  "split": "train"
}
```

### Field Notes by Task

- **TPV:** `answer` is `Yes` or `No`
- **NRI:** `choices` contains four candidate objects; `answer` is the correct option text
- **FSD:** `answer` is a natural-language location description
- **SSP:** `answer` is a JSON-formatted string representing structured spatial grounding
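
Because the SSP ground truth is JSON serialized into a string, evaluation code usually parses it before any field-level comparison. A minimal sketch that avoids assuming a fixed inner schema (the key names are not specified on this card):

```python
import json

from datasets import load_dataset

dataset = load_dataset("RainChow/PinpointQA")

# Parse the JSON-formatted answer of one Structured Spatial Prediction sample.
ssp = dataset["train"].filter(lambda ex: ex["task"] == "SSP")
parsed = json.loads(ssp[0]["answer"])

# Report whatever structured keys are present rather than assuming a schema.
print(sorted(parsed))
```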

### Intermediate Spatial Representations

The `intermediate_spatial_representations/` folder stores the grounded scene-level representations used to instantiate TPV, NRI, FSD, and SSP.

- Each file corresponds to a scene and is aligned with `scene_id`.
- These files preserve the target-centered local spatial context used for QA construction.
- The released content includes grounded information such as target objects, support surfaces, nearby references, and local spatial relations/distances.

For example, a file such as `scene0000_00.json` corresponds to `scene_id = "scene0000_00"` and provides the grounded scene context from which the released QA samples for that scene are derived.
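
To read a single scene file without downloading the whole repository, `hf_hub_download` from `huggingface_hub` can fetch it directly. A sketch, assuming the folder layout shown above:

```python
import json

from huggingface_hub import hf_hub_download

# Fetch one scene-level representation file from the dataset repository.
path = hf_hub_download(
    repo_id="RainChow/PinpointQA",
    repo_type="dataset",
    filename="intermediate_spatial_representations/scene0000_00.json",
)

with open(path, encoding="utf-8") as f:
    scene = json.load(f)

# The per-scene schema is not pinned down on this card; inspect it first.
print(type(scene))
```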

## Spatial Semantics

### Support Surface vs. Reference Objects

The **support surface** is the surface that directly supports the target object in the final grounded representation.

- In **NRI**, the support surface is **excluded** from candidate reference options.
- In **FSD** and **SSP**, the support surface is retained as a distinct field because it is often a necessary localization anchor.
- Nearby **references** are additional local objects used to describe or structure the final location of the target.

Depending on scene semantics and released wording, a surface-like object may appear in text fields as a location anchor, but the benchmark definition still treats **support surface** and **reference objects** as functionally different roles.

### Distances

Distances in FSD and SSP are derived from grounded scene geometry and expressed in **centimeters** in the released benchmark outputs.

## Source Data Preparation

This repository releases **benchmark annotations** and **intermediate spatial representations** only. It does **not** redistribute the original scene assets or converted videos.

To reproduce video-based experiments, users should first obtain the original assets from the official sources of **ScanNet++** and **ScanNet v2 / ScanNet200**, subject to their respective licenses and access requirements. Note that **ScanNet200** shares the same underlying source data as **ScanNet v2** and mainly differs in annotation parsing and label space, so the video assets used here still come from the **ScanNet v2** RGB-D data.

### ScanNet++

- Official website: [ScanNet++](https://scannetpp.mlsg.cit.tum.de/scannetpp/)
- Obtain access through the official ScanNet++ release.
- Download the scenes required by your target split or evaluation subset.
- Match local assets to the released `scene_id` values.

### ScanNet v2 / ScanNet200

- Official ScanNet website: [ScanNet](http://www.scan-net.org/)
- ScanNet200 benchmark documentation: [ScanNet200 Benchmark Documentation](https://kaldir.vc.in.tum.de/scannet_benchmark/documentation)
- Obtain access to the original data and prepare the scenes required by your pipeline.
- Match local assets to the released `scene_id` values used in this benchmark.
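
To know exactly which scenes to request from each source, the required `scene_id` values can be collected per `source_dataset` from the released annotations:

```python
from collections import defaultdict

from datasets import load_dataset

dataset = load_dataset("RainChow/PinpointQA")

# Group the scene ids referenced by the benchmark by their source dataset.
needed = defaultdict(set)
for split in dataset.values():
    for source, scene_id in zip(split["source_dataset"], split["scene_id"]):
        needed[source].add(scene_id)

for source, scenes in sorted(needed.items()):
    print(source, len(scenes))
```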

### Video Conversion Tools

The source assets from **ScanNet++** and **ScanNet v2 / ScanNet200** are **not distributed as ready-to-use MP4 videos**. If your pipeline expects standard video files, we provide conversion scripts in the project GitHub repository:

- `tools/convert_mkv_to_mp4.py`
- `tools/convert_sens_to_mp4.py`

Tools folder:
- [https://github.com/rainchowz/PinpointQA/tree/main/tools](https://github.com/rainchowz/PinpointQA/tree/main/tools)

### Recommended Local Organization

```text
workspace/
├── PinpointQA/
│   ├── train.jsonl
│   ├── validation.jsonl
│   ├── test.jsonl
│   └── intermediate_spatial_representations/
├── raw_data/
│   ├── scannetpp/
│   └── scannet200/
└── videos/
    ├── scene_or_video_1.mp4
    ├── scene_or_video_2.mp4
    └── ...
```

Users may organize local files differently depending on their own training or inference pipeline.

## Intended Use

PinpointQA is intended for:

- benchmarking multimodal models on small object-centric spatial understanding in indoor videos
- instruction tuning or supervised fine-tuning for grounded spatial QA tasks
- studying progressive capability breakdown from target presence to structured spatial output
- analyzing reference-based localization and spatial grounding behavior in multimodal systems

## Out-of-Scope Use

PinpointQA is **not** intended as:

- a general-purpose benchmark for all video understanding abilities
- a substitute for open-world object tracking or dense video captioning benchmarks
- a benchmark for outdoor scenes, unconstrained robotics, or dynamic multi-agent interaction
- a standalone source of original scene assets or video files

## Limitations and Biases

Users should be aware of the following limitations:

- The benchmark is restricted to **indoor scenes**.
- It focuses specifically on **small object-centric localization and spatial expression**, rather than full-scene understanding.
- Released QA pairs are constructed from grounded scene geometry and benchmark logic, so some answer styles may be more regular than unconstrained human language.
- Some target names are preserved as different released **surface forms** even when they map to the same canonical category.
- The repository does not redistribute original videos or raw scene assets, so reproduction requires separate access to the source datasets.

## Quality Assurance

We use a combination of automatic filtering and manual review to improve dataset accuracy and consistency.

- Invalid labels and background or structural objects are filtered out.
- Only target instances satisfying the predefined small-object vocabulary are retained.
- Questions are generated only for target instances with unique labels within a scene.
- NRI samples contain four distinct candidate options.
- FSD answers are constrained to be human-readable and localization-oriented.
- SSP outputs are required to contain parsable key fields.
- Iterative manual spot-checking is applied to refine templates and QA logic.
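
Several of these invariants are easy to re-check locally. A minimal sketch covering the TPV, NRI, and SSP constraints listed above:

```python
import json

from datasets import load_dataset

dataset = load_dataset("RainChow/PinpointQA")

for split in dataset.values():
    for ex in split:
        if ex["task"] == "TPV":
            # TPV answers are exactly "Yes" or "No".
            assert ex["answer"] in ("Yes", "No")
        elif ex["task"] == "NRI":
            # NRI samples carry four distinct options, one of which is the answer.
            assert len(set(ex["choices"])) == 4
            assert ex["answer"] in ex["choices"]
        elif ex["task"] == "SSP":
            # SSP answers must parse as JSON.
            json.loads(ex["answer"])
```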

## License and Upstream Data Notice

The **Apache-2.0** license applies to the benchmark annotations and intermediate spatial representations released in this repository.

The original scene assets remain subject to the official terms, licenses, and access conditions of **ScanNet++** and **ScanNet v2 / ScanNet200**. Users are responsible for obtaining and using upstream source data in compliance with the corresponding original terms.

## Performance Snapshot

The table below shows a **representative subset** of overall benchmark results. We report averaged scores across TPV, NRI, FSD, and SSP, where **Avg Micro** is the arithmetic mean of task-level micro scores and **Avg Macro** is the arithmetic mean of task-level macro scores.

| Rank | Model | Avg Micro | Avg Macro |
|---|---|---:|---:|
| 1 | Qwen3-VL-8B-Instruct-SFT | 0.48 | 0.49 |
| 2 | InternVL3.5-8B-Instruct-SFT | 0.45 | 0.45 |
| 3 | Kimi K2.5 | 0.42 | 0.44 |
| 4 | Qwen3-VL-8B-Instruct | 0.39 | 0.40 |
| 5 | GPT-5.4 | 0.38 | 0.40 |

For full evaluation details, please refer to the paper and project page.

## Resources

- **Project Page:** [PinpointQA Project Page](https://rainchowz.github.io/PinpointQA)
- **GitHub Repository:** [https://github.com/rainchowz/PinpointQA](https://github.com/rainchowz/PinpointQA)
- **Discussions:** [Hugging Face Discussions](https://huggingface.co/datasets/RainChow/PinpointQA/discussions)
- **Contact:** [zhouzy1622@mails.jlu.edu.cn](mailto:zhouzy1622@mails.jlu.edu.cn)

## Citation

If you use PinpointQA, please cite:

```bibtex
@misc{zhou2026pinpointqa,
  title={PinpointQA: A Dataset and Benchmark for Small Object-Centric Spatial Understanding in Indoor Videos},
  author={Zhiyu Zhou and Peilin Liu and Ruoxuan Zhang and Luyang Zhang and Cheng Zhang and Hongxia Xie and Wen-Huang Cheng},
  year={2026},
  note={ACM Multimedia 2026 Dataset Track submission / project release},
  url={https://huggingface.co/datasets/RainChow/PinpointQA}
}
```