nielsr (HF Staff) committed
Commit b35556f · verified · 1 Parent(s): 08ac9e9

Add paper, project page and code links to dataset card

Hi, I'm Niels from the Hugging Face team. This PR updates the dataset card to include direct links to the paper, project page, and official GitHub repository for better discoverability and accessibility.

Files changed (1): README.md (+313, -311)
README.md CHANGED
@@ -1,311 +1,313 @@

---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- video-text-to-text
pretty_name: PinpointQA
tags:
- benchmark
- spatial-understanding
- small-object
- indoor-scenes
configs:
- config_name: default
  data_files:
  - split: train
    path: train.jsonl
  - split: validation
    path: validation.jsonl
  - split: test
    path: test.jsonl
---

# PinpointQA: A Dataset and Benchmark for Small Object-Centric Spatial Understanding in Indoor Videos

[**Project Page**](https://rainchowz.github.io/PinpointQA) | [**Paper**](https://huggingface.co/papers/2604.08991) | [**Code**](https://github.com/rainchowz/PinpointQA)

> **Important:** This repository releases **benchmark annotations** and **grounded intermediate spatial representations** only. It does **not** redistribute the original scene assets or converted video files.

## 🧭 Overview

PinpointQA focuses on a practical question: given a known small object such as a phone, charger, remote, or bottle, can a model determine whether it appears, localize it through nearby references, describe its position precisely, and provide an output that is directly useful for downstream systems?

In addition to benchmark annotations, this repository also releases grounded **intermediate spatial representations** constructed during scene curation. These files preserve the target-centered local spatial context used to generate the released QA pairs and can support further analysis or the construction of additional grounded tasks.

## 👀 Task Overview

PinpointQA is organized as a progressive four-stage benchmark:

| Task | Name | Goal | Output Format |
|---|---|---|---|
| TPV | Target Presence Verification | Determine whether a queried small object appears in the scene | `Yes` / `No` |
| NRI | Nearest Reference Identification | Identify the nearest reference object to the target, excluding the support surface | Multiple choice |
| FSD | Fine-Grained Spatial Description | Describe the target location with support surface, nearby references, and centimeter-level distances | Natural language |
| SSP | Structured Spatial Prediction | Output the same grounded spatial information in structured form | JSON |

## 📊 Key Statistics

- **Scenes:** 1,024
- **QA pairs:** 10,094
- **Canonical target categories:** 102
- **Source datasets:** ScanNet++, ScanNet200
- **Task distribution over all released QA pairs:** TPV 26.47%, NRI 23.10%, FSD 25.08%, SSP 25.34%
- **Source distribution over all released QA pairs:** ScanNet++ 73.2%, ScanNet200 26.8%
- **Released splits:** train 6,121 / validation 1,954 / test 2,019

## 🏷️ Category Naming Note

PinpointQA contains **102 canonical target categories** at the benchmark-definition level.

You may notice that the dataset viewer reports **more distinct string values** in the target column. This is expected: some semantically equivalent or near-equivalent names are preserved as **surface forms** in released text fields for readability and compatibility with source annotations or task phrasing. Examples include naming variants such as **`mobile phone`** and **`phone`**.

When reporting benchmark statistics in the paper and project page, we count categories at the **canonical category** level rather than the raw string-surface level.
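
If you need canonical-level counts yourself, one option is to normalize surface forms before counting. A minimal sketch (the mapping below is illustrative only, not the official canonical category table):

```python
# Illustrative surface-form normalization. The mapping here is an
# example, NOT the official canonical category table.
SURFACE_TO_CANONICAL = {
    "mobile phone": "phone",
    "cell phone": "phone",
}

def canonical(target: str) -> str:
    """Map a released surface form to its canonical category name."""
    key = target.strip().lower()
    return SURFACE_TO_CANONICAL.get(key, key)

print(canonical("Mobile Phone"))  # -> phone
```

Counting distinct values of `canonical(sample["target"])` then yields canonical-level statistics rather than raw string-surface statistics.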

## 🚀 Quick Start

### Install dependencies

```bash
pip install datasets
```

### Load the dataset

```python
from datasets import load_dataset

dataset = load_dataset("RainChow/PinpointQA")

print(dataset)
print(dataset["train"][0])
```

### Access a specific split

```python
train_set = dataset["train"]
val_set = dataset["validation"]
test_set = dataset["test"]
```

### Save the dataset locally

```python
from datasets import load_dataset

dataset = load_dataset("RainChow/PinpointQA")
dataset.save_to_disk("./PinpointQA_hf")
```
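
Because the released splits are plain JSONL, per-task subsets can also be selected without the `datasets` library. A small sketch over raw JSONL text:

```python
import json

def filter_task(jsonl_text: str, task: str) -> list[dict]:
    """Parse JSONL records and keep only samples whose `task` field matches."""
    records = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]
    return [r for r in records if r["task"] == task]

# Two toy records following the released schema's `task` labels:
demo = '{"id": "a", "task": "NRI"}\n{"id": "b", "task": "TPV"}\n'
print([r["id"] for r in filter_task(demo, "NRI")])  # -> ['a']
```

With the `datasets` library, the equivalent is `dataset["test"].filter(lambda s: s["task"] == "NRI")`.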

## 🗂️ Dataset Organization

```text
PinpointQA/
├── train.jsonl
├── validation.jsonl
├── test.jsonl
├── intermediate_spatial_representations/
│   ├── scene_xxx.json
│   ├── scene_yyy.json
│   └── ...
└── README.md
```

### Released Fields

- `id`: globally unique sample identifier
- `scene_id`: scene identifier
- `source_dataset`: `scannetpp` or `scannet200`
- `local_sample_id`: scene-local sample index
- `task`: short task label (`TPV`, `NRI`, `FSD`, `SSP`)
- `question_type`: original long-form task name
- `instruction`: task instruction
- `question`: user-facing question text
- `choices`: candidate options for NRI, otherwise `null`
- `answer`: ground-truth answer
- `target`: queried small object name used in the released sample text
- `split`: split name

### Example Record

```json
{
  "id": "scene0000_00_0",
  "scene_id": "scene0000_00",
  "source_dataset": "scannet200",
  "local_sample_id": "0",
  "task": "TPV",
  "question_type": "target presence verification",
  "instruction": "Answer only with exactly one word: Yes or No. Do not add any explanation.",
  "question": "In the entire scene, did the coffee kettle appear?",
  "choices": null,
  "answer": "No",
  "target": "coffee kettle",
  "split": "train"
}
```

### Field Notes by Task

- **TPV:** `answer` is `Yes` or `No`
- **NRI:** `choices` contains four candidate objects; `answer` is the correct option text
- **FSD:** `answer` is a natural-language location description
- **SSP:** `answer` is a JSON-formatted string representing structured spatial grounding
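
Because an SSP `answer` is a JSON-formatted string rather than a JSON object, it must be decoded before evaluation. A minimal sketch (the field names in the demo string are illustrative; consult the released files for the actual SSP schema):

```python
import json

def parse_ssp_answer(answer: str) -> dict:
    """Decode an SSP answer string; raise ValueError on malformed output."""
    try:
        parsed = json.loads(answer)
    except json.JSONDecodeError as err:
        raise ValueError(f"SSP answer is not valid JSON: {err}") from err
    if not isinstance(parsed, dict):
        raise ValueError("SSP answer must decode to a JSON object")
    return parsed

# Illustrative answer string; real SSP answers follow the released schema.
demo = '{"target": "phone", "support_surface": "desk"}'
print(parse_ssp_answer(demo)["support_surface"])  # -> desk
```

Wrapping the decode step this way makes it easy to count unparsable model outputs separately from incorrect ones.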

### Intermediate Spatial Representations

The `intermediate_spatial_representations/` folder stores the grounded scene-level representations used to instantiate TPV, NRI, FSD, and SSP.

- Each file corresponds to a scene and is aligned with `scene_id`.
- These files preserve the target-centered local spatial context used for QA construction.
- The released content includes grounded information such as target objects, support surfaces, nearby references, and local spatial relations/distances.

For example, a file such as `scene0000_00.json` corresponds to `scene_id = "scene0000_00"` and provides the grounded scene context from which the released QA samples for that scene are derived.
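
The `scene_id` alignment makes it straightforward to look up the grounded context for any QA sample. A minimal loader sketch (the keys inside each file depend on the released schema, which this sketch does not assume):

```python
import json
from pathlib import Path

def load_scene_context(root: str, scene_id: str) -> dict:
    """Load the intermediate spatial representation for one scene.

    Files are aligned with `scene_id`, e.g. scene0000_00 -> scene0000_00.json.
    """
    path = Path(root) / "intermediate_spatial_representations" / f"{scene_id}.json"
    with path.open(encoding="utf-8") as f:
        return json.load(f)

# Usage (keys of the returned dict follow the released schema):
# ctx = load_scene_context("./PinpointQA", sample["scene_id"])
```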

## 📏 Spatial Semantics

### Support Surface vs. Reference Objects

The **support surface** is the surface that directly supports the target object in the final grounded representation.

- In **NRI**, the support surface is **excluded** from candidate reference options.
- In **FSD** and **SSP**, the support surface is retained as a distinct field because it is often a necessary localization anchor.
- Nearby **references** are additional local objects used to describe or structure the final location of the target.

Depending on scene semantics and released wording, a surface-like object may appear in text fields as a location anchor, but the benchmark definition still treats **support surface** and **reference objects** as functionally different roles.

### Distances

Distances in FSD and SSP are derived from grounded scene geometry and expressed in **centimeters** in the released benchmark outputs.
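
As a concrete illustration of these semantics, nearest-reference selection under the NRI rule can be sketched as follows, assuming 3D object centroids in meters. This is an illustrative computation, not the official grounding pipeline:

```python
import math

def nearest_reference(target_xyz, candidates, support_surface):
    """Pick the candidate closest to the target, excluding the support surface.

    Positions are (x, y, z) in meters; the returned distance is in centimeters,
    matching the unit convention of the released benchmark outputs.
    """
    best_name, best_cm = None, float("inf")
    for name, xyz in candidates.items():
        if name == support_surface:  # NRI excludes the support surface
            continue
        dist_cm = 100.0 * math.dist(target_xyz, xyz)
        if dist_cm < best_cm:
            best_name, best_cm = name, dist_cm
    return best_name, best_cm

refs = {"desk": (0.0, 0.0, 0.0), "lamp": (0.2, 0.0, 0.0), "mug": (0.5, 0.0, 0.0)}
print(nearest_reference((0.1, 0.0, 0.0), refs, support_surface="desk"))
# -> lamp, at roughly 10 cm
```

Note that the desk is the closest object here but is skipped because it is the support surface, which is exactly the NRI exclusion rule described above.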

## 🧱 Source Data Preparation

This repository releases **benchmark annotations** and **intermediate spatial representations** only. It does **not** redistribute the original scene assets or converted videos.

To reproduce video-based experiments, users should first obtain the original assets from the official sources of **ScanNet++** and **ScanNet v2 / ScanNet200**, subject to their respective licenses and access requirements. Note that **ScanNet200** shares the same underlying source data as **ScanNet v2** and mainly differs in annotation parsing and label space, so the video assets used here still come from the **ScanNet v2** RGB-D data.

### ScanNet++

- Official website: [ScanNet++](https://scannetpp.mlsg.cit.tum.de/scannetpp/)
- Obtain access through the official ScanNet++ release.
- Download the scenes required by your target split or evaluation subset.
- Match local assets to the released `scene_id` values.

### ScanNet v2 / ScanNet200

- Official ScanNet website: [ScanNet](http://www.scan-net.org/)
- ScanNet200 benchmark documentation: [ScanNet200 Benchmark Documentation](https://kaldir.vc.in.tum.de/scannet_benchmark/documentation)
- Obtain access to the original data and prepare the scenes required by your pipeline.
- Match local assets to the released `scene_id` values used in this benchmark.

### Video Conversion Tools

The source assets from **ScanNet++** and **ScanNet v2 / ScanNet200** are **not distributed as ready-to-use MP4 videos**. If your pipeline expects standard video files, we provide conversion scripts in the project GitHub repository:

- `tools/convert_mkv_to_mp4.py`
- `tools/convert_sens_to_mp4.py`

Tools folder: [https://github.com/rainchowz/PinpointQA/tree/main/tools](https://github.com/rainchowz/PinpointQA/tree/main/tools)

### Recommended Local Organization

```text
workspace/
├── PinpointQA/
│   ├── train.jsonl
│   ├── validation.jsonl
│   ├── test.jsonl
│   └── intermediate_spatial_representations/
├── raw_data/
│   ├── scannetpp/
│   └── scannet200/
└── videos/
    ├── scene_or_video_1.mp4
    ├── scene_or_video_2.mp4
    └── ...
```

Users may organize local files differently depending on their own training or inference pipeline.
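
After conversion, it is worth verifying that every `scene_id` referenced in a split has a matching local video. A small sketch, assuming videos are named `<scene_id>.mp4` under `videos/` (the naming convention is up to your own pipeline):

```python
import json
from pathlib import Path

def missing_videos(jsonl_path: str, video_dir: str) -> set[str]:
    """Return scene_ids from a split that have no <scene_id>.mp4 locally."""
    scene_ids = set()
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                scene_ids.add(json.loads(line)["scene_id"])
    return {sid for sid in scene_ids
            if not (Path(video_dir) / f"{sid}.mp4").is_file()}

# Usage:
# print(missing_videos("PinpointQA/test.jsonl", "videos"))
```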

## 🧠 Intended Use

PinpointQA is intended for:

- benchmarking multimodal models on small object-centric spatial understanding in indoor videos
- instruction tuning or supervised fine-tuning for grounded spatial QA tasks
- studying progressive capability breakdown from target presence to structured spatial output
- analyzing reference-based localization and spatial grounding behavior in multimodal systems

## 🚫 Out-of-Scope Use

PinpointQA is **not** intended as:

- a general-purpose benchmark for all video understanding abilities
- a substitute for open-world object tracking or dense video captioning benchmarks
- a benchmark for outdoor scenes, unconstrained robotics, or dynamic multi-agent interaction
- a standalone source of original scene assets or video files

## ⚠️ Limitations and Biases

Users should be aware of the following limitations:

- The benchmark is restricted to **indoor scenes**.
- It focuses specifically on **small object-centric localization and spatial expression**, rather than full-scene understanding.
- Released QA pairs are constructed from grounded scene geometry and benchmark logic, so some answer styles may be more regular than unconstrained human language.
- Some target names are preserved as different released **surface forms** even when they map to the same canonical category.
- The repository does not redistribute original videos or raw scene assets, so reproduction requires separate access to the source datasets.

## ✅ Quality Assurance

We use a combination of automatic filtering and manual review to improve dataset accuracy and consistency.

- Invalid labels and background or structural objects are filtered out.
- Only target instances satisfying the predefined small-object vocabulary are retained.
- Questions are generated only for target instances with unique labels within a scene.
- NRI samples contain four distinct candidate options.
- FSD answers are constrained to be human-readable and localization-oriented.
- SSP outputs are required to contain parsable key fields.
- Iterative manual spot-checking is applied to refine templates and QA logic.

## 📜 License and Upstream Data Notice

The **Apache-2.0** license applies to the benchmark annotations and intermediate spatial representations released in this repository.

The original scene assets remain subject to the official terms, licenses, and access conditions of **ScanNet++** and **ScanNet v2 / ScanNet200**. Users are responsible for obtaining and using upstream source data in compliance with the corresponding original terms.

## 🏆 Performance Snapshot

The table below shows a **representative subset** of overall benchmark results. We report averaged scores across TPV, NRI, FSD, and SSP, where **Avg Micro** is the arithmetic mean of task-level micro scores and **Avg Macro** is the arithmetic mean of task-level macro scores.

| Rank | Model | Avg Micro | Avg Macro |
|---|---|---:|---:|
| 1 | Qwen3-VL-8B-Instruct-SFT | 0.48 | 0.49 |
| 2 | InternVL3.5-8B-Instruct-SFT | 0.45 | 0.45 |
| 3 | Kimi K2.5 | 0.42 | 0.44 |
| 4 | Qwen3-VL-8B-Instruct | 0.39 | 0.40 |
| 5 | GPT-5.4 | 0.38 | 0.40 |

For full evaluation details, please refer to the paper and project page.
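
The averaging convention above is a plain arithmetic mean over the four task-level scores. A sketch (the scores below are placeholders, not results from the paper):

```python
def avg_score(task_scores: dict[str, float]) -> float:
    """Arithmetic mean over the four task-level scores (TPV, NRI, FSD, SSP)."""
    expected = {"TPV", "NRI", "FSD", "SSP"}
    if set(task_scores) != expected:
        raise ValueError(f"expected scores for {sorted(expected)}")
    return sum(task_scores.values()) / len(task_scores)

# Placeholder micro scores, NOT taken from the benchmark results:
print(round(avg_score({"TPV": 0.6, "NRI": 0.4, "FSD": 0.3, "SSP": 0.3}), 2))
```

The same function applies to micro and macro scores; only the per-task inputs differ.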

## 🔗 Resources

- **Project Page:** [PinpointQA Project Page](https://rainchowz.github.io/PinpointQA)
- **GitHub Repository:** [https://github.com/rainchowz/PinpointQA](https://github.com/rainchowz/PinpointQA)
- **Discussions:** [Hugging Face Discussions](https://huggingface.co/datasets/RainChow/PinpointQA/discussions)
- **Contact:** [zhouzy1622@mails.jlu.edu.cn](mailto:zhouzy1622@mails.jlu.edu.cn)

## 📚 Citation

If you use PinpointQA, please cite:

```bibtex
@misc{zhou2026pinpointqa,
  title={PinpointQA: A Dataset and Benchmark for Small Object-Centric Spatial Understanding in Indoor Videos},
  author={Zhiyu Zhou and Peilin Liu and Ruoxuan Zhang and Luyang Zhang and Cheng Zhang and Hongxia Xie and Wen-Huang Cheng},
  year={2026},
  note={ACM Multimedia 2026 Dataset Track submission / project release},
  url={https://huggingface.co/datasets/RainChow/PinpointQA}
}
```