
Dataset Card for Spatial Mosaic VQA

This repository contains Spatial Mosaic VQA annotations for evaluating and training vision-language models on multi-view spatial reasoning tasks.

Dataset Details

Dataset Description

Spatial Mosaic VQA is a visual question answering dataset focused on spatial understanding across multiple image frames. Its tasks cover object existence, object counting, relative spatial relations, object localization, distance estimation, object size, object attributes, and best-view selection.

The released archive contains annotation JSON files only; it does not redistribute original ScanNet++ or Waymo imagery. Some entries reference ScanNet++ and Waymo scene identifiers, and users must obtain any required source data from the official dataset providers under their respective terms.

  • Curated by: Anonymous submission authors
  • Language(s) (NLP): English
  • License: CC BY-NC 4.0

Uses

Direct Use

This dataset is intended for non-commercial research on visual question answering, spatial reasoning, multi-view understanding, and evaluation of vision-language models. It can be used to train or evaluate models that answer questions grounded in multiple frames and reason about object visibility, occlusion, position, count, distance, and size.

Out-of-Scope Use

This dataset should not be used for commercial purposes under the released license. It should not be used to redistribute or reconstruct restricted source datasets. Users should not treat the annotations as a substitute for the original ScanNet++ or Waymo datasets, and must comply with the original terms of those datasets when using referenced source data.

Dataset Structure

The archive contains four JSON files:

spatial_mosaic_vqa/
├── train/
│   ├── spatialmosaic_indoor_train.json
│   └── spatialmosaic_outdoor_train.json
└── test/
    ├── spatialmosaic_indoor_test.json
    └── spatialmosaic_outdoor_test.json
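The archive can be unpacked with the standard library; a minimal sketch, assuming the tar.gz has been downloaded to the working directory (the path and destination are placeholders):

```python
import os
import tarfile

def extract_archive(archive_path: str, dest: str = ".") -> None:
    """Unpack the released tar.gz, yielding the train/test tree shown above."""
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(dest)  # on Python 3.12+, consider filter="data" for safer extraction

# Hypothetical local download location; adjust to where you saved the archive.
if os.path.exists("spatial_mosaic_vqa.tar.gz"):
    extract_archive("spatial_mosaic_vqa.tar.gz")
    print(sorted(os.listdir("spatial_mosaic_vqa")))  # expect 'test' and 'train'
```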

Dataset sizes:

Split  File                              Source reference  Entries
Train  spatialmosaic_indoor_train.json   ScanNet++         234,979
Train  spatialmosaic_outdoor_train.json  Waymo              28,845
Test   spatialmosaic_indoor_test.json    ScanNet++         126,795
Test   spatialmosaic_outdoor_test.json   Waymo              12,688

Total entries: 403,307.

Training files use a conversation-style format with fields such as id, data_source, scene_name, question_type, frames, and conversations.

Test files use an evaluation-style multiple-choice format with fields such as dataset, scene_name, question_type, frames, question, options, mc_answer, and task-specific metadata such as visibility, occlusion, overlap, and bounding-box information when available.
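As a sketch of working with the training format, the loader below assumes each file is a JSON array of entry objects with the fields listed above (`id`, `scene_name`, `question_type`, `conversations`); the local path is a placeholder:

```python
import json
import os

def load_entries(path: str) -> list:
    """Load one annotation file; each file is assumed to be a JSON array of entries."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

# Hypothetical path after extracting the archive next to this script.
train_path = "spatial_mosaic_vqa/train/spatialmosaic_indoor_train.json"
if os.path.exists(train_path):
    entries = load_entries(train_path)
    first = entries[0]
    print(first["id"], first["scene_name"], first["question_type"])
    for turn in first["conversations"]:  # dialogue turns: {"from": ..., "value": ...}
        print(f'{turn["from"]}: {turn["value"][:80]}')
```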

Archive checksum:

File                       SHA256
spatial_mosaic_vqa.tar.gz  34b8f822076098d9f6a0249ecea035fbcf930b46bf0eb1187f80cfd28d30d587
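The checksum can be verified before extraction; a minimal sketch using the standard library (the archive path is assumed to be local):

```python
import hashlib
import os

EXPECTED_SHA256 = "34b8f822076098d9f6a0249ecea035fbcf930b46bf0eb1187f80cfd28d30d587"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large archives need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

archive = "spatial_mosaic_vqa.tar.gz"  # hypothetical local download location
if os.path.exists(archive):
    assert sha256_of(archive) == EXPECTED_SHA256, "checksum mismatch -- re-download the archive"
```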

Dataset Creation

Curation Rationale

Spatial Mosaic VQA was created to support research on multi-view spatial reasoning in vision-language models. The dataset emphasizes questions that require models to compare visual evidence across multiple frames rather than relying on a single image.

Source Data

The annotations reference indoor scenes from ScanNet++ and outdoor scenes from Waymo. The released files contain only VQA annotations and metadata references. Original source data must be obtained separately from the official dataset providers.

Data Collection and Processing

Questions were created for multi-view visual reasoning tasks using scene-level and frame-level metadata. The dataset includes training annotations in a dialogue format and test annotations in a multiple-choice format for evaluation.

Who are the source data producers?

The source visual data referenced by the annotations originates from ScanNet++ and Waymo. This repository does not redistribute the original source data.

Personal and Sensitive Information

The released archive was checked for direct personal identifiers, local filesystem paths, account names, emails, URLs, API keys, and related metadata. The tar archive was also anonymized so that local owner/group metadata is not included. The repository does not intentionally contain personal or sensitive information.

Bias, Risks, and Limitations

The dataset inherits limitations from the source datasets and from the annotation-generation process. Scene coverage, object categories, camera viewpoints, and environmental conditions may be unevenly distributed. Model performance on this dataset may not generalize to all real-world spatial reasoning settings.

The annotations reference source dataset scene identifiers. Users are responsible for ensuring that their use of any corresponding original data complies with the ScanNet++ and Waymo terms of use.

Recommendations

Users should evaluate models across indoor and outdoor splits separately and report any preprocessing assumptions. Users should also verify compliance with the original dataset licenses before combining these annotations with source images.
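The per-split reporting suggested above could be sketched as follows; the field names (`dataset`, `question_type`, `mc_answer`) follow the test-file schema described earlier, while the `predictions` mapping from entry index to chosen option is hypothetical:

```python
from collections import defaultdict

def accuracy_by_split(entries: list, predictions: dict) -> dict:
    """entries: test-file annotations; predictions: {entry index: chosen option}.
    Returns accuracy keyed by (dataset, question_type)."""
    correct, total = defaultdict(int), defaultdict(int)
    for i, entry in enumerate(entries):
        key = (entry["dataset"], entry["question_type"])
        total[key] += 1
        if predictions.get(i) == entry["mc_answer"]:
            correct[key] += 1
    return {key: correct[key] / total[key] for key in total}
```

Keeping the `dataset` field in the key keeps indoor (ScanNet++) and outdoor (Waymo) numbers separate rather than averaging them together.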

Dataset Card Contact

Anonymous submission authors.
