---
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
tags:
- satellite-imagery
- spatial-reasoning
- benchmark
- quantitative-reasoning
- VLM
- language-understanding
size_categories:
- 1K<n<10K
pretty_name: SQuID
---
# SQuID: Satellite Quantitative Intelligence Dataset

A comprehensive benchmark for evaluating quantitative spatial reasoning in Vision-Language Models on satellite imagery.

## Related Resources

- **Code repository**: https://github.com/PeterAMassih/qvlm-squid
- **Paper (arXiv)**: https://arxiv.org/abs/2601.13401
## Dataset Overview

- **2000 questions** testing spatial reasoning on satellite imagery
- **587 unique images** drawn from four source datasets
- **1950 auto-labeled questions** derived from segmentation masks (DeepGlobe, EarthVQA, Solar Panels)
- **50 human-annotated questions** on NAIP imagery with consensus answers
- **1577 questions** include human-agreement ranges for numeric answers
- **3 difficulty tiers**: Basic (710), Spatial (616), Complex (674)
- **3 resolution levels**: 0.3m, 0.5m, 1.0m GSD
## Human Annotation & Agreement Methodology

### Human Annotation Process

- **50 questions** on NAIP 1.0m GSD imagery were annotated by humans
- **10 annotators per question**, yielding 500 total annotations
- **Answer aggregation** (sketched below):
  - **Numeric questions**: the MEDIAN of all responses, for robustness to outliers
  - **Categorical questions** (connected/fragmented): MAJORITY voting
  - **Binary questions**: yes/no converted to 1/0, then majority voting
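A minimal sketch of these aggregation rules (the `aggregate` helper is illustrative, not part of the released tooling):

```python
import statistics

def aggregate(responses, kind):
    """Collapse one question's annotator responses into a single answer."""
    if kind == 'numeric':
        return statistics.median(responses)      # robust to outlier annotators
    if kind == 'binary':
        votes = [1 if r == 'yes' else 0 for r in responses]
        return int(sum(votes) * 2 > len(votes))  # majority of yes (1) vs no (0)
    return statistics.mode(responses)            # categorical majority vote

print(aggregate([12, 14, 13, 12, 15, 13, 12, 14, 13, 13], 'numeric'))  # 13.0
```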
### Human Agreement Quantification

From the 500 human annotations, we computed the Median Absolute Deviation (MAD) of annotator responses, averaged across the questions of each type:

- **Percentage questions**: MAD = ±1.74 percentage points
- **Proximity questions**: MAD = ±2.25 percentage points
- **Count questions**: normalized MADc = 0.19 (proportional to count magnitude)
For count questions, we use a normalized MAD (MADc) that makes the acceptable range proportional to the count value:

```
MADc = median(|Xi - median(X)|) / median(X) = 0.19
```
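The corresponding computation, as a sketch (`responses` holds the 10 annotator values for one question):

```python
import statistics

def mad(responses):
    """Median Absolute Deviation of one question's annotator responses."""
    med = statistics.median(responses)
    return statistics.median(abs(x - med) for x in responses)

def normalized_mad(responses):
    """MADc: the MAD divided by the median answer, used for count questions."""
    return mad(responses) / statistics.median(responses)
```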
### Acceptable Range Calculation

These MAD values were applied to ALL numeric questions in the benchmark to define acceptable ranges:

```python
import math

def acceptable_range(question_type, question, answer):
    """Return (lower, upper) bounds for a numeric ground-truth answer."""
    # Percentage questions: absolute deviation of ±1.74 points
    if question_type == 'percentage':
        return max(0.0, answer - 1.74), min(100.0, answer + 1.74)
    # Count questions: proportional deviation,
    # range(C) = [C - max(1, C × MADc), C + max(1, C × MADc)]
    if question_type in ['count', 'building_proximity', 'building_flood_risk',
                         'building_fire_risk', 'connectivity']:
        MADc = 0.19
        dr = max(1, answer * MADc)  # at least ±1 deviation
        return max(0, math.floor(answer - dr)), math.ceil(answer + dr)
    # Proximity percentage questions: absolute deviation of ±2.25 points
    if 'within' in question and 'm of' in question:
        return max(0.0, answer - 2.25), min(100.0, answer + 2.25)
    return None  # no precomputed range: exact match required
```
**Example count ranges with MADc = 0.19:**

- C=5 → range [4, 6]
- C=10 → range [8, 12]
- C=50 → range [40, 60]
- C=100 → range [81, 119]
Special cases:

- Zero values have no range (exact match required)
- Binary/fragmentation questions have no range (exact match)
- Ranges are capped at valid bounds (0-100 for percentages, ≥0 for counts)
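The example ranges above follow directly from the `acceptable_range` helper defined earlier:

```python
for c in [5, 10, 50, 100]:
    print(c, acceptable_range('count', '', float(c)))
# 5 (4, 6)
# 10 (8, 12)
# 50 (40, 60)
# 100 (81, 119)
```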
## Question Types

The benchmark includes 24 distinct question types organized into three tiers:

### Tier 1: Basic Questions (710 questions)

- **percentage**: Coverage percentage of a land use class
- **count**: Number of separate regions or objects
- **size**: Area measurements of regions
- **total_area**: Total area covered by a class
- **binary_comparison**: Comparing quantities between two classes
- **binary_presence**: Checking if a class exists
- **binary_threshold**: Testing if values exceed thresholds
- **binary_multiple**: Checking for multiple instances

### Tier 2: Spatial Questions (616 questions)

- **proximity_percentage**: Percentage of one class near another
- **proximity_area**: Area of one class near another
- **binary_proximity**: Presence of one class near another
- **building_proximity**: Number of buildings near other features
- **building_flood_risk**: Buildings at flood risk (near water)
- **building_fire_risk**: Buildings at fire risk (near forest)
- **connectivity**: Counting isolated patches by size
- **fragmentation**: Assessing whether regions are connected or fragmented
- **power_calculation**: Calculating solar panel power output

### Tier 3: Complex Questions (674 questions)

- **complex_multi_condition**: Areas meeting multiple spatial criteria
- **complex_urban_flood_risk**: Urban areas at flood risk (near water)
- **complex_urban_fire_risk**: Urban areas at fire risk (near forest)
- **complex_agriculture_water_access**: Agricultural land with irrigation potential
- **complex_size_filter**: Filtering regions by size thresholds
- **complex_average**: Average sizes of regions
## Loading the Dataset

```python
from datasets import load_dataset

# Load the full benchmark (a single 'train' split)
dataset = load_dataset("PeterAM4/SQuID")

# Access a sample
sample = dataset['train'][0]
image = sample['image']        # PIL Image
question = sample['question']
answer = sample['answer']      # ground-truth answer (converted below)
qtype = sample['type']

# Convert the answer based on the question type
if qtype in ['percentage', 'count', 'proximity_percentage', 'proximity_area',
             'building_proximity', 'building_flood_risk', 'building_fire_risk',
             'connectivity', 'size', 'total_area', 'power_calculation'] or 'complex' in qtype:
    answer_value = float(answer)
elif 'binary' in qtype:
    answer_value = int(answer)  # 0 or 1
elif qtype == 'fragmentation':
    answer_value = answer       # "connected" or "fragmented"
```
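The split can also be sliced by tier or type with the standard `datasets` filter API, e.g.:

```python
# Keep only the Complex tier (tier == 3)
complex_qs = dataset['train'].filter(lambda s: s['tier'] == 3)
print(len(complex_qs))  # expected: 674
```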
## Fields

- **id**: Question identifier (e.g., "SQuID_0001")
- **image**: Satellite image (decoded to a PIL Image when loaded with `datasets`)
- **question**: Question text with GSD notation
- **answer**: Ground truth answer
- **type**: One of 24 question types
- **tier**: Difficulty level (1=Basic, 2=Spatial, 3=Complex)
- **gsd**: Ground sampling distance in meters
- **acceptable_range**: [lower, upper] bounds for numeric questions (when applicable)
## Evaluation

For numeric questions, check whether the prediction falls within the acceptable range:

```python
def evaluate(prediction, sample):
    acceptable_range = sample.get('acceptable_range')
    if acceptable_range:
        # Numeric question: check against the human-agreement range
        lower, upper = acceptable_range
        return lower <= float(prediction) <= upper
    # Non-numeric or zero-valued question: exact match required
    return str(prediction).lower() == str(sample['answer']).lower()
```

The acceptable ranges represent the natural variation in human perception for spatial measurements.
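A full evaluation loop then reduces to a few lines (`model.predict` is a hypothetical stand-in for the VLM under test):

```python
# Aggregate accuracy over the whole benchmark
split = dataset['train']
correct = sum(evaluate(model.predict(s['image'], s['question']), s) for s in split)
print(f"Accuracy: {correct / len(split):.1%}")
```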
## Dataset Distribution

### By Tier

- **Tier 1 (Basic)**: 710 questions (35.5%)
- **Tier 2 (Spatial)**: 616 questions (30.8%)
- **Tier 3 (Complex)**: 674 questions (33.7%)

### Top Question Types

- **complex_multi_condition**: 490 questions (24.5%)
- **count**: 178 questions (8.9%)
- **binary_comparison**: 172 questions (8.6%)
- **size**: 166 questions (8.3%)
- **percentage**: 157 questions (7.8%)
- **proximity_percentage**: 123 questions (6.2%)
- **binary_proximity**: 122 questions (6.1%)
- **proximity_area**: 107 questions (5.3%)
- **connectivity**: 104 questions (5.2%)
- **fragmentation**: 98 questions (4.9%)
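These counts can be re-derived from the loaded split:

```python
from collections import Counter

# Tally question types across the benchmark
counts = Counter(s['type'] for s in dataset['train'])
for qtype, n in counts.most_common(10):
    print(f"{qtype}: {n} ({n / len(dataset['train']):.1%})")
```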
### By Source

- **DeepGlobe (0.5m GSD)**: 612 questions, 174 images - land use classification masks
- **EarthVQA (0.3m GSD)**: 1241 questions, 364 images - building detection and land cover masks
- **Solar Panels (0.3m GSD)**: 97 questions, 35 images - solar panel segmentation masks
- **NAIP (1.0m GSD)**: 50 questions, 14 images - human-annotated diverse scenes
## Statistics Summary

- **Zero-valued answers**: 102 (5.1%)
- **Questions with ranges**: 1577 (78.8%)
- **Average questions per image**: 3.4

## Notes

- Questions explicitly state minimum area thresholds (e.g., "ignore patches smaller than 0.125 hectares")
- Zero-valued answers indicate absence of features (intentionally included for robustness testing)
- The benchmark tests both presence and absence of spatial features to avoid positive-only bias
- Human agreement ranges allow for natural variation in spatial perception and counting
- All measurements use metric units based on the specified GSD (Ground Sampling Distance); see the conversion sketch below
- Count ranges use proportional MADc (0.19) so larger counts have wider acceptable ranges
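For reference, the pixel-count to ground-area conversion implied by a given GSD (a generic sketch, not part of the dataset tooling):

```python
def pixel_area_m2(num_pixels, gsd_m):
    """Ground area covered by num_pixels square pixels at a given GSD."""
    return num_pixels * gsd_m ** 2

# 10,000 pixels at 0.5m GSD cover 2,500 m² (0.25 hectares)
print(pixel_area_m2(10_000, 0.5))  # 2500.0
```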
## Source Datasets & Attribution

SQuID is constructed from publicly available remote-sensing datasets. We use only images from published validation or test splits and comply with the original dataset licenses.

- **DeepGlobe**
  Ilke Demir et al., *DeepGlobe 2018: A Challenge to Parse the Earth through Satellite Images*, CVPR Workshops 2018.
  Source: https://deepglobe.org/
- **EarthVQA**
  Junjue Wang et al., *EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering*, AAAI 2024.
  Source: https://github.com/Junjue-Wang/EarthVQA
- **Photovoltaic (Solar Panels) Dataset**
  H. Jiang et al., *Multi-resolution dataset for photovoltaic panel segmentation from satellite and aerial imagery*, Earth System Science Data, 2021.
  Source: https://essd.copernicus.org/articles/13/5389/2021/
- **NAIP Imagery**
  U.S. Department of Agriculture, *National Agriculture Imagery Program (NAIP)*; imagery distributed by the U.S. Geological Survey.
  Source: https://www.usgs.gov/core-science-systems/national-geospatial-program/national-agriculture-imagery-program

All derived annotations, questions, and acceptable answer ranges introduced in SQuID are released under **CC BY 4.0**.
## Citation

If you use this dataset, please cite:

```bibtex
@misc{massih2026reasoningpixellevelprecisionqvlm,
  title={Reasoning with Pixel-level Precision: QVLM Architecture and SQuID Dataset for Quantitative Geospatial Analytics},
  author={Peter A. Massih and Eric Cosatto},
  year={2026},
  eprint={2601.13401},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.13401},
}
```