---
license: cc-by-4.0
task_categories:
  - visual-question-answering
  - object-detection
  - image-segmentation
  - text-generation
  - zero-shot-image-classification
language:
  - en
tags:
  - image
  - text
  - geospatial
  - remote-sensing
  - earth-observation
  - spatial-understanding
  - vision-language-model
  - cadastral
  - vector-data
  - aerial-imagery
  - parquet
  - datasets
  - geopandas
  - arxiv:2603.12345
pretty_name: GroundSet
size_categories:
  - 100K<n<1M
---

# GroundSet: A Cadastral-Grounded Dataset for Spatial Understanding with Vector Data


GroundSet is a large-scale Earth Observation dataset grounded in verifiable cadastral vector data, designed to bridge the gap in fine-grained spatial understanding for modern Multimodal Models.

The dataset is built upon high-resolution (20 cm) optical aerial orthophotos and legally verified vector data provided by the French national mapping agency (IGN). It features high semantic richness and geometric precision, enabling robust model training for complex geospatial tasks.

## 📊 Key Statistics

- **Pretraining Scale:** 3.8 million annotated objects across 510k high-resolution images.
- **Finetuning Scale:** 880k objects across 60k images, with 1.8M instruction queries.
- **Semantic Granularity:** 135 highly specific semantic categories (e.g., power plants, heritage sites, and crops).
- **Supported Tasks:** Scene captioning, localized classification, object detection, multi-class detection, referring expression comprehension (REC), segmentation, and Visual Question Answering (VQA).

*GroundSet overview showing finetuning tasks*


## 🗂️ Dataset Structure

The repository is organized into two primary components: a pretraining dataset featuring raw geometric annotations (including bounding boxes, lines and polygons), and a supervised fine-tuning (SFT) dataset specifically tailored for Multimodal Large Language Models (MLLMs) using an instruction-based format. For efficiency, all image and metadata pairs are stored in Parquet files.

```
GroundSet/
│
├── pretraining/        <-- Parquet files containing images and raw JSON annotations
├── finetuning/         <-- Parquet files containing images and raw JSON annotations
└── instructions/
    ├── train/          <-- Single JSON file with training instructions
    └── test/           <-- JSONL files with testing instructions
```

### 1. Pretraining Split

The pretraining dataset contains the full-scale release. It comprises 510,483 patches containing 3,829,755 objects across 135 unique categories. This release is intended to support broader research beyond the standard MLLM instruction tuning paradigm.

⚠️ Warning: To prevent data leakage, the pretraining dataset does not contain any of the samples used in the finetuning subset.

### 2. Finetuning Split

This subset is tailored specifically for supervised fine-tuning and contains the visual data (60k images) and raw annotations corresponding to the instruction sets. The visual data is partitioned into 45,988 images for training and 14,000 images for testing.

💡 Note: This split houses the actual image files referenced by the Q&A pairs in the Instructions subset.

### 3. Instructions

The instructions subset contains the actual textual question-answer pairs required to train and evaluate vision-language models on the images from the Finetuning split:

- **Train:** A single JSON file containing a total of 1,845,076 instructions used to train our baseline.
- **Test:** JSONL files containing 72,597 evaluation instructions.
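Since the test instructions are stored as JSONL (one JSON object per line), they can be parsed with the standard library alone. The sketch below uses an in-memory string standing in for a file; the field names (`image`, `question`, `answer`) are hypothetical, so inspect the released files for the actual schema:

```python
import json

# Two example records mimicking a JSONL instruction file.
# NOTE: the field names ("image", "question", "answer") are hypothetical;
# check the released test files for the actual schema.
jsonl_text = (
    '{"image": "patch_0001.png", "question": "How many buildings are visible?", "answer": "3"}\n'
    '{"image": "patch_0002.png", "question": "Locate the power plant.", "answer": "[112, 80, 430, 390]"}\n'
)

# JSONL is parsed line by line: one json.loads call per record.
records = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]

for rec in records:
    print(rec["image"], "->", rec["question"])
```

With a real file, the same loop applies line by line over the open file handle instead of `jsonl_text.splitlines()`.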

## 💾 Data Format

The dataset uses Parquet files to bundle the images and their corresponding metadata. Each row in the Parquet files contains the following fields:

- **image:** The PIL Image object (decoded from the raw bytes). The images are patches with a size of 672x672 pixels, corresponding to a spatial extent of approximately 134x134 meters.
- **file_name:** The original filename of the aerial patch.
- **json:** A stringified JSON payload containing the raw metadata and geometric annotations. Annotations can include Horizontal Bounding Boxes (HBB), Oriented Bounding Boxes (OBB) and polygonal segmentation masks.
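As a minimal sketch of working with these fields, the snippet below draws an HBB onto a blank stand-in patch with Pillow. The annotation dict is synthetic and its keys (`category`, `hbb`) are assumptions rather than the dataset's actual payload schema; in practice you would decode the `json` column first. Note that 672 px spanning ~134 m gives a ground sampling distance of roughly 0.2 m/px, consistent with the 20 cm orthophotos.

```python
from PIL import Image, ImageDraw

# Blank stand-in for a 672x672 patch; real patches come from the Parquet rows.
img = Image.new("RGB", (672, 672), "white")

# Synthetic annotation -- these keys are hypothetical, not the actual schema.
ann = {"category": "building", "hbb": [100, 120, 300, 340]}  # [x_min, y_min, x_max, y_max]

# Draw the horizontal bounding box in red.
draw = ImageDraw.Draw(img)
draw.rectangle(ann["hbb"], outline=(255, 0, 0), width=3)

# 672 px covering ~134 m: ground sampling distance of about 0.2 m per pixel.
gsd = 134 / 672
print(f"{ann['category']}: ~{gsd:.2f} m/px")
```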

The instruction files store the question-answer pairs directly in standard JSON or JSONL format.


## 🔍 Qualitative Samples

The cadastral vector data provides exact boundaries for diverse and highly specific infrastructure across varied environments (urban, rural, alpine, maritime).

*Qualitative samples from the GroundSet dataset showing semantic and geometric annotations*


## 💻 Usage Example

You can easily load and parse the dataset using the Hugging Face datasets library. Because the json column is stored as a string, you can parse it into a Python dictionary using the standard json library.

```python
from datasets import load_dataset
import json

# 1. Load the finetuning dataset split
dataset = load_dataset("RogerFerrod/GroundSet", data_dir="finetuning", split="train")

# 2. Inspect a single sample
sample = dataset[0]

# The image is immediately accessible as a PIL object
image = sample["image"]
file_name = sample["file_name"]

# 3. Parse the JSON string into a dictionary
annotations = json.loads(sample["json"])

print(f"File: {file_name}")
print(f"Raw Annotations: {annotations}")

# --- Loading Instructions ---
# You can load the instructions similarly:
train_instructions = load_dataset("RogerFerrod/GroundSet", data_dir="instructions/train", split="train")
print(train_instructions[0])
```

## 📝 Citation

If you utilize this dataset in your research, please consider citing the original work:

```bibtex
@article{groundset,
  title={GroundSet: A Cadastral-Grounded Dataset for Spatial Understanding with Vector Data},
  author={Ferrod, Roger and Lecene, Ma{\"e}l and Sapkota, Krishna and Leifman, George and Silverman, Vered and Beryozkin, Genady and Lobry, Sylvain},
  journal={arXiv preprint},
  year={2026}
}
```

## 🙌 Acknowledgements

This work was supported by Google under a research collaboration agreement with Université Paris Cité. The underlying GroundSet dataset leverages official data from IGN (French National Institute of Geographic and Forest Information), specifically BD ORTHO® and BD TOPO®, released under Open Licence 2.0.