---
license: cc-by-4.0
task_categories:
- visual-question-answering
- object-detection
- image-segmentation
- text-generation
- zero-shot-image-classification
language:
- en
tags:
- image
- text
- geospatial
- remote-sensing
- earth-observation
- spatial-understanding
- vision-language-model
- cadastral
- vector-data
- aerial-imagery
- parquet
- datasets
- geopandas
- arxiv:2603.12345
pretty_name: GroundSet
size_categories:
- 100K<n<1M
---

# GroundSet

### 1. Pretraining Split

> **⚠️ Warning:** To prevent data leakage, the pretraining dataset does not contain any of the samples used in the finetuning subset.

### 2. Finetuning Split

This subset is tailored specifically for supervised fine-tuning and contains the visual data (60k images) and the raw annotations corresponding to the instruction sets. We partition the visual data of this finetuning dataset into 14,000 images for testing and 45,988 images for training.

> **💡 Note:** This split houses the actual image files referenced by the Q&A pairs in the Instructions subset.

### 3. Instructions

The Instructions subset contains the textual question-answer pairs required to train and evaluate vision-language models on the images from the Finetuning split:

* **Train:** A single JSON file containing 1,845,076 instructions used to train our baseline.
* **Test:** JSONL files containing 72,597 evaluation instructions.

---

## 💾 Data Format

The dataset uses Parquet files to bundle the images and their corresponding metadata. Each row in the Parquet files contains the following fields:

* `image`: The PIL Image object (decoded from the raw bytes). The images are patches of 672x672 pixels, corresponding to a spatial extent of approximately 134x134 meters.
* `file_name`: The original filename of the aerial patch.
* `json`: A stringified JSON payload containing the raw metadata and geometric annotations. Annotations can include Horizontal Bounding Boxes (HBB), Oriented Bounding Boxes (OBB), and polygonal segmentation masks.
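The stated patch geometry pins down an approximate ground sampling distance, which is handy when converting pixel-space annotations into metric sizes. A minimal sketch (the `[xmin, ymin, xmax, ymax]` HBB layout and the helper name are illustrative assumptions, not the dataset's documented schema — decode a sample's `json` field to see the real keys):

```python
# Approximate ground sampling distance implied by the patch geometry above:
# a 672x672-pixel patch spans roughly 134x134 meters.
GSD_M_PER_PX = 134.0 / 672  # ~0.2 m per pixel

def hbb_metric_size(hbb):
    """Convert a horizontal bounding box, assumed here to be laid out as
    [xmin, ymin, xmax, ymax] in pixel coordinates (hypothetical layout),
    into its approximate (width, height) in meters."""
    xmin, ymin, xmax, ymax = hbb
    return ((xmax - xmin) * GSD_M_PER_PX, (ymax - ymin) * GSD_M_PER_PX)

# A 100x50-pixel box covers roughly 20 x 10 meters on the ground.
width_m, height_m = hbb_metric_size([10, 10, 110, 60])
print(f"{width_m:.1f} m x {height_m:.1f} m")
```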
The instruction files store the question-answer pairs directly in standard JSON or JSONL format.

---

## 🔍 Qualitative Samples

The cadastral vector data provides exact boundaries for diverse and highly specific infrastructure across varied environments (urban, rural, alpine, maritime).

![Qualitative samples from the GroundSet dataset showing semantic and geometric annotations](./docs/examples.jpg)
*Qualitative samples from the GroundSet dataset showing semantic and geometric annotations*

---

## 💻 Usage Example

You can load and parse the dataset with the Hugging Face `datasets` library. Because the `json` column is stored as a string, parse it into a Python dictionary using the standard `json` library.

```python
from datasets import load_dataset
import json

# 1. Load the finetuning dataset split
dataset = load_dataset("RogerFerrod/GroundSet", data_dir="finetuning", split="train")

# 2. Inspect a single sample
sample = dataset[0]

# The image is immediately accessible as a PIL object
image = sample["image"]
file_name = sample["file_name"]

# 3. Parse the JSON string into a dictionary
annotations = json.loads(sample["json"])

print(f"File: {file_name}")
print(f"Raw Annotations: {annotations}")

# --- Loading Instructions ---
# You can load the instructions similarly:
train_instructions = load_dataset("RogerFerrod/GroundSet", data_dir="instructions/train", split="train")
print(train_instructions[0])
```

## 📝 Citation

If you use this dataset in your research, please consider citing the original work:

```bibtex
@article{groundset,
  title={GroundSet: A Cadastral-Grounded Dataset for Spatial Understanding with Vector Data},
  author={Ferrod, Roger and Lecene, Ma{\"e}l and Sapkota, Krishna and Leifman, George and Silverman, Vered and Beryozkin, Genady and Lobry, Sylvain},
  journal={arXiv preprint},
  year={2026}
}
```

## 🙌 Acknowledgements

This work was supported by Google under a research collaboration agreement with Université Paris Cité.
The underlying GroundSet dataset leverages official data from IGN (French National Institute of Geographic and Forest Information), specifically BD ORTHOยฎ and BD TOPOยฎ, released under Open Licence 2.0.