RogerFerrod committed on
Commit a2392da · verified · 1 Parent(s): 3860f3d

Update README.md

Files changed (1):
  1. README.md +151 -28

README.md CHANGED
@@ -1,28 +1,151 @@
- ---
- license: cc-by-4.0
- configs:
- - config_name: finetuning
-   data_files:
-   - split: test
-     path: finetuning/test-*
-   - split: train
-     path: finetuning/train-*
- dataset_info:
-   config_name: finetuning
-   features:
-   - name: image
-     dtype: image
-   - name: file_name
-     dtype: string
-   - name: json
-     dtype: string
-   splits:
-   - name: test
-     num_bytes: 9583122507.0
-     num_examples: 14000
-   - name: train
-     num_bytes: 31466597491.292
-     num_examples: 45988
-   download_size: 40936826619
-   dataset_size: 41049719998.292
- ---
+ ---
+ license: cc-by-4.0
+ task_categories:
+ - visual-question-answering
+ - object-detection
+ - image-segmentation
+ - text-generation
+ - zero-shot-image-classification
+ language:
+ - en
+ tags:
+ - image
+ - text
+ - geospatial
+ - remote-sensing
+ - earth-observation
+ - spatial-understanding
+ - vision-language-model
+ - cadastral
+ - vector-data
+ - aerial-imagery
+ - parquet
+ - datasets
+ - geopandas
+ # ArXiv Link
+ #- arxiv:2603.12345
+ pretty_name: GroundSet
+ size_categories:
+ - 100K<n<1M
+ ---
+
+ # GroundSet: A Cadastral-Grounded Dataset for Spatial Understanding with Vector Data
+
+ [comment]: [![Paper](https://img.shields.io/badge/arXiv-Paper-b31b1b.svg)](https://arxiv.org/abs/YOUR_ARXIV_ID)
+ [comment]: [![Code](https://img.shields.io/badge/GitHub-Code-181717.svg)](https://github.com/YOUR_GITHUB_REPO)
+ [comment]: [![Model](https://img.shields.io/badge/🤗_Hub-Models-ffd21e.svg)](https://huggingface.co/YOUR_MODEL_LINK)
+
+ GroundSet is a large-scale Earth Observation dataset grounded in verifiable cadastral vector data, designed to bridge the gap in fine-grained spatial understanding for modern multimodal models.
+
+ The dataset is built upon high-resolution (20 cm) optical aerial orthophotos and legally verified vector data provided by the French national mapping agency (IGN). It features high semantic richness and geometric precision, enabling robust model training for complex geospatial tasks.
+
+ ### 📊 Key Statistics
+ * **Pretraining Scale:** 3.8 million annotated objects across 510k high-resolution images.
+ * **Finetuning Scale:** 880k objects across 60k images, with 1.8M instruction queries.
+ * **Semantic Granularity:** 135 highly specific semantic categories (e.g., power plants, heritage sites, crops).
+ * **Supported Tasks:** Scene captioning, localized classification, object detection, multi-class detection, referring expression comprehension (REC), segmentation, and Visual Question Answering (VQA).
+
+ ![GroundSet Dataset Overview showing finetuning tasks](./docs/overview.jpg)
+ *GroundSet overview showing finetuning tasks*
+
+ ---
+
+ ## 🗂️ Dataset Structure
+
+ The repository is organized into two primary components: a pretraining dataset featuring raw geometric annotations (including bounding boxes, lines, and polygons), and a supervised fine-tuning (SFT) dataset specifically tailored for Multimodal Large Language Models (MLLMs) using an instruction-based format.
+ For efficiency, all image and metadata pairs are stored in Parquet files.
+ ```text
+ GroundSet/
+ ├── pretraining/    <-- (Parquet files containing images and raw JSON annotations)
+ ├── finetuning/     <-- (Parquet files containing images and raw JSON annotations)
+ └── instructions/
+     ├── train/      <-- (Single JSON file with training instructions)
+     └── test/       <-- (JSONL files with testing instructions)
+ ```
+
+ ### 1. Pretraining Split
+ The pretraining dataset contains the full-scale release. It comprises 510,483 patches containing 3,829,755 objects across 135 unique categories. This release is intended to support broader research beyond the standard MLLM instruction-tuning paradigm.
+
+ > **⚠️ Warning:** To prevent data leakage, the pretraining dataset does not contain any of the samples used in the finetuning subset.
+
+ ### 2. Finetuning Split
+ This subset is tailored specifically for supervised fine-tuning and contains the visual data (60k images) and raw annotations corresponding to the instruction sets. We partition the visual data into 45,988 images for training and 14,000 images for testing.
+
+ > **💡 Note:** This split houses the actual image files referenced by the Q&A pairs in the Instructions subset.
+
+ ### 3. Instructions
+ The Instructions subset contains the textual question-answer pairs required to train and evaluate vision-language models on the images from the Finetuning split:
+ * **Train:** A single JSON file containing 1,845,076 instructions used to train our baseline.
+ * **Test:** JSONL files containing 72,597 evaluation instructions.
+
+ ---
+
+ ## 💾 Data Format
+
+ The dataset uses Parquet files to bundle the images and their corresponding metadata. Each row in the Parquet files contains the following fields:
+
+ * `image`: The PIL Image object (decoded from the raw bytes). The images are patches with a size of 672x672 pixels, corresponding to a spatial extent of approximately 134x134 meters.
+ * `file_name`: The original filename of the aerial patch.
+ * `json`: A stringified JSON payload containing the raw metadata and geometric annotations. Annotations can include Horizontal Bounding Boxes (HBB), Oriented Bounding Boxes (OBB), and polygonal segmentation masks.
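As a minimal sketch of consuming such a payload, the 20 cm per-pixel resolution lets you convert pixel extents to metres on the ground. The field names `category` and `hbb` below are illustrative assumptions, not a documented schema:

```python
import json

# Hypothetical stringified payload mimicking the `json` column;
# the real field names may differ.
payload = json.dumps({
    "category": "power plant",
    "hbb": [100, 150, 300, 400],  # assumed [xmin, ymin, xmax, ymax] in pixels
})

ann = json.loads(payload)
xmin, ymin, xmax, ymax = ann["hbb"]

# At 20 cm per pixel (the stated resolution), a 672-pixel patch spans
# 672 * 0.20 = 134.4 m, matching the ~134x134 m extent described above.
GSD_M = 0.20  # ground sampling distance, metres per pixel
width_m = (xmax - xmin) * GSD_M
height_m = (ymax - ymin) * GSD_M
print(ann["category"], width_m, height_m)
```

Here a 200x250-pixel box works out to roughly 40 m by 50 m on the ground.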
+
+ The instruction files contain the question-answer pairs and are stored directly in standard JSON or JSONL format.
+
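JSONL is plain line-delimited JSON, so the test files can be read with the standard library alone. The record shape below is a made-up illustration; the real instruction fields are not specified here:

```python
import io
import json

# In-memory stand-in for one of the test JSONL files; field names are
# illustrative assumptions, not the documented instruction schema.
fake_jsonl = io.StringIO(
    '{"file_name": "patch_0001.png", "question": "How many buildings are visible?", "answer": "3"}\n'
    '{"file_name": "patch_0002.png", "question": "Where is the river?", "answer": "left edge"}\n'
)

# One JSON document per line: parse line by line.
records = [json.loads(line) for line in fake_jsonl if line.strip()]
print(len(records), records[0]["question"])
```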
+ ---
+
+
+ ## 🔍 Qualitative Samples
+
+ The cadastral vector data provides exact boundaries for diverse and highly specific infrastructure across varied environments (urban, rural, alpine, maritime).
+
+ ![Qualitative samples from the GroundSet dataset showing semantic and geometric annotations](./docs/examples.jpg)
+ *Qualitative samples from the GroundSet dataset showing semantic and geometric annotations*
+
+ ---
+
+ ## 💻 Usage Example
+ You can load and parse the dataset with the Hugging Face `datasets` library. Because the `json` column is stored as a string, parse it into a Python dictionary with the standard `json` module.
+ ```python
+ from datasets import load_dataset
+ import json
+
+ # 1. Load the finetuning dataset split
+ dataset = load_dataset("RogerFerrod/GroundSet", data_dir="finetuning", split="train")
+
+ # 2. Inspect a single sample
+ sample = dataset[0]
+
+ # The image is immediately accessible as a PIL object
+ image = sample["image"]
+ file_name = sample["file_name"]
+
+ # 3. Parse the JSON string into a dictionary
+ annotations = json.loads(sample["json"])
+
+ print(f"File: {file_name}")
+ print(f"Raw Annotations: {annotations}")
+
+ # --- Loading Instructions ---
+ # You can load the instructions similarly:
+ train_instructions = load_dataset("RogerFerrod/GroundSet", data_dir="instructions/train", split="train")
+ print(train_instructions[0])
+ ```
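For polygonal segmentation masks, a pure-Python shoelace computation is enough to turn vertex lists into on-the-ground areas. This is a sketch under the assumption that polygons are pixel-coordinate vertex lists; the `category` and `polygon` keys are hypothetical:

```python
import json

# Hypothetical annotation payload; the real `json` schema may differ.
ann = json.loads('{"category": "building", "polygon": [[0, 0], [100, 0], [100, 50], [0, 50]]}')

def polygon_area_px(points):
    """Absolute polygon area in square pixels via the shoelace formula."""
    total = 0.0
    for i in range(len(points)):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % len(points)]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

area_px = polygon_area_px(ann["polygon"])
area_m2 = area_px * 0.20 * 0.20  # 20 cm/pixel -> 0.04 m^2 per pixel
print(ann["category"], area_px, area_m2)
```

The 100x50-pixel rectangle above yields 5,000 px², i.e. about 200 m² at the 20 cm resolution.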
+
+ ## 📝 Citation
+
+ If you use this dataset in your research, please cite the original work:
+
+ ```bibtex
+ @article{groundset,
+   title={GroundSet: A Cadastral-Grounded Dataset for Spatial Understanding with Vector Data},
+   author={Ferrod, Roger and Lecene, Ma{\"e}l and Sapkota, Krishna and Leifman, George and Silverman, Vered and Beryozkin, Genady and Lobry, Sylvain},
+   journal={arXiv preprint},
+   year={2026}
+ }
+ ```
+
+ ## 🙌 Acknowledgements
+ This work leverages official data from IGN (French National Institute of Geographic and Forest Information), specifically BD ORTHO® and BD TOPO®, released under Open Licence 2.0.