koukeft committed 436c0bd (verified, parent: c05db5f): Update README.

Files changed (1): README.md (+175 −3)
---
license: cc-by-4.0
task_categories:
- visual-question-answering
- image-classification
- object-detection
language:
- en
size_categories:
- n<1K
- 1K<n<10K
tags:
- geographic-maps
- question-answering
- legend-detection
- geospatial
- chart-understanding
- gis
pretty_name: VISQAM - Visual Question Answering for Thematic Maps
---

# VISQAM (Visual Question-Answering for Thematic Maps)

## Dataset Summary

VISQAM (Visual Question-Answering for Thematic Maps) is a dataset of 400 geographic thematic maps annotated for visual question-answering (VQA) and legend-detection tasks. Each map includes 3-5 geographic question-answer (QA) pairs and bounding-box annotations for its legend. The dataset combines maps from multiple sources covering land use, climate variables, choropleth maps, and diverse cartographic layouts.

This dataset is designed for tasks at the intersection of computer vision, natural language processing, and geographic information science (GIS), including map understanding, VQA on specialized scientific visualizations, and legend detection.

## Dataset Structure

### Data Instances

The dataset contains 400 images organized into 4 categories. Each image is accompanied by:
- 3-5 QA pairs related to the geographic information in the map
- Bounding-box annotations for the map legend
- Original source attribution and licensing information

All annotations are provided in `annotations.json`, formatted following the [COCO annotation standard](https://cocodataset.org/#format-data).

### Data Structure

- `images`: Directory containing the thematic map images, divided into categories (one subdirectory per category)
- `annotations.json`: COCO-formatted file containing:
  - QA pairs for each image
  - Bounding-box coordinates for legend regions
  - Source URLs and license information for each image
  - Image metadata (width, height)

### Image Categories

The dataset includes 4 categories of thematic maps:

1. **LULC** (Land Use-Land Cover): Maps depicting land cover classifications and land use patterns
   - Source: [Copernicus Urban Atlas Land Cover/Land Use 2018](https://sdi.eea.europa.eu/catalogue/copernicus/eng/catalog.search#/metadata/fb4dffa1-6ceb-4cc0-8372-1ed354c285e6)

2. **Climate_variables**: Maps showing atmospheric and meteorological data, including temperature, precipitation, air quality, and other climate indicators
   - Sources: Copernicus Atmosphere Monitoring Service [(CAMS)](https://atmosphere.copernicus.eu/charts/packages/cams/), European Centre for Medium-Range Weather Forecasts [(ECMWF)](https://charts.ecmwf.int/)

3. **OWID** (Our World in Data): Choropleth maps representing socioeconomic, demographic, and development indicators by country, with a focus on environmental indicators (e.g., energy consumption, CO2 emissions)
   - Source: [Our World in Data](https://ourworldindata.org/data)

4. **misc_layouts**: Maps with diverse cartographic layouts and design patterns
   - Source: [Ubimap-l](https://figshare.com/articles/dataset/ubiMap-l_A_Benchmark_for_Crowdsourced_Thematic_Map_Layout_Retrieval_and_Embedding-based_Analysis/28621037?file=53091506)

### Data Splits

Currently, the dataset provides a single train split containing all 400 examples.

### Annotations

Each map in the dataset has been annotated with:
- **QA Pairs**: 3-5 questions per image focusing on geographic content, spatial relationships, data interpretation, and map-specific information. The questions were developed following the [GQA taxonomy](https://arxiv.org/pdf/1902.09506), adjusted to the needs of our dataset. Every question has a _structural_ and a _semantic_ label.
- **Legend Bounding Boxes**: Rectangular regions (x, y, width, height) marking the location of map legends.

All annotations follow the COCO format for consistency with existing computer vision pipelines and include complete attribution to original sources.
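
As an illustration, a single annotated image combines QA pairs with a COCO-style box for the legend. The sketch below is hypothetical: the inner QA field names (`question`, `answer`, `structural`, `semantic`) and all values are placeholders, not the dataset's actual schema; only `qa_annotations`, `source_url`, `license`, and the COCO `bbox` convention of `[x, y, width, height]` come from this card.

```python
# Hypothetical sketch of one image entry; field values are illustrative,
# and the inner QA field names are assumptions, not the actual schema.
image_entry = {
    "id": 1,
    "file_name": "LULC/example_map.png",  # placeholder file name
    "width": 1280,
    "height": 960,
    "source_url": "https://example.org/source-map",  # placeholder
    "license": 1,
    "qa_annotations": [
        {
            "question": "Which land cover class dominates the north?",
            "answer": "Forest",
            "structural": "query",    # assumed GQA-style structural label
            "semantic": "attribute",  # assumed GQA-style semantic label
        },
    ],
}

# A matching COCO detection entry for the legend region:
legend_annotation = {
    "id": 10,
    "image_id": 1,
    "category_id": 1,  # e.g., a "legend" category
    "bbox": [40.0, 700.0, 300.0, 220.0],  # COCO [x, y, width, height]
}

# The legend's bottom-right corner follows from the COCO convention:
x, y, w, h = legend_annotation["bbox"]
print((x + w, y + h))  # (340.0, 920.0)
```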

### Personal and sensitive information

The dataset consists of publicly available thematic maps depicting geographic and statistical data. No personal or sensitive information about individuals is included.

## Usage

### Downloading the dataset

To preserve the directory structure and naming conventions of the dataset, download the repository with `git clone`:

```bash
git clone https://huggingface.co/datasets/koukeft/visqam
```
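
As a quick sanity check after cloning, the per-category image counts can be tallied. This is a minimal sketch assuming the subdirectories of `images/` carry exactly the four category names listed above:

```python
from pathlib import Path

# Category names as listed in this README; the assumption that the
# subdirectories of images/ use exactly these names is unverified.
CATEGORIES = ["LULC", "Climate_variables", "OWID", "misc_layouts"]

def count_images(root):
    """Count the files in each category directory under <root>/images."""
    root = Path(root)
    counts = {}
    for cat in CATEGORIES:
        d = root / "images" / cat
        counts[cat] = sum(1 for p in d.glob("*") if p.is_file()) if d.is_dir() else 0
    return counts

if __name__ == "__main__":
    for cat, n in count_images("visqam").items():
        print(f"{cat}: {n} images")
```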

### Working with the annotations

The `annotations.json` file contains detailed annotations in COCO format. You can load it separately:

```python
import json

# Change the path to annotations.json if necessary
with open('visqam/annotations.json', 'r') as f:
    annotations = json.load(f)

# Access the QA pairs, source URL, and license of each image
for annotation in annotations['images']:
    qa_pairs = annotation['qa_annotations']
    source_url = annotation['source_url']
    license_id = annotation['license']
    # License ids are 1-based, so subtract 1 to index the licenses list
    source_license = annotations['licenses'][license_id - 1]['name']
    print(f"\nFor the image found at {source_url}, "
          f"licensed under {source_license}, "
          f"there are {len(qa_pairs)} QA annotations.")
```
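
In the COCO format, the legend boxes live in the top-level `annotations` list rather than under `images`. The sketch below uses a tiny in-memory stand-in for the file so it runs on its own; the entry values are illustrative, and only the COCO `bbox` convention of `[x, y, width, height]` is assumed:

```python
# Tiny illustrative stand-in for the COCO-style content of annotations.json;
# in practice, load the real file with json.load as shown above.
annotations = {
    "images": [
        {"id": 1, "file_name": "OWID/co2_map.png", "width": 1200, "height": 800},
    ],
    "annotations": [
        # COCO detection entry: bbox is [x, y, width, height]
        {"id": 7, "image_id": 1, "category_id": 1, "bbox": [60.0, 620.0, 280.0, 150.0]},
    ],
}

# Group the legend boxes by image id
legends = {}
for ann in annotations["annotations"]:
    legends.setdefault(ann["image_id"], []).append(ann["bbox"])

# Report each image's legend location and size
for img in annotations["images"]:
    for x, y, w, h in legends.get(img["id"], []):
        print(f"{img['file_name']}: legend at ({x}, {y}), size {w}x{h}")
```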

### Potential Use Cases

- VQA on scientific geovisualizations
- Geographic information extraction from maps
- Legend detection
- Cross-modal learning between cartographic visualizations and natural language
- Automated map understanding and interpretation
- Training vision-language models on geographic domain content

## Licensing and attribution

### Dataset license

This dataset is released under **CC BY 4.0** ([Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/)).

### Source attribution

The images in the dataset retain the original license and source attribution as documented in `annotations.json`. Users must:
- Consult the original licenses for each image category
- Provide appropriate attribution when using or redistributing images
- Respect the terms of the original data sources

**Source Licenses by Category:**
- **LULC**: Licensed under the [Commission Delegated Regulation (EU) No 1159/2013](https://www.eea.europa.eu/legal/eea-data-policy/Commission_Delegated_Regulation_1159_2013.pdf)
- **Climate_variables**: Licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) or [CC BY 4.0 and the ECMWF Terms of Use](https://apps.ecmwf.int/datasets/licences/general/)
- **OWID**: Licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
- **misc_layouts**: Licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

Please refer to `annotations.json` for specific licensing information for each individual image.

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{visqam2025,
  author    = {Koukouraki, Eftychia and Ajay, Ajay and Abubakar, Ahmad},
  title     = {VISQAM: Visual Question Answering for Thematic Maps},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/koukeft/visqam}
}
```

## Contact

For questions, feedback, or issues related to this dataset, please open a discussion on the [dataset page](https://huggingface.co/datasets/koukeft/visqam/discussions).

## Acknowledgments

The development of this dataset was funded by the [NFDI4Earth Incubator Lab programme](https://www.nfdi4earth.de/2participate/incubator-lab).

## Version History

### v1.0.0 (January 2025) - Initial Release

- 400 images across 4 categories
- 3-5 QA pairs per image
- Legend bounding box annotations
- Categories: LULC (100), Climate_variables (100), OWID (100), misc_layouts (100)

### Planned Updates

- v1.1.0: Additional images for each category (target: 800 total)