---
license: mit
---
|
|
## PreGRES: A Large-Scale Geospatial Dataset Collection

**PreGRES** is a large-scale structured collection of existing smaller-scale geospatial datasets, designed for fine-tuning vision-language models in remote sensing applications. It integrates multiple sources, each contributing to a different aspect of geospatial data understanding.

The datasets within **PreGRES** support three major tasks, listed below. To use them, download the associated image files via the provided links and place them in their respective folders. Then download the **`pregres.json`** file and ensure your directory is organized as follows:
|
|
|
|
|
```
├── pregres.json
├── NWPU-Captions
├── RSICD
│   └── ...
```
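As a quick sanity check before training, the layout above can be verified and the annotation file loaded with a short script. This is only a sketch: `check_layout` and `EXPECTED_DIRS` are illustrative helpers, not part of the PreGRES release, and since the internal schema of `pregres.json` is not documented in this card, the code loads it as generic JSON.

```python
import json
from pathlib import Path

# Illustrative helper, not part of the PreGRES release: only two of the
# dataset folders from the tree above are listed; extend with the others.
EXPECTED_DIRS = ["NWPU-Captions", "RSICD"]

def check_layout(root: Path) -> list:
    """Return the expected entries that are missing under `root`."""
    missing = [d for d in EXPECTED_DIRS if not (root / d).is_dir()]
    if not (root / "pregres.json").is_file():
        missing.append("pregres.json")
    return missing

if __name__ == "__main__":
    root = Path(".")  # adjust to wherever you placed the data
    missing = check_layout(root)
    if missing:
        print("Missing entries:", missing)
    else:
        # The schema of pregres.json is not documented here,
        # so it is loaded as generic JSON.
        with open(root / "pregres.json") as f:
            annotations = json.load(f)
        print("Loaded pregres.json with", len(annotations), "top-level entries")
```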
|
|
|
|
|
|
|
|
--- |
|
|
|
|
|
### 1. Image Captioning

- [**NWPU-Captions**](https://github.com/HaiyanHuang98/NWPU-Captions) (Cheng et al., 2022)
- [**RSICD**](https://github.com/201528014227051/RSICD_optimal) (Lu et al., 2017)
- [**RSITMD**](https://github.com/xiaoyuan1996/AMFMN/blob/master/RSITMD/README.md) (Yuan et al., 2022b)
- [**Sydney-Captions**](https://pan.baidu.com/s/1hujEmcG#list/path=%2F) (Qu et al., 2016)
- [**UCM-Captions**](https://pan.baidu.com/s/1mjPToHq#list/path=%2F) (Qu et al., 2016)
|
|
|
|
|
These datasets contribute paired image-text data with long-form descriptions of top-down imagery across diverse geospatial environments, enhancing language supervision.
|
|
|
|
|
--- |
|
|
|
|
|
### 2. Visual Question Answering (VQA)

- [**RSVQA LR** and **RSVQA HR**](https://rsvqa.sylvainlobry.com/#downloads) (Lobry et al., 2020)
- [**FloodNet**](https://github.com/BinaLab/FloodNet-Challenge-EARTHVISION2021) (Rahnemoonfar et al., 2021)
- [**RSIVQA**](https://github.com/spectralpublic/RSIVQA) (Zheng et al., 2021)
|
|
|
|
|
These datasets include structured question-answer pairs supporting reasoning over aerial and satellite images, covering tasks such as object identification, scene understanding, and disaster assessment. |
|
|
|
|
|
--- |
|
|
|
|
|
### 3. Visual Grounding / Region-Level Captioning

- [**DIOR-RSVG**](https://drive.google.com/drive/folders/1hTqtYsC6B-m4ED2ewx5oKuYZV13EoJp_) (Zhan et al., 2023): Paired text-image data for object localization and spatial reference resolution.
- [**NWPU-RESISC45**](https://www.tensorflow.org/datasets/catalog/resisc45) (Cheng et al., 2017): Scene classification labels.
|
|
|
|
|
--- |
|
|
|
|
|
### Dataset Statistics

- **Images**: 119,279
- **Question-Answer Pairs**: 1,204,993
|
|
|
|
|
PreGRES is used in the first-stage pre-training of the **LISAT** model, enabling general-purpose geospatial question answering. |
|
|
|
|
|
> For more details on dataset composition, see **Table C.9** in our [paper](https://arxiv.org/pdf/2505.02829). |
|
|
|
|
|
## Citation |
|
|
|
|
|
If you use PreGRES or LISAT in your work, please cite: |
|
|
|
|
|
```bibtex
@article{quenum2025lisat,
  title={LISAT: Language-Instructed Segmentation Assistant for Satellite Imagery},
  author={Quenum, Jerome and Hsieh, Wen-Han and Wu, Tsung-Han and Gupta, Ritwik and Darrell, Trevor and Chan, David M},
  journal={arXiv preprint arXiv:2505.02829},
  year={2025}
}
```