---
license: mit
---
## PreGRES: A Large-Scale Geospatial Dataset Collection

**PreGRES** is a large-scale structured collection of existing smaller-scale geospatial datasets, designed for fine-tuning vision-language models in remote sensing applications. It integrates multiple sources, each contributing to different aspects of geospatial data understanding.

The datasets within **PreGRES** support three major tasks, listed below. To use them, please download the associated image files via the provided links and place them in their respective folders. Then, download the **`pregres.json`** file and ensure your directory is organized as follows:

```
├── pregres.json
├── NWPU-Captions
├── RSICD
│   └── ...
```
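Once the files are in place, the annotation file can be loaded with Python's standard `json` module. The sketch below is a minimal loading helper; it assumes only that `pregres.json` is valid JSON, and the `load_pregres` helper name and any record fields mentioned are illustrative, not part of the released schema.

```python
import json
from pathlib import Path

def load_pregres(path="pregres.json"):
    """Load the PreGRES annotation file into Python objects.

    Assumes only that the file contains valid JSON; inspect the
    returned records to see the actual schema (e.g. image paths,
    questions, answers) before building a data pipeline on it.
    """
    with Path(path).open("r", encoding="utf-8") as f:
        return json.load(f)

# Example usage (after downloading pregres.json):
#     records = load_pregres("pregres.json")
```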


---

### 1. Image Captioning
- [**NWPU-Captions**](https://github.com/HaiyanHuang98/NWPU-Captions) (Cheng et al., 2022)  
- [**RSICD**](https://github.com/201528014227051/RSICD_optimal) (Lu et al., 2017)  
- [**RSITMD**](https://github.com/xiaoyuan1996/AMFMN/blob/master/RSITMD/README.md) (Yuan et al., 2022b)  
- [**Sydney-Captions**](https://pan.baidu.com/s/1hujEmcG#list/path=%2F) (Qu et al., 2016)  
- [**UCM-Captions**](https://pan.baidu.com/s/1mjPToHq#list/path=%2F) (Qu et al., 2016)  

These datasets contribute paired image-text data with long-form descriptions of top-down imagery across diverse geospatial environments, strengthening language supervision.

---

### 2. Visual Question Answering (VQA)
- [**RSVQA LR** and **RSVQA HR**](https://rsvqa.sylvainlobry.com/#downloads) (Lobry et al., 2020)  
- [**FloodNet**](https://github.com/BinaLab/FloodNet-Challenge-EARTHVISION2021) (Rahnemoonfar et al., 2021)  
- [**RSIVQA**](https://github.com/spectralpublic/RSIVQA) (Zheng et al., 2021)  

These datasets include structured question-answer pairs supporting reasoning over aerial and satellite images, covering tasks such as object identification, scene understanding, and disaster assessment.

---

### 3. Visual Grounding / Region-Level Captioning
- [**DIOR-RSVG**](https://drive.google.com/drive/folders/1hTqtYsC6B-m4ED2ewx5oKuYZV13EoJp_) (Zhan et al., 2023): Paired text-image data for object localization and spatial reference resolution.  
- [**NWPU-RESISC45**](https://www.tensorflow.org/datasets/catalog/resisc45) (Cheng et al., 2017): Scene classification labels.

---

### Dataset Statistics
- **Images**: 119,279  
- **Question-Answer Pairs**: 1,204,993  

PreGRES is used in the first-stage pre-training of the **LISAT** model, enabling general-purpose geospatial question answering.

> For more details on dataset composition, see **Table C.9** in our [paper](https://arxiv.org/pdf/2505.02829).

## Citation

If you use PreGRES or LISAT in your work, please cite:

```bibtex
@article{quenum2025lisat,
  title={LISAT: Language-Instructed Segmentation Assistant for Satellite Imagery},
  author={Quenum, Jerome and Hsieh, Wen-Han and Wu, Tsung-Han and Gupta, Ritwik and Darrell, Trevor and Chan, David M},
  journal={arXiv preprint arXiv:2505.02829},
  year={2025}
}
```