</p>
- **Training data**: We introduce the Geospatial Reasoning Segmentation Dataset (GRES), a collection of vision-and-language data designed around remote-sensing applications. GRES consists of two core components: PreGRES, a dataset of over 1M remote-sensing-specific visual instruction-tuning Q/A pairs for pre-training geospatial models, and GRES, a semi-synthetic dataset specialized for reasoning segmentation of remote-sensing data, consisting of 9,205 images and 27,615 natural-language queries/answers over those images. From the GRES dataset, we generate train, test, and validation splits of 7,205, 1,500, and 500 images, respectively. The GRES dataset can be downloaded
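As a quick sanity check, the three splits account for every image in GRES. A minimal sketch, using only the split sizes stated above:

```python
# GRES reasoning-segmentation split sizes, as described above.
splits = {"train": 7205, "test": 1500, "val": 500}

total_images = sum(splits.values())
assert total_images == 9205  # matches the 9,205 images in GRES

# Rough share of the data in each split.
for name, size in splits.items():
    print(f"{name}: {size} images ({size / total_images:.1%})")
```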
- **Implementation Details**: LISAt and LISAt-PRE are trained on eight DGX A100 80GB GPUs. In the first stage, we pretrain LISAt-PRE (context length = 2048) with LoRA for 1 epoch on PreGRES using a next-token-prediction cross-entropy loss. We use the AdamW optimizer with a learning rate of 3e-4 and a cosine-decay learning-rate scheduler, setting the batch size to 2 and the number of gradient-accumulation steps to 6.
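Putting the training configuration together, the effective (global) batch size per optimizer step can be computed directly. A sketch under the assumption that the stated batch size of 2 is per GPU, which the text does not make explicit:

```python
# Training configuration from the implementation details above.
num_gpus = 8          # DGX A100 80GB GPUs
per_gpu_batch = 2     # assumed per-GPU micro-batch (not stated explicitly)
grad_accum_steps = 6  # gradient-accumulation steps

# Samples contributing to each optimizer update:
# every GPU processes grad_accum_steps micro-batches before a weight update.
effective_batch = num_gpus * per_gpu_batch * grad_accum_steps
print(effective_batch)  # 96 under the per-GPU assumption
```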