Add model card for Geo-R1
This PR adds an initial model card for the Geo-R1 model, linking it to the paper [Geo-R1: Unlocking VLM Geospatial Reasoning with Cross-View Reinforcement Learning](https://huggingface.co/papers/2510.00072).
It includes the `library_name: transformers` metadata, since the `config.json` and `tokenizer_config.json` files indicate compatibility with the `transformers` library. The `pipeline_tag: image-text-to-text` is also added to correctly categorize the model as a vision-language model on the Hub.
README.md
ADDED

@@ -0,0 +1,12 @@
+---
+library_name: transformers
+pipeline_tag: image-text-to-text
+---
+
+# Geo-R1: Unlocking VLM Geospatial Reasoning with Cross-View Reinforcement Learning
+
+This repository contains the Geo-R1 model, a reasoning-centric post-training framework that unlocks geospatial reasoning in vision-language models, as introduced in the paper:
+
+[**Geo-R1: Unlocking VLM Geospatial Reasoning with Cross-View Reinforcement Learning**](https://huggingface.co/papers/2510.00072)
+
+Geo-R1 combines "thinking scaffolding" (supervised fine-tuning on synthetic chain-of-thought exemplars) and an "elevating" stage using GRPO-based reinforcement learning on a weakly-supervised cross-view pairing proxy. This approach enables models to connect visual cues with geographic priors and harness reasoning for accurate prediction, achieving state-of-the-art performance across various geospatial reasoning benchmarks.