# *RSHR*: A Benchmark for MLLMs on Ultra-High-Resolution Remote Sensing Data

```
If our project helps you, please give us a star ⭐ on GitHub to support us💕
```
## 🔥 News

- **`2025-11-14`** 🎉 We released the paper *RSHR*: A Benchmark for MLLMs on Ultra-High-Resolution Remote Sensing Data.
## 😼 *RSHR* Overview

- **Large-scale ultra-high-resolution benchmark:** RSHR is designed to evaluate the fine-grained perception and complex reasoning of multimodal large language models (MLLMs) in remote sensing. It comprises **5,329 full-scene images** with native resolutions from **4K up to 3 × 10^8 pixels (300 MP)**.
- **Diverse expert-annotated data sources:** The dataset aggregates expert-annotated data from **DOTA-v2.0, MiniFrance, FAIRIM, HRSCD, XLRS-Bench**, and our own **100 MP UAV-captured imagery**, covering a wide variety of real-world remote sensing scenarios.
- **Comprehensive tasks and rigorous evaluation pipeline:** RSHR spans **9 perception categories** and **4 reasoning types**, supporting both single-image and multi-image/multi-turn dialogues. A two-stage **Human–LLM Adversarial Verification** pipeline (LLM adversarial filtering followed by human review) eliminates questions solvable by language priors alone, ensuring that models must truly see the image to answer.

![RSHR Overview](assets/image.png)
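The first, automated stage of such an adversarial-verification pipeline can be sketched as follows. `text_only_answer` is a hypothetical stand-in for a blind (text-only) LLM call; the toy version below always guesses the first option, which is enough to show the filtering logic.

```python
def text_only_answer(question: str, options: list) -> str:
    """Hypothetical stand-in for a blind (text-only) LLM that never sees
    the image. This toy version always guesses the first option."""
    return options[0]

def adversarial_filter(items: list, trials: int = 3) -> list:
    """Stage 1: keep only questions the blind model cannot reliably answer.

    An item is discarded when the text-only model is correct on every
    trial, i.e. the question is solvable from language priors alone.
    Survivors would then go to human review (stage 2).
    """
    kept = []
    for item in items:
        blind_correct = all(
            text_only_answer(item["question"], item["options"]) == item["answer"]
            for _ in range(trials)
        )
        if not blind_correct:
            kept.append(item)
    return kept

questions = [
    # Answerable from priors alone (the blind stub gets it right): dropped.
    {"question": "What is 'UAV' short for?",
     "options": ["unmanned aerial vehicle", "radar"],
     "answer": "unmanned aerial vehicle"},
    # Requires looking at the image: kept.
    {"question": "How many aircraft are on the apron?",
     "options": ["2", "7"], "answer": "7"},
]
print(len(adversarial_filter(questions)))  # 1 of 2 questions survives
```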
### **🧠 Comprehensive Task Suite**

We categorize the evaluation into **four main task families** to support diverse usage scenarios, covering **9 perception categories** (e.g., Color, Orientation, Regional Grounding) and **4 reasoning types**.
- 🧩 **Multiple-Choice VQA (MCQ)**: Evaluates decision-making within a fixed answer space, covering both single-turn and multi-turn dialogues.
|
| 28 |
+
- ✍️ **Open-Ended VQA (OEQ)**: Assesses free-form visual understanding and compositionality without the reliance on option priors, offering a more accurate measure of MLLM capabilities.
|
| 29 |
+
- 📝 **Image Captioning (IC)**: Requires concise, accurate descriptions for both **Global** scenes (whole-image summary) and **Regional** details (directional sectors)。
|
| 30 |
+
- 🔍 **Single-Image Evaluation (SIE)**: A specialized protocol to test deep understanding of ultra-high-resolution images (4K to $3 \times 10^8$ pixels), probing multi-scale perception and reasoning on a per-image basis.
|
| 31 |
+
|
| 32 |
+
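The "directional sectors" used for Regional captioning can be illustrated with a small geometric helper. The eight-sector compass layout below is our own illustrative assumption, not the benchmark's official sector definition.

```python
import math

def sector_of(x, y, width, height, n_sectors=8):
    """Map pixel (x, y) to one of n_sectors equal angular sectors around
    the image center (sector 0 starts at East, counting counter-clockwise).
    The eight-sector layout is an illustrative assumption, not the
    benchmark's official definition of 'directional sectors'."""
    cx, cy = width / 2, height / 2
    # Negate dy because image y coordinates grow downward.
    angle = math.atan2(-(y - cy), x - cx) % (2 * math.pi)
    return int(angle // (2 * math.pi / n_sectors))

# Top-center pixel of a 1000 x 1000 image -> North sector (index 2).
print(sector_of(500, 0, 1000, 1000))
```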

|
| 33 |
+
|
| 34 |
+
|
| 35 |
+
## 🔖 Evaluation Results

We evaluated **14 state-of-the-art models**, including general-purpose MLLMs (e.g., GPT-4o, Gemini 1.5 Pro, Qwen2.5-VL) and remote-sensing specialist models (e.g., GeoChat, VHM). The evaluation covers **Multiple-Choice VQA**, **Open-Ended VQA**, and **Image Captioning**.
### 📊 1. Main Leaderboard (Multiple-Choice)

Closed-source models dominate the leaderboard, yet they still struggle with complex reasoning tasks requiring fine-grained visual evidence.

![Evaluation Results](assets/image_202.png)
### 📉 2. Performance Analysis: Perception vs. Reasoning

We further analyze the correlation between perception and reasoning capabilities using Open-Ended VQA, which avoids the random-guessing inflation of fixed answer options.

![Performance Analysis](assets/image_203.png)
### 📏 3. Impact of Resolution (Key Insight)

Does supporting higher input resolution lead to better performance? Our Single-Image Evaluation reveals a critical robustness issue.

![Impact of Resolution](assets/image_204.png)
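Most MLLMs aggressively downsample their visual input, so evaluating at native 300 MP resolution in practice often means tile-based processing. A minimal sketch of an overlapping tile grid follows; the tile size and overlap are illustrative choices, not part of the benchmark protocol.

```python
def tile_grid(width, height, tile=1024, overlap=128):
    """Return (left, top, right, bottom) crop boxes covering an
    ultra-high-resolution image with overlapping tiles, so a model with
    a limited input resolution can still see every region at native
    detail. Tile size and overlap here are illustrative defaults."""
    step = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            boxes.append((left, top,
                          min(left + tile, width),
                          min(top + tile, height)))
    return boxes

# A 300 MP scene (e.g. 20000 x 15000 px) yields a few hundred tiles.
print(len(tile_grid(20000, 15000)))
```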