FelixKAI committed
Commit 9427fcf · verified · 1 Parent(s): 7269e35

Upload 8 files

README.md CHANGED
@@ -1,38 +1,57 @@
- ---
- license: apache-2.0
- dataset_info:
-   features:
-   - name: id
-     dtype: int32
-   - name: images
-     sequence:
-       image:
-         decode: false
-   - name: question
-     dtype: string
-   - name: options
-     struct:
-     - name: A
-       dtype: string
-     - name: B
-       dtype: string
-     - name: C
-       dtype: string
-     - name: D
-       dtype: string
-   - name: answer
-     dtype: string
-   - name: category
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 698495
-     num_examples: 1932
-   download_size: 293107
-   dataset_size: 698495
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- ---
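As context for the schema above: the `dataset_info` block declares one flat record shape (integer `id`, a sequence of undecoded `images`, string `question`/`answer`/`category` fields, and an `options` struct with keys A–D). A minimal pure-Python sketch of validating a hypothetical record against that shape (the sample values are invented, not taken from the dataset):

```python
# Sketch: validate that a record matches the shape declared in the
# dataset_info frontmatter above. The sample record is hypothetical.
REQUIRED_FIELDS = {"id": int, "question": str, "answer": str, "category": str}
OPTION_KEYS = ("A", "B", "C", "D")

def validate_record(rec: dict) -> bool:
    """Return True if rec has the fields and types the schema declares."""
    for name, typ in REQUIRED_FIELDS.items():
        if not isinstance(rec.get(name), typ):
            return False
    # 'options' is a struct with string members A-D
    options = rec.get("options")
    if not isinstance(options, dict) or set(options) != set(OPTION_KEYS):
        return False
    if not all(isinstance(options[k], str) for k in OPTION_KEYS):
        return False
    # 'images' is a sequence of image payloads (decode: false -> raw bytes)
    return isinstance(rec.get("images"), list)

sample = {
    "id": 0,
    "images": [b"..."],  # raw bytes, since the schema sets decode: false
    "question": "What color is the largest rooftop?",
    "options": {"A": "red", "B": "blue", "C": "gray", "D": "white"},
    "answer": "C",
    "category": "Color",
}
print(validate_record(sample))  # → True
```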
+
+ # *RSHR*: A Benchmark for MLLMs on Ultra-High-Resolution Remote Sensing Data
+
+ ```
+ If our project helps you, please give us a star ⭐ on GitHub to support us 💕
+ ```
+
+ ## 🔥 News
+
+ - **`2025-11-14`** 🎉 We released the paper *RSHR*: A Benchmark for MLLMs on Ultra-High-Resolution Remote Sensing Data.
+
+ ## 😼 *RSHR* Overview
+
+ - **Large-scale ultra-high-resolution benchmark:** RSHR is designed to evaluate the fine-grained perception and complex reasoning abilities of multimodal large language models (MLLMs) in remote sensing. It comprises **5,329 full-scene images** with native resolutions from **4K up to 3 × 10^8 pixels (300 MP)**.
+
+ - **Diverse expert-annotated data sources:** The dataset aggregates expert-annotated data from **DOTA-v2.0, MiniFrance, FAIR1M, HRSCD, XLRS-Bench**, and our own **100 MP UAV-captured imagery**, covering a wide variety of real-world remote sensing scenarios.
+
+ - **Comprehensive tasks and rigorous evaluation pipeline:** RSHR spans **9 perception categories** and **4 reasoning types**, supporting both single-image and multi-image/multi-turn dialogues. It adopts a two-stage **Human–LLM Adversarial Verification** pipeline (LLM adversarial filtering followed by human review) to eliminate questions solvable by language priors alone, ensuring that models must truly see the image to answer.
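The first stage of that verification pipeline can be sketched as follows. `answer_without_image` is a hypothetical stub standing in for a real text-only LLM call; the actual filtering criteria used by RSHR may be stricter:

```python
# Sketch of the LLM-adversarial-filtering stage of Human-LLM Adversarial
# Verification: ask a text-only LLM the question WITHOUT the image; if it
# answers correctly anyway, the question leaks language priors and is
# discarded. `answer_without_image` is a hypothetical stand-in.
def answer_without_image(question: str, options: dict) -> str:
    # Stand-in for a real blind-LLM call; here it naively picks option "A".
    return "A"

def adversarial_filter(questions: list) -> list:
    """Keep only questions a blind LLM cannot answer correctly."""
    kept = []
    for q in questions:
        guess = answer_without_image(q["question"], q["options"])
        if guess != q["answer"]:   # blind LLM failed -> image is needed
            kept.append(q)         # survives on to human review
    return kept

pool = [
    {"question": "How many planes are on the apron?",
     "options": {"A": "2", "B": "4"}, "answer": "B"},
    {"question": "Which option is listed first?",
     "options": {"A": "this one", "B": "that one"}, "answer": "A"},
]
print([q["answer"] for q in adversarial_filter(pool)])  # → ['B']
```

Only the first question survives: the second is answerable from its wording alone, so the blind model gets it right and it is dropped.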
+
+ ![image.png](assets/image.png)
+
+ ### 🧠 Comprehensive Task Suite
+
+ We categorize the evaluation into **four main task families** to support diverse usage scenarios, covering **9 perception categories** (e.g., Color, Orientation, Regional Grounding) and **4 reasoning types**.
+
+ - 🧩 **Multiple-Choice VQA (MCQ)**: Evaluates decision-making within a fixed answer space, covering both single-turn and multi-turn dialogues.
+ - ✍️ **Open-Ended VQA (OEQ)**: Assesses free-form visual understanding and compositionality without reliance on option priors, offering a more accurate measure of MLLM capabilities.
+ - 📝 **Image Captioning (IC)**: Requires concise, accurate descriptions of both **Global** scenes (whole-image summaries) and **Regional** details (directional sectors).
+ - 🔍 **Single-Image Evaluation (SIE)**: A specialized protocol that tests deep understanding of ultra-high-resolution images (4K to $3 \times 10^8$ pixels), probing multi-scale perception and reasoning on a per-image basis.
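For the MCQ family, scoring typically means extracting the chosen option letter from a free-form model reply and comparing it with the gold letter. A minimal sketch of that step; the regex heuristic here is our assumption, not RSHR's exact protocol:

```python
import re

# Sketch of MCQ scoring: pull the chosen option letter out of a free-form
# model reply, then compare with the gold letter. The extraction heuristic
# (first standalone A-D, case-insensitive) is an illustrative assumption.
def extract_choice(reply: str):
    m = re.search(r"\b([ABCD])\b", reply.upper())
    return m.group(1) if m else None

def mcq_accuracy(preds: list, golds: list) -> float:
    correct = sum(extract_choice(p) == g for p, g in zip(preds, golds))
    return correct / len(golds)

preds = ["The answer is C.", "B", "I think (d) matches best."]
golds = ["C", "B", "A"]
print(mcq_accuracy(preds, golds))  # 2 of 3 correct
```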
+
+ ![image.png](assets/image_201.png)
+
+
+ ## 🔖 Evaluation Results
+
+ We evaluated **14 state-of-the-art models**, including general-purpose MLLMs (e.g., GPT-4o, Gemini 1.5 Pro, Qwen2.5-VL) and remote-sensing specialist models (e.g., GeoChat, VHM). The evaluation covers **Multiple-Choice VQA**, **Open-Ended VQA**, and **Image Captioning**.
+
+ ### 📊 1. Main Leaderboard (Multiple-Choice)
+
+ Closed-source models dominate the leaderboard, yet they still struggle with complex reasoning tasks that require fine-grained visual evidence.
+
+ ![image.png](assets/image_203.png)
+
+
+ ### 📉 2. Performance Analysis: Perception vs. Reasoning
+
+ We further analyze the correlation between perception and reasoning capabilities using the Open-Ended VQA evaluation, which rules out random guessing over a fixed option set.
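Open-ended answers have no option letters to parse, so a common scoring baseline is normalized exact match (lowercase, strip punctuation and articles, collapse whitespace). A sketch of that baseline; RSHR's actual OEQ judging may use a different or stricter matcher:

```python
import re
import string

# Sketch of a normalized exact-match scorer for Open-Ended VQA, in the
# style of SQuAD-like answer normalization. RSHR's actual OEQ judging
# protocol may differ from this baseline.
def normalize(text: str) -> str:
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)  # drop English articles
    return " ".join(text.split())                # collapse whitespace

def oeq_match(pred: str, gold: str) -> bool:
    return normalize(pred) == normalize(gold)

print(oeq_match("The red rooftop.", "red rooftop"))  # → True
print(oeq_match("a blue car", "red car"))            # → False
```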
+
+ ![image.png](assets/image_204.png)
+
+
+ ### 📏 3. Impact of Resolution (Key Insight)
+
+ Does higher supported input resolution lead to better performance? Our Single-Image Evaluation reveals a critical robustness issue.
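To make the scale concrete: a 300 MP scene far exceeds typical MLLM vision-encoder inputs, so evaluating at native resolution implies heavy tiling or downsampling. A back-of-the-envelope sketch of the tile-count arithmetic (the 1024-px tile size and the example dimensions are illustrative assumptions):

```python
import math

# Tiling arithmetic for ultra-high-resolution input. A 3e8-pixel scene
# (e.g., 20000 x 15000) vastly exceeds typical MLLM input sizes, so
# native-resolution evaluation implies many tiles. The 1024-px tile size
# is an illustrative assumption, not a prescribed preprocessing step.
def tile_grid(width: int, height: int, tile: int = 1024):
    """Return (cols, rows, total tiles) needed to cover the image."""
    cols = math.ceil(width / tile)
    rows = math.ceil(height / tile)
    return cols, rows, cols * rows

# ~300 MP example: 20000 x 15000 = 3.0e8 pixels
print(tile_grid(20000, 15000))  # → (20, 15, 300)

# 4K example for contrast: 3840 x 2160
print(tile_grid(3840, 2160))    # → (4, 3, 12)
```

The two orders of magnitude between the 4K and 300 MP tile counts illustrate why resolution robustness is worth probing separately.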
+
+ ![image.png](assets/image_205.png)
assets/.DS_Store ADDED
Binary file (6.15 kB).
 
assets/image.png ADDED

Git LFS Details

  • SHA256: 5d5db09b0308adbea98c7fe8aa1e11ea7d80c7c336fa8061750c790cc98b092c
  • Pointer size: 131 Bytes
  • Size of remote file: 892 kB
assets/image_201.png ADDED

Git LFS Details

  • SHA256: 0bd2127cb1aa2972ebe6dae3c6f74cc19cd2a66633e954db458489c768bed02e
  • Pointer size: 131 Bytes
  • Size of remote file: 507 kB
assets/image_202.png ADDED

Git LFS Details

  • SHA256: b3ad2e49038b280f814f632da12f26e69d0bbfd081eed7627bf01fa6b7eca4af
  • Pointer size: 131 Bytes
  • Size of remote file: 139 kB
assets/image_203.png ADDED

Git LFS Details

  • SHA256: 1b60a60c1d302306790b00c891a76657eaefad29cf0fd42d07bb0225156280b1
  • Pointer size: 131 Bytes
  • Size of remote file: 192 kB
assets/image_204.png ADDED

Git LFS Details

  • SHA256: d78e10ea37a7cfb8123541d9c593ed47ccb2000c00093b391411872d4a5db718
  • Pointer size: 131 Bytes
  • Size of remote file: 108 kB
assets/image_205.png ADDED

Git LFS Details

  • SHA256: c0a72210e8342d4516ca86910b1a46dab1c745c23a3e38f3cce93c5b9aefe2b5
  • Pointer size: 131 Bytes
  • Size of remote file: 179 kB