Rename dataset and add link to paper (#2)
opened by nielsr (HF Staff)

README.md (changed)
---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- question-answering
- multiple-choice
tags:
- multi-modal
- remote-sensing
---

# COREval: A Comprehensive and Objective Benchmark for Evaluating the Remote Sensing Capabilities of Large Vision-Language Models

[Paper](https://huggingface.co/papers/2411.18145)

<p align="justify"><b>Abstract:</b> The rapid advancement of Large Vision-Language Models (VLMs), both general-domain models and those specifically tailored for remote sensing, has demonstrated exceptional perception and reasoning capabilities in Earth observation tasks. However, a benchmark for systematically evaluating their capabilities in this domain is still lacking. To bridge this gap, we propose COREval, an extensive benchmark designed to objectively evaluate the hierarchical remote sensing capabilities of VLMs. Focusing on 2 primary capability dimensions essential to remote sensing, perception and reasoning, we further categorize 6 secondary dimensions and 23 leaf tasks to ensure a well-rounded assessment coverage. COREval guarantees the quality of a total of 10,507 problems through a rigorous process of data collection from 50 globally distributed cities, question construction, and quality control. The newly curated data and the format of multiple-choice questions with definitive answers allow for an objective and straightforward performance assessment. Our evaluation of 3 proprietary and 21 open-source VLMs highlights their critical limitations within this specialized context. We hope that COREval will serve as a valuable resource and offer deeper insights into the challenges and potential of VLMs in the field of remote sensing.</p>

<hr />

## 📂 Data Structure

COREval is organized according to the three-tier hierarchical dimension taxonomy, structured as follows:

```bash
perception
├── cross_instance_discerment
│   ├── attribute_comparison
│   │   ├── images
│   │   ├── metadata.jsonl
│   │   └── attribute_comparison.json
│   ├── change_detection
│   │   ├── images
│   │   ├── metadata.jsonl
│   │   └── change_detection.json
│   ├── referring_expression_segmentation
│   │   ├── images
│   │   ├── masks
│   │   ├── metadata.jsonl
│   │   └── referring_expression_segmentation.json
│   └── spatial_relationship
│       ├── images
│       ├── metadata.jsonl
│       └── spatial_relationship.json
├── image_level_comprehension
│   ├── image_caption
│   │   ├── images
│   │   ├── metadata.jsonl
│   │   └── image_caption.json
│   ├── image_modality
│   │   ├── images
│   │   ├── metadata.jsonl
│   │   └── image_modality.json
│   ├── image_quality
│   │   ├── images
│   │   ├── metadata.jsonl
│   │   └── image_quality.json
│   ├── map_recognition
│   │   ├── images
│   │   ├── metadata.jsonl
│   │   └── map_recognition.json
│   └── scene_classification
│       ├── images
│       ├── metadata.jsonl
│       └── scene_classification.json
└── single_instance_identification
    ├── attribute_recognition
    │   ├── images
    │   ├── metadata.jsonl
    │   └── attribute_recognition.json
    ├── hallucination_detection
    │   ├── images
    │   ├── metadata.jsonl
    │   └── hallucination_detection.json
    ├── landmark_recognition
    │   ├── images
    │   ├── metadata.jsonl
    │   └── landmark_recognition.json
    ├── object_counting
    │   ├── images
    │   ├── metadata.jsonl
    │   └── object_counting.json
    ├── object_localization
    │   ├── images
    │   ├── metadata.jsonl
    │   └── object_localization.json
    ├── object_presence
    │   ├── images
    │   ├── metadata.jsonl
    │   └── object_presence.json
    └── visual_grounding
        ├── images
        ├── metadata.jsonl
        └── visual_grounding.json

reasoning
├── assessment_reasoning
│   ├── environmental_assessment
│   │   ├── images
│   │   ├── metadata.jsonl
│   │   └── environmental_assessment.json
│   └── resource_assessment
│       ├── images
│       ├── metadata.jsonl
│       └── resource_assessment.json
├── ......

```
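Each leaf task directory holds an `images/` folder, a `metadata.jsonl`, and a `<task>.json` file with the questions. A minimal loading sketch, assuming this layout with a hypothetical local root directory (the helper name `load_task` is illustrative, not part of the dataset's tooling):

```python
import json
from pathlib import Path

def load_task(root, dimension, sub_dimension, task):
    """Load all MCQ records for one leaf task of COREval.

    Assumes the layout shown above: <root>/<dimension>/<sub_dimension>/<task>/
    contains the question file <task>.json (assumed to be a list of records).
    """
    task_dir = Path(root) / dimension / sub_dimension / task
    with open(task_dir / f"{task}.json") as f:
        return json.load(f)

# Hypothetical usage, with the dataset cloned to ./COREval:
# questions = load_task("COREval", "perception",
#                       "single_instance_identification", "object_counting")
# print(len(questions))
```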

<hr />

## 🌰 Example

An example multiple-choice question (MCQ) is as follows:

```json
{
    "id": "ef8777ba-27ee-4828-aab5-63214daf340d",
    "image_path": "perception/single_instance_identification/object_counting/images/1.png",
    "question": "Count the number of airplane present in this image.\nA.2\nB.1\nC.4\nD.3",
    "answer": "A"
}
```
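Note that the answer options are embedded in the `question` string itself. A minimal sketch for separating such a record into a stem and an option map, assuming the options always appear as trailing lines prefixed `A.` through `D.` as in the record above (the helper `split_mcq` is illustrative, not part of the dataset's tooling):

```python
import re

def split_mcq(question):
    """Split a COREval-style question string into (stem, {letter: option_text})."""
    stem_lines, options = [], {}
    for line in question.splitlines():
        m = re.match(r"^([A-D])\.(.*)$", line.strip())
        if m:
            options[m.group(1)] = m.group(2).strip()
        else:
            stem_lines.append(line)
    return " ".join(stem_lines).strip(), options

q = "Count the number of airplane present in this image.\nA.2\nB.1\nC.4\nD.3"
stem, opts = split_mcq(q)
# opts[record["answer"]] then gives the text of the correct choice
```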

<hr />