Rename dataset and add link to paper

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +131 -125
README.md CHANGED
@@ -1,126 +1,132 @@
- ---
- license: mit
- task_categories:
- - question-answering
- - multiple-choice
- language:
- - en
- tags:
- - multi-modal
- - remote-sensing
- size_categories:
- - 10K<n<100K
- ---
-
- # CHOICE: Benchmarking The Remote Sensing Capabilities of Large Vision-Language Models
-
- **<p align="justify"> Abstract:** The rapid advancement of Large Vision-Language Models (VLMs), both general-domain models and those specifically tailored for remote sensing, has demonstrated exceptional perception and reasoning capabilities in Earth observation tasks. However, a benchmark for systematically evaluating their capabilities in this domain is still lacking. To bridge this gap, we propose CHOICE, an extensive benchmark designed to objectively evaluate the hierarchical remote sensing capabilities of VLMs. Focusing on 2 primary capability dimensions essential to remote sensing: perception and reasoning, we further categorize 6 secondary dimensions and 23 leaf tasks to ensure a well-rounded assessment coverage. CHOICE guarantees the quality of a total of 10,507 problems through a rigorous process of data collection from 50 globally distributed cities, question construction and quality control. The newly curated data and the format of multiple-choice questions with definitive answers allow for an objective and straightforward performance assessment. Our evaluation of 3 proprietary and 21 open-source VLMs highlights their critical limitations within this specialized context. We hope that CHOICE will serve as a valuable resource and offer deeper insights into the challenges and potential of VLMs in the field of remote sensing.
- </p>
-
- <hr />
-
- ## 📂 Data Structure
-
- CHOICE is organized according to the three-tier hierarchical dimension taxonomy, structured as follows:
-
- ```bash
- perception
- ├── cross_instance_discerment
- │   ├── attribute_comparison
- │   │   ├── images
- │   │   ├── metadata.jsonl
- │   │   └── attribute_comparison.json
- │   ├── change_detection
- │   │   ├── images
- │   │   ├── metadata.jsonl
- │   │   └── change_detection.json
- │   ├── referring_expression_segmentation
- │   │   ├── images
- │   │   ├── masks
- │   │   ├── metadata.jsonl
- │   │   └── referring_expression_segmentation.json
- │   └── spatial_relationship
- │       ├── images
- │       ├── metadata.jsonl
- │       └── spatial_relationship.json
- ├── image_level_comprehension
- │   ├── image_caption
- │   │   ├── images
- │   │   ├── metadata.jsonl
- │   │   └── image_caption.json
- │   ├── image_modality
- │   │   ├── images
- │   │   ├── metadata.jsonl
- │   │   └── image_modality.json
- │   ├── image_quality
- │   │   ├── images
- │   │   ├── metadata.jsonl
- │   │   └── image_quality.json
- │   ├── map_recognition
- │   │   ├── images
- │   │   ├── metadata.jsonl
- │   │   └── map_recognition.json
- │   └── scene_classification
- │       ├── images
- │       ├── metadata.jsonl
- │       └── scene_classification.json
- └── single_instance_identification
-     ├── attribute_recognition
-     │   ├── images
-     │   ├── metadata.jsonl
-     │   └── attribute_recognition.json
-     ├── hallucination_detection
-     │   ├── images
-     │   ├── metadata.jsonl
-     │   └── hallucination_detection.json
-     ├── landmark_recognition
-     │   ├── images
-     │   ├── metadata.jsonl
-     │   └── landmark_recognition.json
-     ├── object_counting
-     │   ├── images
-     │   ├── metadata.jsonl
-     │   └── object_counting.json
-     ├── object_localization
-     │   ├── images
-     │   ├── metadata.jsonl
-     │   └── object_localization.json
-     ├── object_presence
-     │   ├── images
-     │   ├── metadata.jsonl
-     │   └── object_presence.json
-     └── visual_grounding
-         ├── images
-         ├── metadata.jsonl
-         └── visual_grounding.json
-
- reasoning
- ├── assessment_reasoning
- │   ├── environmental_assessment
- │   │   ├── images
- │   │   ├── metadata.jsonl
- │   │   └── environmental_assessment.json
- │   └── resource_assessment
- │       ├── images
- │       ├── metadata.jsonl
- │       └── resource_assessment.json
- ├── ......
-
- ```
-
- <hr />
-
- ## 🌰 Example
-
- An example of the Multiple-Choice Question (MCQ) is as follows:
-
- ```json
- {
-     "id": "ef8777ba-27ee-4828-aab5-63214daf340d",
-     "image_path": "perception/single_instance_identification/object_counting/images/1.png",
-     "question": "Count the number of airplane present in this image.\nA.2\nB.1\nC.4\nD.3",
-     "answer": "A"
- }
- ```
-
+ ---
+ language:
+ - en
+ license: mit
+ size_categories:
+ - 10K<n<100K
+ task_categories:
+ - question-answering
+ - multiple-choice
+ tags:
+ - multi-modal
+ - remote-sensing
+ ---
+
+ # COREval: A Comprehensive and Objective Benchmark for Evaluating the Remote Sensing Capabilities of Large Vision-Language Models
+
+ [Paper](https://huggingface.co/papers/2411.18145)
+
+ **<p align="justify"> Abstract:** The rapid advancement of Large Vision-Language Models (VLMs), both general-domain models and those specifically tailored for remote sensing, has demonstrated exceptional perception and reasoning capabilities in Earth observation tasks. However, a benchmark for systematically evaluating their capabilities in this domain is still lacking. To bridge this gap, we propose COREval, an extensive benchmark designed to objectively evaluate the hierarchical remote sensing capabilities of VLMs. Focusing on 2 primary capability dimensions essential to remote sensing: perception and reasoning, we further categorize 6 secondary dimensions and 23 leaf tasks to ensure a well-rounded assessment coverage. COREval guarantees the quality of a total of 10,507 problems through a rigorous process of data collection from 50 globally distributed cities, question construction and quality control. The newly curated data and the format of multiple-choice questions with definitive answers allow for an objective and straightforward performance assessment. Our evaluation of 3 proprietary and 21 open-source VLMs highlights their critical limitations within this specialized context. We hope that COREval will serve as a valuable resource and offer deeper insights into the challenges and potential of VLMs in the field of remote sensing.
+ </p>
+
+ <hr />
+
+ ## 📂 Data Structure
+
+ COREval is organized according to the three-tier hierarchical dimension taxonomy, structured as follows:
+
+ ```bash
+ perception
+ ├── cross_instance_discerment
+ │   ├── attribute_comparison
+ │   │   ├── images
+ │   │   ├── metadata.jsonl
+ │   │   └── attribute_comparison.json
+ │   ├── change_detection
+ │   │   ├── images
+ │   │   ├── metadata.jsonl
+ │   │   └── change_detection.json
+ │   ├── referring_expression_segmentation
+ │   │   ├── images
+ │   │   ├── masks
+ │   │   ├── metadata.jsonl
+ │   │   └── referring_expression_segmentation.json
+ │   └── spatial_relationship
+ │       ├── images
+ │       ├── metadata.jsonl
+ │       └── spatial_relationship.json
+ ├── image_level_comprehension
+ │   ├── image_caption
+ │   │   ├── images
+ │   │   ├── metadata.jsonl
+ │   │   └── image_caption.json
+ │   ├── image_modality
+ │   │   ├── images
+ │   │   ├── metadata.jsonl
+ │   │   └── image_modality.json
+ │   ├── image_quality
+ │   │   ├── images
+ │   │   ├── metadata.jsonl
+ │   │   └── image_quality.json
+ │   ├── map_recognition
+ │   │   ├── images
+ │   │   ├── metadata.jsonl
+ │   │   └── map_recognition.json
+ │   └── scene_classification
+ │       ├── images
+ │       ├── metadata.jsonl
+ │       └── scene_classification.json
+ └── single_instance_identification
+     ├── attribute_recognition
+     │   ├── images
+     │   ├── metadata.jsonl
+     │   └── attribute_recognition.json
+     ├── hallucination_detection
+     │   ├── images
+     │   ├── metadata.jsonl
+     │   └── hallucination_detection.json
+     ├── landmark_recognition
+     │   ├── images
+     │   ├── metadata.jsonl
+     │   └── landmark_recognition.json
+     ├── object_counting
+     │   ├── images
+     │   ├── metadata.jsonl
+     │   └── object_counting.json
+     ├── object_localization
+     │   ├── images
+     │   ├── metadata.jsonl
+     │   └── object_localization.json
+     ├── object_presence
+     │   ├── images
+     │   ├── metadata.jsonl
+     │   └── object_presence.json
+     └── visual_grounding
+         ├── images
+         ├── metadata.jsonl
+         └── visual_grounding.json
+
+ reasoning
+ ├── assessment_reasoning
+ │   ├── environmental_assessment
+ │   │   ├── images
+ │   │   ├── metadata.jsonl
+ │   │   └── environmental_assessment.json
+ │   └── resource_assessment
+ │       ├── images
+ │       ├── metadata.jsonl
+ │       └── resource_assessment.json
+ ├── ......
+
+ ```
+
+ <hr />
+
+ ## 🌰 Example
+
+ An example of a Multiple-Choice Question (MCQ) is as follows:
+
+ ```json
+ {
+     "id": "ef8777ba-27ee-4828-aab5-63214daf340d",
+     "image_path": "perception/single_instance_identification/object_counting/images/1.png",
+     "question": "Count the number of airplane present in this image.\nA.2\nB.1\nC.4\nD.3",
+     "answer": "A"
+ }
+ ```
+
  <hr />
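For reviewers trying out the renamed dataset, here is a minimal sketch of consuming the MCQ files in the layout above. It assumes a local download of the dataset; `load_mcqs` and `accuracy` are illustrative helpers, not part of the dataset or the PR:

```python
import json
from pathlib import Path

def load_mcqs(task_json):
    """Load the list of MCQ records from one leaf task's JSON file,
    e.g. .../object_counting/object_counting.json."""
    return json.loads(Path(task_json).read_text(encoding="utf-8"))

def accuracy(records, predictions):
    """Fraction of records whose predicted choice letter (keyed by record
    id) matches the gold "answer" field."""
    if not records:
        return 0.0
    correct = sum(predictions.get(r["id"]) == r["answer"] for r in records)
    return correct / len(records)

# Self-contained demo on a record shaped like the README's example,
# so it runs without downloading anything.
demo = [{
    "id": "ef8777ba-27ee-4828-aab5-63214daf340d",
    "question": "Count the number of airplane present in this image.\nA.2\nB.1\nC.4\nD.3",
    "answer": "A",
}]
print(accuracy(demo, {"ef8777ba-27ee-4828-aab5-63214daf340d": "A"}))  # 1.0
```

Because every leaf task stores its questions in a single `<task>.json` next to its `images/` folder, the same two helpers work unchanged for all 23 tasks.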