nielsr (HF Staff) committed · Commit ff23e4e · verified · 1 Parent(s): ad494fb

Add link to paper and GitHub repository

This PR updates the dataset card with links to the original paper and the official GitHub repository. It also adds the `image-text-to-text` task category to the YAML metadata to reflect the benchmark's diagnostic description tasks for Vision-Language Models.

Files changed (1):
  1. README.md (+22 −15)
README.md CHANGED

````diff
@@ -1,32 +1,36 @@
 ---
 language:
-- en
-tags:
-- remote-sensing
-- image-quality-assessment
-- benchmark
-- visual-question-answering
+- en
+license: cc-by-4.0
 task_categories:
-- visual-question-answering
-- image-classification
+- image-text-to-text
+- visual-question-answering
+- image-classification
 pretty_name: SenseBench
-license: cc-by-4.0
+tags:
+- remote-sensing
+- image-quality-assessment
+- benchmark
 ---
 
 # SenseBench
 
 > A benchmark for remote sensing low-level visual perception and description in large vision-language models.
 
-🏠 [github](https://github.com/Zhong-Chenchen/SenseBench) | 🤗 [Hugging Face Subset](https://huggingface.co/datasets/Zhongchenchen/SenseBench_subset)
+🏠 [GitHub](https://github.com/Zhong-Chenchen/SenseBench) | 📄 [Paper](https://huggingface.co/papers/2605.10576) | 🤗 [Hugging Face Subset](https://huggingface.co/datasets/Zhongchenchen/SenseBench_subset)
 
 ## Overview
 
-SenseBench is a remote sensing benchmark for evaluating low-level visual perception and description in large vision-language models.
+SenseBench is the first dedicated diagnostic benchmark for remote sensing (RS) low-level visual perception and description. Driven by a physics-based hierarchical taxonomy, it features over 10K curated instances across 6 major and 22 fine-grained RS degradation categories. It is designed to evaluate whether Vision-Language Models (VLMs) can overcome the domain gap to perceive and articulate RS-specific artifacts.
+
+The benchmark evaluation consists of two complementary protocols:
+1. **Objective low-level visual perception**: Evaluating the model's ability to identify the presence and type of distortions.
+2. **Subjective diagnostic description**: Evaluating the model's ability to articulate RS artifacts in natural language based on completeness, correctness, and faithfulness.
 
 ## Supported Tasks
 
-- Visual question answering
-- Text generation
+- **Visual Question Answering**: Multiple-choice questions assessing degradation type and severity.
+- **Image-to-Text / Diagnostic Description**: Natural language generation describing visual artifacts.
 
 ## Language
 
@@ -43,7 +47,10 @@ Each example contains image paths, a question, an answer, and metadata describing
     "images/4fda312e-70d2-4df7-b1f7-2f06955bf338_0.png",
     "images/4fda312e-70d2-4df7-b1f7-2f06955bf338_1.png"
   ],
-  "question": "Using the options provided, rate the overall quality of Image 2 compared to Image 1.\nA.No/Slight distortion\nB.Moderate distortion\nC.Severe distortion",
+  "question": "Using the options provided, rate the overall quality of Image 2 compared to Image 1.
+A.No/Slight distortion
+B.Moderate distortion
+C.Severe distortion",
   "answer": "A",
   "meta": {
     "image_count": "multi",
@@ -56,4 +63,4 @@ Each example contains image paths, a question, an answer, and metadata describing
     "comparison": "intra-image"
   }
 }
-```
+```
````
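
The record schema shown in the diff (image paths, a multiple-choice question, an answer letter, and metadata) can be consumed with a few lines of Python. This is a minimal sketch assuming that layout; the `parse_options` helper is hypothetical and not part of any official SenseBench tooling:

```python
import json

# A record matching the schema from the dataset card example
# (the "\n" escapes separate the question stem from its options).
record = json.loads("""
{
  "images": [
    "images/4fda312e-70d2-4df7-b1f7-2f06955bf338_0.png",
    "images/4fda312e-70d2-4df7-b1f7-2f06955bf338_1.png"
  ],
  "question": "Using the options provided, rate the overall quality of Image 2 compared to Image 1.\\nA.No/Slight distortion\\nB.Moderate distortion\\nC.Severe distortion",
  "answer": "A",
  "meta": {"image_count": "multi", "comparison": "intra-image"}
}
""")

def parse_options(question: str) -> dict:
    """Split option lines like 'A.No/Slight distortion' out of the question text."""
    options = {}
    for line in question.split("\n"):
        # An option line starts with a single uppercase letter followed by a dot.
        if len(line) > 2 and line[0].isupper() and line[1] == ".":
            options[line[0]] = line[2:].strip()
    return options

options = parse_options(record["question"])
print(options[record["answer"]])  # text of the gold option
```

Keeping the options inside the question string (rather than as a separate field) matches the card's example, so evaluation code only needs this one string to build the full multiple-choice prompt.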