---
license: apache-2.0
task_categories:
- zero-shot-classification
- question-answering
pretty_name: Ordinal Regression Dataset
size_categories:
- 100K<n<1M
---

# 🌋 STORM: Stimulating Trustworthy Ordinal Regression Ability of MLLMs
*Benchmarking All-in-one Visual Rating of MLLMs with A Comprehensive Ordinal Regression Dataset.*

## Contents
- [STORM Weights](#storm-weights)
- [Dataset](#dataset)
- [Evaluation](#evaluation)
- [Examples](#examples)

## STORM Weights
Please check out [checkpoint_STORM](https://huggingface.co/datasets/ttlyy/ORD/tree/main/checkpoint_STORM) for public STORM checkpoints, and the [instructions](https://anonymous.4open.science/r/STORM-CDC7) on how to use the weights.
## Dataset
| Data file name | Size |
| --- | ---: |
| [STORM_instruct_MAX_527k.jsonl](https://huggingface.co/datasets/ttlyy/ORD/blob/main/ORD/IO_qwen_train_oc_527k.jsonl) | 383 MB |
| [STORM_instruct_Lite_123k.json](https://huggingface.co/datasets/ttlyy/ORD/blob/main/ORD/IO_qwen_train_oc_123k.jsonl) | 87.1 MB |
| [STORM_instruct_Test_80k.json](https://huggingface.co/datasets/ttlyy/ORD/blob/main/ORD/IO_qwen_test_oc_80k.jsonl) | 58.4 MB |
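Each data file is a JSONL file, one record per line. The sketch below illustrates the record structure using the three fields consumed by the evaluation script later in this card (`query`, `response`, `image_path`); the sample values are illustrative, not real data.

```python
import json

# An illustrative JSONL record; field names mirror those read by the
# evaluation script below, but the values here are made up.
sample_line = json.dumps({
    "query": "<image> You are now an advanced Image Quality Evaluator ...",
    "response": "Answer: rate3-4(Above Fair), rate3(Good)",
    "image_path": "images/example.jpg",
})

record = json.loads(sample_line)
query = record["query"]
response = record["response"]
image_path = record["image_path"]
print(image_path)  # → images/example.jpg
```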
### Pretraining Dataset
To provide a robust foundation across visual rating tasks, the STORM data collection deliberately integrates diverse data domains: image quality assessment (IQA), image aesthetic assessment (IAA), facial age estimation (FAE), medical disease grading (MDG), and image historical date estimation (HDE).
These domains were intentionally chosen to cultivate a comprehensive skill set across varied visual rating tasks.
| Domain | Source Dataset | Full Version Size | Category |
| --- | --- | --- | --- |
| Image Quality Assessment (IQA) | SPAQ | 11,125 | 5 levels |
| Image Quality Assessment (IQA) | ChallengeDB | 1,169 | 5 levels |
| Image Quality Assessment (IQA) | KonIQ | 10,073 | 5 levels |
| Image Aesthetics Assessment (IAA) | Aesthetics | 13,706 | 5 levels |
| Image Aesthetics Assessment (IAA) | TAD66K | 66,327 | 5 levels |
| Image Aesthetics Assessment (IAA) | AVA | 255,508 | 5 levels |
| Facial Age Estimation (FAE) | Adience | 17,321 | 8 groups |
| Facial Age Estimation (FAE) | CACD | 163,446 | 14-62 years |
| Facial Age Estimation (FAE) | Morph | 50,015 | 16-77 years |
| Facial Age Estimation (FAE) | UTK | 24,106 | 1-116 years |
| Medical Disease Grading (MDG) | Eyepacs | 35,127 | 5 grades |
| Medical Disease Grading (MDG) | DeepDR | 2,000 | 5 grades |
| Medical Disease Grading (MDG) | APTOS | 3,662 | 5 grades |
| Historical Date Estimation (HDE) | HCI | 1,325 | 5 decades |

**Important notice**: Since these source datasets provide only images and numeric labels, we recast them into a standardized VQA paradigm: the images are reused and the annotations are rewritten in textual form, enabling MLLMs to be jointly trained on heterogeneous tasks from diverse domains.
Each data sample originally consists of a simple question and a corresponding numeric answer; however, this paradigm can lead to numerical hallucination. We therefore add extra domain-driven prompts and a coarse-to-fine CoT to mitigate this issue.
An example comparing the original VQA with our proposed coarse-to-fine CoT process is shown in the following figure. We also adopt a text + number form for the labels to enhance semantic understanding.
![A data example with the original VQA compared with our coarse-to-fine CoT VQA.](./example.png)
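The coarse-to-fine pairing and the text + number label form can be sketched as below for the IQA scale. The label strings follow the prompt text in the next section, but the helper function itself is hypothetical and not part of the released code.

```python
# Fine-grained labels in "text + number" form, as used in the IQA prompts.
FINE_LABELS = ["rate0(Bad)", "rate1(Poor)", "rate2(Fair)",
               "rate3(Good)", "rate4(Excellent)"]

def coarse_to_fine_answer(rating: int) -> str:
    """Hypothetical helper: map a numeric IQA rating (0-4) to the
    coarse-to-fine answer string used in the CoT annotations."""
    if rating <= 1:
        coarse = "rate0-1(Below Fair)"
    elif rating == 2:
        coarse = "rate2(Fair)"
    else:
        coarse = "rate3-4(Above Fair)"
    return f"Answer: {coarse}, {FINE_LABELS[rating]}"

print(coarse_to_fine_answer(3))  # → Answer: rate3-4(Above Fair), rate3(Good)
```

The same pattern applies to the IAA, FAE, and HDE scales with their respective coarse bins.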

### STORM Prompts
*Generating the dataset for IQA*
```
<image> You are now an advanced Image Quality Evaluator, and your task is to assess the quality of the provided image. Please evaluate the image’s quality based on a 5-rate scale: rate0(Bad), rate1(Poor), rate2(Fair), rate3(Good), rate4(Excellent). Please provide the coarse category that can help you answer the question better. Please first coarsely categorise the image: rate0-1(Below Fair), rate2(Fair), rate3-4(Above Fair). Based on the coarse classification, proceed to make a final rate prediction. The specific steps are as follows:
1. Make the coarse prediction with the candidates: rate0-1(Below Fair), rate2(Fair), rate3-4(Above Fair).
2. Based on the coarse classification, proceed to make a final rate prediction with the candidates: rate0(Bad), rate1(Poor), rate2(Fair), rate3(Good), rate4(Excellent).
3. Please note that the coarse thoughts and the final answer should be consistent.

Answer: [Coarse answer], [Final answer]

```
*Generating the dataset for IAA*
```
<image> You are now an advanced Aesthetic Evaluator, and your task is to assess the aesthetic quality of the provided image. Please evaluate the image’s aesthetic quality based on a 5-level scale: level0(Unacceptable), level1(Flawed), level2(Average), level3(Professional), level4(Excellent). Please first coarsely categorise the image: level0-1(Below Average), level2(Average), level3-4(Above Average). Based on the coarse classification, proceed to make a final level prediction. The specific steps are as follows:
1. Make the coarse prediction with the candidates: level0-1(Below Average), level2(Average), level3-4(Above Average).
2. Based on the coarse classification, proceed to make a final level prediction with the candidates: level0(Unacceptable), level1(Flawed), level2(Average), level3(Professional), level4(Excellent).
3. Please note that the coarse thoughts and the final answer should be consistent.

Answer: [Coarse answer], [Final answer]
```

*Generating the dataset for FAE*
```
<image> You are an experienced facial analysis expert, and you need to estimate the age group of the person in the provided facial image based on their facial features. The known age range of the images is from 16 to 77 years old. Please first coarsely categorise the image: Teenager(16-24 years old), Adult(25-47 years old), Elder(48+ years old). Based on the coarse classification, proceed to make a final age prediction. The final output should be in the format: Coarse Answer: [result], Predicted Age: [result]. The specific steps are as follows:
1. Make the coarse prediction with the candidates: Teenager(16-24 years old), Adult(25-47 years old), Elder(48+ years old).
2. Based on the coarse classification, proceed to make a final age prediction with the candidates: from 16 to 77 years old.
3. Please note that the coarse thoughts and the final answer should be consistent.

Answer: [Coarse answer], [Final answer]
```

*Generating the dataset for HDE*
```
<image> You are now an advanced history researcher, and you need to grade the provided images by decade. These are all candidate categories: phase0(1930s), phase1(1940s), phase2(1950s), phase3(1960s), and phase4(1970s). Please first coarsely categorise the image: Early(phase0-phase1), Mid(phase2), Late(phase3-phase4). Based on the coarse classification, proceed to make a final phase prediction. The final output should be in the format: Coarse Classification: [result], Predicted Phase: [result]. The specific steps are as follows:
1. Make the coarse prediction with the candidates: Early(phase0-phase1), Mid(phase2), Late(phase3-phase4).
2. Based on the coarse classification, proceed to make a final phase prediction with the candidates: phase0(1930s), phase1(1940s), phase2(1950s), phase3(1960s), and phase4(1970s).
3. Please note that the coarse thoughts and the final answer should be consistent.

Answer: [Coarse answer], [Final answer]
```

## Evaluation
Below is a simple example showing how to quickly load the Qwen2.5-VL model with 🤗 Transformers and test it on our benchmark datasets:
```python
import json
import torch  # needed if you enable the bfloat16 / flash-attention variant below
from tqdm import tqdm
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# default: Load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype="auto", device_map="auto"
)

# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
#     "Qwen/Qwen2.5-VL-3B-Instruct",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")

def write_jsonl(data, filename):
    # Append one JSON record per line.
    with open(filename, 'a', encoding='utf-8') as f:
        f.write(json.dumps(data, ensure_ascii=False) + '\n')

file_path = 'STORM/IO_qwen_test_vqa_oc_80k.jsonl'
output_json = "answer.jsonl"

with open(file_path, 'r') as file:
    for line in tqdm(list(file), desc="Testing"):
        raw = {}
        data = json.loads(line.strip())
        
        query = data.get('query')
        response = data.get('response')
        image_path = data.get('image_path')
        
        messages = [
            {
                "role": "user",
                "content": [
                    {"type": "image", "image": image_path},
                    {"type": "text", "text": query},
                ],
            }
        ]

        text = processor.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )
        image_inputs, video_inputs = process_vision_info(messages)
        inputs = processor(
            text=[text],
            images=image_inputs,
            videos=video_inputs,
            padding=True,
            return_tensors="pt",
        )
        inputs = inputs.to("cuda")

        generated_ids = model.generate(**inputs, max_new_tokens=512)
        generated_ids_trimmed = [
            out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
        ]
        output_text = processor.batch_decode(
            generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
        )
        raw['label'] = response
        raw['answer'] = output_text
        write_jsonl(raw, output_json)
```
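Once `answer.jsonl` is written, the predictions can be scored against the labels. The sketch below is a hypothetical scoring helper, not part of the released code: it extracts the last `rateN` / `levelN` / `phaseN` token from the answer strings (matching the "Answer: [Coarse answer], [Final answer]" format in the prompts above) and computes exact-match accuracy; adapt the regex to the outputs you actually observe.

```python
import json
import re

def extract_final(text):
    """Return the last rateN/levelN/phaseN token in the string, or None.
    The final answer follows the coarse answer, so the last match wins."""
    hits = re.findall(r"(?:rate|level|phase)\d+", text)
    return hits[-1] if hits else None

def score(records):
    """Exact-match accuracy over records with 'label' and 'answer' fields,
    as produced by the evaluation loop above."""
    correct = total = 0
    for rec in records:
        answer = rec["answer"]
        # batch_decode returns a list; unwrap if needed.
        pred_text = answer[0] if isinstance(answer, list) else answer
        label = extract_final(rec["label"])
        if label is None:
            continue
        total += 1
        correct += int(extract_final(pred_text) == label)
    return correct / max(total, 1)

# Tiny in-memory demo instead of reading answer.jsonl:
demo = [{"label": "Answer: rate3-4(Above Fair), rate3(Good)",
         "answer": ["Answer: rate3-4(Above Fair), rate3(Good)"]}]
print(score(demo))  # → 1.0
```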
## Examples
The figure below shows the performance of our model on the lite version of the visual rating benchmark under different instruction-prompt strategies. As anticipated, the model without coarse-to-fine CoT yields lower performance, indicating the inherent difficulty of predicting ratings directly. In contrast, our baseline with coarse-to-fine CoT performs better, especially on zero-shot datasets, showing that coarse-to-fine CoT strengthens robust and general reasoning for visual rating by learning its ordinal regression nature.
![Visualization results of coarse-to-fine CoT on different datasets.](./Visualization_results.png)