
🌋 STORM: Stimulating Trustworthy Ordinal Regression Ability of MLLMs

Benchmarking all-in-one visual rating of MLLMs with a comprehensive ordinal regression dataset.

Contents

STORM Weights

Please check out checkpoint_STORM for the public STORM checkpoints and instructions on how to use the weights.

Dataset

Pretraining Dataset

To ensure a robust foundation for diverse visual rating tasks, the STORM data collection deliberately integrates data from five domains: image quality assessment (IQA), image aesthetic assessment (IAA), facial age estimation (FAE), medical disease grading (MDG), and historical date estimation (HDE). These domains are intentionally chosen to cultivate a comprehensive skill set across varied visual rating tasks.

Domain Source Dataset Full Version Size Category
Image Quality Assessment (IQA) SPAQ 11,125 5 levels
Image Quality Assessment (IQA) ChallengeDB 1,169 5 levels
Image Quality Assessment (IQA) KonIQ 10,073 5 levels
Image Aesthetics Assessment (IAA) Aesthetics 13,706 5 levels
Image Aesthetics Assessment (IAA) TAD66K 66,327 5 levels
Image Aesthetics Assessment (IAA) AVA 255,508 5 levels
Facial Age Estimation (FAE) Adience 17,321 8 groups
Facial Age Estimation (FAE) CACD 163,446 14-62 years
Facial Age Estimation (FAE) Morph 50,015 16-77 years
Facial Age Estimation (FAE) UTK 24,106 1-116 years
Medical Disease Grading (MDG) Eyepacs 35,127 5 grades
Medical Disease Grading (MDG) DeepDR 2,000 5 grades
Medical Disease Grading (MDG) APTOS 3,662 5 grades
Historical Date Estimation (HDE) HCI 1,325 5 decades

Important notice: These datasets provide only images and numeric labels, so we recast them into a standardized VQA paradigm: the images are reused and the annotations are converted into textual form, enabling MLLMs to be jointly trained on heterogeneous tasks from diverse domains. In its original form, each data sample consists of a simple question and a corresponding numeric answer, but this paradigm can lead to numerical hallucination. We therefore add extra domain-driven prompts and a coarse-to-fine CoT to mitigate the issue, and we adopt a "text + numbers" form for the labels to enhance semantic understanding. An example comparing the original VQA with our proposed coarse-to-fine CoT VQA is shown in the figure below.
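For concreteness, a hypothetical before/after sample might look like the following (field names and exact wording are our illustration, not the released STORM schema):

```python
# Hypothetical illustration of the annotation conversion described above;
# field names and wording are ours, not the released STORM schema.
original_sample = {
    "question": "What is the quality rating of this image?",
    "answer": "3",  # bare numeric label, prone to numerical hallucination
}

converted_sample = {
    "query": "<image> You are now an advanced Image Quality Evaluator ...",
    # "text + numbers" label with an explicit coarse-to-fine CoT answer
    "response": "Coarse answer: rate3-4(Above Fair), Final answer: rate3(Good)",
}
```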

STORM Prompts

Generating the dataset for IQA

<image> You are now an advanced Image Quality Evaluator, and your task is to assess the quality of the provided image. Please evaluate the image’s quality based on a 5-rate scale: rate0(Bad), rate1(Poor), rate2(Fair), rate3(Good), rate4(Excellent). Please provide the coarse category that can help you answer the question better. Please first coarsely categorise the image: rate0-1(Below Fair), rate2(Fair), rate3-4(Above Fair). Based on the coarse classification, proceed to make a final rate prediction. The specific steps are as follows:
1. Make the coarse prediction with the candidates:rate0-1(Below Fair), rate2(Fair), rate3-4(Above Fair).
2. Based on the coarse classification, proceed to make a final rate prediction with the candidates: rate0(Bad), rate1(Poor), rate2(Fair), rate3(Good), rate4(Excellent).
3. Please note that the coarse thoughts and the final answer should be consistent.

Answer: [Coarse answer], [Final answer]
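The coarse-to-fine answer for a numeric label can be constructed mechanically. The sketch below is our illustration of the bucketing implied by the IQA prompt above (the function name is ours, not from the STORM release):

```python
# Illustrative sketch (ours, not from the STORM release): build the
# "Coarse answer, Final answer" string for a numeric 0-4 quality label,
# following the bucketing defined in the IQA prompt above.
RATE_NAMES = ["Bad", "Poor", "Fair", "Good", "Excellent"]

def build_iqa_answer(rating: int) -> str:
    """Map a 0-4 rating to its coarse bucket plus final 'text + number' label."""
    if rating <= 1:
        coarse = "rate0-1(Below Fair)"
    elif rating == 2:
        coarse = "rate2(Fair)"
    else:
        coarse = "rate3-4(Above Fair)"
    final = f"rate{rating}({RATE_NAMES[rating]})"
    return f"{coarse}, {final}"
```

The same pattern carries over to the IAA and HDE prompts by swapping in the level/phase vocabularies.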

Generating the dataset for IAA

<image> You are now an advanced Aesthetic Evaluation Evaluator, and your task is to assess the aesthetic quality of the provided image. Please evaluate the image’s aesthetic quality based on a 5-level scale: level0(Unacceptable), level1(Flawed), level2(Average), level3(Professional), level4(Excellent). Please first coarsely categorise the image: level0-1(Below Average), level2(Average), level3-4(Above Average). Based on the coarse classification, proceed to make a final level prediction. The specific steps are as follows:
1. Make the coarse prediction with the candidates:level0-1(Below Average), level2(Average), level3-4(Above Average).
2. Based on the coarse classification, proceed to make a final level prediction with the candidates: level0(Unacceptable), level1(Flawed), level2(Average), level3(Professional), level4(Excellent).
3. Please note that the coarse thoughts and the final answer should be consistent.

Answer: [Coarse answer], [Final answer]

Generating the dataset for FAE

<image> You are an experienced facial analysis expert, and you need to estimate the age group of the person in the provided facial image based on their facial features. The known age range of the image is from 16 to 77 years old. Please first coarsely categorise the image: Teenager(16-24 years old), Adult(25-47 years old), Elder(48+ years old). Based on the coarse classification, proceed to make a final age prediction. The final output should be in the format: Coarse Answer: [result], Predicted Age: [result]. The specific steps are as follows:
1. Make the coarse prediction with the candidates: Teenager(16-24 years old), Adult(25-47 years old), Elder(48+ years old).
2. Based on the coarse classification, proceed to make a final age prediction with the candidates: from 16 to 77 years old.
3. Please note that the coarse thoughts and the final answer should be consistent.

Answer: [Coarse answer], [Final answer]

Generating the dataset for HDE

<image> You are now an advanced history researcher, and you need to grade the provided images by decade. These are all candidate categories: phase0(1930s), phase1(1940s), phase2(1950s), phase3(1960s), and phase4(1970s). Please first coarsely categorise the image: Early(phase0-phase1), Mid(phase2), Late(phase3-phase4). Based on the coarse classification, proceed to make a final phase prediction. The final output should be in the format: Coarse Classification: [result], Predicted Phase: [result]. The specific steps are as follows:
1. Make the coarse prediction with the candidates: Early(phase0-phase1), Mid(phase2), Late(phase3-phase4).
2. Based on the coarse classification, proceed to make a final phase prediction with the candidates: phase0(1930s), phase1(1940s), phase2(1950s), phase3(1960s), and phase4(1970s).
3. Please note that the coarse thoughts and the final answer should be consistent.

Answer: [Coarse answer], [Final answer]

Evaluation

Below, we provide simple examples to demonstrate how to quickly load the Qwen2.5-VL model using 🤗 Transformers, along with testing it on our benchmark datasets:

import json
from tqdm import tqdm 
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# default: Load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype="auto", device_map="auto"
)

# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
#     "Qwen/Qwen2.5-VL-3B-Instruct",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")

def write_jsonl(data, filename):
    with open(filename, 'a', encoding='utf-8') as f:
        json_str = json.dumps(data, ensure_ascii=False)  
        f.write(json_str + '\n')

file_path = 'STORM/IO_qwen_test_vqa_oc_80k.jsonl'
output_json = "answer.jsonl"

with open(file_path, 'r') as file:
    for line in tqdm(list(file), desc="Testing"):
        raw = {}
        data = json.loads(line.strip())
        
        query = data.get('query')
        response = data.get('response')
        image_path = data.get('image_path')
        
        messages = [
            {
                "role": "user",
                "content": [
                    {
                    "type": "image",
                    "image": image_path,
                    },
                    {
                    "type": "text", 
                    "text": query},
                ],
            }
        ]

        text = processor.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )
        image_inputs, video_inputs = process_vision_info(messages)
        inputs = processor(
            text=[text],
            images=image_inputs,
            videos=video_inputs,
            padding=True,
            return_tensors="pt",
        )
        inputs = inputs.to("cuda")

        generated_ids = model.generate(**inputs, max_new_tokens=512)
        generated_ids_trimmed = [
            out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
        ]
        output_text = processor.batch_decode(
            generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
        )
        raw['label'] = response
        raw['answer'] = output_text
        write_jsonl(raw, output_json)
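After the loop finishes, answer.jsonl holds one {'label', 'answer'} record per test sample. The post-processing sketch below is our illustration, not part of the official release; it assumes each answer mentions an index such as rate3, level2, or phase4, and scores predictions with exact-match accuracy and mean absolute error, the usual ordinal regression metrics:

```python
import json
import re

def parse_index(text: str):
    """Extract the last rateN/levelN/phaseN index mentioned in a string."""
    matches = re.findall(r"(?:rate|level|phase)(\d+)", text)
    return int(matches[-1]) if matches else None

def score(path: str):
    """Compute exact-match accuracy and MAE over answer.jsonl records."""
    labels, preds = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            answer = rec["answer"]
            # batch_decode returns a list; take the first decoded string
            if isinstance(answer, list):
                answer = answer[0]
            y, p = parse_index(rec["label"]), parse_index(answer)
            if y is not None and p is not None:
                labels.append(y)
                preds.append(p)
    n = len(labels)
    if n == 0:
        return 0.0, 0.0
    acc = sum(y == p for y, p in zip(labels, preds)) / n
    mae = sum(abs(y - p) for y, p in zip(labels, preds)) / n
    return acc, mae
```

For the FAE prompts, whose answers are plain ages rather than rateN-style tokens, the extraction step would need to pull bare integers instead.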

Examples

The figure below shows the performance of our model on the lite version of the visual rating benchmark under different instruction-prompt strategies. As anticipated, the model without coarse-to-fine CoT yields lower performance, indicating the inherent difficulty of directly predicting ratings. In contrast, our baseline with coarse-to-fine CoT performs better, especially on zero-shot datasets, illustrating that coarse-to-fine CoT fosters robust and general reasoning for visual rating by capturing the ordinal nature of the task. Visualization results of coarse-to-fine CoT on different datasets are shown below.
