---
license: apache-2.0
task_categories:
- zero-shot-classification
- question-answering
pretty_name: Ordinal Regression Dataset
size_categories:
- 100K<n<1M
---

*Generating the dataset for IQA*

```
You are now an advanced Image Quality Evaluator, and your task is to assess the quality of the provided image. Please evaluate the image’s quality based on a 5-rate scale: rate0(Bad), rate1(Poor), rate2(Fair), rate3(Good), rate4(Excellent). Please first coarsely categorise the image: rate0-1(Below Fair), rate2(Fair), rate3-4(Above Fair). Based on the coarse classification, proceed to make a final rate prediction. The specific steps are as follows: 1. Make the coarse prediction with the candidates: rate0-1(Below Fair), rate2(Fair), rate3-4(Above Fair). 2. Based on the coarse classification, proceed to make a final rate prediction with the candidates: rate0(Bad), rate1(Poor), rate2(Fair), rate3(Good), rate4(Excellent). 3. Please note that the coarse thoughts and the final answer should be consistent. Answer: [Coarse answer], [Final answer]
```

*Generating the dataset for IAA*

```
You are now an advanced Aesthetic Evaluation Evaluator, and your task is to assess the aesthetic quality of the provided image. Please evaluate the image’s aesthetic quality based on a 5-level scale: level0(Unacceptable), level1(Flawed), level2(Average), level3(Professional), level4(Excellent). Please first coarsely categorise the image: level0-1(Below Average), level2(Average), level3-4(Above Average). Based on the coarse classification, proceed to make a final level prediction. The specific steps are as follows: 1. Make the coarse prediction with the candidates: level0-1(Below Average), level2(Average), level3-4(Above Average). 2. Based on the coarse classification, proceed to make a final level prediction with the candidates: level0(Unacceptable), level1(Flawed), level2(Average), level3(Professional), level4(Excellent). 3.
Please note that the coarse thoughts and the final answer should be consistent. Answer: [Coarse answer], [Final answer]
```

*Generating the dataset for MDG*

```
You are an experienced facial analysis expert, and you need to estimate the age group of the person in the provided facial image based on their facial features. The known age range of the image is from 16 to 77 years old. Please first coarsely categorise the image: Teenager(16-24 years old), Adult(25-47 years old), Elder(48+ years old). Based on the coarse classification, proceed to make a final age prediction. The final output should be in the format: Coarse Answer: [result], Predicted Age: [result]. The specific steps are as follows: 1. Make the coarse prediction with the candidates: Teenager(16-24 years old), Adult(25-47 years old), Elder(48+ years old). 2. Based on the coarse classification, proceed to make a final age prediction with the candidates: from 16 to 77 years old. 3. Please note that the coarse thoughts and the final answer should be consistent. Answer: [Coarse answer], [Final answer]
```

*Generating the dataset for HDE*

```
You are now an advanced history researcher, and you need to grade the provided images by decade. These are all candidate categories: phase0(1930s), phase1(1940s), phase2(1950s), phase3(1960s), and phase4(1970s). Please first coarsely categorise the image: Early(phase0-phase1), Mid(phase2), Late(phase3-phase4). Based on the coarse classification, proceed to make a final phase prediction. The final output should be in the format: Coarse Classification: [result], Predicted Phase: [result]. The specific steps are as follows: 1. Make the coarse prediction with the candidates: Early(phase0-phase1), Mid(phase2), Late(phase3-phase4). 2. Based on the coarse classification, proceed to make a final phase prediction with the candidates: phase0(1930s), phase1(1940s), phase2(1950s), phase3(1960s), and phase4(1970s). 3.
Please note that the coarse thoughts and the final answer should be consistent. Answer: [Coarse answer], [Final answer]
```

## Evaluation

Below, we provide a simple example that demonstrates how to quickly load the Qwen2.5-VL model using 🤗 Transformers and test it on our benchmark datasets:

```python
import json

from tqdm import tqdm
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Default: load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype="auto", device_map="auto"
)

# We recommend enabling flash_attention_2 for better acceleration and memory
# saving, especially in multi-image and video scenarios.
# import torch
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
#     "Qwen/Qwen2.5-VL-3B-Instruct",
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

# Default processor
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")

def write_jsonl(data, filename):
    # Append one JSON object per line.
    with open(filename, 'a', encoding='utf-8') as f:
        f.write(json.dumps(data, ensure_ascii=False) + '\n')

file_path = 'STORM/IO_qwen_test_vqa_oc_80k.jsonl'
output_json = "answer.jsonl"

with open(file_path, 'r') as file:
    for line in tqdm(list(file), desc="Testing"):
        raw = {}
        data = json.loads(line.strip())
        query = data.get('query')
        response = data.get('response')
        image_path = data.get('image_path')
        messages = [
            {
                "role": "user",
                "content": [
                    {"type": "image", "image": image_path},
                    {"type": "text", "text": query},
                ],
            }
        ]
        text = processor.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )
        image_inputs, video_inputs = process_vision_info(messages)
        inputs = processor(
            text=[text],
            images=image_inputs,
            videos=video_inputs,
            padding=True,
            return_tensors="pt",
        )
        inputs = inputs.to("cuda")
        generated_ids = model.generate(**inputs,
                                       max_new_tokens=512)
        generated_ids_trimmed = [
            out_ids[len(in_ids):]
            for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
        ]
        output_text = processor.batch_decode(
            generated_ids_trimmed,
            skip_special_tokens=True,
            clean_up_tokenization_spaces=False,
        )
        raw['label'] = response
        raw['answer'] = output_text
        write_jsonl(raw, output_json)
```

## Examples

The figure below shows the performance of our model on the lite version of the visual rating benchmark using different instruction-prompt strategies. As anticipated, the model that does not employ the coarse-to-fine CoT yields lower performance, which indicates the inherent difficulty of directly predicting ratings. In contrast, our baseline with the coarse-to-fine CoT performs better, especially on the zero-shot datasets, illustrating its effectiveness in building robust and general reasoning for visual rating by learning the ordinal-regression nature of the task.

![Visualization results of coarse-to-fine CoT on different datasets.](./Visualization_results.png)
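The evaluation script above writes one record per example to `answer.jsonl`, each holding the ground-truth `label` and the model's decoded `answer`. A minimal scoring sketch is shown below. It assumes the answers follow the `Answer: [Coarse answer], [Final answer]` format requested by the prompts (e.g. `rate3-4(Above Fair), rate4(Excellent)`); the parsing regex and the function names `parse_final_level`/`score` are illustrative helpers, not part of the released code.

```python
import re

def parse_final_level(text, prefix="rate"):
    # Pull every single-level mention such as "rate3" out of the answer.
    # A coarse span like "rate3-4" only matches at its left endpoint, so
    # the last match is the fine-grained (final) prediction. Illustrative
    # helper -- the exact answer format is an assumption.
    if isinstance(text, list):  # batch_decode returns a list of strings
        text = " ".join(text)
    matches = re.findall(rf"{prefix}(\d+)", text)
    return int(matches[-1]) if matches else None

def score(records, prefix="rate"):
    # Accuracy and mean absolute error over parsed (label, answer) pairs,
    # mirroring the fields written to answer.jsonl above.
    correct = err = n = 0
    for rec in records:
        label = parse_final_level(rec["label"], prefix)
        pred = parse_final_level(rec["answer"], prefix)
        if label is None or pred is None:
            continue  # skip unparseable outputs
        n += 1
        correct += int(pred == label)
        err += abs(pred - label)
    return correct / n, err / n

# Example with two hand-written records in the answer.jsonl layout:
records = [
    {"label": "rate4(Excellent)",
     "answer": ["rate3-4(Above Fair), rate4(Excellent)"]},
    {"label": "rate0(Bad)",
     "answer": ["rate0-1(Below Fair), rate1(Poor)"]},
]
acc, mae = score(records)
print(acc, mae)  # 0.5 0.5
```

To score a real run, build `records` by reading `answer.jsonl` line by line with `json.loads`, and pass `prefix="level"` or `prefix="phase"` for the IAA and HDE prompt formats.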