ttlyy committed
Commit 7496d66 · verified · 1 Parent(s): d6ae625

Update README.md

Files changed (1): README.md (+81 −2)

README.md CHANGED
@@ -50,7 +50,7 @@ Specifically, each data sample originally consists of a simple question and a co
 An example with the original VQA and our proposed coarse-to-fine CoT process is shown in the following figure. Meanwhile, we adopt the form of text + numbers for the labels to enhance semantic understanding.
 ![A data example with the original VQA compared with our coarse-to-fine CoT VQA.](./example.png)
 
-### STORM Prompts
+### ORD Prompts
 *Generating the dataset for IQA*
 ```
 <image> You are now an advanced Image Quality Evaluator, and your task is to assess the quality of the provided image. Please evaluate the image’s quality based on a 5-rate scale: rate0(Bad), rate1(Poor), rate2(Fair), rate3(Good), rate4(Excellent). Please provide the coarse category that can help you answer the question better. Please first coarsely categorise the image: rate0-1(Below Fair), rate2(Fair), rate3-4(Above Fair). Based on the coarse classification, proceed to make a final rate prediction. The specific steps are as follows:
@@ -91,7 +91,86 @@ Answer: [Coarse answer], [Final answer]
 Answer: [Coarse answer], [Final answer]
 ```
 
-
 ## Evaluation
+Below, we provide a simple example to demonstrate how to quickly load the Qwen2.5-VL model using 🤗 Transformers and test it on our benchmark datasets:
+```python
+import json
+import torch
+from tqdm import tqdm
+from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
+from qwen_vl_utils import process_vision_info
+
+# Default: load the model on the available device(s)
+model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+    "Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype="auto", device_map="auto"
+)
+
+# We recommend enabling flash_attention_2 for better acceleration and memory saving,
+# especially in multi-image and video scenarios:
+# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+#     "Qwen/Qwen2.5-VL-3B-Instruct",
+#     torch_dtype=torch.bfloat16,
+#     attn_implementation="flash_attention_2",
+#     device_map="auto",
+# )
+
+# Default processor
+processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")
+
+def write_jsonl(data, filename):
+    # Append one JSON record per line
+    with open(filename, 'a', encoding='utf-8') as f:
+        json_str = json.dumps(data, ensure_ascii=False)
+        f.write(json_str + '\n')
+
+file_path = 'ORD/IO_qwen_test_vqa_oc_80k.jsonl'
+output_json = "answer.jsonl"
+
+with open(file_path, 'r') as file:
+    for line in tqdm(list(file), desc="Testing"):
+        raw = {}
+        data = json.loads(line.strip())
+
+        query = data.get('query')
+        response = data.get('response')
+        image_path = data.get('image_path')
+
+        messages = [
+            {
+                "role": "user",
+                "content": [
+                    {
+                        "type": "image",
+                        "image": image_path,
+                    },
+                    {
+                        "type": "text",
+                        "text": query,
+                    },
+                ],
+            }
+        ]
+
+        text = processor.apply_chat_template(
+            messages, tokenize=False, add_generation_prompt=True
+        )
+        image_inputs, video_inputs = process_vision_info(messages)
+        inputs = processor(
+            text=[text],
+            images=image_inputs,
+            videos=video_inputs,
+            padding=True,
+            return_tensors="pt",
+        )
+        inputs = inputs.to("cuda")
+
+        generated_ids = model.generate(**inputs, max_new_tokens=512)
+        generated_ids_trimmed = [
+            out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
+        ]
+        output_text = processor.batch_decode(
+            generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
+        )
+        raw['label'] = response
+        raw['answer'] = output_text[0]  # batch_decode returns a list; take the single decoded string
+        write_jsonl(raw, output_json)
+```
 ## Examples