Commit f79b6fc (verified) · 1 Parent(s): c22d407
Committed by eve: Update README.md

Files changed (1): README.md (+192 −2)

README.md CHANGED
## Evaluation Metric

## Prompts
### Generation Prompts
#### Answer Generation Prompt for LLM-Based Method

# Input
Query: {}
Context: {}

# Task
Imagine you are an expert in handling multimodal input queries and producing coherent text-image responses. You will receive:
1. Query: The user query to be answered.
2. Contexts containing multiple images represented as placeholders <img>.
- The input context follows the format:
[context_1] <img1>, [context_2] <img2>, ...

…

# Output Example
Doing household chores is a daily task that helps maintain a clean home. In the kitchen, dishes are neatly washed and placed in the drying rack, ready to be put away once they dry.<img10> Similarly, in the living room, the sofa cushions are fluffed and arranged properly, creating a comfortable space for relaxation.<img11>

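The template above is filled by substituting the two `{}` slots (Query and Context), and the model's answer comes back with interleaved `<imgN>` placeholders. The sketch below is one plausible way to serialize the context and to split an answer back into text/image segments; the helper names (`build_context`, `build_prompt`, `split_answer`) and the exact context serialization are illustrative assumptions, not part of the dataset's tooling.

```python
import re

# Illustrative sketch (not the authors' code): fill the prompt's {} slots and
# recover the interleaved <imgN> placeholders from a generated answer.
PLACEHOLDER = re.compile(r"<img(\d+)>")

def build_context(contexts: list[str]) -> str:
    # Assumed serialization: "[context_1] <img1>, [context_2] <img2>, ..."
    return ", ".join(f"[{c}] <img{i}>" for i, c in enumerate(contexts, start=1))

def build_prompt(template: str, query: str, contexts: list[str]) -> str:
    # `template` is the prompt text above, with two positional {} slots.
    return template.format(query, build_context(contexts))

def split_answer(answer: str) -> list[tuple[str, int | None]]:
    """Split an interleaved answer into (text_segment, image_index) pairs."""
    parts: list[tuple[str, int | None]] = []
    last = 0
    for m in PLACEHOLDER.finditer(answer):
        parts.append((answer[last:m.start()].strip(), int(m.group(1))))
        last = m.end()
    tail = answer[last:].strip()
    if tail:
        parts.append((tail, None))  # trailing text with no image
    return parts

# split_answer("...once they dry.<img10> ...relaxation.<img11>")
# -> [("...once they dry.", 10), ("...relaxation.", 11)]
```
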
#### Answer Generation Prompt for MLLM-Based Methods

# Input
Query: {}
Context: {}
Image Caption: {}

# Task
Imagine you are an expert in handling multimodal input queries and producing coherent text-image responses.
You will receive:
1. Query: The user query to be answered.
2. Contexts.
3. A set of images.
4. A set of image captions.
- Each caption is sequentially aligned in a one-to-one correspondence with its respective input image.

Your task is to answer the query based solely on the content of the context and the input image information. First, understand the images visually and textually, using the given images and image captions, and select appropriate images from the input (if none are suitable, you may choose not to include any). Next, based on the provided contexts and query, generate a multimodal answer combining text and the selected images.

# Requirements
Ensure that your answer does not include any additional information outside the context. Please note, your answer should be presented in an interwoven text-image format, where you select images from the context and output them in the corresponding placeholder format. Please provide only the answer, without including any analysis.
Image Insert: When inserting image placeholders, place them at the most appropriate point within the answer. Image placeholders should be embedded naturally in the answer to support and enhance understanding, such as when describing specific locations, historical events, or notable buildings.

# Output Format
Please output the answer in an interwoven text-image format, where you select images from the context provided and output them in the corresponding placeholder format.

# Output Example
Doing household chores is a daily task that helps maintain a clean home. In the kitchen, dishes are neatly washed and placed in the drying rack, ready to be put away once they dry.<img10> Similarly, in the living room, the sofa cushions are fluffed and arranged properly, creating a comfortable space for relaxation.<img11>

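For MLLM-based methods the images themselves are passed alongside the filled template, with captions aligned one-to-one with the images. Below is a rough sketch of how such a request could be assembled, assuming an OpenAI-style multimodal chat payload (content parts of type `text` and `image_url`); the caption serialization and helper names are assumptions for illustration only.

```python
import base64
from pathlib import Path

def image_part(path: str) -> dict:
    # Encode a local image as a base64 data URL content part (OpenAI-style schema).
    b64 = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}

def build_mllm_messages(template: str, query: str, context: str,
                        image_paths: list[str], captions: list[str]) -> list[dict]:
    # Captions must align one-to-one with images, as the prompt requires;
    # here they are joined into the template's third {} slot (assumed layout).
    assert len(image_paths) == len(captions)
    caption_block = " ".join(f"[caption_{i}] {c}" for i, c in enumerate(captions, 1))
    content = [{"type": "text", "text": template.format(query, context, caption_block)}]
    content += [image_part(p) for p in image_paths]  # images follow caption order
    return [{"role": "user", "content": content}]
```
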
#### Answer Generation Prompt for Rule-Based Methods

# Task
Imagine you are a text QA expert, skilled in delivering contextually relevant answers. You will receive:
1. Query.
2. Contexts.

Your task is to answer the query based solely on the content of the context.

# Requirements
Ensure that your answer does not include any additional information outside the context. Please note that your answer should be in pure text format.

# Output Format
Provide the answer in pure text format. Do not include any information beyond what is contained in the context.

### Evaluation Prompts

#### Answer Evaluation Prompt for Image Relevance

# Input
Query: {}
Answer: {}
Image Context: {}
Image Caption: {}

# Task
Imagine you are a multimodal QA evaluation expert. Your task is to evaluate the relevance of the selected images within an answer to the given query. Specifically, the answer contains both text and images. You need to assess whether the selected images are relevant to the QA pair in terms of content. The evaluation results should be output in the form of reasons and scores.

# Answer Input Format
[text_1] <img_1> [text_2] <img_2>...
Explanation:
Each [text_x] is a piece of pure text context, and each <img> represents an image. The images will be provided in the same order as the placeholders <img>.

# Image Context Input Format
[context_above] <img> [context_bottom]
Explanation:
This format represents the contextual information surrounding the image within its original document. It provides supplementary information to assist in evaluating the image.

# Scoring Criteria of Relevance (Each Image)
When scoring, strictly adhere to the following standards, with a range of 1 to 5:
- 1 point, Completely unrelated: The image has no connection to the main content of the query and answer, and is irrelevant.
- 2 points, Weakly related: The image has a very tenuous connection to the main content of the query and answer.
- 3 points, Partially related: The image is somewhat connected to part of the content of the query and answer.
- 4 points, Mostly related: The image has a fairly clear connection to the main content of the query and answer.
- 5 points, Highly related: The image is highly relevant to the content of the query and answer.
Provide a brief reason for the evaluation along with a score from 1 to 5. Ensure you do not use any evaluation criteria beyond the query and answer.

# Output Format
Please output two lines for the results: the first line is your reasoning for the score, and the second line is the score. Strictly follow this format without any additional content.

# Output Example
Partially related: the image depicts the general structure of the gate but does not clearly show the number of pillars, making it only somewhat relevant to the QA.
<relevance_score>3</relevance_score>

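The relevance judge (and the effectiveness and overall-quality judges that follow) returns a one-line reason followed by an integer wrapped in a tag such as `<relevance_score>3</relevance_score>`. Here is a minimal parsing sketch, assuming malformed outputs are simply treated as missing; `parse_score` is an illustrative name, and the tag strings follow the output examples in this README.

```python
import re
from typing import Optional

def parse_score(judge_output: str, tag: str) -> Optional[int]:
    # `tag` is e.g. "relevance_score", "effective_score", or "overall_quality_score",
    # matching the output examples shown in this section.
    m = re.search(rf"<{tag}>\s*(\d+)\s*</{tag}>", judge_output)
    return int(m.group(1)) if m else None

# parse_score("Partially related: ...\n<relevance_score>3</relevance_score>",
#             "relevance_score")  # -> 3
```
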
#### Answer Evaluation Prompt for Image Effectiveness

# Input
Query: {}
Answer: {}
Image Context: {}
Image Caption: {}

# Task
Imagine you are a multimodal QA evaluation expert. Your task is to evaluate the effectiveness of the selected images within an answer to the given query. Specifically, the answer contains both text and images. You need to assess whether the selected images are effective for the QA pair in terms of content. The evaluation results should be output in the form of reasons and scores.

# Answer Input Format
[text_1] <img_1> [text_2] <img_2>...
Explanation:
Each [text_x] is a piece of pure text context, and each <img> represents an image. The images will be provided in the same order as the placeholders <img>.

# Image Context Input Format
[context_above] <img> [context_bottom]
Explanation:
This format represents the contextual information surrounding the image within its original document. It provides supplementary information to assist in evaluating the image.

# Scoring Criteria of Effectiveness (Each Image)
When scoring, strictly adhere to the following standards, with a range of 1 to 5:
- 1 point, Harmful: The images in the answer are harmful to answering the query, such as causing serious misunderstanding for the reader.
- 2 points, Irrelevant: The images in the answer are mostly unrelated to the query and the answer, with little to no connection overall.
- 3 points, Partially Effective: The images in the answer are somewhat effective in helping the reader understand the answer to the query.
- 4 points, Mostly Effective: The images in the answer are largely consistent with the answer to the query and effectively help the reader better understand the answer.
- 5 points, Highly Effective: The images in the answer provide crucial details for answering the query. They not only align with the answer but also offer highly effective supplementary information that aids in understanding the query-answer pair from a multimodal perspective.
Provide a brief reason for the evaluation along with a score from 1 to 5. Ensure you do not use any evaluation criteria beyond the query and answer.

# Output Format
Please output two lines for the results: the first line is your reasoning for the score, and the second line is the score. Strictly follow this format without any additional content.

# Output Example
Highly effective: The images in the answer, depicting the front entrance with three pillars, are highly effective in helping readers understand the query about how many pillars there are. They strongly support the response that states there are three pillars. All images provide crucial details that aid in the reader's comprehension.
<effective_score>5</effective_score>

#### Answer Evaluation Prompt for Comprehensive Answer Quality

# Input
Query: {}
Answer: {}
Image Context: {}
Image Caption: {}

# Task
Imagine you are a multimodal QA evaluation expert. Your task is to evaluate the overall quality of the answer. Specifically, the answer contains both text and images. The evaluation results should be output in the form of reasons and scores.

# Answer Input Format
[text_1] <img_1> [text_2] <img_2>...
Explanation:
Each [text_x] is a piece of pure text context, and each <img> represents an image. The images will be provided in the same order as the placeholders <img>.

# Image Context Input Format
[context_above] <img> [context_bottom]
Explanation:
This format represents the contextual information surrounding the image within its original document. It provides supplementary information to assist in evaluating the image.

# Evaluation Criteria of Overall Quality
Strictly follow the scoring criteria below to assign a score between 1 and 5:
- 1 point, Poor Quality: The answer fails to address the question, the structure is confusing or missing, and the images are irrelevant or not helpful.
- 2 points, Fair Quality: The answer partially addresses the question but lacks completeness. The structure is weak, and the text-image integration is weak or only partially helpful.
- 3 points, Average Quality: The answer addresses the question but lacks depth. The structure is clear but could be improved. The images are somewhat helpful but don't fully enhance understanding.
- 4 points, Good Quality: The answer is clear and fairly comprehensive. The structure is logical and well-organized, and the images enhance the understanding of the text.
- 5 points, Excellent Quality: The answer is detailed and insightful. The structure is strong and cohesive, and the images complement the text perfectly, significantly enhancing comprehension.
Provide a brief reason for the evaluation along with a score from 1 to 5. Ensure you do not use any evaluation criteria beyond the query and answer.

# Output Format
Please output two lines for the results: the first line is your reasoning for the score, and the second line is the score. Strictly follow this format without any additional content.

# Output Example
The answer provides a complete and coherent description of the Irish bouzouki, and the images in the answer help reinforce the explanation of its appearance. The structure is logical and easy to follow, with all images appropriately enhancing the reader's understanding of the instrument.
<overall_quality_score>5</overall_quality_score>

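One plausible way to turn these per-answer judgments into benchmark-level numbers is to average each metric over all evaluated answers. The sketch below mirrors the tag parsing shown earlier; the record layout (raw judge text stored under the keys `relevance`, `effectiveness`, `overall_quality`) is an assumption about how one might store judge outputs, not the dataset's actual schema.

```python
import re
from statistics import mean

# Assumed mapping from metric name to the score tag used in the judge output.
TAGS = {
    "relevance": "relevance_score",
    "effectiveness": "effective_score",
    "overall_quality": "overall_quality_score",
}

def aggregate(records: list[dict[str, str]]) -> dict[str, float]:
    # Each record is assumed to hold the raw judge output text per metric key;
    # answers whose judge output lacks a valid tag are skipped.
    results: dict[str, float] = {}
    for metric, tag in TAGS.items():
        scores = []
        for record in records:
            m = re.search(rf"<{tag}>\s*(\d+)\s*</{tag}>", record.get(metric, ""))
            if m:
                scores.append(int(m.group(1)))
        results[metric] = mean(scores) if scores else float("nan")
    return results
```
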
#### Answer Evaluation Prompt for Image Position

# Input
Query: {}
Answer: {}
Image Context: {}
Image Caption: {}

# Task
Imagine you are a multimodal problem-solving expert tasked with evaluating whether the position of each selected image within an answer to the given query is appropriate.

# Answer Input Format
[text_1] <img_1> [text_2] <img_2>...
Explanation:
Each [text_x] is a segment of pure text context, and each <img> represents an image. The images will be presented in the same order as the placeholders <img>.

# Image Context Input Format
[context_above] <img> [context_bottom]
Explanation:
This format represents the contextual information surrounding the image within its original document. It provides supplementary information to assist in evaluating the image.

# Evaluation Criteria
Strictly follow the criteria below to assign a score of 0 or 1:
- 0 points, Inappropriate Position: The image is irrelevant to both the preceding and following context, or the position of the image does not enhance content understanding or visual appeal. The insertion of the image does not align with the logical progression of the text and fails to improve the reading experience or information transmission.
- 1 point, Appropriate Position: The image is contextually relevant to at least one of the surrounding contexts (preceding or following), and it enhances content understanding or visual effect. The position of the image aligns with the logical flow of the text and is inserted appropriately, improving the overall information delivery. If the description of the image is detailed, it further clarifies the connection between the image and the text, enhancing the overall expressive effect.

# Output Format
Provide a brief justification for the evaluation and a score of either 0 or 1. Ensure no evaluation criteria beyond the provided query and answer are used.
Please output two lines for each image: the first line is your reasoning for the score, and the second line is the score. Strictly follow this format without any additional content.

# Output Example
<img_1> displays a distant aerial view of the site, but the surrounding context focuses on intricate design details of the main entrance. The image placement does not align with the described content and does not improve comprehension.
<img_1_score>0</img_1_score>
<img_2> shows a close-up of one of the pillars, which is directly referenced in the following context about the structure's details. The image placement aligns with the description, enhancing understanding.
<img_2_score>1</img_2_score>
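Unlike the single-score judges, the position judge emits one reason/score pair per image, so parsing maps each image index to its 0/1 score. A small illustrative sketch (the function name is assumed):

```python
import re

# Matches e.g. <img_2_score>1</img_2_score>; the closing tag must repeat the index.
IMG_SCORE = re.compile(r"<img_(\d+)_score>\s*([01])\s*</img_\1_score>")

def parse_position_scores(judge_output: str) -> dict[int, int]:
    # Map image index -> 0/1 position score from the judge's multi-line output.
    return {int(i): int(s) for i, s in IMG_SCORE.findall(judge_output)}

# parse_position_scores("...\n<img_1_score>0</img_1_score>\n...\n<img_2_score>1</img_2_score>")
# -> {1: 0, 2: 1}
```
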
 
## Contact
If you have any questions or suggestions, please contact yuqinhan@stu.pku.edu.cn