[
  {
    "id": "0",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What are the 3 novelties of the Video-MMMU dataset?",
    "question_type": "summary",
    "evidence_type": "text",
    "answer": "1) Knowledge-Intensive Video Collection: The dataset includes 300 expert-level videos across 6 professional disciplines, covering 30 subjects. 2) Knowledge Acquisition-Based QA Design: Each video contains three QA pairs corresponding to the stages of knowledge acquisition—Perception (extracting key information), Comprehension (grasping concepts), and Adaptation (applying knowledge to new contexts). 3) Quantitative Knowledge Assessment: they introduce a delta knowledge metric to measure performance gains on practice exam questions after watching the videos, enabling quantitative evaluation of LMMs' ability to learn and apply new knowledge.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "1",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "How are QA pairs categorized within each track in Video-MMMU?",
    "question_type": "summary",
    "evidence_type": "text",
    "answer": "Perception Questions assess the ability to extract information from videos via: 1) Optical Character Recognition (OCR) and 2) Automatic Speech Recognition (ASR). Comprehension Questions evaluate understanding through: 1) Concept Comprehension (CC) and 2) Problem-Solving Strategy Comprehension (PSC). Adaptation Questions test the ability to apply knowledge to new scenarios via: 1) Case Study Analysis (CSA) and 2) Problem-Solving Strategy Adaptation (PSA).",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "2",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "Are the QAs in Video-MMMU all newly annotated by human annotators with no other sources of data?",
    "question_type": "fact",
    "evidence_type": "text",
    "answer": "No. Perception and Comprehension questions are manually created. For Adaptation, questions in Science, Engineering, Medicine, and Business are sourced from MMMU/MMMU-Pro, while Art and Humanities remain manual.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "3",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "Does Video-MMMU have the longest average video length among all benchmarks recorded in the paper?",
    "question_type": "reasoning",
    "evidence_type": "table",
    "answer": "No. Video-MMMU's average video length is 506.2s. The benchmark with the longest average video length is Video-MME, at 1017.9s.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "4",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "Does Claude-3.5-Sonnet achieve the highest delta knowledge score on Video-MMMU? If so, what is the score? If not, give its score and the delta knowledge score of the highest-scoring model.",
    "question_type": "reasoning",
    "evidence_type": "table",
    "answer": "No. Claude-3.5-Sonnet's delta knowledge score is 11.4%, lower than GPT-4o's 15.6%.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "6",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "How is delta knowledge calculated? What is the formula?",
    "question_type": "fact",
    "evidence_type": "formula",
    "answer": "\\Delta_{knowledge} = \\frac{Acc_{post} - Acc_{pre}}{100\\% - Acc_{pre}} \\times 100\\% \\quad \\text{where } Acc_{pre} \\text{ and } Acc_{post} \\text{ represent the accuracy before and after watching the video, respectively.}",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "7",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What is the SOTA model performance on the Video-MMMU dataset?",
    "question_type": "reasoning",
    "evidence_type": "table",
    "answer": "65.78%, Claude-3.5-Sonnet",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "8",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What is GPT-4o's Overall score on Video-MMMU?",
    "question_type": "fact",
    "evidence_type": "table",
    "answer": "61.22",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "9",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What percentage of errors in Video-MMMU are Question Misreading errors?",
    "question_type": "fact",
    "evidence_type": "text",
    "answer": "15%",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "10",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "Explain the error analysis in Video-MMMU.",
    "question_type": "summary",
    "evidence_type": "text",
    "answer": "Method Selection Error (8%): The model chooses the wrong approach, failing to apply the correct strategy demonstrated in the video. Method Adaptation Error (64%): The model recalls and understands the video-taught method but struggles to adapt it to new scenarios. For example, it correctly applies DFS in a simple tree but fails in a complex graph with cycles, highlighting its difficulty in transferring learned methods across contexts. Question Misreading Error (15%): The model misinterprets question details, such as numerical values or conditions, unrelated to its knowledge application. Other Errors: Includes Refuse to Answer (4%), where the model expresses uncertainty; Annotation Error (4%), due to inaccurate labeling; and Answer Extraction Error (5%), where answers fail to be extracted from longer responses.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "11",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What percentage of Video-MMMU errors are Refuse to Answer errors?",
    "question_type": "fact",
    "evidence_type": "text",
    "answer": "4%",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "12",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What percentage of Video-MMMU errors are Annotation errors?",
    "question_type": "fact",
    "evidence_type": "text",
    "answer": "4%",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "13",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What percentage of Video-MMMU errors are Answer Extraction errors?",
    "question_type": "fact",
    "evidence_type": "text",
    "answer": "5%",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "15",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What is Aria's Overall score with transcript on Video-MMMU?",
    "question_type": "fact",
    "evidence_type": "figure",
    "answer": "53.67%",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "16",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "Explain the example of DFS to help me understand the method adaptation error.",
    "question_type": "summary",
    "evidence_type": "figure",
    "answer": "The video teaches DFS principles, but the adaptation question applies them to a complex graph with cycles. Before the video, both Claude and Humans misfocused on cycles. Afterward, both grasped the core principle, but Claude failed to adapt it correctly, while Humans successfully applied it. This highlights the difficulty of method adaptation in new scenarios.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "17",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "How many Video-MMMU questions are ASR questions?",
    "question_type": "fact",
    "evidence_type": "figure",
    "answer": "23",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "18",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What is Question Misreading error? Can you explain with an example case of GPT-4o?",
    "question_type": "summary",
    "evidence_type": "figure",
    "answer": "The video explains how to determine the work function (Φ) in Photoelectric Effect Graphs by identifying the y-intercept, eliminating the need for formulas. Before the video: • Both humans and the model relied on formulas, leading to incorrect answers. After the video: • The model correctly recognized the y-intercept method but misread the graph, identifying -2.0 instead of -1.5 due to a mistaken x-intercept assumption. • Humans accurately identified the y-intercept and found the correct answer (1.5). This case illustrates a Question Misreading Error by GPT-4o, where it applied the right method but misinterpreted the graph.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "19",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "Can you explain the wrong-to-right example of the video lecture '2-3 tree' in Video-MMMU?",
    "question_type": "summary",
    "evidence_type": "figure",
    "answer": "This case shows the model learning from a 2-3 tree lecture to correct its misunderstanding of insertion and reorganization. The video explains node insertion cases and restructuring rules, which the adaptation question tests. Before the video, the model: • Misjudged insertion effects on the root • Misunderstood reorganization rules • Incorrectly identified only S4 as true After the video, the model: • Recognized node splits and reorganization • Applied Case 2 principles correctly • Identified S1 and S4 as true This demonstrates successful knowledge acquisition, as the model corrected its understanding and applied the learned principles accurately.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "20",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What is the number of Video-MMMU questions where audio might help?",
    "question_type": "reasoning",
    "evidence_type": "figure",
    "answer": "Art: 16, Business: 28, Medicine: 32, Science: 23, Humanities: 26, Engineering: 29. Total = 154.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "21",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "How many questions are in the Comprehension track of Video-MMMU?",
    "question_type": "reasoning",
    "evidence_type": "figure",
    "answer": "300",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "22",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "How many questions are in the Art discipline of Video-MMMU?",
    "question_type": "reasoning",
    "evidence_type": "figure + Multi-page",
    "answer": "Art videos: 7% × 300 = 21. Each video has 3 questions (one per track), so 21 × 3 = 63 questions.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "23",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What is the performance gap between Human and model in the main experiment of Video-MMMU?",
    "question_type": "reasoning",
    "evidence_type": "table",
    "answer": "74.44% - 65.78% = 8.66 percentage points.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "31",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "Which open-source model performs best on the Adaptation track and how does it compare to the worst-performing open-source model?",
    "question_type": "reasoning",
    "evidence_type": "table + Multi-row",
    "answer": "LLaVA-Video-72B performs the best among open-source models on the Adaptation track with an accuracy of 43.33%. The worst-performing open-source model is InternVL2-8B with an accuracy of 31.67%. The difference in performance is 11.66 percentage points.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "32",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "Which discipline shows the largest performance gap between Human Experts and Claude-3.5-Sonnet?",
    "question_type": "reasoning",
    "evidence_type": "table + Multi-row",
    "answer": "The largest performance gap is in the Medicine discipline: Human Experts scored 70.54%, while Claude-3.5-Sonnet scored 58.14%, resulting in a 12.4 percentage point gap.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "33",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "Which proprietary model achieves the highest Perception score, and how much higher is it than the top open-source model?",
    "question_type": "reasoning",
    "evidence_type": "table + Multi-row",
    "answer": "Claude-3.5-Sonnet achieves the highest Perception score among proprietary models with 72.00%. The top open-source model in Perception is Aria with 65.67%. Claude outperforms Aria by 6.33 percentage points.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "34",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "Which model demonstrates the highest Wrong-to-Right Rate and what is its corresponding Delta Knowledge value?",
    "question_type": "reasoning",
    "evidence_type": "table + Multi-row",
    "answer": "Claude-3.5-Sonnet has the highest Wrong-to-Right Rate at 28.8%; its corresponding ∆knowledge value is 11.4%.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "35",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "Which model shows a negative Delta Knowledge score and also has the highest Right-to-Wrong Rate?",
    "question_type": "reasoning",
    "evidence_type": "table + Multi-row",
    "answer": "InternVL2-8B has the lowest ∆knowledge score of -8.5% and the highest Right-to-Wrong Rate at 55.0%.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "37",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What are the formulas for Wrong-to-Right Rate and Right-to-Wrong Rate?",
    "question_type": "fact",
    "evidence_type": "formula + Multi-chunk",
    "answer": "Wrong-to-Right Rate = (N_Wrong-to-Right / N_Wrong-before) × 100%. Right-to-Wrong Rate = (N_Right-to-Wrong / N_Right-before) × 100%. These are defined in two separate paragraphs when discussing model response changes after watching videos.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "38",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What are the three main types of errors observed in Claude-3.5-Sonnet on the Adaptation track and their respective percentages?",
    "question_type": "fact",
    "evidence_type": "text + Multi-chunk",
    "answer": "The three main types of errors are: Method Adaptation Error (64%), Question Misreading (15%), and Method Selection Error (8%). This is summarized in Figure 7 and explained in detail in adjacent paragraphs.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "39",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "How are Adaptation track questions evaluated and what inputs are provided to models?",
    "question_type": "fact",
    "evidence_type": "text + Multi-chunk",
    "answer": "The input includes the full video and a final frame with the question's image appended at the end. A special prompt is added indicating this setup, as shown in Figure 8 and the paragraph explaining inputs in Section 4.1. Prompt: System Message: As an AI assistant, you should watch and learn from the video. Then, adapt what you learned to answer the following question. The image for this question is at the end of the video. Question: [Question Text] Options: A) [Option A] B) [Option B] [etc.]",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "40",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What are the two video types in Video-MMMU and how do they differ?",
    "question_type": "fact",
    "evidence_type": "text + Multi-chunk",
    "answer": "Video-MMMU includes two types of videos: Concept-Introduction and Problem-Solving. Concept videos focus on explaining theories and facts, while Problem-Solving videos demonstrate step-by-step solutions. This is explained in the text and visually illustrated in Figure 2.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "41",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What are the three cognitive tracks in Video-MMMU and how are they visually illustrated?",
    "question_type": "fact",
    "evidence_type": "text + Multi-page",
    "answer": "The three tracks are Perception (extracting key information), Comprehension (understanding underlying concepts), and Adaptation (applying knowledge to new problems). They are introduced on page 1 and visually illustrated in Figure 1 on page 3, where each stage is associated with a different model behavior.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "42",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "Which model shows the highest delta knowledge score and what does this metric represent?",
    "question_type": "reasoning",
    "evidence_type": "table + Multi-page",
    "answer": "GPT-4o shows the highest ∆knowledge score among models with 15.6%. The ∆knowledge metric represents the normalized performance improvement in the Adaptation track after watching a video, as defined on page 7.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "43",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What does a Method Adaptation Error look like in the Adaptation track, and how common is it, and what example is illustrated in which figure?",
    "question_type": "reasoning",
    "evidence_type": "text + Multi-page",
    "answer": "A Method Adaptation Error occurs when a model recalls the correct method from the video but fails to apply it to a new scenario. Figure 6 shows an example with DFS intervals, while page 8 states that Method Adaptation accounts for 64% of Claude-3.5-Sonnet’s errors.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "44",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "What is the difference between Concept Comprehension and Problem-solving Strategy Comprehension in the Comprehension track?",
    "question_type": "reasoning",
    "evidence_type": "text + Multi-chunk",
    "answer": "Page 3 introduces the taxonomy where Concept Comprehension (CC) evaluates understanding of statements, often using multiple-answer formats, while Problem-solving Strategy Comprehension (PSC) changes inputs in example questions to test generalization.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "45",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "Give me an example of a question that is a good fit for the Problem-solving Strategy Comprehension track.",
    "question_type": "reasoning",
    "evidence_type": "figure + Multi-row",
    "answer": "An example question is: 'In the video, Example Question (1) is solved with an angle θ=25 degrees. If the angle θ is adjusted to 30 degrees while all other conditions remain unchanged, what will be the updated result for Example Question (1) as explained in the video?' This question appears in the Science domain and tests whether the model comprehends and can apply the same problem-solving strategy demonstrated in the video.",
    "content_domain": "Academic paper",
    "Comment": ""
  },
  {
    "id": "46",
    "doc_id": [
      "TinyBench/videommmu_paper.pdf"
    ],
    "file_type": "paper pdf",
    "question": "How many proprietary and open-source models are evaluated in Video-MMMU?",
    "question_type": "fact",
    "evidence_type": "text + Multi-row",
    "answer": "Video-MMMU evaluates 4 proprietary LMMs (Gemini 1.5 Flash, Gemini 1.5 Pro, GPT-4o, Claude-3.5-Sonnet) and 11 open-source LMMs (VILA1.5-8B, LongVA-7B, Llama-3.2-11B, LLaVA-OneVision-7B, VILA1.5-40B, LLaVA-Video-7B, InternVL2-8B, MAmmoTH-VL-8B, LLaVA-OneVision-72B, LLaVA-Video-72B, Aria).",
    "content_domain": "Academic paper",
    "Comment": ""
  }
]