eve committed
Commit bfbe84d · verified · 1 Parent(s): 098d8cb

Update README.md

Files changed (1): README.md (+31 -31)
README.md CHANGED
@@ -18,12 +18,12 @@ The dataset includes **six JSONL files**, each corresponding to a different data

 | File Name | Description | Num |
 |--------------------|-------------|-------------|
- | `doc_wit.jsonl` | Wit-MQA+ documents | 639 |
- | `doc_wiki.jsonl` | Wiki-MQA+ documents | 538 |
- | `doc_web.jsonl` | Web-MQA+ documents | 1500 |
- | `doc_arxiv.jsonl` | Arxiv-MQA+ documents | 101 |
- | `doc_recipe.jsonl` | Recipe-MQA+ documents | 1528 |
- | `doc_manual.jsonl` | Manual-MQA+ documents | 40 |
+ | `doc_wit.jsonl` | MRAMG-Wit documents | 639 |
+ | `doc_wiki.jsonl` | MRAMG-Wiki documents | 538 |
+ | `doc_web.jsonl` | MRAMG-Web documents | 1500 |
+ | `doc_arxiv.jsonl` | MRAMG-Arxiv documents | 101 |
+ | `doc_recipe.jsonl` | MRAMG-Recipe documents | 1528 |
+ | `doc_manual.jsonl` | MRAMG-Manual documents | 40 |

 Each line in these files represents a **single document** with the following fields:
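Since every `doc_*.jsonl` file is standard JSONL (one JSON object per line), a minimal loading sketch works for all six files; it assumes nothing about the per-document fields, which are listed in the full README:

```python
import json

def load_jsonl(path):
    """Load a JSONL file: one JSON object per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Per the table above, doc_wit.jsonl should yield 639 documents.
docs = load_jsonl("doc_wit.jsonl")
print(len(docs))
```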
@@ -49,13 +49,13 @@ The **MQA component** consists of **six JSONL files**, each corresponding to a d

 | File Name | Description | Num |
 |--------------------|-------------|-------------|
- | `wit_mqa.jsonl` | Wit-MQA+ multimodal QA pairs | 600 |
- | `wiki_mqa.jsonl` | Wiki-MQA+ multimodal QA pairs | 500 |
- | `web_mqa.jsonl` | Web-MQA+ multimodal QA pairs | 750 |
- | `arxiv_mqa.jsonl` | Arxiv-MQA+ QA pairs | 200 |
- | `recipe_mqa.jsonl` | Recipe-MQA+ QA pairs | 2360 |
- | `manual_mqa.jsonl` | Manual-MQA+ QA pairs | 390 |
+ | `wit_mqa.jsonl` | MRAMG-Wit multimodal QA pairs | 600 |
+ | `wiki_mqa.jsonl` | MRAMG-Wiki multimodal QA pairs | 500 |
+ | `web_mqa.jsonl` | MRAMG-Web multimodal QA pairs | 750 |
+ | `arxiv_mqa.jsonl` | MRAMG-Arxiv QA pairs | 200 |
+ | `recipe_mqa.jsonl` | MRAMG-Recipe QA pairs | 2360 |
+ | `manual_mqa.jsonl` | MRAMG-Manual QA pairs | 390 |


 Each entry contains **a question ID, a question, provenance documents, a ground truth answer, and a list of image IDs associated with the answer**.
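As a rough illustration of consuming these entries, the sketch below iterates over one QA file; the key names used here (`id`, `question`, `provenance`, `ground_truth`, `images_list`) are illustrative assumptions, not the confirmed schema — see the Data Format section for the authoritative field names:

```python
import json

with open("wit_mqa.jsonl", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        # NOTE: key names are assumptions for illustration; check the
        # README's Data Format section for the real schema.
        qid = entry.get("id")                 # question ID
        question = entry.get("question")      # question text
        provenance = entry.get("provenance")  # provenance documents
        answer = entry.get("ground_truth")    # ground-truth answer
        image_ids = entry.get("images_list")  # image IDs tied to the answer
        print(qid, question, len(image_ids or []))
        break  # show only the first entry
```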
@@ -91,12 +91,12 @@ Additionally, metadata about these images is provided in **six JSON files**, cor

 | File Name | Description | Num |
 |--------------------|-------------|-------------|
- | `wit_imgs_collection.json` | Image metadata from Wit-MQA+ | 639 |
- | `wiki_imgs_collection.json` | Image metadata from Wiki-MQA+ | 538 |
- | `web_imgs_collection.json` | Image metadata from Web-MQA+ | 1500 |
- | `arxiv_imgs_collection.json` | Image metadata from Arxiv-MQA+ | 337 |
- | `recipe_imgs_collection.json` | Image metadata from Recipe-MQA+ | 8569 |
- | `manual_imgs_collection.json` | Image metadata from Manual-MQA+ | 2607 |
+ | `wit_imgs_collection.json` | Image metadata from MRAMG-Wit | 639 |
+ | `wiki_imgs_collection.json` | Image metadata from MRAMG-Wiki | 538 |
+ | `web_imgs_collection.json` | Image metadata from MRAMG-Web | 1500 |
+ | `arxiv_imgs_collection.json` | Image metadata from MRAMG-Arxiv | 337 |
+ | `recipe_imgs_collection.json` | Image metadata from MRAMG-Recipe | 8569 |
+ | `manual_imgs_collection.json` | Image metadata from MRAMG-Manual | 2607 |

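To resolve the image IDs referenced by QA entries, the corresponding metadata file can be indexed by image ID. A hedged sketch: whether each collection is a list of records or an ID-keyed dict, and the exact `image_id` field name, are assumptions here — confirm against the Data Format example below:

```python
import json

with open("wit_imgs_collection.json", encoding="utf-8") as f:
    collection = json.load(f)

# Assumption: either a list of records carrying an "image_id"-like field,
# or a dict already keyed by image ID.
if isinstance(collection, list):
    by_id = {rec["image_id"]: rec for rec in collection}
else:
    by_id = collection

meta = by_id.get("example_image_id")  # hypothetical ID for illustration
```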
  #### **Data Format (Example Entry)**
@@ -433,8 +433,8 @@ We use the following _LLM-based metrics_:

 ## Results
 In this section, we present the full experimental results. The metrics **Prec.**, **Rec.**, **F1**, **R.L.**, **B.S.**, **Rel.**, **Eff.**, **Comp.**, **Pos.**, and **Avg.** denote image precision, image recall, image F1 score, ROUGE-L, BERTScore, image relevance, image effectiveness, comprehensive score, image position score, and average score, respectively. The metric **Ord.** denotes the image ordering score.
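For orientation, here is a minimal sketch of how the set-based image metrics (Prec., Rec., F1) could be computed from a model's predicted image IDs against the ground truth; it follows the standard precision/recall/F1 definitions and is not the benchmark's official evaluation code:

```python
def image_prf(predicted_ids, gold_ids):
    """Set-based image precision, recall, and F1 over image-ID lists."""
    pred, gold = set(predicted_ids), set(gold_ids)
    hits = len(pred & gold)
    prec = hits / len(pred) if pred else 0.0
    rec = hits / len(gold) if gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return 100 * prec, 100 * rec, 100 * f1  # tables report percentages

# e.g. image_prf(["img_1", "img_3"], ["img_1", "img_2"]) -> (50.0, 50.0, 50.0)
```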
- ### Comprehensive performance results on Wit(Wit-MQA+).
- | Framework | Model | Wit-MQA+ | | | | | | | | | |
+ ### Comprehensive performance results on MRAMG-Wit (Web Dataset).
+ | Framework | Model | MRAMG-Wit | | | | | | | | | |
 |------------|------------------------|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
 | | | Prec. | Rec. | F1 | R.L. | B.S. | Rel. | Eff. | Comp. | Pos. | Avg. |
 | Rule-Based | GPT-4o | 49.50 | 49.67 | 49.56 | 56.23 | 92.27 | 43.67 | 39.50 | 77.00 | 50.08 | 56.39 |
@@ -465,8 +465,8 @@ In this section, we give the full experiment results, wherein the metrics of **P
 | | Llama-3.3-70B-Instruct | 86.58 | 96.00 | 89.09 | 44.83 | 92.87 | 81.93 | 75.33 | 78.90 | 88.15 | 81.52 |


- ### Comprehensive performance results on Wiki-MQA+(Web Dataset).
- | Framework | Model | Wiki-MQA+ | | | | | | | | | |
+ ### Comprehensive performance results on MRAMG-Wiki (Web Dataset).
+ | Framework | Model | MRAMG-Wiki | | | | | | | | | |
 |------------|------------------------|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
 | | | Prec. | Rec. | F1 | R.L. | B.S. | Rel. | Eff. | Comp. | Pos. | Avg. |
 | Rule-Based | GPT-4o | 53.00 | 53.00 | 53.00 | 54.62 | 95.15 | 46.60 | 42.56 | 82.24 | 53.00 | 59.24 |
@@ -496,8 +496,8 @@ In this section, we give the full experiment results, wherein the metrics of **P
 | | Llama-3.1-8B-Instruct | 23.50 | 28.00 | 24.79 | 35.66 | 85.16 | 23.04 | 21.68 | 51.16 | 23.90 | 35.21 |
 | | Llama-3.3-70B-Instruct | 70.61 | 94.40 | 76.35 | 47.86 | 95.47 | 78.16 | 71.84 | 76.96 | 71.46 | 75.90 |

- ### Comprehensive performance results on Web-MQA+(Web Dataset).
- | Framework | Model | Web-MQA+ | | | | | | | | | |
+ ### Comprehensive performance results on MRAMG-Web (Web Dataset).
+ | Framework | Model | MRAMG-Web | | | | | | | | | |
 |------------|------------------------|-----------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
 | | | Prec. | Rec. | F1 | R.L. | B.S. | Rel. | Eff. | Comp. | Pos. | Avg. |
 | Rule-Based | GPT-4o | 32.47 | 16.93 | 22.11 | 39.17 | 90.56 | 29.47 | 27.81 | 73.87 | 32.80 | 40.58 |
@@ -527,8 +527,8 @@ In this section, we give the full experiment results, wherein the metrics of **P
 | | Llama-3.1-8B-Instruct | 29.34 | 26.27 | 26.31 | 33.70 | 81.16 | 32.08 | 30.48 | 51.81 | 32.38 | 38.17 |
 | | Llama-3.3-70B-Instruct | 66.83 | 95.80 | 75.47 | 47.98 | 94.79 | 92.03 | 88.03 | 88.93 | 69.34 | 79.91 |

- ### Comprehensive performance results on Arxiv-MQA+(Academic Paper Dataset).
- | Framework | Model | Arxiv-MQA+ | | | | | | | | | |
+ ### Comprehensive performance results on MRAMG-Arxiv (Academic Paper Dataset).
+ | Framework | Model | MRAMG-Arxiv | | | | | | | | | |
 |------------|------------------------|-------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
 | | | Prec. | Rec. | F1 | R.L. | B.S. | Rel. | Eff. | Comp. | Pos. | Avg. |
 | Rule-Based | GPT-4o | 55.42 | 63.04 | 57.70 | 44.96 | 94.67 | 69.10 | 67.30 | 84.20 | 75.75 | 68.02 |
@@ -558,8 +558,8 @@ In this section, we give the full experiment results, wherein the metrics of **P
 | | Llama-3.1-8B-Instruct | 1.50 | 2.00 | 1.67 | 25.78 | 80.61 | 3.30 | 3.00 | 43.40 | 4.00 | 18.36 |
 | | Llama-3.3-70B-Instruct | 38.78 | 84.88 | 48.56 | 37.83 | 95.01 | 85.50 | 81.80 | 83.40 | 64.59 | 68.93 |

- ### Comprehensive performance results on Recipe-MQA+(Lifestyle Dataset).
- | Framework | Model | Recipe-MQA+ | | | | | | | | | | |
+ ### Comprehensive performance results on MRAMG-Recipe (Lifestyle Dataset).
+ | Framework | Model | MRAMG-Recipe | | | | | | | | | | |
 |------------|------------------------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
 | | | Prec. | Rec. | F1 | R.L. | B.S. | Ord. | Rel. | Eff. | Comp. | Pos. | Avg. |
 | Rule-Based | GPT-4o | 48.79 | 66.11 | 52.76 | 51.80 | 92.10 | 45.30 | 77.80 | 74.64 | 79.19 | 78.04 | 66.65 |
@@ -589,9 +589,9 @@ In this section, we give the full experiment results, wherein the metrics of **P
 | | Llama-3.1-8B-Instruct | 11.56 | 12.69 | 10.89 | 24.61 | 75.21 | 6.70 | 17.71 | 17.04 | 41.86 | 18.32 | 23.66 |
 | | Llama-3.3-70B-Instruct | 36.87 | 72.52 | 44.31 | 38.38 | 91.99 | 31.00 | 81.84 | 79.19 | 80.84 | 71.99 | 62.89 |

- ### Comprehensive performance results on Manual-MQA+(Lifestyle Dataset).
+ ### Comprehensive performance results on MRAMG-Manual (Lifestyle Dataset).

- | Framework | Model | Manual-MQA+ | | | | | | | | | | |
+ | Framework | Model | MRAMG-Manual | | | | | | | | | | |
 |------------|------------------------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
 | | | Prec. | Rec. | F1 | R.L. | B.S. | Ord. | Rel. | Eff. | Comp. | Pos. | Avg. |
 | Rule-Based | GPT-4o | 36.45 | 47.97 | 38.32 | 50.82 | 91.51 | 32.10 | 75.79 | 73.44 | 79.08 | 71.66 | 59.71 |
@@ -607,7 +607,7 @@ In this section, we give the full experiment results, wherein the metrics of **P
 | | Llama-3.3-70B-Instruct | 34.53 | 44.35 | 35.60 | 49.50 | 91.22 | 30.26 | 73.13 | 71.03 | 75.74 | 69.26 | 57.46 |
 | MLLM-Based | GPT-4o | 35.07 | 33.78 | 32.44 | 44.68 | 91.16 | 24.50 | 75.49 | 73.28 | 79.59 | 73.38 | 56.34 |
 | | GPT-4o-mini | 23.43 | 32.24 | 25.16 | 43.60 | 91.05 | 17.33 | 72.92 | 71.13 | 75.23 | 62.22 | 51.43 |
- | | Claude-35-Sonnet | 25.17 | 39.24 | 28.47 | 40.32 | 91.02 | 19.94 | 80.51 | 78.10 | 80.41 | 75.12 | 55.83 |
+ | | Claude-3.5-Sonnet | 25.17 | 39.24 | 28.47 | 40.32 | 91.02 | 19.94 | 80.51 | 78.10 | 80.41 | 75.12 | 55.83 |
 | | Gemini-1.5-Pro | 36.01 | 44.68 | 37.14 | 48.87 | 90.99 | 28.76 | 76.62 | 74.62 | 79.79 | 66.32 | 58.38 |
 | | Qwen2-VL-7B-Instruct | 13.32 | 15.05 | 13.48 | 41.07 | 86.02 | 3.09 | 13.38 | 12.82 | 57.74 | 10.46 | 26.65 |
 | | Qwen2-VL-72B-Instruct | 22.13 | 24.92 | 21.62 | 44.36 | 90.34 | 12.95 | 49.08 | 47.13 | 73.44 | 41.23 | 42.72 |