AI4Industry committed (verified) · commit 2d50aea · 1 parent: 0306db9

Update README.md

Files changed (1): README.md (+29 −4)
@@ -35,15 +35,35 @@ configs:
     path: data/en-*
   - split: zh
     path: data/zh-*
+license: mit
+task_categories:
+- visual-question-answering
+language:
+- en
+- zh
+tags:
+- chemistry
 ---
 
-# RxnBench: Benchmark Dataset for Chemical Reaction Figure Comprehension
+# RxnBench: Benchmark for Chemical Reaction Figure Understanding
 
-We curated a dataset of 305 figures related to organic chemical reactions, selected from multiple high-impact open-access journals. In total, it contains **1,525 question–answer pairs**. For each figure, domain experts designed five multiple-choice VQA questions focused on chemical reaction figure understanding, which were refined through several rounds of rigorous review and revision.
-
-Answering these questions requires expertise at or above the Ph.D. level in organic chemistry.
-
-The benchmark is available in both English and Chinese versions.
+
+## 📘 Benchmark Summary
+
+RxnBench is a visual question answering (VQA) benchmark of 1,525 multiple-choice questions (MCQs) that require PhD-level understanding of organic chemistry reactions.
+
+The benchmark is built from 305 scientific figures drawn from high-impact open-access journals.
+For each figure, domain experts carefully designed five multiple-choice VQA questions targeting the interpretation of organic reaction diagrams (305 figures × 5 questions = 1,525 QA pairs).
+These questions were further refined through multiple rounds of rigorous review and revision to ensure both clarity and scientific accuracy.
+The questions cover a variety of types, including description of chemical reaction images, extraction of reaction content, recognition of molecules or Markush structures, and determination of mechanisms.
+The benchmark challenges vision-language models on their foundational knowledge of organic chemistry, multimodal contextual reasoning, and chemical reasoning skills.
+
+The benchmark is released in both English and Chinese versions.
+
+
+## 📊 Benchmark Leaderboard
+
+We evaluated several recent popular multimodal LLMs (MLLMs), both closed-source and open-source.
 
 | Model | Weights | Update Time | RxnBench-En | RxnBench-Zh |
 |----|:----:|:----:|:----:|:----:|
@@ -57,3 +77,8 @@ The benchmark is available in both English and Chinese versions.
 | gpt-4o mini | Proprietary | 20240718 | 0.6636 | 0.6066 |
 | Longest answer choice | - | - | 0.426 | 0.452 |
 | Random | - | - | 0.250 | 0.250 |
+
+
+## 📖 Citation
+
+Our paper is coming soon.
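
For reference, the parquet splits declared in the configs above load directly with the 🤗 Datasets library. A minimal sketch, assuming the repo id is `AI4Industry/RxnBench` and illustrative field names (the id and the `question`/`options`/`answer` columns are assumptions; check the dataset viewer for the actual schema):

```python
# Minimal loading sketch. Assumptions: repo id "AI4Industry/RxnBench";
# "question", "options", and "answer" are hypothetical column names.
from datasets import load_dataset

ds = load_dataset("AI4Industry/RxnBench")  # parquet-backed, per the configs above
print(ds)  # expect two splits: "en" and "zh"

sample = ds["en"][0]
print(sample["question"])  # hypothetical column
print(sample["options"])   # hypothetical column
print(sample["answer"])    # hypothetical column
```

A split also converts to a DataFrame via `ds["en"].to_pandas()` for quick inspection.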
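
The leaderboard numbers read as plain accuracy on single-answer MCQs; the Random row at 0.250 suggests four options per question, and "Longest answer choice" always picks the longest option text. The official evaluation script is not part of this commit, so the sketch below is an assumption of that protocol:

```python
# Scoring sketch under the assumed protocol: exact-match accuracy over
# single-answer MCQs with options labeled A-D. Not the official script.
def accuracy(predictions: list[str], answers: list[str]) -> float:
    """Fraction of questions where the predicted label matches the key."""
    return sum(p == a for p, a in zip(predictions, answers)) / len(answers)

def longest_answer_choice(options: list[str], labels: str = "ABCD") -> str:
    """Heuristic baseline: always pick the longest option text
    (the 'Longest answer choice' row, ~0.43 En / ~0.45 Zh above)."""
    return labels[max(range(len(options)), key=lambda i: len(options[i]))]

# Tiny usage example with made-up options:
opts = ["SN2", "E1", "Aldol condensation", "Diels-Alder cycloaddition"]
print(longest_answer_choice(opts))                 # -> "D"
print(accuracy(["A", "B", "C"], ["A", "B", "D"]))  # -> 0.666...
```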