---
dataset_info:
  features:
    - name: image
      dtype: image
    - name: question
      dtype: string
    - name: choices
      sequence: string
    - name: answer
      dtype: int32
    - name: meta_info
      struct:
        - name: title
          dtype: string
        - name: journal
          dtype: string
        - name: doi
          dtype: string
        - name: url
          dtype: string
  splits:
    - name: en
      num_bytes: 511040465.75
      num_examples: 1525
    - name: zh
      num_bytes: 510708453.75
      num_examples: 1525
  download_size: 218600714
  dataset_size: 1021748919.5
configs:
  - config_name: default
    data_files:
      - split: en
        path: data/en-*
      - split: zh
        path: data/zh-*
license: mit
task_categories:
  - visual-question-answering
language:
  - en
  - zh
tags:
  - chemistry
---

# RxnBench: Benchmark for Chemical Reaction Figure Understanding

## 📘 Benchmark Summary

RxnBench is a visual question answering (VQA) benchmark comprising 1,525 PhD-level multiple-choice questions (MCQs) on organic chemistry reaction understanding.

The benchmark is built from 305 scientific figures drawn from high-impact open-access journals. For each figure, domain experts carefully designed five multiple-choice VQA questions targeting the interpretation of organic reaction diagrams, then refined them through multiple rounds of rigorous review and revision to ensure both clarity and scientific accuracy. The questions span several types, including describing chemical reaction images, extracting reaction content, recognizing molecules or Markush structures, and determining mechanisms. The benchmark challenges vision-language models on foundational organic chemistry knowledge, multimodal contextual reasoning, and chemical reasoning skills.

The benchmark is released in both English and Chinese versions.
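
A minimal sketch of loading and inspecting the benchmark with the `datasets` library. The repo id `AI4Industry/RxnBench` is assumed from this card and may need adjusting; each record follows the schema in the front matter above (`image`, `question`, `choices`, integer `answer` index, and a `meta_info` struct with the source publication).

```python
from datasets import load_dataset

# Repo id assumed from this dataset card; adjust if the actual path differs.
ds = load_dataset("AI4Industry/RxnBench", split="en")  # or split="zh"

example = ds[0]
print(example["question"])                     # question text
for i, choice in enumerate(example["choices"]):
    print(f"  ({i}) {choice}")
print("answer index:", example["answer"])      # int32 index into `choices`
print("source DOI:", example["meta_info"]["doi"])
example["image"].save("figure.png")            # decoded as a PIL image
```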

## 📊 Benchmark Leaderboard

We evaluated several recent popular MLLMs, covering both closed-source and open-source models.

| Model | Weights | Update Time | RxnBench-En | RxnBench-Zh |
| --- | --- | --- | --- | --- |
| gemini-2.5-pro | Proprietary | 2025-06-17 | 0.9095 | In progress |
| doubao-1.5-thinking-vision-pro | Proprietary | 2025-04-28 | 0.9056 | 0.9161 |
| o3 | Proprietary | 2025-04-16 | 0.9056 | 0.9115 |
| o4-mini | Proprietary | 2025-04-16 | 0.9062 | 0.9075 |
| doubao-1.5-vision-pro | Proprietary | 2025-03-28 | 0.8518 | 0.8669 |
| gpt-4o | Proprietary | 2024-08-06 | 0.7462 | 0.7436 |
| gemini-2.5-flash | Proprietary | 2025-06-17 | 0.6925 | 0.8557 |
| gpt-4o-mini | Proprietary | 2024-07-18 | 0.6636 | 0.6066 |
| Longest-answer baseline | – | – | 0.426 | 0.452 |
| Random baseline | – | – | 0.250 | 0.250 |
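
The two baseline rows bound the task: random guessing scores 0.250 (consistent with four choices per question), and always picking the longest option reaches 0.426/0.452, so model scores must clear these margins to reflect genuine figure understanding. Below is a minimal sketch of computing the longest-answer baseline, again assuming the `AI4Industry/RxnBench` repo id:

```python
from datasets import load_dataset

def longest_answer_baseline(split: str) -> float:
    """Accuracy of always choosing the longest option, as a trivial lower bound."""
    ds = load_dataset("AI4Industry/RxnBench", split=split)  # assumed repo id
    correct = 0
    for ex in ds:
        # Predict the index of the longest choice string.
        pred = max(range(len(ex["choices"])), key=lambda i: len(ex["choices"][i]))
        correct += int(pred == ex["answer"])
    return correct / len(ds)

print("en:", longest_answer_baseline("en"))  # ≈ 0.426 per the table above
print("zh:", longest_answer_baseline("zh"))  # ≈ 0.452 per the table above
```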

## 📖 Citation

Our paper is coming soon.