|
|
--- |
|
|
license: mit |
|
|
task_categories: |
|
|
- question-answering |
|
|
language: |
|
|
- en |
|
|
- zh |
|
|
tags: |
|
|
- chemistry |
|
|
- multimodal |
|
|
- reasoning |
|
|
- benchmark |
|
|
- STEM |
|
|
pretty_name: SUPERChem |
|
|
size_categories: |
|
|
- n<1K |
|
|
--- |
|
|
|
|
|
<div align="center"> |
|
|
|
|
|
# SUPERChem: A Multimodal Reasoning Benchmark in Chemistry |
|
|
|
|
|
[Website](https://superchem.pku.edu.cn) | [Paper](https://arxiv.org/abs/2512.01274) | [Code](https://github.com/catalystforyou/SUPERChem_eval)
|
|
|
|
|
</div> |
|
|
|
|
|
## Updates
|
|
|
|
|
* **[2025-12-06]** **PDF Preview Released**: We have released the PDF version of SUPERChem in both English and Chinese to facilitate easier previewing and manual inspection, especially for non-technical users. You can download [SUPERChem-500.zip](./SUPERChem-500.zip) to access the dataset in PDF format. The password to unzip the file is `SUPERChem2025`. |
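For convenience, the password-protected archive can also be extracted programmatically. The sketch below uses Python's standard `zipfile` module; the file name and password are taken from the note above, and the output directory is an arbitrary choice. Note that `zipfile` only supports the legacy ZipCrypto scheme; if the archive happens to use AES encryption, it will raise an error and a tool such as `7z` or `unzip -P` is needed instead.

```python
import zipfile

def extract_superchem(zip_path: str, out_dir: str, password: str = "SUPERChem2025") -> None:
    """Extract the password-protected SUPERChem PDF archive."""
    with zipfile.ZipFile(zip_path) as zf:
        # The password must be passed as bytes; it is ignored for unencrypted entries.
        zf.extractall(path=out_dir, pwd=password.encode())

# Example (after downloading the archive to the current directory):
# extract_superchem("SUPERChem-500.zip", "SUPERChem-500")
```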
|
|
|
|
|
## What is SUPERChem?
|
|
|
|
|
SUPERChem is a challenging, expert-curated multimodal benchmark designed for rigorously evaluating the chemical reasoning capabilities of Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs). |
|
|
|
|
|
## Key Features
|
|
|
|
|
* **Expert-Level Challenge**: 500 reasoning-intensive problems curated by domain experts to test deep chemical reasoning and mitigate the ceiling effects seen in other benchmarks.
|
|
* **Controlled Multimodality**: Each problem is available in both multimodal and text-only formats, enabling a rigorous, controlled analysis of a model's ability to integrate visual information.
|
|
* **Process-Level Evaluation**: Introduces **Reasoning Path Fidelity (RPF)**, a metric that assesses the alignment of a model's reasoning with expert-authored solution paths, distinguishing genuine understanding from "lucky guesses".
|
|
* **Fine-Grained Ability Taxonomy**: A systematic categorization of chemical knowledge and reasoning skills supports detailed diagnosis of model strengths and weaknesses across various sub-domains.
|
|
* **Contamination Resistant**: Problems are newly authored or adapted from non-public sources and undergo a rigorous human-in-the-loop curation process to ensure quality and reduce the risk of data leakage from web-scraped training sets.
|
|
|
|
|
## Disclaimer & Feedback
|
|
|
|
|
While SUPERChem has undergone rigorous expert curation and human-in-the-loop verification, errors and typos may still exist given the complexity of the curation process.
|
|
|
|
|
We warmly welcome feedback and corrections from the community. If you identify any issues or have suggestions for improvement, please feel free to open a discussion in the [**Community tab**](https://huggingface.co/datasets/ZehuaZhao/SUPERChem/discussions). |
|
|
|
|
|
## Citation
|
|
|
|
|
If you use the dataset or evaluation framework of SUPERChem in your research, please cite our paper: |
|
|
|
|
|
```bibtex |
|
|
@misc{zhao2025superchemmultimodalreasoningbenchmark, |
|
|
title={SUPERChem: A Multimodal Reasoning Benchmark in Chemistry}, |
|
|
author={Zehua Zhao and Zhixian Huang and Junren Li and Siyu Lin and Junting Zhou and Fengqi Cao and Kun Zhou and Rui Ge and Tingting Long and Yuexiang Zhu and Yan Liu and Jie Zheng and Junnian Wei and Rong Zhu and Peng Zou and Wenyu Li and Zekai Cheng and Tian Ding and Yaxuan Wang and Yizhao Yan and Tingru Wei and Haowei Ming and Weijie Mao and Chen Sun and Yiming Liu and Zichen Wang and Zuo Zhang and Tong Yang and Hao Ma and Zhen Gao and Jian Pei}, |
|
|
year={2025}, |
|
|
eprint={2512.01274}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.CL}, |
|
|
url={https://arxiv.org/abs/2512.01274}, |
|
|
} |
|
|
``` |