---
license: cc-by-nc-4.0
task_categories:
  - visual-question-answering
  - question-answering
language:
  - en
  - zh
tags:
  - science
  - multimodal
  - benchmark
  - evaluation
  - astronomy
  - earth-science
  - biology
  - materials-science
pretty_name: SFE (Scientists First Exam) Multimodal Science Benchmark
size_categories:
  - n<1K
---

SFE (Scientists First Exam) Multimodal Science Benchmark

SFE (Scientists First Exam) Multimodal Evaluation Dataset

Dataset Description

Description

SFE (Scientists First Exam) is a multimodal scientific LLM evaluation benchmark. It uses original research data combined with expert annotations to construct high-difficulty scientific reasoning datasets, evaluating models' full-chain research capabilities from signal perception and attribute understanding to complex reasoning.

Domains: Astronomy, Earth Science, Life Science, Materials Science, and more.

Data Production: Domain experts combine the latest research results to annotate high-difficulty multimodal image-text questions with answer explanations.

Difficulty Standard: Data is produced to a pass@4 criterion under which the model answers correctly at most twice across four attempts (see the sketch below).
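
A minimal sketch of this difficulty filter, assuming per-item results from four independent attempts are available as booleans; the function and variable names here are illustrative, not part of the dataset:

```python
def meets_difficulty_standard(attempt_correct: list[bool]) -> bool:
    """Return True if an item satisfies the pass@4 difficulty filter:
    the model answered correctly at most twice across four attempts."""
    assert len(attempt_correct) == 4, "exactly four attempts expected"
    return sum(attempt_correct) <= 2


# Example: an item the model got right once out of four attempts is kept.
print(meets_difficulty_standard([True, False, False, False]))  # True
# An item the model got right three times is considered too easy and dropped.
print(meets_difficulty_standard([True, True, True, False]))    # False
```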

Description (Chinese)

SFE (Scientists First Exam) is a multimodal scientific large-model evaluation benchmark. At its core, it combines original research data with expert annotations to build a high-difficulty scientific reasoning dataset, evaluating models' full-chain research capabilities from signal perception and attribute understanding to complex reasoning.

Experts draw on the latest research results in astronomy, earth science, life science, materials science, and other fields to annotate high-difficulty multimodal image-text questions with answer explanations. Data is produced to the pass@4 standard: the model answers correctly at most twice across four attempts.

Dataset Details

| Property  | Value |
| --------- | ----- |
| Scale     | Weekly production capacity: 300 entries |
| Format    | JSON with images (see the loading sketch below) |
| License   | CC-BY-NC-4.0 |
| Copyright | Full copyright provided |
| Use Case  | Evaluating multimodal LLMs on expert-level scientific reasoning across multiple disciplines |
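
A minimal loading sketch, assuming the files are packaged so that the Hugging Face `datasets` library can read them directly from the repository named in the citation below; the available splits and column names are not documented here and should be inspected after loading:

```python
from datasets import load_dataset

# Repository id taken from the citation URL below; adjust if the data is
# distributed differently (e.g. as raw JSON files plus image folders).
ds = load_dataset("obaydata/sfe-scientists-first-exam")
print(ds)  # shows the splits and column names actually present
```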

Citation

If you use this dataset, please cite:

@dataset{obaydata2026,
  author = {OBayData Team},
  title = {SFE (Scientists First Exam) Multimodal Science Benchmark},
  year = {2026},
  publisher = {HuggingFace},
  url = {https://huggingface.co/datasets/obaydata/sfe-scientists-first-exam}
}