---
language:
- en
- zh
license: mit
size_categories:
- n<1K
task_categories:
- question-answering
pretty_name: S1-Bench
tags:
- LRM
- System1
- fast-thinking
---
This dataset is the benchmark constructed in the paper *S1-Bench: A Simple Benchmark for Evaluating System 1 Thinking Capability of Large Reasoning Models*.
S1-Bench is a benchmark designed to evaluate Large Reasoning Models on simple tasks that favor intuitive System 1 thinking over deliberative System 2 reasoning.
S1-Bench comprises 422 question-answer pairs across four major categories and 28 subcategories, balanced between 220 English and 202 Chinese questions.
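As a sketch of how the composition above might look programmatically, the snippet below counts examples by language over a couple of hypothetical records. The field names (`question`, `answer`, `language`) are assumptions for illustration and are not confirmed by this card; the real schema may differ.

```python
from collections import Counter

# Hypothetical records mirroring the card's description; the actual
# S1-Bench field names are assumptions, not taken from the dataset.
examples = [
    {"question": "What is 2 + 2?", "answer": "4", "language": "en"},
    {"question": "一年有几个月？", "answer": "12", "language": "zh"},
]

# Tally examples per language tag (the full benchmark is reported
# as 220 English and 202 Chinese questions, 422 in total).
counts = Counter(ex["language"] for ex in examples)
print(dict(counts))
```

The same tally over the full dataset should reproduce the 220/202 split stated above.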