ykwon-hf committed a5261ce (verified · parent: 001a304)

Update README.md

Files changed (1): README.md (+6 −2)
README.md CHANGED
@@ -4,7 +4,11 @@ license: apache-2.0
 
 # ReasonIF
 
-A systematic benchmark for assessing large reasoning models' reasoning instruction following capability. We find substantial failures in reasoning instruction adherence: the highest instruction following score (IFS) remains below 0.25, meaning that fewer than 25\% of reasoning traces comply with the given instructions. Notably, as task difficulty increases, reasoning instruction following degrades further. We also explore two strategies to enhance reasoning instruction fidelity: (1) multi-turn reasoning and (2) Reasoning Instruction Finetuning (RIF) using synthetic data. RIF improves the IFS of GPT-OSS-20B from 0.11 to 0.27, indicating measurable progress but leaving ample room for improvement.
+<p align="center">
+<img src="figures/reasonIF_main.png" width="500">
+<br>
+<em>State-of-the-art large reasoning models demonstrate remarkable problem-solving capabilities, <br>but often fail to follow very simple instructions during reasoning.</em>
+</p>
 
-For more details please find `https://github.com/ykwon0407/reasonIF`
+**TL;DR:** It’s critical that LLMs follow user instructions. While prior studies assess instruction adherence in the model’s main responses, we argue that it is also important for large reasoning models (LRMs) to follow user instructions throughout their reasoning process. We introduce [ReasonIF](https://huggingface.co/datasets/ykwon-hf/reasonIF), a systematic benchmark for assessing reasoning instruction following, spanning multilingual reasoning, formatting, and length control. We find that frontier LRMs, including GPT-OSS-120B, Qwen3-235B, and DeepSeek-R1, fail to follow reasoning instructions more than 75% of the time. Notably, as task difficulty increases, reasoning instruction following degrades further. For more information, please see our paper and [GitHub repository](https://github.com/ykwon0407/reasonIF).
 
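The card defines the instruction following score (IFS) as the fraction of reasoning traces that comply with the given instruction (e.g. an IFS below 0.25 means fewer than 25% of traces comply). A minimal sketch of that metric follows; the function name and input format are illustrative assumptions, not the benchmark's actual API.

```python
# Hypothetical sketch of the instruction following score (IFS):
# the fraction of reasoning traces that comply with their instruction.
# Names and input shape are illustrative, not ReasonIF's real API.

def instruction_following_score(compliance_flags):
    """compliance_flags: one bool per reasoning trace (True = compliant)."""
    flags = list(compliance_flags)
    if not flags:
        return 0.0
    return sum(flags) / len(flags)


# Example: 1 compliant trace out of 4 gives an IFS of 0.25.
print(instruction_following_score([True, False, False, False]))  # 0.25
```

In practice each flag would come from a per-instruction checker (e.g. verifying language, format, or length of the reasoning trace) run over the model's outputs on the benchmark.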