---
dataset_info:
  features:
  - name: source
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: constraint_desc
    list: string
  - name: key
    dtype: string
  - name: instruction_id_list
    list: string
  - name: kwargs
    list:
    - name: capital_frequency
      dtype: int64
    - name: capital_relation
      dtype: string
    - name: num_words
      dtype: int64
    - name: relation
      dtype: string
    - name: keyword
      dtype: string
    - name: frequency
      dtype: int64
    - name: prompt_to_repeat
      dtype: string
    - name: keywords
      list: string
    - name: forbidden_words
      list: string
    - name: num_highlights
      dtype: int64
    - name: end_phrase
      dtype: string
    - name: num_bullets
      dtype: int64
    - name: section_spliter
      dtype: string
    - name: num_sections
      dtype: int64
    - name: language
      dtype: string
  - name: prompt
    dtype: string
  splits:
  - name: dev
    num_bytes: 123575
    num_examples: 90
  - name: test
    num_bytes: 478304
    num_examples: 332
  download_size: 223359
  dataset_size: 601879
configs:
- config_name: default
  data_files:
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
license: apache-2.0
language:
- en
size_categories:
- n<1K
---
# Math-IF Dataset Card
## Dataset Description
Math-IF (MathIF) is an instruction-following benchmark built on top of math word problems. Each example includes a math question together with explicit, verifiable instructions about how the model should respond (e.g., format, style, or structural constraints). The benchmark is designed to jointly test:
- instruction following in the **reasoning trace (RT)** and
- instruction following and correctness in the **final answer (FA)**.
In this repository, Math-IF is used as both a development set and a test benchmark for controllable reasoning models.
## Intended Use
- Evaluate how well models follow explicit instructions when solving math problems.
The dataset is intended for **research and benchmarking** only.
## Dataset Structure
From the accompanying paper in this repository (see `paper/`):
- **Size**:
  - **Dev**: 90 examples
  - **Test**: 332 examples
- **Splits used here**:
  - The **GSM8K partition** is used as the dev set for model selection.
  - The remaining partition is used as the test set.
Each instance conceptually includes:
- **`prompt`**: the user prompt with the math question and instruction.
- **`answer`**: the ground-truth final answer.
- **`question`**: the underlying math word problem (without instructions).
- **metadata for evaluation**: information needed to compute instruction-following metrics and answer accuracy.
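To make this schema concrete, here is a sketch of one record as a Python dict. The field names follow the features listed in the card header, but all values (the question, answer, key, and instruction id) are illustrative assumptions, not actual dataset content:

```python
# Illustrative record matching the feature schema in the card header.
# All values here are hypothetical, not drawn from the actual dataset.
record = {
    "source": "gsm8k",
    "question": "A farmer has 12 cows and buys 5 more. How many cows does he have?",
    "prompt": (
        "A farmer has 12 cows and buys 5 more. How many cows does he have? "
        "Answer in fewer than 50 words."
    ),
    "answer": "17",
    "constraint_desc": ["Answer in fewer than 50 words."],
    "key": "dev-0001",
    "instruction_id_list": ["length_constraints:number_words"],
    "kwargs": [{"num_words": 50, "relation": "less than"}],
}

# The instruction ids select which verifiable checks to run, and the
# parallel kwargs entries parameterize them (here, the word limit).
for iid, kw in zip(record["instruction_id_list"], record["kwargs"]):
    print(iid, kw)
```

Note that `instruction_id_list` and `kwargs` are parallel lists: each instruction id is paired with the keyword arguments its verifier needs.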
## Tasks and Evaluation
- **Main task**: Instruction-following on math problems.
- **Metrics**:
- **Instruction-level loose accuracy** (as defined in the Math-IF paper) for both RTs and FAs, yielding **IF-RT** and **IF-FA**.
- **Answer accuracy**, measuring whether the final numeric answer is correct.
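As a rough illustration of how instruction-level accuracy can be computed, the sketch below checks one common constraint type (a word-count limit) against model responses and averages the results. The function names and the matching rule are assumptions for illustration; they are not the official Math-IF evaluator:

```python
def follows_word_limit(text: str, num_words: int, relation: str) -> bool:
    """Check a word-count instruction such as 'answer in fewer than 50 words'.

    Hypothetical checker: the real evaluator's relation vocabulary and
    tokenization may differ.
    """
    count = len(text.split())
    return count < num_words if relation == "less than" else count >= num_words


def instruction_accuracy(results: list[bool]) -> float:
    """Fraction of instructions satisfied (instruction-level accuracy)."""
    return sum(results) / len(results) if results else 0.0


# One compliant response and one that exceeds the 50-word limit.
responses = ["The farmer has 17 cows.", "word " * 60]
checks = [follows_word_limit(r, 50, "less than") for r in responses]
print(instruction_accuracy(checks))  # → 0.5
```

The same pattern extends to the other constraint types in the schema (keyword frequency, section counts, end phrases), each parameterized by its `kwargs` entry.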
## Data Source
Math-IF was introduced to study the trade-off between reasoning performance and instruction-following in large reasoning models. For complete details, examples, and official evaluation scripts, please see the original Math-IF paper and repository.
## License
- **License**: Apache 2.0
## Known Limitations and Considerations
- The dataset focuses on **math word problems**, so instruction-following performance may differ on other domains (e.g., open-ended dialogue, code generation).
- The benchmark size is modest (422 examples total in the dev+test configuration used here), which can make very fine-grained comparisons noisy.
- Instructions are in **English**, so the benchmark does not directly evaluate multilingual behavior.
## Citation
```bibtex
@article{fu2025scaling,
  title={Scaling Reasoning, Losing Control: Evaluating Instruction Following in Large Reasoning Models},
  author={Fu, Tingchen and Gu, Jiawei and Li, Yafu and Qu, Xiaoye and Cheng, Yu},
  journal={arXiv preprint arXiv:2505.14810},
  year={2025}
}
```