---
dataset_info:
  features:
    - name: source
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: constraint_desc
      list: string
    - name: key
      dtype: string
    - name: instruction_id_list
      list: string
    - name: kwargs
      list:
        - name: capital_frequency
          dtype: int64
        - name: capital_relation
          dtype: string
        - name: num_words
          dtype: int64
        - name: relation
          dtype: string
        - name: keyword
          dtype: string
        - name: frequency
          dtype: int64
        - name: prompt_to_repeat
          dtype: string
        - name: keywords
          list: string
        - name: forbidden_words
          list: string
        - name: num_highlights
          dtype: int64
        - name: end_phrase
          dtype: string
        - name: num_bullets
          dtype: int64
        - name: section_spliter
          dtype: string
        - name: num_sections
          dtype: int64
        - name: language
          dtype: string
    - name: prompt
      dtype: string
  splits:
    - name: dev
      num_bytes: 123575
      num_examples: 90
    - name: test
      num_bytes: 478304
      num_examples: 332
  download_size: 223359
  dataset_size: 601879
configs:
  - config_name: default
    data_files:
      - split: dev
        path: data/dev-*
      - split: test
        path: data/test-*
license: apache-2.0
language:
  - en
size_categories:
  - n<1K
---

# Math-IF Dataset Card

## Dataset Description

Math-IF (MathIF) is an instruction-following benchmark built on top of math word problems. Each example includes a math question together with explicit, verifiable instructions about how the model should respond (e.g., format, style, or structural constraints). The benchmark is designed to jointly test:

- instruction following in the reasoning trace (RT), and
- instruction following and correctness in the final answer (FA).

In this repository, Math-IF is used as both a development set and a test benchmark for controllable reasoning models.

## Intended Use

- Evaluate how well models follow explicit instructions when solving math problems.

The dataset is intended for research and benchmarking only.

## Dataset Structure

From the accompanying paper in this repository (see paper/):

- Size:
  - Dev: 90 examples
  - Test: 332 examples
- Splits used here:
  - The GSM8K partition is used as the dev set for model selection.
  - The remaining partition is used as the test set.

Each instance conceptually includes:

- `prompt`: the user prompt with the math question and instruction.
- `answer`: the ground-truth final answer.
- `question`: the underlying math word problem (without instructions).
- metadata for evaluation: information needed to compute instruction-following metrics and answer accuracy.
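For illustration only, a single record matching the schema in the metadata above might look like the dict below. All field values here are invented (the `source` value, key format, and instruction ids are assumptions, not drawn from the actual data files):

```python
# Hypothetical Math-IF record sketched from the dataset_info schema.
# Values are illustrative; only the field names and types follow the card.
example = {
    "source": "gsm8k",  # origin partition (value assumed)
    "question": "Natalia sold clips to 48 of her friends. How many clips did she sell in total?",
    "prompt": (
        "Natalia sold clips to 48 of her friends. How many clips did she sell "
        "in total? Answer with at most 100 words."
    ),
    "answer": "72",
    "key": "dev-0",  # example identifier (format assumed)
    "constraint_desc": ["Answer with at most 100 words."],
    "instruction_id_list": ["length_constraints:number_words"],  # id string assumed
    # One kwargs entry per instruction; unused keys from the schema are omitted here.
    "kwargs": [{"num_words": 100, "relation": "at most"}],
}
```

Per the schema, `kwargs` carries one entry per item in `instruction_id_list`, with only the keys relevant to that instruction populated.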

## Tasks and Evaluation

- Main task: instruction following on math problems.
- Metrics:
  - Instruction-level loose accuracy (as defined in the Math-IF paper) for both RTs and FAs, yielding IF-RT and IF-FA.
  - Answer accuracy: whether the final numeric answer is correct.
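The official evaluation scripts are not reproduced here; the sketch below only illustrates the idea behind "loose" instruction checking (in the style of IFEval-like loose accuracy: an instruction counts as followed if any relaxed variant of the response satisfies it). The helper names, the exact set of relaxations, and the use of the `num_words`/`relation` kwargs are our assumptions:

```python
def word_count_ok(text: str, num_words: int, relation: str) -> bool:
    """Check a word-count constraint, mirroring the `num_words`/`relation` kwargs."""
    n = len(text.split())
    return n >= num_words if relation == "at least" else n <= num_words


def loose_variants(response: str) -> list[str]:
    """Relaxed views of a response: drop first/last lines, strip markdown asterisks."""
    lines = response.split("\n")
    candidates = [
        response,
        "\n".join(lines[1:]).strip(),    # drop a leading filler line
        "\n".join(lines[:-1]).strip(),   # drop a trailing filler line
        "\n".join(lines[1:-1]).strip(),  # drop both
    ]
    variants: list[str] = []
    for c in candidates:
        for v in (c, c.replace("*", "")):
            if v and v not in variants:
                variants.append(v)
    return variants


def loose_follows(response: str, check) -> bool:
    """An instruction counts as followed if any relaxed variant passes the check."""
    return any(check(v) for v in loose_variants(response))
```

A response that fails a check verbatim (e.g., because of a "Sure, here is my answer:" preamble) can still pass loosely once the filler lines are stripped.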

## Data Source

Math-IF was introduced to study the trade-off between reasoning performance and instruction-following in large reasoning models. For complete details, examples, and official evaluation scripts, please see the original Math-IF paper and repository.

## License

- License: Apache 2.0

## Known Limitations and Considerations

- The dataset focuses on math word problems, so instruction-following performance may differ in other domains (e.g., open-ended dialogue, code generation).
- The benchmark is modest in size (422 examples total across the dev and test splits used here), which can make fine-grained comparisons noisy.
- Instructions are in English, so the benchmark does not directly evaluate multilingual behavior.

## Citation

```bibtex
@article{fu2025scaling,
  title   = {Scaling Reasoning, Losing Control: Evaluating Instruction Following in Large Reasoning Models},
  author  = {Fu, Tingchen and Gu, Jiawei and Li, Yafu and Qu, Xiaoye and Cheng, Yu},
  journal = {arXiv preprint arXiv:2505.14810},
  year    = {2025}
}
```