---
license: mit
task_categories:
  - question-answering
language:
  - en
tags:
  - math
  - number-sense
  - benchmark
  - shortcuts
  - numerical-reasoning
size_categories:
  - 1K<n<10K
---

# SenseMath: Evaluating Number Sense in Large Language Models

SenseMath is a controlled benchmark for measuring whether large language models (LLMs) can exploit number-sense shortcuts.

## Dataset Description

- 1,600 item families across 8 categories and 4 digit scales
- 3 variants per family: strong-shortcut, weak-shortcut, control
- 4,800 total items (1,600 families × 3 variants)
- Categories: Magnitude Estimation, Structural Shortcuts, Relative Distance, Cancellation, Compatible Numbers, Landmark Comparison, Equation Reasoning, Option Elimination
- Digit scales: d = 2, 4, 8, 16
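To make the family/variant structure concrete, here is a minimal sketch of what a single record might look like. The field names, IDs, and questions are illustrative assumptions for this README, not the dataset's actual schema; inspect the JSON files for the real layout.

```python
import json

# Hypothetical example record: one item family with its three variants.
# All field names and values below are assumptions, not the real schema.
family = {
    "family_id": "magnitude_d4_0001",
    "category": "Magnitude Estimation",
    "digit_scale": 4,
    "variants": {
        "strong-shortcut": {"question": "Which is larger: 4987 + 12 or 1003 + 15?"},
        "weak-shortcut": {"question": "Which is larger: 4987 + 12 or 4521 + 480?"},
        "control": {"question": "Which is larger: 4987 + 12 or 4990 + 11?"},
    },
}

print(json.dumps(family, indent=2))
```

The intent of the three variants: the strong-shortcut version can be answered from magnitude alone, the weak-shortcut version makes the cue less decisive, and the control requires actual computation.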

## Files

| File | Description |
|------|-------------|
| `data/sensemath_v2_d2.json` | 400 families, 2-digit operands |
| `data/sensemath_v2_d4.json` | 400 families, 4-digit operands |
| `data/sensemath_v2_d8.json` | 400 families, 8-digit operands |
| `data/sensemath_v2_d16.json` | 400 families, 16-digit operands |
| `data/judge_j1.json` | J1 task: shortcut recognition (251 items) |
| `data/judge_j2.json` | J2 task: strategy identification (80 items) |
| `data/judge_j3.json` | J3 task items |
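Since the four digit-scale files share the same family structure, they can be concatenated for a full evaluation run. This sketch assumes a local checkout with the `data/` directory as listed above:

```python
import json
from pathlib import Path

# Expected total across all scales: 4 files x 400 families = 1,600 families.
scale_paths = [Path("data") / f"sensemath_v2_d{d}.json" for d in (2, 4, 8, 16)]

all_families = []
for path in scale_paths:
    if path.exists():  # guard so the sketch runs even without the data checkout
        all_families.extend(json.loads(path.read_text()))

print(f"{len(scale_paths)} scale files, {len(all_families)} families loaded")
```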

## Usage

```python
from datasets import load_dataset

ds = load_dataset("DaydreamerMZM/SenseMath", split="train")
```

Or load a digit-scale file directly:

```python
import json

with open("data/sensemath_v2_d4.json") as f:
    families = json.load(f)
```
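A typical evaluation compares accuracy on the strong-shortcut, weak-shortcut, and control variants of each family. The sketch below tallies per-variant accuracy; the `predict` stub and the `variants`/`answer` field names are placeholders for your own model call and the dataset's actual schema:

```python
from collections import defaultdict

def predict(question: str) -> str:
    # Placeholder for a real model call (e.g., an API request).
    return "A"

# Stand-in for `families` loaded from a JSON file; field names are assumptions.
families = [
    {"variants": {
        "strong-shortcut": {"question": "...", "answer": "A"},
        "weak-shortcut": {"question": "...", "answer": "B"},
        "control": {"question": "...", "answer": "A"},
    }}
]

correct = defaultdict(int)
total = defaultdict(int)
for fam in families:
    for variant, item in fam["variants"].items():
        total[variant] += 1
        if predict(item["question"]) == item["answer"]:
            correct[variant] += 1

for variant in sorted(total):
    print(f"{variant}: {correct[variant]}/{total[variant]}")
```

A large accuracy gap between shortcut and control variants within the same families is the signal the benchmark is designed to expose.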

## Citation

```bibtex
@article{zhuang2025sensemath,
  title={SenseMath: Evaluating Number Sense in Large Language Models},
  author={Zhuang, Haomin and Wang, Xiangqi and Shen, Yili and Cheng, Ying and Zhang, Xiangliang},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```

## Links