---
license: mit
task_categories:
  - question-answering
  - text-generation
language:
  - en
tags:
  - benchmark
  - reasoning
  - multi-step
  - evaluation
  - llm-evaluation
  - goodhart
  - execution-vs-understanding
size_categories:
  - n<1K
---

# Goodhart Gap Benchmark

*Detecting the gap between understanding and execution in language models*

## Overview

The Goodhart Gap Benchmark tests whether language models can correctly execute multi-step reasoning tasks that they can correctly explain. Named after Goodhart's Law ("When a measure becomes a target, it ceases to be a good measure"), this benchmark reveals a critical failure mode: models that understand procedures but fail to execute them.

## Key Finding

In our testing of 15+ models:

- **gpt-4o**: 57% pass rate (fails on financial, scheduling, and units problems)
- **gpt-4o-mini**: 36% pass rate
- **Claude 3.5 Haiku**: 93% pass rate
- **Llama 3.1 70B**: fails the canonical discount calculation despite explaining the procedure correctly

## The Canonical Example

**Problem:** "If a shirt costs $25 and is on 20% sale, and you have a $5 coupon, what do you pay?"

**Correct answer:** $15 (apply the 20% discount first: $25 × 0.8 = $20, then subtract the coupon: $20 − $5 = $15)

When we first ask models to explain the procedure, they all correctly state: "First apply the discount, then subtract the coupon."

When we then ask for the answer, many models fail, giving answers like $16, $17, $22.50, or even $175.
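
Most of these wrong answers come from re-ordering or conflating the two steps. A minimal sketch of the two orderings (values taken from the example above; the variable names are just illustrative):

```python
price, discount, coupon = 25.00, 0.20, 5.00

# Correct order: apply the percentage discount, then the flat coupon
discount_first = price * (1 - discount) - coupon   # 25 * 0.8 - 5 = 15.0

# Common failure: subtract the coupon first, then apply the discount
coupon_first = (price - coupon) * (1 - discount)   # (25 - 5) * 0.8 = 16.0

print(discount_first, coupon_first)  # 15.0 16.0
```

The frequently seen $16, for instance, is exactly the coupon-first ordering.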

## Dataset Statistics

| Metric | Value |
|---|---|
| Total problems | 101 |
| Domains | 12 |
| Difficulty levels | 3 (easy, medium, hard) |
| Steps per problem | 2-6 |

## Problems by Domain

### Numerical Domains (67 problems)

| Domain | Count | Description |
|---|---|---|
| math_discount | 15 | Discounts, coupons, taxes, markups |
| time | 13 | Duration arithmetic, travel times |
| financial | 10 | Interest, taxes, commissions |
| logic | 8 | Ordering, deduction, set operations |
| recipe | 7 | Scaling, unit conversion |
| scheduling | 7 | Task dependencies, work rates |
| units | 7 | Unit conversion with operations |

### Non-Numerical Domains (34 problems)

| Domain | Count | Description |
|---|---|---|
| spatial | 7 | Direction tracking, grid navigation, relative positions |
| procedural | 6 | State machines, undo/redo, procedure following |
| text | 7 | String manipulation, encoding, word operations |
| sequence | 7 | Pattern recognition (letters, symbols, words) |
| causal | 7 | Cause-effect chains, counterfactuals, necessary/sufficient conditions |

## Difficulty Distribution

| Difficulty | Count | Description |
|---|---|---|
| Easy | 28 | 2 steps, straightforward |
| Medium | 32 | 2-3 steps, some complexity |
| Hard | 7 | 3-4 steps, multiple operations |

## Data Format

Each problem is a JSON object with the following fields:

```json
{
  "id": "math_discount_01",
  "domain": "math_discount",
  "problem": "A product costs $25 and is on 20% sale. You also have a $5 coupon. What do you pay? Answer with just the number.",
  "correct_answer": "15",
  "explanation": "25 × 0.8 = 20.0, then 20.0 - 5 = 15.0",
  "understanding_check": "To solve this, first apply the 20% discount, then subtract the coupon. What are the two steps?",
  "difficulty": "easy",
  "steps": 2
}
```

### Field Descriptions

| Field | Description |
|---|---|
| `id` | Unique identifier (`domain_type_number`) |
| `domain` | Category of reasoning required |
| `problem` | The question posed to the model |
| `correct_answer` | Expected answer (numeric or text) |
| `explanation` | Step-by-step solution |
| `understanding_check` | Prompt used to verify that the model understands the procedure |
| `difficulty` | `easy`, `medium`, or `hard` |
| `steps` | Number of sequential operations required |
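
To measure the gap itself, run each problem in two phases: first the `understanding_check`, then the `problem`. A minimal sketch of that protocol, where `ask` is a hypothetical callable that sends a prompt to your model and returns its text response:

```python
def probe(problem: dict, ask) -> tuple[str, str]:
    """Run the two-phase explain-then-execute probe for one problem.

    `ask` is any callable mapping a prompt string to the model's text
    response (hypothetical here; wire in your own client).
    """
    # Phase 1: can the model state the procedure?
    stated_procedure = ask(problem['understanding_check'])

    # Phase 2: can the model actually execute it?
    answer = ask(problem['problem'])

    # A Goodhart gap appears when phase 1 is right but phase 2 is wrong
    return stated_procedure, answer
```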

## Usage

### Quick Evaluation

```bash
# Install requirements
pip install requests

# Evaluate an OpenAI model
python evaluate.py --provider openai --model gpt-4o -v

# Evaluate a Claude model
python evaluate.py --provider anthropic --model claude-3-5-haiku-latest -v

# Evaluate a local Ollama model
python evaluate.py --provider ollama --model llama3.1:8b -v
```

### Python API

```python
import json

# Load the dataset (one JSON object per line)
problems = []
with open('data/test.jsonl') as f:
    for line in f:
        problems.append(json.loads(line))

# Test your model
for problem in problems:
    response = your_model.generate(problem['problem'])
    expected = problem['correct_answer']
    # Validate response against expected (see Evaluation Criteria below)
```

### With HuggingFace Datasets

```python
from datasets import load_dataset

dataset = load_dataset("Adam1010/goodhart-gap-benchmark")

for example in dataset['test']:
    print(example['problem'])
    print(f"Expected: {example['correct_answer']}")
```

## Evaluation Criteria

A response is considered correct if:

1. **Numeric answers**: the expected number appears in the response, with tolerance for rounding (see the sketch below)
2. **Time answers**: the expected time appears in any reasonable format (e.g., "4:45 PM", "4:45pm", "16:45")
3. **Yes/no answers**: the response clearly indicates yes, no, or "cannot determine"
4. **Ordering answers**: the items appear in the correct sequence
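
The exact validators live in `evaluate.py`; the following is a simplified sketch of the numeric rule only, with an illustrative tolerance that may differ from the shipped implementation:

```python
import re

def numeric_match(response: str, expected: str, tol: float = 1e-2) -> bool:
    """Return True if the expected number appears in the response,
    allowing a small rounding tolerance (criterion 1 above)."""
    target = float(expected)
    # Extract every number in the response, handling "$15", "15.00", "1,000"
    for token in re.findall(r'-?\d+(?:\.\d+)?', response.replace(',', '')):
        if abs(float(token) - target) <= tol:
            return True
    return False

assert numeric_match("You would pay $15.00 in total.", "15")
assert not numeric_match("The answer is 16.", "15")
```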

## Leaderboard

| Model | Provider | Pass Rate | Weakest Domain |
|---|---|---|---|
| Claude 3.5 Haiku | Anthropic | 93% | logic |
| Claude Sonnet 4 | Anthropic | 79% | financial, scheduling |
| gpt-4o | OpenAI | 57% | scheduling |
| gpt-4o-mini | OpenAI | 36% | most domains |
| Qwen 2.5 72B | Alibaba | TBD | - |
| Llama 3.1 70B | Meta | TBD | - |

Submit your results via PR to be added to the leaderboard.

## Why This Matters

### For AI Safety

Models that can explain correct procedures but execute them incorrectly are:

- Harder to detect through explanation-based evaluation
- More dangerous in agentic settings, where the executed answer is what acts on the world
- Evidence of a gap between capability benchmarks and deployment readiness

### For Model Selection

Not all models are equally capable at multi-step reasoning:

- Model family matters more than size
- Distilled models often lose this capability
- Test execution, not just explanation

### For Training

The gap appears to be a training problem:

- Well-trained models (e.g., Claude 3.5 Haiku) outperform much larger models
- This suggests that targeted fine-tuning could close the gap

## Citation

```bibtex
@dataset{goodhart_gap_benchmark_2026,
  title={Goodhart Gap Benchmark: Detecting the Gap Between Understanding and Execution in LLMs},
  author={Adam Kruger},
  year={2026},
  url={https://huggingface.co/datasets/Adam1010/goodhart-gap-benchmark}
}
```

## License

MIT License - free for research and commercial use.

## Contributing

We welcome contributions:

- New test cases in underrepresented domains
- Results from additional models
- Improved validators
- Translations to other languages

Submit issues and PRs at: [GitHub Repository URL]

## Acknowledgments

Research inspired by:

- Goodhart's Law and its application to AI evaluation
- Work on multi-step reasoning in LLMs
- The distinction between System 1 and System 2 thinking