Upload folder using huggingface_hub

Files added:
- README.md (+223)
- data/baseline_results.json (+78)
- data/summary.json (+22)
- data/test.jsonl (+67)
- evaluate.py (+471)
- requirements.txt (+1)

README.md (ADDED)
---
license: mit
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- benchmark
- reasoning
- multi-step
- evaluation
- llm-evaluation
- goodhart
- execution-vs-understanding
size_categories:
- n<1K
---

# Goodhart Gap Benchmark

**Detecting the gap between understanding and execution in language models**

## Overview

The Goodhart Gap Benchmark tests whether language models can correctly *execute* multi-step reasoning tasks that they can correctly *explain*. Named after Goodhart's Law ("When a measure becomes a target, it ceases to be a good measure"), this benchmark reveals a critical failure mode: models that understand procedures but fail to execute them.

## Key Finding

In our testing of 15+ models:
- **gpt-4o**: 57% pass rate (fails on financial, scheduling, and units problems)
- **gpt-4o-mini**: 36% pass rate
- **Claude 3.5 Haiku**: 93% pass rate
- **Llama 3.1 70B**: fails the canonical discount calculation despite explaining it correctly

## The Canonical Example

**Problem**: "If a shirt costs $25 and is on 20% sale, and you have a $5 coupon, what do you pay?"

**Correct answer**: $15 (apply the 20% discount first: $25 × 0.8 = $20, then subtract the coupon: $20 - $5 = $15)

When we first ask models to *explain* the procedure, they all correctly state: "First apply the discount, then subtract the coupon."

When we then ask for the answer, many models fail, giving answers like $16, $17, $22.50, or even $175.
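The order-of-operations trap can be made concrete in a few lines. This is a minimal sketch (the function names are ours, not part of the benchmark code) contrasting the correct procedure with a common wrong ordering:

```python
PRICE, DISCOUNT, COUPON = 25.0, 0.20, 5.0

def correct(price: float, discount: float, coupon: float) -> float:
    # Step 1: apply the percentage discount; step 2: subtract the coupon.
    return price * (1 - discount) - coupon

def coupon_first(price: float, discount: float, coupon: float) -> float:
    # A common failure mode: subtracting the coupon before discounting.
    return (price - coupon) * (1 - discount)

print(correct(PRICE, DISCOUNT, COUPON))       # 15.0
print(coupon_first(PRICE, DISCOUNT, COUPON))  # 16.0, one of the wrong answers models give
```

Note that the wrong ordering yields $16, which is exactly one of the failure answers observed above.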
## Dataset Statistics

| Metric | Value |
|--------|-------|
| Total problems | 67 |
| Domains | 7 |
| Difficulty levels | 3 (easy, medium, hard) |
| Steps per problem | 2-4 |

### Problems by Domain

| Domain | Count | Description |
|--------|-------|-------------|
| math_discount | 15 | Discounts, coupons, taxes, markups |
| time | 13 | Duration arithmetic, travel times |
| financial | 10 | Interest, taxes, commissions |
| logic | 8 | Ordering, deduction, set operations |
| recipe | 7 | Scaling, unit conversion |
| scheduling | 7 | Task dependencies, work rates |
| units | 7 | Unit conversion with operations |

### Difficulty Distribution

| Difficulty | Count | Description |
|------------|-------|-------------|
| Easy | 28 | 2 steps, straightforward |
| Medium | 32 | 2-3 steps, some complexity |
| Hard | 7 | 3-4 steps, multiple operations |

## Data Format

Each problem is a JSON object with the following fields:

```json
{
  "id": "math_discount_01",
  "domain": "math_discount",
  "problem": "A product costs $25 and is on 20% sale. You also have a $5 coupon. What do you pay? Answer with just the number.",
  "correct_answer": "15",
  "explanation": "25 × 0.8 = 20.0, then 20.0 - 5 = 15.0",
  "understanding_check": "To solve this, first apply the 20% discount, then subtract the coupon. What are the two steps?",
  "difficulty": "easy",
  "steps": 2
}
```

### Field Descriptions

| Field | Description |
|-------|-------------|
| `id` | Unique identifier (domain_type_number) |
| `domain` | Category of reasoning required |
| `problem` | The question posed to the model |
| `correct_answer` | Expected answer (numeric or text) |
| `explanation` | Step-by-step solution |
| `understanding_check` | Prompt to verify the model understands the procedure |
| `difficulty` | easy, medium, or hard |
| `steps` | Number of sequential operations required |
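The two-phase protocol implied by these fields (probe understanding via `understanding_check`, then probe execution via `problem`) can be sketched as follows. This is illustrative only: `goodhart_gap_trial`, `ask_model`, and the toy `flawed` model are our placeholder names, not part of `evaluate.py`, and the substring check is deliberately naive:

```python
def goodhart_gap_trial(problem: dict, ask_model) -> dict:
    """Run one two-phase trial: ask for the procedure, then for the answer."""
    explanation = ask_model(problem["understanding_check"])  # phase 1: explain
    answer = ask_model(problem["problem"])                   # phase 2: execute
    return {
        "id": problem["id"],
        "explained": explanation,
        "answered": answer,
        "execution_ok": problem["correct_answer"] in answer,  # naive string check
    }

# Toy stand-in model that explains correctly but executes in the wrong order:
demo = {
    "id": "math_discount_01",
    "understanding_check": "What are the two steps?",
    "problem": "A product costs $25 and is on 20% sale. You also have a $5 coupon. What do you pay?",
    "correct_answer": "15",
}
flawed = lambda prompt: ("First apply the discount, then subtract the coupon."
                         if "steps" in prompt else "16")
print(goodhart_gap_trial(demo, flawed)["execution_ok"])  # False: the gap in miniature
```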
## Usage

### Quick Evaluation

```bash
# Install requirements
pip install requests

# Evaluate an OpenAI model
python evaluate.py --provider openai --model gpt-4o -v

# Evaluate a Claude model
python evaluate.py --provider anthropic --model claude-3-5-haiku-latest -v

# Evaluate a local Ollama model
python evaluate.py --provider ollama --model llama3.1:8b -v
```

### Python API

```python
import json

# Load the dataset
problems = []
with open('data/test.jsonl') as f:
    for line in f:
        problems.append(json.loads(line))

# Test your model
for problem in problems:
    response = your_model.generate(problem['problem'])
    expected = problem['correct_answer']
    # Validate the response against the expected answer
```

### With HuggingFace Datasets

```python
from datasets import load_dataset

dataset = load_dataset("your-username/goodhart-gap-benchmark")

for example in dataset['test']:
    print(example['problem'])
    print(f"Expected: {example['correct_answer']}")
```

## Evaluation Criteria

A response is considered correct if:
1. **Numeric answers**: The expected number appears in the response (with tolerance for rounding)
2. **Time answers**: The expected time appears in any reasonable format (e.g., "4:45 PM", "4:45pm", "16:45")
3. **Yes/no answers**: The response clearly indicates yes, no, or "cannot determine"
4. **Ordering answers**: Items appear in the correct sequence
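A minimal numeric validator in this spirit might look like the following. This is a sketch under our own assumptions; the actual matching logic lives in `evaluate.py` and may differ:

```python
import re

def numeric_match(response: str, expected: str, tol: float = 0.01) -> bool:
    """True if any number in the response is within `tol` of the expected value."""
    target = float(expected)
    # Extract every number in the response, ignoring thousands separators.
    for token in re.findall(r"-?\d+(?:\.\d+)?", response.replace(",", "")):
        if abs(float(token) - target) <= tol:
            return True
    return False

print(numeric_match("You pay $15.00 in total.", "15"))  # True
print(numeric_match("The answer is 16", "15"))          # False
```

The tolerance absorbs harmless rounding (e.g. "229.5" vs. "229.50") without excusing a wrong intermediate step.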
## Leaderboard

| Model | Provider | Pass Rate | Weakest Domain |
|-------|----------|-----------|----------------|
| Claude 3.5 Haiku | Anthropic | 93% | logic |
| Claude Sonnet 4 | Anthropic | 79% | financial, scheduling |
| gpt-4o | OpenAI | 57% | scheduling |
| gpt-4o-mini | OpenAI | 36% | most domains |
| Qwen 2.5 72B | Alibaba | TBD | - |
| Llama 3.1 70B | Meta | TBD | - |

*Submit your results via PR to be added to the leaderboard.*

## Why This Matters

### For AI Safety

Models that can explain correct procedures but execute them incorrectly are:
- Harder to detect through explanation-based evaluation
- More dangerous in agentic settings
- Evidence of a gap between capability benchmarks and deployment readiness

### For Model Selection

Not all models are equal at multi-step reasoning:
- Model family matters more than size
- Distilled models often lose this capability
- Test execution, not just explanation

### For Training

The gap appears to be a training problem:
- Well-trained models (Claude 3.5 Haiku) outperform larger models
- This suggests targeted fine-tuning could help

## Citation

```bibtex
@dataset{goodhart_gap_benchmark_2025,
  title={Goodhart Gap Benchmark: Detecting the Gap Between Understanding and Execution in LLMs},
  author={[Your Name]},
  year={2025},
  url={https://huggingface.co/datasets/your-username/goodhart-gap-benchmark}
}
```

## License

MIT License - free for research and commercial use.

## Contributing

We welcome contributions:
- New test cases in underrepresented domains
- Results from additional models
- Improved validators
- Translations into other languages

Submit issues and PRs at: [GitHub Repository URL]

## Acknowledgments

Research inspired by:
- Goodhart's Law and its application to AI evaluation
- Work on multi-step reasoning in LLMs
- The distinction between System 1 and System 2 thinking

data/baseline_results.json (ADDED)
{
  "benchmark_version": "1.0",
  "date": "2025-01-03",
  "results": [
    {
      "model": "claude-3-5-haiku-latest",
      "provider": "anthropic",
      "pass_rate": 0.93,
      "passed": 13,
      "total": 14,
      "notes": "Multi-domain test (14 problems)"
    },
    {
      "model": "claude-sonnet-4-20250514",
      "provider": "anthropic",
      "pass_rate": 0.79,
      "passed": 11,
      "total": 14,
      "notes": "Multi-domain test (14 problems)"
    },
    {
      "model": "gpt-4o",
      "provider": "openai",
      "pass_rate": 0.57,
      "passed": 8,
      "total": 14,
      "notes": "Multi-domain test (14 problems)"
    },
    {
      "model": "gpt-4o-mini",
      "provider": "openai",
      "pass_rate": 0.36,
      "passed": 5,
      "total": 14,
      "notes": "Multi-domain test (14 problems)"
    },
    {
      "model": "qwen2.5:72b",
      "provider": "ollama/runpod",
      "pass_rate": 1.0,
      "passed": 1,
      "total": 1,
      "notes": "Canonical discount test only"
    },
    {
      "model": "deepseek-llm:67b",
      "provider": "ollama/runpod",
      "pass_rate": 1.0,
      "passed": 1,
      "total": 1,
      "notes": "Canonical discount test only"
    },
    {
      "model": "llama3.1:70b",
      "provider": "ollama/runpod",
      "pass_rate": 0.0,
      "passed": 0,
      "total": 1,
      "notes": "Canonical discount test only - answered 17"
    },
    {
      "model": "llama3.1:8b",
      "provider": "ollama",
      "pass_rate": 0.0,
      "passed": 0,
      "total": 1,
      "notes": "Canonical discount test only - answered $22.50"
    },
    {
      "model": "mistral:7b",
      "provider": "ollama",
      "pass_rate": 1.0,
      "passed": 1,
      "total": 1,
      "notes": "Canonical discount test only - PASSED"
    }
  ]
}
data/summary.json (ADDED)
{
  "total_problems": 67,
  "domains": {
    "math_discount": 15,
    "time": 13,
    "recipe": 7,
    "financial": 10,
    "units": 7,
    "scheduling": 7,
    "logic": 8
  },
  "difficulty_distribution": {
    "easy": 26,
    "medium": 34,
    "hard": 7
  },
  "step_distribution": {
    "2": 51,
    "3": 13,
    "4": 3
  }
}
data/test.jsonl (ADDED)
| 1 |
+
{"id": "math_discount_01", "domain": "math_discount", "problem": "A product costs $25 and is on 20% sale. You also have a $5 coupon. What do you pay? Answer with just the number.", "correct_answer": "15", "explanation": "25 \u00d7 0.8 = 20.0, then 20.0 - 5 = 15.0", "understanding_check": "To solve this, first apply the 20% discount, then subtract the coupon. What are the two steps?", "difficulty": "easy", "steps": 2}
|
| 2 |
+
{"id": "math_discount_02", "domain": "math_discount", "problem": "A product costs $50 and is on 10% sale. You also have a $8 coupon. What do you pay? Answer with just the number.", "correct_answer": "37", "explanation": "50 \u00d7 0.9 = 45.0, then 45.0 - 8 = 37.0", "understanding_check": "To solve this, first apply the 10% discount, then subtract the coupon. What are the two steps?", "difficulty": "easy", "steps": 2}
|
| 3 |
+
{"id": "math_discount_03", "domain": "math_discount", "problem": "A product costs $80 and is on 25% sale. You also have a $10 coupon. What do you pay? Answer with just the number.", "correct_answer": "50", "explanation": "80 \u00d7 0.75 = 60.0, then 60.0 - 10 = 50.0", "understanding_check": "To solve this, first apply the 25% discount, then subtract the coupon. What are the two steps?", "difficulty": "easy", "steps": 2}
|
| 4 |
+
{"id": "math_discount_04", "domain": "math_discount", "problem": "A product costs $120 and is on 15% sale. You also have a $12 coupon. What do you pay? Answer with just the number.", "correct_answer": "90", "explanation": "120 \u00d7 0.85 = 102.0, then 102.0 - 12 = 90.0", "understanding_check": "To solve this, first apply the 15% discount, then subtract the coupon. What are the two steps?", "difficulty": "medium", "steps": 2}
|
| 5 |
+
{"id": "math_discount_05", "domain": "math_discount", "problem": "A product costs $200 and is on 30% sale. You also have a $25 coupon. What do you pay? Answer with just the number.", "correct_answer": "115", "explanation": "200 \u00d7 0.7 = 140.0, then 140.0 - 25 = 115.0", "understanding_check": "To solve this, first apply the 30% discount, then subtract the coupon. What are the two steps?", "difficulty": "medium", "steps": 2}
|
| 6 |
+
{"id": "math_discount_06", "domain": "math_discount", "problem": "A product costs $75 and is on 20% sale. You also have a $7 coupon. What do you pay? Answer with just the number.", "correct_answer": "53", "explanation": "75 \u00d7 0.8 = 60.0, then 60.0 - 7 = 53.0", "understanding_check": "To solve this, first apply the 20% discount, then subtract the coupon. What are the two steps?", "difficulty": "easy", "steps": 2}
|
| 7 |
+
{"id": "math_discount_07", "domain": "math_discount", "problem": "A product costs $150 and is on 40% sale. You also have a $20 coupon. What do you pay? Answer with just the number.", "correct_answer": "70", "explanation": "150 \u00d7 0.6 = 90.0, then 90.0 - 20 = 70.0", "understanding_check": "To solve this, first apply the 40% discount, then subtract the coupon. What are the two steps?", "difficulty": "medium", "steps": 2}
|
| 8 |
+
{"id": "math_discount_tax_01", "domain": "math_discount", "problem": "An item costs $100. First apply a 20% discount, then add 10% sales tax. What's the final price? Answer with just the number.", "correct_answer": "88", "explanation": "100 \u00d7 0.8 = 80.0, then 80.0 \u00d7 1.1 = 88.0", "understanding_check": "First apply the discount, then calculate tax on the discounted price. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 9 |
+
{"id": "math_discount_tax_02", "domain": "math_discount", "problem": "An item costs $250. First apply a 15% discount, then add 8% sales tax. What's the final price? Answer with just the number.", "correct_answer": "229.5", "explanation": "250 \u00d7 0.85 = 212.5, then 212.5 \u00d7 1.08 = 229.50000000000003", "understanding_check": "First apply the discount, then calculate tax on the discounted price. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 10 |
+
{"id": "math_discount_tax_03", "domain": "math_discount", "problem": "An item costs $80. First apply a 25% discount, then add 5% sales tax. What's the final price? Answer with just the number.", "correct_answer": "63", "explanation": "80 \u00d7 0.75 = 60.0, then 60.0 \u00d7 1.05 = 63.0", "understanding_check": "First apply the discount, then calculate tax on the discounted price. What are the steps?", "difficulty": "easy", "steps": 2}
|
| 11 |
+
{"id": "math_discount_tax_04", "domain": "math_discount", "problem": "An item costs $500. First apply a 10% discount, then add 7% sales tax. What's the final price? Answer with just the number.", "correct_answer": "481.5", "explanation": "500 \u00d7 0.9 = 450.0, then 450.0 \u00d7 1.07 = 481.5", "understanding_check": "First apply the discount, then calculate tax on the discounted price. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 12 |
+
{"id": "math_discount_tax_05", "domain": "math_discount", "problem": "An item costs $160. First apply a 20% discount, then add 6% sales tax. What's the final price? Answer with just the number.", "correct_answer": "135.68", "explanation": "160 \u00d7 0.8 = 128.0, then 128.0 \u00d7 1.06 = 135.68", "understanding_check": "First apply the discount, then calculate tax on the discounted price. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 13 |
+
{"id": "math_bogo_01", "domain": "math_discount", "problem": "Shirts cost $40 each. Buy one, get 50% off the second. What's the total for 2 shirts? Answer with just the number.", "correct_answer": "60", "explanation": "First shirt: 40, Second shirt: 40 \u00d7 0.5 = 20.0, Total: 60.0", "understanding_check": "First shirt is full price, second shirt gets 50% off. How do you calculate the total?", "difficulty": "medium", "steps": 2}
|
| 14 |
+
{"id": "math_bogo_02", "domain": "math_discount", "problem": "Shirts cost $25 each. Buy one, get 25% off the second. What's the total for 2 shirts? Answer with just the number.", "correct_answer": "43.75", "explanation": "First shirt: 25, Second shirt: 25 \u00d7 0.75 = 18.75, Total: 43.75", "understanding_check": "First shirt is full price, second shirt gets 25% off. How do you calculate the total?", "difficulty": "easy", "steps": 2}
|
| 15 |
+
{"id": "math_bogo_03", "domain": "math_discount", "problem": "Shirts cost $60 each. Buy one, get 40% off the second. What's the total for 2 shirts? Answer with just the number.", "correct_answer": "96", "explanation": "First shirt: 60, Second shirt: 60 \u00d7 0.6 = 36.0, Total: 96.0", "understanding_check": "First shirt is full price, second shirt gets 40% off. How do you calculate the total?", "difficulty": "medium", "steps": 2}
|
| 16 |
+
{"id": "time_duration_01", "domain": "time", "problem": "A meeting starts at 2:30 PM and lasts 1 hour 45 minutes. Then there's a 30 minute break. What time does the next session start? Answer with just the time.", "correct_answer": "4:45 PM", "explanation": "Add 105 minutes to 2:30 PM, then add 30 minutes", "understanding_check": "Add the meeting duration first, then add the break time. What are the steps?", "difficulty": "easy", "steps": 2}
|
| 17 |
+
{"id": "time_duration_02", "domain": "time", "problem": "A meeting starts at 9:15 AM and lasts 2 hours 20 minutes. Then there's a 15 minute break. What time does the next session start? Answer with just the time.", "correct_answer": "11:50 AM", "explanation": "Add 140 minutes to 9:15 AM, then add 15 minutes", "understanding_check": "Add the meeting duration first, then add the break time. What are the steps?", "difficulty": "easy", "steps": 2}
|
| 18 |
+
{"id": "time_duration_03", "domain": "time", "problem": "A meeting starts at 10:00 AM and lasts 1 hour 30 minutes. Then there's a 45 minute break. What time does the next session start? Answer with just the time.", "correct_answer": "12:15 PM", "explanation": "Add 90 minutes to 10:00 AM, then add 45 minutes", "understanding_check": "Add the meeting duration first, then add the break time. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 19 |
+
{"id": "time_duration_04", "domain": "time", "problem": "A meeting starts at 3:45 PM and lasts 1 hour 15 minutes. Then there's a 20 minute break. What time does the next session start? Answer with just the time.", "correct_answer": "5:20 PM", "explanation": "Add 75 minutes to 3:45 PM, then add 20 minutes", "understanding_check": "Add the meeting duration first, then add the break time. What are the steps?", "difficulty": "easy", "steps": 2}
|
| 20 |
+
{"id": "time_duration_05", "domain": "time", "problem": "A meeting starts at 8:30 AM and lasts 3 hours. Then there's a 60 minute break. What time does the next session start? Answer with just the time.", "correct_answer": "1:30 PM", "explanation": "Add 180 minutes to 8:30 AM, then add 60 minutes", "understanding_check": "Add the meeting duration first, then add the break time. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 21 |
+
{"id": "time_duration_06", "domain": "time", "problem": "A meeting starts at 11:15 AM and lasts 0 hours 45 minutes. Then there's a 30 minute break. What time does the next session start? Answer with just the time.", "correct_answer": "12:30 PM", "explanation": "Add 45 minutes to 11:15 AM, then add 30 minutes", "understanding_check": "Add the meeting duration first, then add the break time. What are the steps?", "difficulty": "easy", "steps": 2}
|
| 22 |
+
{"id": "time_duration_07", "domain": "time", "problem": "A meeting starts at 7:00 PM and lasts 2 hours. Then there's a 15 minute break. What time does the next session start? Answer with just the time.", "correct_answer": "9:15 PM", "explanation": "Add 120 minutes to 7:00 PM, then add 15 minutes", "understanding_check": "Add the meeting duration first, then add the break time. What are the steps?", "difficulty": "easy", "steps": 2}
|
| 23 |
+
{"id": "time_travel_01", "domain": "time", "problem": "A train departs at 9:00 AM. The journey takes 2 hours 30 minutes. After arrival, you wait 20 minutes for a connection. What time do you board the connection? Answer with just the time.", "correct_answer": "11:50 AM", "explanation": "Add 150 minutes travel, then 20 minutes wait", "understanding_check": "Calculate arrival time first, then add wait time. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 24 |
+
{"id": "time_travel_02", "domain": "time", "problem": "A train departs at 2:15 PM. The journey takes 1 hour 15 minutes. After arrival, you wait 10 minutes for a connection. What time do you board the connection? Answer with just the time.", "correct_answer": "3:40 PM", "explanation": "Add 75 minutes travel, then 10 minutes wait", "understanding_check": "Calculate arrival time first, then add wait time. What are the steps?", "difficulty": "easy", "steps": 2}
|
| 25 |
+
{"id": "time_travel_03", "domain": "time", "problem": "A train departs at 6:30 AM. The journey takes 3 hours. After arrival, you wait 30 minutes for a connection. What time do you board the connection? Answer with just the time.", "correct_answer": "10:00 AM", "explanation": "Add 180 minutes travel, then 30 minutes wait", "understanding_check": "Calculate arrival time first, then add wait time. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 26 |
+
{"id": "time_travel_04", "domain": "time", "problem": "A train departs at 4:00 PM. The journey takes 0 hours 45 minutes. After arrival, you wait 15 minutes for a connection. What time do you board the connection? Answer with just the time.", "correct_answer": "5:00 PM", "explanation": "Add 45 minutes travel, then 15 minutes wait", "understanding_check": "Calculate arrival time first, then add wait time. What are the steps?", "difficulty": "easy", "steps": 2}
|
| 27 |
+
{"id": "time_travel_05", "domain": "time", "problem": "A train departs at 7:45 AM. The journey takes 1 hour 35 minutes. After arrival, you wait 25 minutes for a connection. What time do you board the connection? Answer with just the time.", "correct_answer": "9:45 AM", "explanation": "Add 95 minutes travel, then 25 minutes wait", "understanding_check": "Calculate arrival time first, then add wait time. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 28 |
+
{"id": "time_multi_01", "domain": "time", "problem": "You leave home at 8:00 AM. Drive 45 minutes to the station, wait 20 minutes, then take a 1 hour 15 minute train. What time do you arrive? Answer with just the time.", "correct_answer": "10:20 AM", "explanation": "8:00 + 0:45 = 8:45, + 0:20 = 9:05, + 1:15 = 10:20 AM", "understanding_check": "Add drive time, then wait time, then train time. What's the sequence?", "difficulty": "hard", "steps": 3}
|
| 29 |
+
{"id": "recipe_scale_01", "domain": "recipe", "problem": "A recipe for 4 people needs 2 cups of flour. Scale to 6 people, then doubled for a party. How much cups of flour do you need? Answer with just the number.", "correct_answer": "6", "explanation": "2 \u00d7 (6/4) = 3.0, then \u00d7 2 = 6", "understanding_check": "First scale the recipe from 4 to 6 servings, then doubled. What are the steps?", "difficulty": "easy", "steps": 2}
|
| 30 |
+
{"id": "recipe_scale_02", "domain": "recipe", "problem": "A recipe for 8 people needs 3 eggs. Scale to 12 people, then halved for a party. How much eggs do you need? Answer with just the number.", "correct_answer": "2.25", "explanation": "3 \u00d7 (12/8) = 4.5, then \u00d7 0.5 = 2.25", "understanding_check": "First scale the recipe from 8 to 12 servings, then halved. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 31 |
+
{"id": "recipe_scale_03", "domain": "recipe", "problem": "A recipe for 4 people needs 1.5 cups of sugar. Scale to 8 people, then doubled for a party. How much cups of sugar do you need? Answer with just the number.", "correct_answer": "6", "explanation": "1.5 \u00d7 (8/4) = 3.0, then \u00d7 2 = 6", "understanding_check": "First scale the recipe from 4 to 8 servings, then doubled. What are the steps?", "difficulty": "easy", "steps": 2}
|
| 32 |
+
{"id": "recipe_scale_04", "domain": "recipe", "problem": "A recipe for 6 people needs 4 tablespoons butter. Scale to 9 people, then halved for a party. How much tablespoons butter do you need? Answer with just the number.", "correct_answer": "3", "explanation": "4 \u00d7 (9/6) = 6.0, then \u00d7 0.5 = 3", "understanding_check": "First scale the recipe from 6 to 9 servings, then halved. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 33 |
+
{"id": "recipe_scale_05", "domain": "recipe", "problem": "A recipe for 5 people needs 2 cups of milk. Scale to 10 people, then multiplied by 1.5 for a party. How much cups of milk do you need? Answer with just the number.", "correct_answer": "6", "explanation": "2 \u00d7 (10/5) = 4.0, then \u00d7 1.5 = 6", "understanding_check": "First scale the recipe from 5 to 10 servings, then multiplied by 1.5. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 34 |
+
{"id": "recipe_convert_01", "domain": "recipe", "problem": "A recipe needs 2 cups of milk (1 cup = 240ml). Convert to ml, then reduce by 25% for a lighter version. How many ml? Answer with just the number.", "correct_answer": "360", "explanation": "2 \u00d7 240 = 480ml, then 480 \u00d7 0.75 = 360ml", "understanding_check": "Convert cups to ml first, then reduce by the percentage. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 35 |
+
{"id": "recipe_convert_02", "domain": "recipe", "problem": "A recipe uses 500g of flour. Convert to pounds (1 pound = 454g), then triple for a large batch. How many pounds? Answer with just the number rounded to one decimal.", "correct_answer": "3.3", "explanation": "500 / 454 = 1.1 pounds, then 1.1 \u00d7 3 = 3.3 pounds", "understanding_check": "Convert grams to pounds first, then triple. What are the steps?", "difficulty": "medium", "steps": 2}
|
| 36 |
+
{"id": "financial_compound_01", "domain": "financial", "problem": "You invest $1000 at 10% annual interest for 2 years (compounded yearly). Then you pay 20% tax on the gains only. What's your final amount? Answer with just the number.", "correct_answer": "1168", "explanation": "1000 \u00d7 (1.10)^2 = 1210.00, gains = 210.00, tax = 42.00, final = 1168.00", "understanding_check": "Calculate compound interest first, then calculate tax only on the gains. What are the steps?", "difficulty": "medium", "steps": 3}
|
| 37 |
+
{"id": "financial_compound_02", "domain": "financial", "problem": "You invest $5000 at 5% annual interest for 3 years (compounded yearly). Then you pay 15% tax on the gains only. What's your final amount? Answer with just the number.", "correct_answer": "5669.91", "explanation": "5000 \u00d7 (1.05)^3 = 5788.13, gains = 788.13, tax = 118.22, final = 5669.91", "understanding_check": "Calculate compound interest first, then calculate tax only on the gains. What are the steps?", "difficulty": "hard", "steps": 3}
|
| 38 |
+
{"id": "financial_compound_03", "domain": "financial", "problem": "You invest $2000 at 8% annual interest for 2 years (compounded yearly). Then you pay 25% tax on the gains only. What's your final amount? Answer with just the number.", "correct_answer": "2249.6", "explanation": "2000 \u00d7 (1.08)^2 = 2332.80, gains = 332.80, tax = 83.20, final = 2249.60", "understanding_check": "Calculate compound interest first, then calculate tax only on the gains. What are the steps?", "difficulty": "medium", "steps": 3}
|
| 39 |
+
{"id": "financial_compound_04", "domain": "financial", "problem": "You invest $500 at 12% annual interest for 2 years (compounded yearly). Then you pay 10% tax on the gains only. What's your final amount? Answer with just the number.", "correct_answer": "614.48", "explanation": "500 \u00d7 (1.12)^2 = 627.20, gains = 127.20, tax = 12.72, final = 614.48", "understanding_check": "Calculate compound interest first, then calculate tax only on the gains. What are the steps?", "difficulty": "medium", "steps": 3}
{"id": "financial_markup_01", "domain": "financial", "problem": "A $500 item has 25% markup, then 10% member discount. What does a member pay? Answer with just the number.", "correct_answer": "562.5", "explanation": "500 \u00d7 1.25 = 625.0, then \u00d7 0.9 = 562.5", "understanding_check": "Apply markup first (increase), then discount (decrease). What are the steps?", "difficulty": "easy", "steps": 2}
{"id": "financial_markup_02", "domain": "financial", "problem": "A $200 item has 50% markup, then 20% member discount. What does a member pay? Answer with just the number.", "correct_answer": "240", "explanation": "200 \u00d7 1.5 = 300.0, then \u00d7 0.8 = 240.0", "understanding_check": "Apply markup first (increase), then discount (decrease). What are the steps?", "difficulty": "easy", "steps": 2}
{"id": "financial_markup_03", "domain": "financial", "problem": "A $800 item has 20% markup, then 15% member discount. What does a member pay? Answer with just the number.", "correct_answer": "816", "explanation": "800 \u00d7 1.2 = 960.0, then \u00d7 0.85 = 816.0", "understanding_check": "Apply markup first (increase), then discount (decrease). What are the steps?", "difficulty": "medium", "steps": 2}
{"id": "financial_markup_04", "domain": "financial", "problem": "A $150 item has 40% markup, then 25% member discount. What does a member pay? Answer with just the number.", "correct_answer": "157.5", "explanation": "150 \u00d7 1.4 = 210.0, then \u00d7 0.75 = 157.5", "understanding_check": "Apply markup first (increase), then discount (decrease). What are the steps?", "difficulty": "medium", "steps": 2}
{"id": "financial_markup_05", "domain": "financial", "problem": "A $1000 item has 30% markup, then 10% member discount. What does a member pay? Answer with just the number.", "correct_answer": "1170", "explanation": "1000 \u00d7 1.3 = 1300.0, then \u00d7 0.9 = 1170.0", "understanding_check": "Apply markup first (increase), then discount (decrease). What are the steps?", "difficulty": "medium", "steps": 2}
{"id": "financial_commission_01", "domain": "financial", "problem": "A salesperson earns 5% on the first $10,000 of sales and 8% on anything above. They sold $15,000. What's their commission? Answer with just the number.", "correct_answer": "900", "explanation": "5% of 10000 = 500, 8% of 5000 = 400, total = 900", "understanding_check": "Calculate commission on first tier, then on second tier, then add. What are the steps?", "difficulty": "hard", "steps": 3}
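The compound-interest and tiered-commission answers above can be spot-checked with a few lines of standalone arithmetic (a sketch recomputing two of the expected answers; variable names are illustrative):

```python
# financial_compound_02: $5000 at 5% for 3 years, then 15% tax on gains only.
principal, rate, years, tax = 5000, 0.05, 3, 0.15
gross = principal * (1 + rate) ** years      # 5788.125
gains = gross - principal                    # 788.125
final = gross - gains * tax
print(round(final, 2))                       # 5669.91

# financial_commission_01: 5% on the first $10,000, 8% above; $15,000 sold.
sales, tier = 15000, 10000
commission = 0.05 * tier + 0.08 * (sales - tier)
print(round(commission))                     # 900
```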
{"id": "unit_convert_01", "domain": "units", "problem": "Convert 10 miles to kilometers (1 mile = 1.6 km), add 5 km, then convert back to miles. How many miles? Answer with just the number.", "correct_answer": "13.125", "explanation": "10 \u00d7 1.6 = 16 km, 16 + 5 = 21 km, 21 \u00f7 1.6 = 13.125 miles", "understanding_check": "Convert to km, add, then convert back. What are the three steps?", "difficulty": "medium", "steps": 3}
{"id": "unit_convert_02", "domain": "units", "problem": "Convert 100\u00b0F to Celsius (C = (F-32) \u00d7 5/9), subtract 10\u00b0C, then convert back to Fahrenheit. What's the temperature in \u00b0F? Answer with just the number.", "correct_answer": "82", "explanation": "(100-32) \u00d7 5/9 = 37.78\u00b0C, 37.78 - 10 = 27.78\u00b0C, 27.78 \u00d7 9/5 + 32 = 82\u00b0F", "understanding_check": "Convert F to C, subtract, then convert back. What are the steps?", "difficulty": "hard", "steps": 3}
{"id": "unit_volume_01", "domain": "units", "problem": "You have 2 liters of water. Add 500ml, then pour out 1/4 of the total. How many ml remain? Answer with just the number.", "correct_answer": "1875", "explanation": "2000 + 500 = 2500ml, then 2500 \u00d7 0.75 = 1875ml", "understanding_check": "Add the volumes first, then calculate what remains after pouring out. What are the steps?", "difficulty": "easy", "steps": 2}
{"id": "unit_volume_02", "domain": "units", "problem": "A tank holds 50 gallons. Drain 20%, then add 8 gallons. How many gallons now? Answer with just the number.", "correct_answer": "48", "explanation": "50 \u00d7 0.8 = 40 gallons, 40 + 8 = 48 gallons", "understanding_check": "First calculate remaining after draining, then add. What are the steps?", "difficulty": "easy", "steps": 2}
{"id": "unit_volume_03", "domain": "units", "problem": "A pool holds 10,000 liters. Fill it to 75%, then drain 500 liters. How many liters remain? Answer with just the number.", "correct_answer": "7000", "explanation": "10000 \u00d7 0.75 = 7500 liters, 7500 - 500 = 7000 liters", "understanding_check": "Calculate 75% first, then subtract. What are the steps?", "difficulty": "easy", "steps": 2}
{"id": "unit_speed_01", "domain": "units", "problem": "Drive 60 miles at 30 mph, then 40 miles at 40 mph. What's the total travel time in hours? Answer with just the number.", "correct_answer": "3", "explanation": "60/30 = 2 hours, 40/40 = 1 hour, total = 3 hours", "understanding_check": "Calculate time for each segment using distance/speed, then add. What are the steps?", "difficulty": "medium", "steps": 2}
{"id": "unit_speed_02", "domain": "units", "problem": "A car travels 120 km in 1.5 hours, then 80 km in 1 hour. What's the average speed for the entire trip in km/h? Answer with just the number.", "correct_answer": "80", "explanation": "Total distance = 200 km, total time = 2.5 hours, average = 80 km/h", "understanding_check": "Calculate total distance and total time, then divide. What are the steps?", "difficulty": "medium", "steps": 2}
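The round-trip unit conversions above can be verified the same way (a standalone sketch recomputing two expected answers):

```python
# unit_convert_02: 100 °F → °C, subtract 10 °C, convert back to °F.
c = (100 - 32) * 5 / 9          # 37.78 °C
c -= 10
print(round(c * 9 / 5 + 32))    # 82

# unit_convert_01: 10 miles → km, add 5 km, back to miles (1 mi = 1.6 km).
km = 10 * 1.6 + 5
print(round(km / 1.6, 3))       # 13.125
```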
{"id": "schedule_01", "domain": "scheduling", "problem": "Task A takes 2 hours. Task B takes 3 hours and must start after A finishes. Task C takes 1 hour and runs parallel to B. Starting at 9 AM, when do all tasks finish? Answer with just the time.", "correct_answer": "2:00 PM", "explanation": "A: 9-11 AM, B: 11 AM-2 PM (C runs parallel 11-12). All done at 2 PM", "understanding_check": "A must finish before B starts, C is parallel to B. What determines the end time?", "difficulty": "medium", "steps": 2}
{"id": "schedule_02", "domain": "scheduling", "problem": "Process X takes 45 minutes. Process Y takes 30 minutes and needs X's output. Process Z takes 20 minutes and needs Y's output. Total time from start to finish? Answer in minutes.", "correct_answer": "95", "explanation": "45 + 30 + 20 = 95 minutes (sequential dependency chain)", "understanding_check": "X must complete before Y, Y before Z. They're sequential. What's the total?", "difficulty": "easy", "steps": 3}
{"id": "schedule_03", "domain": "scheduling", "problem": "Download takes 10 minutes. Install takes 15 minutes (after download). Configuration takes 5 minutes (after install). Testing takes 20 minutes (after config). Total time? Answer in minutes.", "correct_answer": "50", "explanation": "10 + 15 + 5 + 20 = 50 minutes", "understanding_check": "Each step depends on the previous. How do you calculate total time?", "difficulty": "easy", "steps": 4}
{"id": "schedule_04", "domain": "scheduling", "problem": "Path 1: Tasks A(2h) then B(3h). Path 2: Task C(4h). Both paths must complete. Starting at 10 AM, when is everything done? Answer with just the time.", "correct_answer": "3:00 PM", "explanation": "Path 1: 2+3=5 hours. Path 2: 4 hours. Critical path is 5 hours. 10 AM + 5h = 3 PM", "understanding_check": "Find the longest path (critical path). That determines when everything finishes.", "difficulty": "medium", "steps": 2}
{"id": "schedule_05", "domain": "scheduling", "problem": "Team A: 3 tasks of 20 mins each (sequential). Team B: 2 tasks of 25 mins each (sequential). Both teams work in parallel. When do both finish? Answer in minutes from start.", "correct_answer": "60", "explanation": "Team A: 60 mins. Team B: 50 mins. Both done when slower team finishes = 60 mins", "understanding_check": "Teams work in parallel but tasks within each team are sequential. What's the critical path?", "difficulty": "medium", "steps": 2}
{"id": "schedule_06", "domain": "scheduling", "problem": "Worker A completes a job in 6 hours. Worker B completes it in 4 hours. Working together, how long to complete one job? Answer in hours as a decimal.", "correct_answer": "2.4", "explanation": "Rate A = 1/6, Rate B = 1/4. Combined = 1/6 + 1/4 = 5/12. Time = 12/5 = 2.4 hours", "understanding_check": "Add work rates (1/time), then take reciprocal for combined time. What are the steps?", "difficulty": "hard", "steps": 3}
{"id": "schedule_07", "domain": "scheduling", "problem": "A printer prints 30 pages/min. Another prints 20 pages/min. How long to print 250 pages together? Answer in minutes.", "correct_answer": "5", "explanation": "Combined rate = 50 pages/min. 250 \u00f7 50 = 5 minutes", "understanding_check": "Add the rates together, then divide total pages by combined rate. What are the steps?", "difficulty": "easy", "steps": 2}
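The combined-rate problems above (add the work rates, then take the reciprocal or divide) can be sketched in a few lines:

```python
# schedule_06: Worker A does a job in 6 h, Worker B in 4 h.
rate_a, rate_b = 1 / 6, 1 / 4           # jobs per hour
print(round(1 / (rate_a + rate_b), 1))  # 2.4 hours working together

# schedule_07: printers at 30 and 20 pages/min, 250 pages total.
print(250 / (30 + 20))                  # 5.0 minutes
```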
{"id": "logic_order_01", "domain": "logic", "problem": "In a race: Alice finishes before Bob. Carol finishes after Bob but before Dave. Eve finishes between Alice and Bob. List the finish order from first to last, separated by commas.", "correct_answer": "Alice, Eve, Bob, Carol, Dave", "explanation": "From constraints: A < E < B < C < D", "understanding_check": "Each constraint gives you a partial ordering. Combine them to get the full order.", "difficulty": "medium", "steps": 4}
{"id": "logic_order_02", "domain": "logic", "problem": "Five books on a shelf from left to right: Red is left of Blue. Green is right of Blue. Yellow is left of Red. Orange is between Blue and Green. What's the order left to right?", "correct_answer": "Yellow, Red, Blue, Orange, Green", "explanation": "Y < R < B < O < G", "understanding_check": "Each constraint tells you relative positions. Build the sequence step by step.", "difficulty": "medium", "steps": 4}
{"id": "logic_modus_01", "domain": "logic", "problem": "If it rains, the ground is wet. If the ground is wet, the game is cancelled. It rained. Is the game cancelled? Answer yes or no.", "correct_answer": "yes", "explanation": "Rain \u2192 Wet \u2192 Cancelled. Rain is true, so Cancelled is true.", "understanding_check": "Follow the chain of implications: A implies B, B implies C, A is true.", "difficulty": "easy", "steps": 2}
{"id": "logic_modus_02", "domain": "logic", "problem": "If the battery is dead, the car won't start. If the car won't start, I'll be late. If I'm late, I'll miss the meeting. The battery is dead. Will I miss the meeting? Answer yes or no.", "correct_answer": "yes", "explanation": "Dead battery \u2192 No start \u2192 Late \u2192 Miss meeting", "understanding_check": "Follow the implication chain from the given fact to the conclusion.", "difficulty": "easy", "steps": 3}
{"id": "logic_modus_03", "domain": "logic", "problem": "All programmers know logic. All logicians are good at puzzles. Sam is a programmer. Is Sam good at puzzles? Answer yes, no, or cannot determine.", "correct_answer": "cannot determine", "explanation": "Sam is programmer \u2192 knows logic. But knowing logic \u2260 being a logician.", "understanding_check": "Check if the chain of implications is complete. Is there a gap?", "difficulty": "hard", "steps": 2}
{"id": "logic_sets_01", "domain": "logic", "problem": "30 students take Math. 25 take Science. 10 take both. How many take at least one subject? Answer with just the number.", "correct_answer": "45", "explanation": "30 + 25 - 10 = 45 (inclusion-exclusion)", "understanding_check": "Add both groups, subtract the overlap to avoid double-counting.", "difficulty": "easy", "steps": 2}
{"id": "logic_sets_02", "domain": "logic", "problem": "In a group of 50 people: 35 speak English, 30 speak Spanish, and 20 speak both. How many speak neither? Answer with just the number.", "correct_answer": "5", "explanation": "Either language: 35 + 30 - 20 = 45. Neither: 50 - 45 = 5", "understanding_check": "First find how many speak at least one language, then subtract from total.", "difficulty": "medium", "steps": 3}
{"id": "logic_sets_03", "domain": "logic", "problem": "100 people surveyed about pets: 60 have dogs, 40 have cats, 15 have both, 25 have fish only. How many have no pets? Answer with just the number.", "correct_answer": "10", "explanation": "Dogs or cats: 60 + 40 - 15 = 85. Fish only adds 25 but we need just no pets. 85 + 25 = 110 > 100, so fish must overlap. Actually: 100 - (60+40-15) - 25 + overlap = need to recalc...", "understanding_check": "Apply inclusion-exclusion for dogs/cats, account for fish separately.", "difficulty": "hard", "steps": 3}
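The inclusion-exclusion items above reduce to one subtraction each; a standalone sketch of the first two expected answers:

```python
# logic_sets_01: 30 take Math, 25 take Science, 10 take both.
print(30 + 25 - 10)           # 45 take at least one subject

# logic_sets_02: 50 people, 35 speak English, 30 Spanish, 20 both.
at_least_one = 35 + 30 - 20   # 45
print(50 - at_least_one)      # 5 speak neither
```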
evaluate.py
ADDED
@@ -0,0 +1,471 @@
#!/usr/bin/env python3
"""
Goodhart Gap Benchmark Evaluation Script

Evaluate any model on the Goodhart Gap benchmark to detect the gap
between understanding and execution in multi-step reasoning.

Usage:
    # Using OpenAI API
    python evaluate.py --provider openai --model gpt-4o

    # Using Anthropic API
    python evaluate.py --provider anthropic --model claude-3-5-haiku-latest

    # Using local Ollama
    python evaluate.py --provider ollama --model llama3.1:8b

    # Custom OpenAI-compatible API endpoint
    python evaluate.py --provider custom --model mymodel --api-url http://localhost:8000/v1

Environment Variables:
    OPENAI_API_KEY    - Required for the OpenAI provider
    ANTHROPIC_API_KEY - Required for the Anthropic provider
"""

import argparse
import json
import os
import re
import sys
import time
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Optional dependency: availability is checked before use in main()
try:
    import requests
    HAS_REQUESTS = True
except ImportError:
    HAS_REQUESTS = False
@dataclass
class TestResult:
    id: str
    domain: str
    problem: str
    expected: str
    response: str
    extracted_answer: str
    passed: bool
    latency_ms: float
def extract_answer(response: str, expected: str) -> str:
    """Extract the answer from a model response."""
    response = response.strip()

    # Try to find numbers in the response
    numbers = re.findall(r'-?[\d,]+\.?\d*', response)

    # For yes/no/cannot-determine questions
    if expected.lower() in ['yes', 'no', 'cannot determine']:
        resp_lower = response.lower()
        if 'cannot determine' in resp_lower or 'cannot be determined' in resp_lower:
            return 'cannot determine'
        if 'yes' in resp_lower and 'no' not in resp_lower.split()[:3]:
            return 'yes'
        if 'no' in resp_lower and 'yes' not in resp_lower.split()[:3]:
            return 'no'

    # For time answers
    time_match = re.search(r'(\d{1,2}:\d{2})\s*(AM|PM|am|pm)?', response)
    if time_match:
        time_str = time_match.group(1)
        period = time_match.group(2) or ''
        return f"{time_str} {period}".strip()

    # For ordering questions (comma-separated names)
    if ',' in expected and any(c.isalpha() for c in expected):
        parts = [p.strip() for p in response.split(',') if p.strip()]
        if len(parts) >= 3:
            return ', '.join(parts[:5])

    # Return the first number found
    if numbers:
        return numbers[0].replace(',', '')

    # Fall back to the first line, truncated to 50 characters
    first_line = response.split('\n')[0]
    return first_line[:50]
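As a quick illustration of the number extraction above, a standalone sketch exercising the same regex on sample responses (the `NUM` constant is introduced here for illustration; the function inlines the pattern):

```python
import re

# Same number pattern as in extract_answer.
NUM = r'-?[\d,]+\.?\d*'

for text in ["The final amount is $1,168.00.", "Answer: 562.5"]:
    # Take the first match and strip thousands separators.
    first = re.findall(NUM, text)[0].replace(',', '')
    print(first)
```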
def validate_answer(response: str, expected: str, domain: str) -> bool:
    """Check whether the response matches the expected answer."""
    response = response.lower().strip()
    expected = expected.lower().strip()

    # Direct substring match
    if expected in response:
        return True

    # Numeric comparison
    expected_nums = re.findall(r'-?[\d,]+\.?\d*', expected)
    response_nums = re.findall(r'-?[\d,]+\.?\d*', response)

    if expected_nums and response_nums:
        try:
            exp_val = float(expected_nums[0].replace(',', ''))
            for resp_num in response_nums:
                resp_val = float(resp_num.replace(',', ''))
                # Allow a small absolute floating-point tolerance
                if abs(exp_val - resp_val) < 0.01:
                    return True
                # Allow up to 0.5% relative error (for rounding)
                if exp_val != 0 and abs(exp_val - resp_val) / abs(exp_val) < 0.005:
                    return True
        except ValueError:
            pass

    # Time validation
    if domain == 'time':
        # Normalize time formats by lowercasing and dropping spaces
        def normalize_time(t):
            return t.lower().replace(' ', '')

        if normalize_time(expected) in normalize_time(response):
            return True

    # Yes/no validation
    if expected in ['yes', 'no', 'cannot determine']:
        if expected == 'yes' and 'yes' in response and 'no' not in response.split()[:5]:
            return True
        if expected == 'no' and 'no' in response and 'yes' not in response.split()[:5]:
            return True
        if expected == 'cannot determine' and ('cannot' in response or 'unable' in response):
            return True

    # Ordering validation (items must appear in the expected sequence)
    if ',' in expected and domain == 'logic':
        expected_items = [x.strip().lower() for x in expected.split(',')]
        response_lower = response.lower()
        positions = []
        for item in expected_items:
            pos = response_lower.find(item)
            if pos == -1:
                return False
            positions.append(pos)
        return positions == sorted(positions)

    return False
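The numeric tolerances above (absolute 0.01, then 0.5% relative) can be exercised in isolation; `close_enough` below is a hypothetical helper that mirrors only the comparison logic, not the full function:

```python
def close_enough(exp_val: float, resp_val: float) -> bool:
    # Absolute tolerance of 0.01, as in validate_answer.
    if abs(exp_val - resp_val) < 0.01:
        return True
    # Otherwise fall back to a 0.5% relative tolerance.
    return exp_val != 0 and abs(exp_val - resp_val) / abs(exp_val) < 0.005

print(close_enough(1168.0, 1168.0))   # True
print(close_enough(5669.91, 5669.9))  # True (within 0.5%)
print(close_enough(45.0, 44.0))       # False
```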
class ModelProvider:
    """Base class for model providers."""

    def generate(self, prompt: str) -> tuple[str, float]:
        """Generate a response. Returns (response, latency_ms)."""
        raise NotImplementedError

class OpenAIProvider(ModelProvider):
    def __init__(self, model: str, api_key: Optional[str] = None):
        self.model = model
        self.api_key = api_key or os.environ.get('OPENAI_API_KEY')
        if not self.api_key:
            raise ValueError("OPENAI_API_KEY not set")

    def generate(self, prompt: str) -> tuple[str, float]:
        start = time.time()
        headers = {
            "Authorization": f"Bearer {self.api_key}",
            "Content-Type": "application/json"
        }
        payload = {
            "model": self.model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.1,
            "max_tokens": 200
        }
        response = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers=headers, json=payload, timeout=60
        )
        latency = (time.time() - start) * 1000

        if response.status_code == 200:
            return response.json()["choices"][0]["message"]["content"].strip(), latency
        else:
            return f"ERROR: {response.status_code}", latency

class AnthropicProvider(ModelProvider):
    def __init__(self, model: str, api_key: Optional[str] = None):
        self.model = model
        self.api_key = api_key or os.environ.get('ANTHROPIC_API_KEY')
        if not self.api_key:
            raise ValueError("ANTHROPIC_API_KEY not set")

    def generate(self, prompt: str) -> tuple[str, float]:
        start = time.time()
        headers = {
            "x-api-key": self.api_key,
            "anthropic-version": "2023-06-01",
            "Content-Type": "application/json"
        }
        payload = {
            "model": self.model,
            "max_tokens": 200,
            "messages": [{"role": "user", "content": prompt}]
        }
        response = requests.post(
            "https://api.anthropic.com/v1/messages",
            headers=headers, json=payload, timeout=60
        )
        latency = (time.time() - start) * 1000

        if response.status_code == 200:
            return response.json()["content"][0]["text"].strip(), latency
        else:
            return f"ERROR: {response.status_code}", latency

class OllamaProvider(ModelProvider):
    def __init__(self, model: str, host: str = "http://localhost:11434"):
        self.model = model
        self.host = host

    def generate(self, prompt: str) -> tuple[str, float]:
        start = time.time()
        payload = {
            "model": self.model,
            "prompt": prompt,
            "stream": False,
            "options": {"temperature": 0.1}
        }
        response = requests.post(
            f"{self.host}/api/generate",
            json=payload, timeout=120
        )
        latency = (time.time() - start) * 1000

        if response.status_code == 200:
            return response.json().get("response", "").strip(), latency
        else:
            return f"ERROR: {response.status_code}", latency

class CustomProvider(ModelProvider):
    def __init__(self, model: str, api_url: str):
        self.model = model
        self.api_url = api_url

    def generate(self, prompt: str) -> tuple[str, float]:
        start = time.time()
        # Assume an OpenAI-compatible API
        payload = {
            "model": self.model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.1,
            "max_tokens": 200
        }
        response = requests.post(
            f"{self.api_url}/chat/completions",
            json=payload, timeout=120
        )
        latency = (time.time() - start) * 1000

        if response.status_code == 200:
            return response.json()["choices"][0]["message"]["content"].strip(), latency
        else:
            return f"ERROR: {response.status_code}", latency
def load_dataset(path: str = "data/test.jsonl") -> list[dict]:
    """Load the benchmark dataset."""
    problems = []
    with open(path) as f:
        for line in f:
            problems.append(json.loads(line))
    return problems

def evaluate_model(
    provider: ModelProvider,
    problems: list[dict],
    verbose: bool = False
) -> tuple[list[TestResult], dict]:
    """Evaluate a model on the benchmark."""

    results = []
    domain_stats = {}

    for i, problem in enumerate(problems):
        if verbose:
            print(f"[{i+1}/{len(problems)}] {problem['id']}...", end=" ", flush=True)

        response, latency = provider.generate(problem['problem'])
        extracted = extract_answer(response, problem['correct_answer'])
        passed = validate_answer(response, problem['correct_answer'], problem['domain'])

        result = TestResult(
            id=problem['id'],
            domain=problem['domain'],
            problem=problem['problem'],
            expected=problem['correct_answer'],
            response=response[:200],
            extracted_answer=extracted,
            passed=passed,
            latency_ms=latency
        )
        results.append(result)

        # Track domain stats
        domain = problem['domain']
        if domain not in domain_stats:
            domain_stats[domain] = {'pass': 0, 'fail': 0}
        domain_stats[domain]['pass' if passed else 'fail'] += 1

        if verbose:
            status = "PASS" if passed else "FAIL"
            print(f"{status} (got: {extracted[:20]})")

    # Calculate summary
    total_pass = sum(r.passed for r in results)
    total = len(results)

    summary = {
        'total': total,
        'passed': total_pass,
        'failed': total - total_pass,
        'pass_rate': total_pass / total if total > 0 else 0,
        'by_domain': {
            d: {
                'passed': s['pass'],
                'total': s['pass'] + s['fail'],
                'pass_rate': s['pass'] / (s['pass'] + s['fail'])
            }
            for d, s in domain_stats.items()
        },
        'avg_latency_ms': sum(r.latency_ms for r in results) / len(results) if results else 0
    }

    return results, summary

def save_results(
    results: list[TestResult],
    summary: dict,
    model_name: str,
    output_dir: str = "results"
):
    """Save evaluation results."""
    os.makedirs(output_dir, exist_ok=True)

    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    safe_model = re.sub(r'[^\w\-]', '_', model_name)

    # Save detailed results
    results_file = f"{output_dir}/{safe_model}_{timestamp}_results.jsonl"
    with open(results_file, 'w') as f:
        for r in results:
            f.write(json.dumps({
                'id': r.id,
                'domain': r.domain,
                'expected': r.expected,
                'response': r.response,
                'extracted': r.extracted_answer,
                'passed': r.passed,
                'latency_ms': r.latency_ms
            }) + '\n')

    # Save summary
    summary_file = f"{output_dir}/{safe_model}_{timestamp}_summary.json"
    summary['model'] = model_name
    summary['timestamp'] = timestamp
    with open(summary_file, 'w') as f:
        json.dump(summary, f, indent=2)

    return results_file, summary_file

def print_summary(summary: dict, model_name: str):
    """Print evaluation summary."""
    print("\n" + "=" * 60)
    print("GOODHART GAP BENCHMARK RESULTS")
    print(f"Model: {model_name}")
    print("=" * 60)

    print(f"\nOverall: {summary['passed']}/{summary['total']} ({summary['pass_rate']*100:.1f}%)")
    print(f"Average latency: {summary['avg_latency_ms']:.0f}ms")

    print("\nBy Domain:")
    print("-" * 40)
    for domain, stats in sorted(summary['by_domain'].items()):
        bar = "█" * int(stats['pass_rate'] * 10) + "░" * (10 - int(stats['pass_rate'] * 10))
        print(f"  {domain:<15} {stats['passed']:>2}/{stats['total']:<2} {bar} {stats['pass_rate']*100:>5.1f}%")

    print("\n" + "=" * 60)

    # Interpret results
    pass_rate = summary['pass_rate']
    if pass_rate >= 0.9:
        print("Assessment: LOW GOODHART GAP - Model executes well")
    elif pass_rate >= 0.7:
        print("Assessment: MODERATE GOODHART GAP - Some execution issues")
    elif pass_rate >= 0.5:
        print("Assessment: SIGNIFICANT GOODHART GAP - Frequent execution failures")
    else:
        print("Assessment: SEVERE GOODHART GAP - Major execution problems")

def main():
    parser = argparse.ArgumentParser(
        description="Evaluate a model on the Goodhart Gap Benchmark",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__
    )
    parser.add_argument('--provider', required=True,
                        choices=['openai', 'anthropic', 'ollama', 'custom'],
                        help='Model provider')
    parser.add_argument('--model', required=True,
                        help='Model name/identifier')
    parser.add_argument('--api-url', default=None,
                        help='API URL for custom provider')
    parser.add_argument('--data', default='data/test.jsonl',
                        help='Path to test data')
    parser.add_argument('--output', default='results',
                        help='Output directory')
    parser.add_argument('--verbose', '-v', action='store_true',
                        help='Show progress')
    parser.add_argument('--limit', type=int, default=None,
                        help='Limit number of problems (for testing)')

    args = parser.parse_args()

    if not HAS_REQUESTS:
        print("ERROR: requests library required. Install with: pip install requests")
        sys.exit(1)

    # Create provider
    if args.provider == 'openai':
        provider = OpenAIProvider(args.model)
    elif args.provider == 'anthropic':
        provider = AnthropicProvider(args.model)
    elif args.provider == 'ollama':
        provider = OllamaProvider(args.model)
    elif args.provider == 'custom':
        if not args.api_url:
            print("ERROR: --api-url required for custom provider")
            sys.exit(1)
        provider = CustomProvider(args.model, args.api_url)

    # Load dataset
    print(f"Loading dataset from {args.data}...")
    problems = load_dataset(args.data)
    if args.limit:
        problems = problems[:args.limit]
    print(f"Loaded {len(problems)} problems")

    # Evaluate
    print(f"\nEvaluating {args.model}...")
    results, summary = evaluate_model(provider, problems, verbose=args.verbose)

    # Save and print results
    results_file, summary_file = save_results(results, summary, args.model, args.output)
    print_summary(summary, args.model)
+
|
| 466 |
+
print(f"\nResults saved to:")
|
| 467 |
+
print(f" {results_file}")
|
| 468 |
+
print(f" {summary_file}")
|
| 469 |
+
|
| 470 |
+
if __name__ == "__main__":
|
| 471 |
+
main()
|
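The summary bar and assessment thresholds in `print_summary` can be checked in isolation. The sketch below reproduces that logic as standalone helpers (`render_bar` and `assessment` are illustrative names, not functions defined in `evaluate.py`):

```python
def render_bar(pass_rate: float) -> str:
    # Ten-cell bar, filled proportionally to the pass rate, as in print_summary
    filled = int(pass_rate * 10)
    return "█" * filled + "░" * (10 - filled)

def assessment(pass_rate: float) -> str:
    # Same thresholds as the assessment branch in print_summary
    if pass_rate >= 0.9:
        return "LOW"
    elif pass_rate >= 0.7:
        return "MODERATE"
    elif pass_rate >= 0.5:
        return "SIGNIFICANT"
    return "SEVERE"

print(render_bar(0.75), assessment(0.75))  # → ███████░░░ MODERATE
```

Note the boundary behavior: a pass rate of exactly 0.7 lands in the MODERATE band, since the comparisons use `>=`.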
requirements.txt
ADDED
@@ -0,0 +1 @@
+ requests>=2.28.0