---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---
# 🧩 NumPuzzle-Easy: The 1,500-Step Spatial Reasoning Nightmare
**"Simple for Humans, Impossible for Machines."**
NumPuzzle-Easy is a high-density benchmarking dataset designed to expose the fundamental limitations of modern Large Language Models (LLMs) in spatial reasoning, sequential state tracking, and recursive simulation. While humans can solve these puzzles with basic clerical accuracy using tools like Excel, even the most advanced "Reasoning" models—such as OpenAI's o1-series, DeepSeek-R1, and Qwen 3.6 Plus—struggle to achieve even a 1% accuracy rate.
## 🔴 The Core Challenge: Numerical Hell
The task is deceptively simple:
1. **The Grid:** A 9x9 matrix filled with single-digit integers (0-9).
2. **The Manipulation:** 15 consecutive swap operations (moves) from a starting coordinate.
3. **The Recurrence:** After the moves, the system resolves up to 4 recursive "Chain Reactions" (Match-3 and Gravity).
4. **The Goal:** Provide the final state of the top row (Row 1) as a 9-digit string.
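To make these mechanics concrete, here is a minimal Python sketch of one *plausible* simulator. The exact rules (move encoding, what counts as a match, what fills vacated cells) are spelled out in each prompt, so the specifics below — U/D/L/R moves, the anchor following each swap, 3-in-a-row matches, and `0` as the filler digit — are illustrative assumptions, not the reference implementation used to generate the ground truth.
```python
from typing import List, Optional, Tuple

Grid = List[List[Optional[int]]]  # 9x9 grid of single digits; None = cleared cell

def apply_swaps(grid: Grid, start: Tuple[int, int], moves: List[str]) -> None:
    """Apply consecutive swaps from a start cell (assumed U/D/L/R encoding)."""
    deltas = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}
    r, c = start
    for m in moves:
        dr, dc = deltas[m]
        nr, nc = r + dr, c + dc
        grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
        r, c = nr, nc  # assumption: the anchor follows the swapped cell

def clear_matches(grid: Grid) -> bool:
    """Clear every horizontal/vertical run of 3+ equal digits; return True if any."""
    n = len(grid)
    hits = set()
    for r in range(n):
        for c in range(n):
            v = grid[r][c]
            if v is None:
                continue
            if c + 2 < n and grid[r][c + 1] == v and grid[r][c + 2] == v:
                hits |= {(r, c), (r, c + 1), (r, c + 2)}
            if r + 2 < n and grid[r + 1][c] == v and grid[r + 2][c] == v:
                hits |= {(r, c), (r + 1, c), (r + 2, c)}
    for r, c in hits:
        grid[r][c] = None
    return bool(hits)

def apply_gravity(grid: Grid, filler: int = 0) -> None:
    """Digits fall into cleared cells; vacated top cells get `filler` (assumption)."""
    n = len(grid)
    for c in range(n):
        col = [grid[r][c] for r in range(n) if grid[r][c] is not None]
        col = [filler] * (n - len(col)) + col
        for r in range(n):
            grid[r][c] = col[r]

def final_top_row(grid: Grid, start: Tuple[int, int], moves: List[str],
                  max_chains: int = 4) -> str:
    """Run the full pipeline and return Row 1 as a 9-digit string."""
    apply_swaps(grid, start, moves)
    for _ in range(max_chains):          # resolve up to 4 chain reactions
        if not clear_matches(grid):
            break
        apply_gravity(grid)
    return "".join(str(d) for d in grid[0])
```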
### Why LLMs Fail (The Three Walls)
1. **The Tokenization Trap:** LLMs process numbers as tokens, not as spatial entities. Swapping digits changes the token boundaries, causing the model to lose the physical "anchor" of each coordinate.
2. **State Tracking Collapse:** Each move depends on the previous state. A single one-cell error at Step 3 renders the remaining 12 moves and 4 chains purely hallucinatory.
3. **Recursive Gravity Simulation:** Calculating how numbers "fall" into empty spaces requires a consistent internal 2D world model, which autoregressive transformers fundamentally lack.
## 📊 Dataset Statistics
- **Total Samples:** 1,500
- **Grid Size:** 9x9 (81 cells)
- **Difficulty:** High (Sequential dependency + 4-layer recursion)
- **Ground Truth:** Generated via deterministic Python simulation (Accuracy: 100%).
## 📂 Files in this Repository
- `NumPuzzle_Easy_AutoEval.csv`: Optimized for automated evaluation (Columns: `prompt`, `target`).
- `numpuzzle_easy_1.5k.csv`: Raw data for custom scripts (Columns: `instruction`, `answer`).
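For quick inspection, either file can be pulled straight from the Hub with the `datasets` library. A minimal sketch (the repository id is taken from this card's own links):
```python
from datasets import load_dataset

# Load the auto-eval CSV directly from this repository (CSV builder is inferred).
ds = load_dataset("56m/NumpuzzleEasy",
                  data_files="NumPuzzle_Easy_AutoEval.csv",
                  split="train")
print(ds.column_names)   # ['prompt', 'target']
print(ds[0]["target"])   # a 9-digit string
```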
---
## 🏆 Community Leaderboard & Transparency
**I do not solve these puzzles myself.** As the creator, my role is to design the maze; it is the AI's role to fail within it.
To ensure the benchmark remains objective and untainted by author bias, **we do not maintain an internal leaderboard.** Instead, we rely on the community for ground truth reporting.
### 📢 Submit Your Results
We invite all researchers and AI enthusiasts to post their evaluation logs in the **[Community Tab](https://huggingface.co/datasets/56m/NumpuzzleEasy/discussions)**.
**When posting, please include:**
- **Model Name:** (e.g., GPT-5.5, Qwen 3.6, Gemini, Kimi K, GLM, etc.)
- **Sample Size:** (how many of the 1,500 problems you tested; if you ran more than 1,500 queries, state whether you repeated the set or sampled it randomly)
- **Accuracy:** (e.g., 1%)
- **Logs:** A snippet of the Thinking Process or the final JSON results.
> **Note on Integrity:** Figures posted in the Community tab by verified users will be considered the official record of a model's performance. The "Truth" of this benchmark lies in the collective failures of the world's most powerful AIs.
---
## 🛠 Evaluation Protocol: The Strict Mandate
### 📂 File Structure & Usage
This repository contains the following files:
| File Name | Description | Column Mapping |
| :--- | :--- | :--- |
| **`NumPuzzle_Easy_AutoEval.csv`** | **Recommended for Evaluators.** Specifically formatted for Hugging Face Auto-Post, LightEval, and other automated evaluation tools. | `prompt`, `target` |
| **`numpuzzle_easy_1.5k.csv`** | The raw dataset containing 1,500 unique problems. Best for manual inspection or custom scripts. | `instruction`, `answer` |
| **`.gitattributes`** | Standard Git LFS configuration to handle CSV files efficiently. | N/A |
### 🚀 How to Start Benchmarking
#### 1. Quick Automated Eval (The "I'm Busy" Way)
If you want to test your model via **Hugging Face Model Evaluator** or **LightEval**, use `NumPuzzle_Easy_AutoEval.csv`.
- **Prompt Column:** `prompt`
- **Reference Column:** `target`
- **Metric:** `Exact Match`
#### 2. Custom Evaluation Script
If you are running your own Python script, you can use either file.
- **Sample count:** 1,500 (Designed to be completed overnight via most APIs).
- **Format:** The `prompt`/`instruction` includes the 9x9 grid, 15 swap moves, and the rules for gravity/chains.
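As a reference point, a bare-bones scoring loop over the raw file might look like the sketch below. Only the column names come from the file table above; `query_model` and the 9-digit extraction heuristic are hypothetical placeholders you would replace with your own client and parsing.
```python
import csv
import re

def query_model(prompt: str) -> str:
    """Hypothetical placeholder: call your model or API client here."""
    raise NotImplementedError

def extract_answer(text: str) -> str:
    """Loose heuristic: take the last run of exactly 9 digits in the output."""
    runs = re.findall(r"\d{9}", text)
    return runs[-1] if runs else ""

correct, total = 0, 0
with open("numpuzzle_easy_1.5k.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):            # columns: instruction, answer
        prediction = extract_answer(query_model(row["instruction"]))
        correct += int(prediction == row["answer"].strip())
        total += 1

print(f"Exact match: {correct}/{total} = {correct / total:.2%}")
```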
### 🧩 Why 1,500 samples?
1. **Statistical Power:** With 1,500 problems, any "lucky guess" from an LLM is statistically irrelevant.
2. **Cost-Efficiency:** Small enough to be processed via OpenAI/Anthropic APIs within a few hours (overnight), keeping benchmarking costs reasonable while ensuring total failure for the model.
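For intuition on point 1: assuming answers are roughly uniform over 9-digit strings, a blind guess is correct with probability about 10⁻⁹, so across 1,500 problems the expected number of chance hits is on the order of 1.5 × 10⁻⁶. Essentially any non-zero score therefore reflects actual simulation ability rather than luck.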
## 📊 Call for Evaluation: Test your model on the NumPuzzle-Easy Benchmark!
We are crowdsourcing evaluation results for **NumPuzzle-Easy**!
This benchmark evaluates how accurately an LLM can simulate a long sequence of grid manipulations (swaps, Match-3 clears, and gravity) on a 9x9 grid and report the final state of the top row.
Instead of waiting in long queues on centralized leaderboards, you can evaluate your model locally using Hugging Face's official `Lighteval` tool. By running the command below, the results will be automatically verified and submitted to your model's repository as a Pull Request!
### 🚀 How to evaluate your model
**1. Install Lighteval**
```bash
pip install lighteval
```
**2. Download the task script**
Make sure you have the `numpuzzle_task.py` script in your working directory.
**3. Run the evaluation**
Run the following command. Replace `[YOUR_MODEL_ID]` with the Hugging Face ID of the model you want to test (e.g., `meta-llama/Meta-Llama-3-8B-Instruct`).
```bash
lighteval accelerate \
--model_args "pretrained=[YOUR_MODEL_ID]" \
--custom_tasks "numpuzzle_task.py" \
--tasks "custom|numpuzzle-easy|0|0" \
--push_to_hub \
--results_org "[YOUR_HF_USERNAME]"
```
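In the task string `custom|numpuzzle-easy|0|0`, `custom` is the suite under which the task script registers the benchmark, and the numeric fields set the few-shot configuration; `0|0` corresponds to plain zero-shot evaluation.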
### 💡 What happens next?
By using the `--push_to_hub` flag, Lighteval will automatically generate a JSON/YAML report with your score and push it to the Hub. You can then show off your verified NumPuzzle score on your model card!
We look forward to seeing your results! Let's find out which model is the true master of numbers. 🧩🔢
---
"Let the GPUs burn, while we watch the logic crumble."
**Browser-based auto-evaluation:** https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=56m/NumpuzzleEasy