🧩 NumPuzzle-Easy: The 1,500-Step Spatial Reasoning Nightmare
"Simple for Humans, Impossible for Machines."
NumPuzzle-Easy is a high-density benchmarking dataset designed to expose the fundamental limitations of modern Large Language Models (LLMs) in spatial reasoning, sequential state tracking, and recursive simulation. While humans can solve these puzzles with basic clerical accuracy using tools like Excel, even the most advanced "Reasoning" models (such as OpenAI's o1-series, DeepSeek-R1, and Qwen 3.6 Plus) struggle to achieve even a 1% accuracy rate.
🔴 The Core Challenge: Numerical Hell
The task is deceptively simple:
- The Grid: A 9x9 matrix filled with single-digit integers (0-9).
- The Manipulation: 15 consecutive swap operations (moves) from a starting coordinate.
- The Recurrence: After the moves, the system resolves up to 4 recursive "Chain Reactions" (Match-3 and Gravity).
- The Goal: Provide the final state of the top row (Row 1) as a 9-digit string.
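The mechanics above can be sketched as a deterministic simulator. This is an illustrative reconstruction only: the exact match rules, chain-resolution order, move encoding, and how vacated cells are rendered are my assumptions, and the official ground truth comes from the authors' own Python simulation.

```python
def apply_swaps(grid, moves):
    """moves: list of ((r1, c1), (r2, c2)) coordinate pairs (assumed encoding)."""
    for (r1, c1), (r2, c2) in moves:
        grid[r1][c1], grid[r2][c2] = grid[r2][c2], grid[r1][c1]

def find_matches(grid):
    """Return cells belonging to any horizontal/vertical run of 3+ equal digits."""
    n, matched = len(grid), set()
    for r in range(n):
        for c in range(n - 2):
            if grid[r][c] is not None and grid[r][c] == grid[r][c+1] == grid[r][c+2]:
                matched |= {(r, c), (r, c+1), (r, c+2)}
    for c in range(n):
        for r in range(n - 2):
            if grid[r][c] is not None and grid[r][c] == grid[r+1][c] == grid[r+2][c]:
                matched |= {(r, c), (r+1, c), (r+2, c)}
    return matched

def apply_gravity(grid):
    """Let surviving digits fall; vacated top cells become None."""
    n = len(grid)
    for c in range(n):
        col = [grid[r][c] for r in range(n) if grid[r][c] is not None]
        col = [None] * (n - len(col)) + col
        for r in range(n):
            grid[r][c] = col[r]

def simulate(grid, moves, max_chains=4):
    """Apply the swaps, resolve up to 4 chain reactions, return Row 1 as a string."""
    apply_swaps(grid, moves)
    for _ in range(max_chains):
        matched = find_matches(grid)
        if not matched:
            break
        for r, c in matched:
            grid[r][c] = None
        apply_gravity(grid)
    # Rendering empty cells as '0' is an assumption, not the official spec.
    return "".join("0" if v is None else str(v) for v in grid[0])
```

The point of the sketch is to show why the task is trivial for a program yet brutal for an autoregressive model: every step is a tiny, exact state mutation, and all 15 + 4 of them must be tracked without a single slip.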
Why LLMs Fail (The Three Walls)
- The Tokenization Trap: LLMs process numbers as tokens, not as spatial entities. Swapping digits changes the token boundaries, causing the model to lose the physical "anchor" of each coordinate.
- State Tracking Collapse: Each move depends on the previous state. A single 1-pixel error at Step 3 renders the remaining 12 moves and 4 chains purely hallucinatory.
- Recursive Gravity Simulation: Calculating how numbers "fall" into empty spaces requires a consistent internal 2D world model, which autoregressive transformers fundamentally lack.
📊 Dataset Statistics
- Total Samples: 1,500
- Grid Size: 9x9 (81 cells)
- Difficulty: High (Sequential dependency + 4-layer recursion)
- Ground Truth: Generated via deterministic Python simulation (Accuracy: 100%).
📂 Files in this Repository
- `NumPuzzle_Easy_AutoEval.csv`: Optimized for automated evaluation (columns: `prompt`, `target`).
- `numpuzzle_easy_1.5k.csv`: Raw data for custom scripts (columns: `instruction`, `answer`).
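Either file can be loaded with the standard library alone. A minimal sketch (`load_pairs` is a helper name of my own; the column names are as listed above):

```python
import csv

def load_pairs(path, prompt_col="prompt", target_col="target"):
    """Load (prompt, target) pairs from one of the benchmark CSVs.

    Use prompt_col="instruction", target_col="answer" for
    numpuzzle_easy_1.5k.csv. Values are kept as strings so that
    leading zeros in the 9-digit answers survive.
    """
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        missing = {prompt_col, target_col} - set(reader.fieldnames or [])
        if missing:
            raise ValueError(f"missing columns: {missing}")
        return [(row[prompt_col], row[target_col]) for row in reader]
```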
🏆 Community Leaderboard & Transparency
I do not solve these puzzles myself. As the creator, my role is to design the maze; it is the AI's role to fail within it.
To ensure the benchmark remains objective and untainted by author bias, we do not maintain an internal leaderboard. Instead, we rely on the community for ground truth reporting.
📢 Submit Your Results
We invite all researchers and AI enthusiasts to post their evaluation logs in the Community Tab.
When posting, please include:
- Model Name: (e.g., GPT-5.5, Qwen 3.6, Gemini, Kimi K, GLM, etc.)
- Sample Size: (How many of the 1,500 problems you tested; if you ran more than 1,500 queries, note whether you repeated the full set or sampled randomly)
- Accuracy: (e.g., 1%)
- Logs: A snippet of the Thinking Process or the final JSON results.
Note on Integrity: Figures posted in the Community tab by verified users will be considered the official record of a model's performance. The "Truth" of this benchmark lies in the collective failures of the world's most powerful AIs.
📋 Evaluation Protocol: The Strict Mandate
📁 File Structure & Usage
This repository contains the following files:
| File Name | Description | Column Mapping |
|---|---|---|
| `NumPuzzle_Easy_AutoEval.csv` | **Recommended for Evaluators.** Specifically formatted for the Hugging Face Model Evaluator, LightEval, and other automated evaluation tools. | `prompt`, `target` |
| `numpuzzle_easy_1.5k.csv` | The raw dataset containing 1,500 unique problems. Best for manual inspection or custom scripts. | `instruction`, `answer` |
| `.gitattributes` | Standard Git LFS configuration to handle CSV files efficiently. | N/A |
🚀 How to Start Benchmarking
1. Quick Automated Eval (The "I'm Busy" Way)
If you want to test your model via Hugging Face Model Evaluator or LightEval, use NumPuzzle_Easy_AutoEval.csv.
- Prompt Column: `prompt`
- Reference Column: `target`
- Metric: Exact Match
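"Exact Match" here means a strict string comparison against `target`. If you need to pull the answer out of free-form model output first, a small normalizer like the following can help (taking the last 9-digit run in the text is my own assumption about answer formatting, not part of the official protocol):

```python
import re

def exact_match(prediction: str, target: str) -> int:
    """Score 1 if the model's final 9-digit answer equals the target, else 0.

    We extract every 9-digit run from the output and compare the last one,
    so chain-of-thought text before the final answer does not break scoring.
    """
    digits = re.findall(r"\d{9}", prediction)
    return int(bool(digits) and digits[-1] == target.strip())
```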
2. Custom Evaluation Script
If you are running your own Python script, you can use either file.
- Sample count: 1,500 (Designed to be completed overnight via most APIs).
- Format: The `prompt`/`instruction` field includes the 9x9 grid, the 15 swap moves, and the rules for gravity/chains.
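A bare-bones evaluation loop might look like this. `query_model` is a hypothetical stand-in for your own API call (OpenAI, Anthropic, a local model, etc.) and is not part of this repository:

```python
import csv

def evaluate(csv_path, query_model, prompt_col="instruction", target_col="answer"):
    """Run a model over the benchmark CSV and return exact-match accuracy.

    query_model: callable taking a prompt string and returning the model's
    raw text output (a placeholder for your own client code).
    """
    correct = total = 0
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            if query_model(row[prompt_col]).strip() == row[target_col].strip():
                correct += 1
    return correct / total if total else 0.0
```

Pass `prompt_col="prompt", target_col="target"` when using `NumPuzzle_Easy_AutoEval.csv` instead of the raw file.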
🧩 Why 1,500 samples?
- Statistical Power: With 1,500 problems, each with a 9-digit answer drawn from up to 10^9 possibilities, any "lucky guess" from an LLM is statistically irrelevant.
- Cost-Efficiency: Small enough to be processed via OpenAI/Anthropic APIs within a few hours (overnight), keeping benchmarking costs reasonable while ensuring total failure for the model.
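To put "statistically irrelevant" in numbers: under the assumption that a model guesses a uniformly random 9-digit string per problem, the chance of getting even one of the 1,500 answers right is vanishingly small.

```python
# Probability that uniform random guessing solves at least one of the
# 1,500 problems, given 10**9 possible 9-digit answer strings each.
p_single = 1 / 10**9
n_samples = 1500
p_at_least_one = 1 - (1 - p_single) ** n_samples
print(f"{p_at_least_one:.2e}")  # about 1.5e-06
```

So any non-trivial accuracy on this benchmark reflects real state tracking, not chance.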
🏁 Call for Evaluation: Test your model on the NumPuzzle-Easy Benchmark!
We are crowdsourcing evaluation results for NumPuzzle-Easy! This benchmark evaluates how accurately an LLM can track a 9x9 grid through 15 swap moves and up to 4 chain reactions, and report the final top row.
Instead of waiting in long queues on centralized leaderboards, you can evaluate your model locally using Hugging Face's official Lighteval tool. By running the command below, the results will be automatically verified and submitted to your model's repository as a Pull Request!
🚀 How to evaluate your model
1. Install Lighteval
```shell
pip install lighteval
```
2. Download the task script
Make sure you have the numpuzzle_task.py script in your working directory.
3. Run the evaluation
Run the following command. Replace [YOUR_MODEL_ID] with the Hugging Face ID of the model you want to test (e.g., meta-llama/Meta-Llama-3-8B-Instruct).
```shell
lighteval accelerate \
  --model_args "pretrained=[YOUR_MODEL_ID]" \
  --custom_tasks "numpuzzle_task.py" \
  --tasks "custom|numpuzzle-easy|0|0" \
  --push_to_hub \
  --results_org "[YOUR_HF_USERNAME]"
```
💡 What happens next?
By using the --push_to_hub flag, Lighteval will automatically generate a JSON/YAML report with your score and push it to the Hub. You can then show off your verified NumPuzzle score on your model card!
We look forward to seeing your results! Let's find out which model is the true master of numbers. 🧩🔢
"Let the GPUs burn, while we watch the logic crumble."
https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=56m/NumpuzzleEasy