---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---

**Note on Integrity:** Figures posted in the Community tab by verified users will be considered the official record of a model's performance. The "Truth" of this benchmark lies in the collective failures of the world's most powerful AIs.

---

### 🛠 Evaluation Protocol: The Strict Mandate

# 📂 File Structure & Usage

This repository contains the following files:

| File Name | Description | Column Mapping |
| :--- | :--- | :--- |
| **`NumPuzzle_Easy_AutoEval.csv`** | **Recommended for Evaluators.** Specifically formatted for the Hugging Face Model Evaluator, LightEval, and other automated evaluation tools. | `prompt`, `target` |
| **`numpuzzle_easy_1.5k.csv`** | The raw dataset containing 1,500 unique problems. Best for manual inspection or custom scripts. | `instruction`, `answer` |
| **`.gitattributes`** | Standard Git LFS configuration to handle the CSV files efficiently. | N/A |

### 🚀 How to Start Benchmarking

#### 1. Quick Automated Eval (The "I'm Busy" Way)

If you want to test your model via the **Hugging Face Model Evaluator** or **LightEval**, use `NumPuzzle_Easy_AutoEval.csv`.

- **Prompt Column:** `prompt`
- **Reference Column:** `target`
- **Metric:** `Exact Match`

#### 2. Custom Evaluation Script

If you are running your own Python script, you can use either file (a minimal sketch is included in the appendix at the end of this card).

- **Sample count:** 1,500 (designed to be completed overnight via most APIs).
- **Format:** The `prompt`/`instruction` includes the 9x9 grid, 15 swap moves, and the rules for gravity/chains.

### 🧩 Why 1,500 samples?

1. **Statistical Power:** With 1,500 problems, any "lucky guess" from an LLM is statistically irrelevant.
2. **Cost-Efficiency:** Small enough to be processed through the OpenAI/Anthropic APIs in a few hours (i.e., overnight), keeping benchmarking costs reasonable while still ensuring total failure for the model.

## 📊 Call for Evaluation: Test your model on the NumPuzzle-Easy Benchmark!

We are crowdsourcing evaluation results for **NumPuzzle-Easy**! This benchmark evaluates how accurately an LLM can compute target numbers of varying lengths from a complex context: a 9x9 grid, 15 swap moves, and the rules for gravity and chains.

Instead of waiting in long queues on centralized leaderboards, you can evaluate your model locally using Hugging Face's official `lighteval` tool. When you run the command below, the results are automatically verified and submitted to your model's repository as a Pull Request!

### 🚀 How to evaluate your model

**1. Install LightEval**

```bash
pip install lighteval
```

**2. Download the task script**

Make sure you have the `numpuzzle_task.py` script in your working directory.

**3. Run the evaluation**

Run the following command, replacing `[YOUR_MODEL_ID]` with the Hugging Face ID of the model you want to test (e.g., `meta-llama/Meta-Llama-3-8B-Instruct`).

```bash
lighteval accelerate \
    --model_args "pretrained=[YOUR_MODEL_ID]" \
    --custom_tasks "numpuzzle_task.py" \
    --tasks "custom|numpuzzle-easy|0|0" \
    --push_to_hub \
    --results_org "[YOUR_HF_USERNAME]"
```

### 💡 What happens next?

By using the `--push_to_hub` flag, LightEval will automatically generate a JSON/YAML report with your score and push it to the Hub. You can then show off your verified NumPuzzle score on your model card!

We look forward to seeing your results! Let's find out which model is the true master of numbers. 🧩🔢

---

"Let the GPUs burn, while we watch the logic crumble."

Auto-evaluate this dataset with the Model Evaluator Space: https://huggingface.co/spaces/autoevaluate/model-evaluator?dataset=56m/NumpuzzleEasy
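
### 🧪 Appendix: Custom Script Sketches (Unofficial)

The snippets below are minimal, unofficial sketches for option 2 ("Custom Evaluation Script") above; adapt them to your own stack.

First, loading and inspecting the raw file with the 🤗 `datasets` library. This assumes you have downloaded `numpuzzle_easy_1.5k.csv` from this repository into your working directory; the column names (`instruction`, `answer`) come from the table above.

```python
from datasets import load_dataset

# Load the raw CSV (downloaded from this repo) as a single "train" split.
ds = load_dataset("csv", data_files="numpuzzle_easy_1.5k.csv", split="train")

print(ds.column_names)              # expected: ['instruction', 'answer']
print(ds[0]["instruction"][:300])   # peek at the 9x9 grid, swap moves, and rules
print(ds[0]["answer"])
```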
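
Second, a bare-bones exact-match loop. `query_model` is a hypothetical placeholder for whatever API or local model you call; only the CSV columns and the exact-match metric come from this card.

```python
import csv

def query_model(prompt: str) -> str:
    """Hypothetical placeholder: send `prompt` to your model and return its raw text answer."""
    raise NotImplementedError("Plug in your OpenAI/Anthropic/local model call here.")

def normalize(text: str) -> str:
    # Strip surrounding whitespace so trivial formatting differences are not counted as failures.
    return text.strip()

correct, total = 0, 0
with open("numpuzzle_easy_1.5k.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        prediction = query_model(row["instruction"])
        correct += int(normalize(prediction) == normalize(row["answer"]))
        total += 1

print(f"Exact match: {correct}/{total} = {correct / total:.2%}")
```

With 1,500 rows this loop is API-bound, so expect it to run overnight for most hosted models, as noted above.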