---
license: apache-2.0
source_datasets:
- original
pretty_name: SudokuBench
dataset_info:
  features:
  - name: starting_cells
    dtype: int32
  - name: puzzle
    dtype: string
  - name: puzzle_pretty
    dtype: string
  - name: solution
    dtype: string
  - name: solution_pretty
    dtype: string
configs:
- config_name: eval
  default: true
  data_files:
  - split: train
    path:
    - "eval.parquet"
- config_name: kids
  data_files:
  - split: train
    path:
    - "kids.parquet"
- config_name: easy
  data_files:
  - split: train
    path:
    - "easy.parquet"
- config_name: medium
  data_files:
  - split: train
    path:
    - "medium.parquet"
- config_name: hard
  data_files:
  - split: train
    path:
    - "hard.parquet"
- config_name: insane
  data_files:
  - split: train
    path:
    - "insane.parquet"
- config_name: all
  data_files:
  - split: train
    path:
    - "kids.parquet"
    - "easy.parquet"
    - "medium.parquet"
    - "hard.parquet"
    - "insane.parquet"
---

# Dataset Card for SudokuBench

## Dataset Details

This dataset contains sudoku puzzles and their solutions at varying levels of difficulty. Difficulty is determined by the number of squares (also sometimes referred to as cells) that are filled in at the start of the puzzle. The puzzles are guaranteed to have a single unique solution, with no overlap between puzzles.

Within a difficulty config, you will find 10,000 puzzles for every possible number of starting cells in that config's range. For example, the `kids` config contains 10,000 sudoku boards with exactly 63 filled squares, another 10,000 boards with 64 filled squares, and so on, all the way up to 10,000 boards with 80 filled squares. The goal of this granularity is to provide a clear picture of where models start to have problems.

- **Curated by:** Aaron Batilo

## Uses

### Direct Use

The intended use for SudokuBench is to evaluate language models on their ability to handle long-context reasoning tasks.
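When running such an eval, you need to score a model's answer against the ground truth. A minimal sketch of a grid validator is below; it assumes the compact solution format is 81 digits in row-major order, which is an assumption about this dataset's format rather than something the card specifies:

```python
def is_valid_solution(solution: str) -> bool:
    """Check that a compact solution string is a valid completed sudoku grid.

    Assumes the compact format is 81 digits (1-9) in row-major order.
    """
    if len(solution) != 81 or not solution.isdigit():
        return False
    grid = [[int(solution[r * 9 + c]) for c in range(9)] for r in range(9)]
    digits = set(range(1, 10))
    # Every row and every column must contain the digits 1 through 9 exactly once.
    for i in range(9):
        row = set(grid[i])
        col = {grid[r][i] for r in range(9)}
        if row != digits or col != digits:
            return False
    # Every 3x3 box must also contain the digits 1 through 9 exactly once.
    for br in range(0, 9, 3):
        for bc in range(0, 9, 3):
            box = {grid[br + r][bc + c] for r in range(3) for c in range(3)}
            if box != digits:
                return False
    return True
```

Validating the grid directly (rather than string-comparing against the `solution` column) is redundant here since puzzles have unique solutions, but it makes the scorer robust to formatting differences in model output once the answer has been normalized to 81 digits.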
## Dataset Structure

All puzzles have the following parquet format:

| starting_cells | puzzle | puzzle_pretty | solution | solution_pretty |
|----------------|--------|---------------|----------|-----------------|
| int | str | str | str | str |

- **starting_cells**: How many cells are already filled (integer).
- **puzzle**: The puzzle string (compact format).
- **puzzle_pretty**: The puzzle string in a human-readable pretty format.
- **solution**: The solution string (compact format).
- **solution_pretty**: The solution string in a human-readable pretty format.

### Configs

The dataset is organized into multiple parquet files grouped by difficulty thresholds, each represented as a separate config:

| Config name | Minimum clues | Number of examples | Description                         |
|-------------|---------------|--------------------|-------------------------------------|
| `kids`      | 63            | 180,000            | Very easy puzzles suitable for kids |
| `easy`      | 45            | 180,000            | Easy puzzles                        |
| `medium`    | 36            | 90,000             | Medium difficulty puzzles           |
| `hard`      | 27            | 90,000             | Hard puzzles                        |
| `insane`    | 17            | 100,000            | Insane difficulty puzzles           |

There is also an `all` config that combines every difficulty. Lastly, there's `eval` (the default config), which contains the first 200 puzzles at every clue count, from 80 pre-filled squares down to 17. This smaller sample is much more manageable for running holistic evals, compared to running 10,000 attempts at every single difficulty level.
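The relationship between the columns can be sanity-checked per row: `starting_cells` should equal the number of pre-filled cells in `puzzle`, and `solution` should preserve every clue. The sketch below assumes empty cells in the compact format are marked with `'0'` or `'.'` (an assumption about the format; adjust `empty_chars` if the dataset uses a different marker):

```python
def count_starting_cells(puzzle: str, empty_chars: str = "0.") -> int:
    """Count pre-filled cells in a compact 81-character puzzle string.

    Assumes empty cells are written as '0' or '.'; this is an assumption
    about the compact format, not something the card guarantees.
    """
    return sum(1 for ch in puzzle if ch not in empty_chars)


def solution_matches_puzzle(puzzle: str, solution: str,
                            empty_chars: str = "0.") -> bool:
    """Check that the solution keeps every given clue of the puzzle in place."""
    return len(puzzle) == len(solution) == 81 and all(
        p == s for p, s in zip(puzzle, solution) if p not in empty_chars
    )
```

Checks like these are cheap to run over a whole config (e.g. after loading it with the `datasets` library) and catch parsing mistakes before any model is evaluated.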