---
annotations_creators:
  - expert-generated
language:
  - en
license: mit
multilinguality:
  - monolingual
pretty_name: ChessQA-Benchmark
size_categories:
  - 1K<n<10K
source_datasets:
  - original
task_categories:
  - question-answering
task_ids:
  - multiple-choice-qa
configs:
  - config_name: structural
    data_files:
      - data/chessqa_structural.parquet
  - config_name: motifs
    data_files:
      - data/chessqa_motifs.parquet
  - config_name: short_tactics
    data_files:
      - data/chessqa_short_tactics.parquet
  - config_name: position_judgement
    data_files:
      - data/chessqa_position_judgement.parquet
  - config_name: semantic
    data_files:
      - data/chessqa_semantic.parquet
---

# ChessQA-Benchmark

CSSLab, Department of Computer Science, University of Toronto

## Overview

### Abstract

Chess provides an ideal testbed for evaluating the reasoning, modeling, and abstraction capabilities of large language models (LLMs), as it has well-defined structure and objective ground truth while admitting a wide spectrum of skill levels. However, existing evaluations of LLM ability in chess are ad hoc and narrow in scope, making it difficult to accurately measure LLM chess understanding and how it varies with scale, post-training methodologies, or architecture choices. We present ChessQA, a comprehensive benchmark that assesses LLM chess understanding across five task categories (Structural, Motifs, Short Tactics, Position Judgment, and Semantic), which approximately correspond to the ascending abstractions that players master as they accumulate chess knowledge, from understanding basic rules and learning tactical motifs to correctly calculating tactics, evaluating positions, and semantically describing high-level concepts. In this way, ChessQA captures a more comprehensive picture of chess ability and understanding, going significantly beyond the simple move quality evaluations done previously, and offers a controlled, consistent setting for diagnosis and comparison. Furthermore, ChessQA is inherently dynamic, with prompts, answer keys, and construction scripts that can evolve as models improve. Evaluating a range of contemporary LLMs, we find persistent weaknesses across all five categories and provide results and error analyses by category. We will release the code, periodically refreshed datasets, and a public leaderboard to support further research.

## Key Features

- Five categories with objective answer keys and robust extraction
  - Structural: piece arrangement, legal moves (piece/all), check detection and check-in-1, capture/control/protect squares, and state tracking (FEN after UCI sequences)
  - Motifs: pin, fork, skewer, battery, discovered check, double check
  - Short Tactics: best-move puzzles by rating bucket (beginner → expert) and by theme (dozens of tactical themes)
  - Position Judgment: centipawn evaluation selection across bands (neutral/advantage/winning/…)
  - Semantic: multiple-choice commentary understanding with several distractor strategies (keyword, piece+stage, semantic embedding, easy random)

## Dataset structure

Each category is saved as its own Parquet file. Every file exposes the same schema:

| column | type | description |
| --- | --- | --- |
| `task_id` | string | Unique identifier for the task. |
| `task_type` | string | Fine-grained task template (e.g. `structural_piece_arrangement`). |
| `task_category` | string | High-level category (Structural, Motifs, Short Tactics, Position Judgment, Semantic). |
| `input` | string | Chess position in FEN, sometimes followed by a move hint. |
| `question` | string | Prompt template with placeholder tokens. |
| `format_examples` | list[string] | Example answer formats that can be injected at inference time. |
| `correct_answer` | string | Ground-truth answer, formatted to match the template. |
| `answer_type` | string | Either `single` or `multi`, describing how to compare predictions. |
| `metadata_json` | string | JSON-encoded dict with task-specific metadata (e.g. puzzle id, difficulty bucket). |
| `source_file` | string | Original JSONL filename. |
| `task_group` | string | Convenience alias for `source_file` without the extension. |
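
As a quick sanity check, you can peek at one row straight from a Parquet file with pandas. This is a minimal sketch, assuming the `data/` paths from the configs above are available locally and that pandas has a Parquet engine (e.g. pyarrow) installed; `metadata_json` decodes with the standard `json` module:

```python
import json

import pandas as pd

# Read one category directly from its Parquet file (paths as in the configs above).
df = pd.read_parquet("data/chessqa_structural.parquet")

row = df.iloc[0]
print(row["task_id"], row["task_type"], row["answer_type"])

# metadata_json is a JSON-encoded string; decode it to reach task-specific fields.
metadata = json.loads(row["metadata_json"])
print(metadata)
```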

## Loading with `datasets`

The benchmark can be loaded directly from the Hub; you can pick individual categories or concatenate them:

```python
from datasets import load_dataset

data_files = {
    "structural": "data/chessqa_structural.parquet",
    "motifs": "data/chessqa_motifs.parquet",
    "short_tactics": "data/chessqa_short_tactics.parquet",
    "position_judgement": "data/chessqa_position_judgement.parquet",
    "semantic": "data/chessqa_semantic.parquet",
}

ds = load_dataset("wieeii/ChessQA-Benchmark", data_files=data_files)
print(ds["structural"].num_rows)  # 1100

# Optional: merge all categories into a single dataset
from datasets import concatenate_datasets

all_tasks = concatenate_datasets(list(ds.values()))
print(all_tasks.num_rows)  # 3500
```

The questions contain templated placeholders so downstream users can choose their own prompting strategy (a minimal manual substitution is sketched after this list):

- `CONTEXT_PLACEHOLDER` – replaced with auto-generated context (piece arrangement + legal moves) when desired.
- `FORMAT_EXAMPLE_PLACEHOLDER` – replaced with one of the entries in `format_examples`.

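If you prefer to resolve the placeholders yourself, plain string substitution works. Below is a minimal sketch, assuming the placeholder tokens appear verbatim in `question`; `build_context` is a hypothetical stand-in (the shipped helper derives real context from the position instead):

```python
from datasets import load_dataset

ds = load_dataset(
    "wieeii/ChessQA-Benchmark",
    data_files={"motifs": "data/chessqa_motifs.parquet"},
)
row = ds["motifs"][0]

def build_context(position: str) -> str:
    # Hypothetical stand-in: the shipped helper generates the piece
    # arrangement and legal moves from the position instead.
    return f"Position: {position}"

prompt = (
    row["question"]
    .replace("CONTEXT_PLACEHOLDER", build_context(row["input"]))
    .replace("FORMAT_EXAMPLE_PLACEHOLDER", row["format_examples"][0])
)
print(prompt)
```
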
To resolve those placeholders consistently, we ship a small helper module in `scripts/chessqa_prompt_utils.py`. It recreates the logic used in the original evaluation harness.

## Prompt preparation helper

```python
import importlib.util

from datasets import load_dataset
from huggingface_hub import hf_hub_download

# Download the helper module shipped with this dataset repo.
module_path = hf_hub_download(
    repo_id="wieeii/ChessQA-Benchmark",
    repo_type="dataset",
    filename="scripts/chessqa_prompt_utils.py",
)

# Import it dynamically from the downloaded path.
spec = importlib.util.spec_from_file_location("chessqa_prompt_utils", module_path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

PromptConfig = module.PromptConfig
format_prompt = module.format_prompt
extract_final_answer = module.extract_final_answer

data_files = {
    "structural": "data/chessqa_structural.parquet",
    "motifs": "data/chessqa_motifs.parquet",
    "short_tactics": "data/chessqa_short_tactics.parquet",
    "position_judgement": "data/chessqa_position_judgement.parquet",
    "semantic": "data/chessqa_semantic.parquet",
}
ds = load_dataset("wieeii/ChessQA-Benchmark", data_files=data_files)
row = ds["motifs"][0]

prompt = format_prompt(row, PromptConfig(add_context=True, format_example_index=0))

# Send `prompt` to your model, then extract the answer marker back out.
answer, ok = extract_final_answer("... model output ...")
```

Dependencies: the helper relies on `python-chess` for generating contexts. Install it alongside `datasets` (and optionally `huggingface_hub` for programmatic downloads):

```bash
pip install datasets python-chess huggingface_hub
```

If you prefer not to add the dependency, call `format_prompt` with `PromptConfig(add_context=False)` to skip context injection entirely.
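
After extracting an answer, you can score it against `correct_answer` according to `answer_type`. The sketch below is a naive scorer, assuming exact string match for `single` and an order-insensitive, comma-split comparison for `multi`; the comma delimiter is an assumption, so check each row's `format_examples` for the actual format:

```python
def _items(answer: str) -> set[str]:
    # Split a multi-part answer into normalized items. The comma
    # delimiter is an assumption -- inspect `format_examples` to
    # confirm the expected format for each task.
    return {item.strip() for item in answer.split(",") if item.strip()}

def score_prediction(pred: str, row: dict) -> bool:
    # Exact match for `single`; unordered comparison for `multi`.
    gold = row["correct_answer"].strip()
    if row["answer_type"] == "single":
        return pred.strip() == gold
    return _items(pred) == _items(gold)

# Continuing from the helper example above:
answer, ok = extract_final_answer("... model output ...")
is_correct = ok and score_prediction(answer, row)
```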

## Citation

If you use ChessQA in your work, please cite the accompanying paper:

```bibtex
@article{wen2025chessqa,
  title={ChessQA: Evaluating Large Language Models for Chess Understanding},
  author={Wen, Qianfeng and Tang, Zhenwei and Anderson, Ashton},
  journal={arXiv preprint arXiv:2510.23948},
  year={2025}
}
```