---
license: cc-by-4.0
task_categories:
  - text-generation
  - reinforcement-learning
language:
  - en
tags:
  - chess
  - uci
  - transformer
  - games
  - elite
  - lichess
  - tokenized
size_categories:
  - 1M<n<10M
---

# chess-elite-uci

A transformer-ready dataset of ~7.8 million elite chess games, pre-tokenized in UCI notation with a deterministic 1,977-token vocabulary. Built for training chess language models directly, with no preprocessing required.

## Dataset Summary

| Field | Value |
|---|---|
| Total games | 7,805,503 |
| Average sequence length | 94.24 tokens |
| Max sequence length | 255 tokens |
| Vocabulary size | 1,977 tokens |
| Mean combined Elo | 5,211 (~2,606 per player) |

## Sources

**Lichess Elite Database** (June 2020 – November 2025). Games between highly rated players: from 2022 onwards, 2500+ vs 2300+; before that, 2400+ vs 2200+. Source: database.nikonoel.fr. Licensed CC0.

## Vocabulary

The vocabulary contains 1,977 tokens and is fully deterministic: it is enumerated from chess geometry rather than derived from the data, so it can never produce an OOV token for any legal chess game.

| ID | Token | Description |
|---|---|---|
| 0 | `<PAD>` | Padding |
| 1 | `<W>` | POV token: white wins / white side for draws |
| 2 | `<B>` | POV token: black wins / black side for draws |
| 3 | `<CHECKMATE>` | Terminal: game ended in checkmate |
| 4 | `<RESIGN>` | Terminal: losing side resigned (≥ 40 ply) |
| 5 | `<STALEMATE>` | Terminal: draw by stalemate |
| 6 | `<REPETITION>` | Terminal: draw by threefold repetition |
| 7 | `<FIFTY_MOVE>` | Terminal: draw by 50-move rule |
| 8 | `<INSUFF_MATERIAL>` | Terminal: draw by insufficient material |
| 9+ | `a1a2` … `h7h8q` | 1,968 UCI move strings, sorted lexicographically |

The full vocabulary is provided in `vocab.json` as `{ token_str: int_id }`.
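That determinism can be checked directly: the 1,968 move strings are exactly the queen-ray and knight-jump targets from every square, plus pawn promotions into the back ranks. A minimal sketch (the function name is illustrative, not part of the dataset tooling):

```python
def enumerate_uci_moves():
    """Enumerate every geometrically possible UCI move string.

    Non-promotion moves are queen-like rays plus knight jumps from
    every square (1,792 strings); promotions add a piece suffix to
    each pawn step into the back rank (176 strings). Total: 1,968.
    """
    files, ranks = "abcdefgh", "12345678"
    sq = lambda f, r: files[f] + ranks[r]
    moves = set()
    rays = [(df, dr) for df in (-1, 0, 1) for dr in (-1, 0, 1) if (df, dr) != (0, 0)]
    knight = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    for f in range(8):
        for r in range(8):
            for df, dr in rays:                      # queen rays
                nf, nr = f + df, r + dr
                while 0 <= nf < 8 and 0 <= nr < 8:
                    moves.add(sq(f, r) + sq(nf, nr))
                    nf, nr = nf + df, nr + dr
            for df, dr in knight:                    # knight jumps
                nf, nr = f + df, r + dr
                if 0 <= nf < 8 and 0 <= nr < 8:
                    moves.add(sq(f, r) + sq(nf, nr))
    # Promotions: pawn advances/captures into the last or first rank
    for f in range(8):
        for src_r, dst_r in ((6, 7), (1, 0)):        # white to rank 8, black to rank 1
            for df in (-1, 0, 1):
                nf = f + df
                if 0 <= nf < 8:
                    for piece in "qrbn":
                        moves.add(sq(f, src_r) + sq(nf, dst_r) + piece)
    return sorted(moves)

print(len(enumerate_uci_moves()))  # 1968
```

Prepending the nine special tokens to this sorted list reproduces the full 1,977-token vocabulary.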

## Sequence Format

Every game is encoded as a flat list of integer token IDs:

```
[ <POV> | m1 | m2 | m3 | ... | mN | <TERMINAL> ]
```

- **POV token** (position 0): `<W>` if white wins, `<B>` if black wins. For draws, assigned randomly 50/50 between `<W>` and `<B>`.
- **Move tokens** (positions 1 to N): UCI half-moves alternating white/black, e.g. `e2e4`, `e7e5`, `g1f3`, `e1g1` (castling), `e7e8q` (promotion).
- **Terminal token** (position N+1): encodes why the game ended.

Maximum sequence length is 255 tokens (1 POV + 253 moves + 1 terminal). Sequences are variable length; pad to 255 with `<PAD>` (ID 0) in your DataLoader.

## NTP Loss Mask

The `ntp_mask` column contains a binary list of the same length as `token_ids`. It indicates which positions should have next-token-prediction (NTP) loss applied during training:

| Position | NTP loss |
|---|---|
| POV token | 1 (always) |
| Winning side move | 1 |
| Losing side move | 0 (context only) |
| Terminal token | 1 (always) |
| Draw game moves | 1 (both sides, since neither lost) |

This implements win-conditioned training: the model learns to predict the winning side's moves given the POV token, while still attending to the losing side's moves as context.
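As a concrete example, here is how the mask comes out for a short decisive game (a hypothetical reconstruction for illustration; it covers decisive games only, since draws get 1 on every move):

```python
# Fool's mate: black wins by checkmate, so black's moves get loss.
pov = "<B>"
moves = ["f2f3", "e7e5", "g2g4", "d8h4"]
terminal = "<CHECKMATE>"

tokens = [pov] + moves + [terminal]
winner_is_white = pov == "<W>"
mask = [1]                                   # POV token: always 1
for i, _ in enumerate(moves):
    mover_is_white = i % 2 == 0              # white moves at even indices
    mask.append(1 if mover_is_white == winner_is_white else 0)
mask.append(1)                               # terminal token: always 1
print(mask)  # [1, 0, 1, 0, 1, 1]
```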

Usage in PyTorch:

```python
import torch.nn.functional as F

# logits: [B, T, vocab_size], labels: [B, T], ntp_mask: [B, T]
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), labels.reshape(-1), reduction="none")
loss = (loss * ntp_mask.reshape(-1)).sum() / ntp_mask.sum()
```

## Filtering

Games were filtered as follows before inclusion:

**Decisive games (1-0 / 0-1):**

- Checkmates: verified by `board.is_checkmate()` on the final position. No length minimum.
- Resignations: not checkmate; minimum 40 halfmoves (20 moves per side).

**Draws (1/2-1/2):**

- Only forced draws are included: stalemate, insufficient material, 50-move rule, threefold repetition.
- Draws by agreement are excluded (`board.is_game_over(claim_draw=True)` must return `True`).

**All games:**

- Maximum 253 halfmoves (fits within the 255-token sequence budget).
- Both players' Elo values must be present and non-zero.
- All moves must be legally parseable by python-chess.
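The forced-draw and checkmate checks above can be approximated with python-chess; a rough sketch (`classify_game` is a hypothetical helper, not the actual filtering script):

```python
import chess

def classify_game(uci_moves: str):
    """Classify the final position of a game given as space-separated
    UCI moves. Returns a terminal token name for positions that end
    the game on the board, or None otherwise (resignations and
    agreement draws are distinguished via the PGN result headers).
    """
    board = chess.Board()
    for mv in uci_moves.split():
        board.push_uci(mv)          # raises on illegal or unparseable moves
    if board.is_checkmate():
        return "<CHECKMATE>"
    if board.is_stalemate():
        return "<STALEMATE>"
    if board.is_insufficient_material():
        return "<INSUFF_MATERIAL>"
    if board.can_claim_fifty_moves():
        return "<FIFTY_MOVE>"
    if board.can_claim_threefold_repetition():
        return "<REPETITION>"
    return None

# Fool's mate ends in checkmate on the board:
print(classify_game("f2f3 e7e5 g2g4 d8h4"))  # <CHECKMATE>
```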

Game type breakdown:

| Type | Count | % |
|---|---|---|
| White checkmate | 1,702,751 | 21.8% |
| White resignation | 2,000,000 | 25.6% |
| Black checkmate | 1,702,752 | 21.8% |
| Black resignation | 2,000,000 | 25.6% |
| Forced draw | 400,000 | 5.1% |

## Schema

```
{
    "white_elo":    int32,        # white player Elo
    "black_elo":    int32,        # black player Elo
    "combined_elo": int32,        # white_elo + black_elo
    "result":       string,       # "1-0", "0-1", or "1/2-1/2"
    "game_type":    string,       # "checkmate", "resignation", or "forced_draw"
    "pov":          string,       # "<W>" or "<B>"
    "terminal":     string,       # "<CHECKMATE>", "<RESIGN>", "<STALEMATE>", ...
    "source":       string,       # "lichess"
    "moves_uci":    string,       # space-separated UCI moves, human-readable
    "token_ids":    list[int32],  # encoded sequence, use this for training
    "ntp_mask":     list[int32],  # 1 = apply NTP loss, 0 = skip
    "seq_len":      int32,        # len(token_ids), always in [3, 255]
}
```

## Usage

```python
from datasets import load_dataset
import json

# Load dataset
ds = load_dataset("MostLime/chess-elite-uci", split="train")

# Load vocabulary
with open("vocab.json") as f:
    vocab = json.load(f)
id_to_token = {v: k for k, v in vocab.items()}

# Decode a game
row = ds[0]
tokens = [id_to_token[i] for i in row["token_ids"]]
print(" ".join(tokens))
# → <W> e2e4 e7e5 g1f3 b8c6 f1b5 ... <RESIGN>

# PyTorch DataLoader
import torch
from torch.utils.data import DataLoader

def collate(batch):
    max_len = 255
    token_ids = torch.zeros(len(batch), max_len, dtype=torch.long)
    ntp_mask  = torch.zeros(len(batch), max_len, dtype=torch.float)
    for i, row in enumerate(batch):
        n = row["seq_len"]
        token_ids[i, :n] = torch.tensor(row["token_ids"], dtype=torch.long)
        ntp_mask[i,  :n] = torch.tensor(row["ntp_mask"],  dtype=torch.float)
    return {"token_ids": token_ids, "ntp_mask": ntp_mask}

loader = DataLoader(ds, batch_size=32, collate_fn=collate)
```
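Building on that collate function, a full masked training step might look like the following sketch (the random batch and the tiny embedding model are stand-ins so the snippet runs on its own; they are not part of the dataset):

```python
import torch
import torch.nn.functional as F

VOCAB, MAX_LEN = 1977, 255

# Stand-in batch and model; swap in a real DataLoader batch and a
# real causal LM for actual training.
token_ids = torch.randint(1, VOCAB, (4, MAX_LEN))
ntp_mask = (torch.rand(4, MAX_LEN) < 0.5).float()
model = torch.nn.Sequential(torch.nn.Embedding(VOCAB, 64), torch.nn.Linear(64, VOCAB))

# Shift by one: predict token t+1 from tokens up to t, applying the
# mask at the predicted positions only.
inputs, labels = token_ids[:, :-1], token_ids[:, 1:]
mask = ntp_mask[:, 1:]
logits = model(inputs)                                  # [4, 254, 1977]
loss = F.cross_entropy(logits.reshape(-1, VOCAB), labels.reshape(-1), reduction="none")
loss = (loss * mask.reshape(-1)).sum() / mask.sum().clamp(min=1.0)
print(loss.item() > 0.0)  # True
```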

## Inference

At inference time, prepend the POV token for the side the model plays as, then feed opponent moves as context and sample responses:

```python
# Model plays as white
sequence = [vocab["<W>"]]

# Sample the model's first move from legal UCI moves for the current
# position, then append it (e.g. e2e4)
sequence.append(vocab["e2e4"])

# Opponent replies e7e5: append as context
sequence.append(vocab["e7e5"])

# Repeat: sample the model's next move from legal UCI moves
```

Terminal tokens are never generated during normal play. The game ends when the opponent resigns or a draw is claimed externally.
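One way to restrict sampling to legal moves is to mask every illegal token's logit to -inf before the softmax. A sketch with python-chess (the toy `vocab` and random `logits` below stand in for vocab.json and a trained model's output):

```python
import chess
import torch

board = chess.Board()                                    # current position
legal = [m.uci() for m in board.legal_moves]

# Toy id mapping and stand-in logits; in practice use vocab.json and
# the model's distribution over all 1,977 tokens.
vocab = {m: 9 + i for i, m in enumerate(sorted(legal))}
logits = torch.randn(9 + len(legal))

masked = torch.full_like(logits, float("-inf"))
for m in legal:
    masked[vocab[m]] = logits[vocab[m]]                  # keep only legal moves
next_id = torch.multinomial(torch.softmax(masked, dim=-1), 1).item()
# next_id now always decodes to a legal move
```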

## Citation

```bibtex
@dataset{mostlime2026chessEliteUCI,
  author    = {MostLime},
  title     = {chess-elite-uci: A Transformer-Ready Dataset of Elite Chess Games},
  year      = {2026},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/datasets/MostLime/chess-elite-uci}
}
```

## Acknowledgements