---
license: apache-2.0
task_categories:
  - reinforcement-learning
language:
  - en
  - ru
tags:
  - chess
  - deep-learning
  - stockfish
  - pytorch
  - multi-pv
  - chess-engine
  - reinforcement-learning
size_categories:
  - 100K<n<1M
---

# ♟️ Strategic Chess Dataset: Multi-PV & RL-Refined (700K+)

This is a large-scale dataset designed for training and pre-training state-of-the-art chess neural networks. It contains over 706,000 unique board positions generated and evaluated by Stockfish 16.1.

The dataset is specifically optimized for models using Policy & Value heads, providing rich metadata for each state.

## 🌟 Key Features

  • Multi-PV Intelligence: Each position stores the top 3 Stockfish lines (the best move plus strong alternative plans), allowing models to learn strategic variability and fine-grained positional judgment.
  • 15-Channel Encoding: Data is pre-structured for advanced architectures. It includes 12 piece layers, 1 side-to-move layer, and 2 temporal layers (from/to squares of the last move) to reduce tactical blindness.
  • RL-Refined Accuracy: Includes a specialized subset of 5,000+ positions derived from Reinforcement Learning sessions. These capture "hard-to-learn" tactical blunders that were corrected by Stockfish during active self-play.
  • High-Performance Processing: The entire dataset was generated and processed on a cluster with 128+ CPU cores, ensuring consistent and deep engine evaluation for every position.
  • Ready-to-Train Evals: Position evaluations are pre-normalized with $\tanh(x / 300.0)$, mapping Stockfish centipawns to the $[-1, 1]$ range for stable MSE training (see the sketch after this list).
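
A minimal sketch of that normalization, assuming the published `evals` column already stores $\tanh(\text{cp} / 300.0)$; the helper names below are illustrative and not shipped with the dataset:

```python
import numpy as np

def normalize_eval(centipawns: float) -> float:
    """Map a Stockfish centipawn score to [-1, 1] via tanh(cp / 300.0)."""
    return float(np.tanh(centipawns / 300.0))

def denormalize_eval(value: float) -> float:
    """Approximate inverse: recover centipawns from a normalized eval."""
    return float(np.arctanh(np.clip(value, -0.999999, 0.999999)) * 300.0)

print(normalize_eval(100.0))   # ~0.32  (slight advantage)
print(normalize_eval(1000.0))  # ~0.997 (decisive advantage)
```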

## 📊 Data Structure

The dataset is provided in a compressed .npz format:

  • states: (N, 15, 8, 8) float32 array of encoded board states.
  • plans: (N, 3, 1) int64 array of Multi-PV move indices ($\text{from\_square} \times 64 + \text{to\_square}$); see the decoding sketch below.
  • evals: (N,) float32 array of normalized position evaluations.
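
A minimal loading and decoding sketch under these assumptions; the filename is a placeholder for the actual `.npz` file, and `python-chess` is used only to print human-readable square names:

```python
import numpy as np
import chess  # python-chess, used here only for square names

# Placeholder path; substitute the .npz file shipped with this dataset.
data = np.load("chess_alpha_700k.npz")
states, plans, evals = data["states"], data["plans"], data["evals"]

print(states.shape)  # (N, 15, 8, 8)
print(plans.shape)   # (N, 3, 1) -- three Multi-PV move indices per position
print(evals.shape)   # (N,)

# Decode a move index back into its from/to squares.
idx = int(plans[0, 0, 0])          # primary (best) move of the first position
from_sq, to_sq = divmod(idx, 64)   # index = from_square * 64 + to_square
print(chess.square_name(from_sq), "->", chess.square_name(to_sq))
```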

## 🛠️ Usage (PyTorch Example)

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class StrategicChessDataset(Dataset):
    def __init__(self, npz_path):
        data = np.load(npz_path)
        self.states = data['states']
        self.evals = data['evals']
        # Extract the primary best move from Multi-PV plans
        self.best_moves = data['plans'][:, 0, 0]

    def __len__(self):
        return len(self.states)

    def __getitem__(self, idx):
        state = torch.from_numpy(self.states[idx]).float()
        move = torch.tensor(self.best_moves[idx], dtype=torch.long)
        val = torch.tensor(self.evals[idx], dtype=torch.float32)
        return state, move, val
```
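
A quick usage check of the class above; the path, batch size, and worker count are placeholders:

```python
from torch.utils.data import DataLoader

dataset = StrategicChessDataset("chess_alpha_700k.npz")  # placeholder path
loader = DataLoader(dataset, batch_size=256, shuffle=True, num_workers=4)

states, moves, values = next(iter(loader))
print(states.shape)  # torch.Size([256, 15, 8, 8])
print(moves.shape)   # torch.Size([256])
print(values.shape)  # torch.Size([256])
```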

## 📈 Intended Use

This dataset is ideal for:

  • Pre-training chess Policy-Value networks (such as AlphaZero or LC0 clones); a minimal training-step sketch follows below.
  • Fine-tuning models to reduce tactical blunders.
  • Researching Reinforcement Learning and MCTS-based agents.
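
A minimal sketch of such a pre-training step, assuming a hypothetical policy-value network with a 4096-way policy head (one logit per from/to square pair) and a scalar value head; the architecture, hyperparameters, and the `loader` from the usage example above are illustrative assumptions, not part of the dataset:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPolicyValueNet(nn.Module):
    """Illustrative network: 15x8x8 input, 4096-way policy head, scalar value head."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(15, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.policy_head = nn.Linear(64 * 8 * 8, 4096)  # from_square * 64 + to_square
        self.value_head = nn.Linear(64 * 8 * 8, 1)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return self.policy_head(h), torch.tanh(self.value_head(h)).squeeze(1)

model = TinyPolicyValueNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for states, moves, values in loader:  # 'loader' from the usage example above
    policy_logits, value_pred = model(states)
    # Cross-entropy against the best move, MSE against the normalized eval.
    loss = F.cross_entropy(policy_logits, moves) + F.mse_loss(value_pred, values)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```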

## 📜 License

This dataset is licensed under the Apache 2.0 License. You are free to use, modify, and distribute it for any purpose, including commercial projects.