# Searchless Chess: ViT-Medium (1960 ELO)
This is the flagship model of the Searchless Chess project: a neural network that plays chess at master level (~1960 ELO) relying purely on neural intuition, without any search algorithms (no minimax, no alpha-beta pruning).
- Architecture: Vision Transformer (ViT)
- Parameters: 9.5M
- Estimated ELO: 1960
- Training Data: 316M positions from Lichess/Stockfish
## Key Achievements
- Master-Level Play: Reached ~1960 ELO by evaluating only the current board state.
- Extreme Efficiency: Achieved 67% of DeepMind's searchless ELO (2895) using less than 4% of their parameters and training data.
- ViT Superiority: This Medium ViT model outperforms ResNet architectures that are 2.5x larger, proving that Transformers are exceptionally efficient at capturing global board dependencies.
## Performance Metrics
| Metric | Value |
|---|---|
| ELO (Estimated) | 1960 |
| Tier 4 Puzzles (1000-1250) | 88% Accuracy |
| Tier 6 Puzzles (1500-1750) | 86% Accuracy |
| Tier 8 Puzzles (2000-2250) | 45% Accuracy |
## Quick Start & Usage
This repository provides everything needed to run or fine-tune the model:
- `model.keras`: Complete model (architecture + weights) in Keras 3 format.
- `model.weights.h5`: Standalone weights.
- `config.json`: Model architecture configuration.
### Which file to use?
- Use `model.keras` for a quick "plug-and-play" experience: it contains both the model architecture and the trained weights in one file.
- Use `config.json` and `model.weights.h5` if you prefer to build the model structure manually in your code and load only the parameters (e.g., for custom fine-tuning).
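Both options can be sketched in a few lines of Keras 3. The paths below are placeholders, and it is an assumption that `config.json` is directly consumable by `model_from_json`; if it is not, rebuild the model with the repository's own builder before loading the weights.

```python
import json

import keras


def load_full_model(keras_path: str) -> keras.Model:
    """Option A: single-file load -- architecture and weights together."""
    return keras.models.load_model(keras_path)


def load_from_config_and_weights(config_path: str, weights_path: str) -> keras.Model:
    """Option B: rebuild the architecture, then attach the weights.

    Assumes config.json holds a Keras-serializable architecture; if it
    does not, construct the model with the repository's builder instead.
    """
    with open(config_path) as f:
        config = json.load(f)
    model = keras.models.model_from_json(json.dumps(config))
    model.load_weights(weights_path)
    return model
```

Option B is the route to take when you want to alter the head or freeze layers before the parameters are loaded.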
### Using the model in Python
To use this model for playing chess, use the ChessAI wrapper provided in the official GitHub repository. It handles the FEN-to-Tensor encoding and move selection logic.
```python
# 1. Clone the repository:
#    git clone https://github.com/mateuszgrzyb-pl/searchless-chess
# 2. Run the following code from the repository root:
import chess

from src.chess_ai.core.model import ChessAI

# Load the chess engine (provide the path to the downloaded model.keras)
chess_bot = ChessAI('path/to/vit-medium/model.keras')

# Create a chess board
board = chess.Board()

# Play a game
board.push_san("e4")                      # Your move
engine_move = chess_bot.make_move(board)  # Engine responds via neural intuition
board.push(engine_move)

print(board)
print(f"Engine played: {engine_move}")
```
For detailed instructions on board encoding, training, and evaluation, visit the Searchless Chess GitHub.
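For intuition about what the wrapper does under the hood, one common encoding scheme can be sketched as below. This is illustrative only: the repository's `ChessAI` class implements the actual FEN-to-tensor encoding, and the index scheme here (0 = empty, 1-6 = white pawn..king, 7-12 = black pawn..king) is an assumption that matches the "each square is a token" view used by the ViT.

```python
import chess
import numpy as np


def fen_to_tensor(fen: str) -> np.ndarray:
    """Encode a FEN position as an 8x8 array of integer piece indices."""
    board = chess.Board(fen)
    tensor = np.zeros((8, 8), dtype=np.int32)
    for square, piece in board.piece_map().items():
        row, col = divmod(square, 8)  # square 0 = a1
        # piece_type runs 1 (pawn) .. 6 (king); shift black pieces by 6
        offset = 0 if piece.color == chess.WHITE else 6
        tensor[row, col] = piece.piece_type + offset
    return tensor


start = fen_to_tensor(chess.STARTING_FEN)  # 32 non-zero squares
```

Each of the 64 integers then becomes one input token for the transformer described in the next section.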
## Model Architecture
The model treats the chess board as a set of tokens. Unlike traditional CNNs that look at local patterns, the Vision Transformer uses self-attention to understand the global relationship between all pieces simultaneously (e.g., a bishop's long-range pressure on a king).
- Patch Size: 1x1 (each square is a token)
- Embedding Dim: 384
- Num of Heads: 6
- Transformer Layers: 8
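A minimal Keras 3 sketch of a ViT with these hyperparameters is shown below. Only the embedding dimension (384), head count (6), layer count (8), and 1x1 patching come from the model card; the vocabulary size, MLP expansion ratio, and the value-style output head are assumptions for illustration (the actual architecture is defined by `config.json` and the repository code). Notably, an MLP ratio of 2 lands close to the stated 9.5M-parameter budget.

```python
import keras
from keras import layers

# From the model card: 384-dim embeddings, 6 heads, 8 layers, 64 tokens.
EMBED_DIM, NUM_HEADS, NUM_LAYERS, NUM_TOKENS = 384, 6, 8, 64
VOCAB_SIZE = 13  # assumed: 12 piece types + empty square


class PositionEmbedding(layers.Layer):
    """Learned position embedding added to the 64 square tokens."""

    def build(self, input_shape):
        self.pos = self.add_weight(
            shape=(1, input_shape[1], input_shape[2]),
            initializer="random_normal", name="pos_embedding")

    def call(self, x):
        return x + self.pos


def transformer_block(x):
    # Pre-norm self-attention: every square attends to every other square,
    # which is how long-range piece relationships are captured directly.
    h = layers.LayerNormalization()(x)
    h = layers.MultiHeadAttention(
        num_heads=NUM_HEADS, key_dim=EMBED_DIM // NUM_HEADS)(h, h)
    x = layers.Add()([x, h])
    # Pre-norm MLP; expansion ratio 2 is an assumption chosen to land
    # near the stated 9.5M parameters -- the real ratio may differ.
    h = layers.LayerNormalization()(x)
    h = layers.Dense(EMBED_DIM * 2, activation="gelu")(h)
    h = layers.Dense(EMBED_DIM)(h)
    return layers.Add()([x, h])


inputs = keras.Input(shape=(NUM_TOKENS,), dtype="int32")  # one token per square
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)       # 1x1 "patches"
x = PositionEmbedding()(x)
for _ in range(NUM_LAYERS):
    x = transformer_block(x)
x = layers.LayerNormalization()(x)
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(1, activation="tanh")(x)  # hypothetical value head
model = keras.Model(inputs, outputs)
```

With these assumptions, `model.count_params()` comes out at roughly 9.5M, consistent with the parameter count listed above.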
## Dataset
The model was trained on the Lichess Stockfish Normalized dataset, consisting of 316M deduplicated positions with evaluations.
## Citation
If you use this model in your research, please cite:
```bibtex
@software{grzyb2025searchless,
  author    = {Grzyb, Mateusz},
  title     = {Searchless Chess: Master-Level Chess Through Pure Neural Intuition},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/mateuszgrzyb/searchless-chess-vit-medium}
}
```
## Contact
- Project Repository: GitHub
- LinkedIn: Mateusz Grzyb
- Website: MateuszGrzyb.pl