---
license: mit
library_name: mlx
tags:
- computer-vision
- image-classification
- chess
- cnn
- lightweight
datasets:
- synthetic-chess-squares
model-index:
- name: chess-cv
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: Chess CV Test Dataset
type: chess-cv-test
metrics:
- type: accuracy
value: 0.9990
name: Accuracy
verified: false
- type: f1
value: 0.9990
name: F1 Score (Macro)
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: Chess CV OpenBoard Dataset
type: chess-cv-openboard
metrics:
- type: accuracy
value: 0.9930
name: Accuracy
verified: false
- type: f1
value: 0.9856
name: F1 Score (Macro)
verified: false
- task:
type: image-classification
name: Image Classification
dataset:
name: Chess CV ChessVision Dataset
type: chess-cv-chessvision
metrics:
- type: accuracy
value: 0.9313
name: Accuracy
verified: false
- type: f1
value: 0.9228
name: F1 Score (Macro)
verified: false
- name: chess-cv-arrows
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: Chess CV Arrows Test Dataset
type: chess-cv-arrows-test
metrics:
- type: accuracy
value: 0.9999
name: Accuracy
verified: false
- type: f1
value: 0.9999
name: F1 Score (Macro)
verified: false
- name: chess-cv-snap
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: Chess CV Snap Test Dataset
type: chess-cv-snap-test
metrics:
- type: accuracy
value: 0.9993
name: Accuracy
verified: false
- type: f1
value: 0.9993
name: F1 Score (Macro)
verified: false
pipeline_tag: image-classification
---
<div align="center">
# Chess CV
<img src="https://raw.githubusercontent.com/S1M0N38/chess-cv/main/docs/assets/model.svg" alt="Model Architecture" width="600">
</div>
Lightweight CNNs (156k parameters each) for chess board analysis from 32×32 pixel square images. The project includes three specialized models trained on synthetic data generated from chess.com/lichess board styles, piece sets, arrow overlays, and centering variations:
- **Pieces Model**: Classifies 13 classes (6 white pieces, 6 black pieces, empty squares) for board state recognition and FEN generation
- **Arrows Model**: Classifies 49 classes representing arrow overlay patterns for detecting chess analysis annotations
- **Snap Model**: Classifies 2 classes (centered vs off-centered pieces) for automated board analysis and piece positioning validation
## Quick Start
```bash
pip install chess-cv
```
```python
from chess_cv import load_bundled_model
# Load pre-trained models (weights included in package)
pieces_model = load_bundled_model('pieces')
arrows_model = load_bundled_model('arrows')
snap_model = load_bundled_model('snap')
# Make predictions
piece_predictions = pieces_model(image_tensor)
arrow_predictions = arrows_model(image_tensor)
snap_predictions = snap_model(image_tensor)
```
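All three models expect 32×32 RGB square crops as input. The exact preprocessing pipeline is not documented in this card, so the sketch below is an assumption (channel-last float32 input scaled to [0, 1], with the 32×32 crop produced upstream); check the project documentation for the actual convention:

```python
import numpy as np

def to_model_input(square: np.ndarray) -> np.ndarray:
    """Convert a 32x32x3 uint8 square crop to a batched float32 tensor.

    Assumes the model expects channel-last input scaled to [0, 1];
    this is an illustrative guess, not the documented chess-cv pipeline.
    """
    assert square.shape == (32, 32, 3), "expected a 32x32 RGB crop"
    x = square.astype(np.float32) / 255.0  # scale to [0, 1]
    return x[None, ...]                    # add batch dim -> (1, 32, 32, 3)

# Example with a dummy crop (replace with a real board square)
dummy = np.zeros((32, 32, 3), dtype=np.uint8)
batch = to_model_input(dummy)
print(batch.shape)  # (1, 32, 32, 3)
```

The resulting array can be wrapped with `mx.array(batch)` before calling the model, since the weights are loaded through MLX.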
**Alternative: Load latest version from Hugging Face Hub**
```python
from huggingface_hub import hf_hub_download
from chess_cv.model import SimpleCNN
import mlx.core as mx
# Download latest weights from Hugging Face
model_path = hf_hub_download(repo_id="S1M0N38/chess-cv", filename="pieces.safetensors")
model = SimpleCNN(num_classes=13)
weights = mx.load(str(model_path))
model.load_weights(list(weights.items()))
model.eval()
```
## Models
This repository contains three specialized models for chess board analysis:
### ♟️ Pieces Model (`pieces.safetensors`)
**Overview:**
The pieces model classifies chess square images into 13 classes: 6 white pieces (wP, wN, wB, wR, wQ, wK), 6 black pieces (bP, bN, bB, bR, bQ, bK), and empty squares (xx). This model is designed for board state recognition and FEN generation from chess board images.
**Training:**
- **Architecture**: SimpleCNN (156k parameters)
- **Input**: 32×32px RGB square images
- **Data**: ~93,000 synthetic images from 55 board styles × 64 piece sets
- **Augmentation**: Aggressive augmentation with arrow overlays (80%), highlight overlays (25%), move overlays (50%), random crops, horizontal flips, color jitter, rotation (±10°), and Gaussian noise
- **Optimizer**: AdamW (weight_decay=0.001) with LR scheduler (warmup + cosine decay: 0→0.001→1e-5)
- **Training**: 1000 epochs, batch size 64
**Performance:**
| Dataset | Accuracy | F1-Score (Macro) |
| ----------------------------------------------------------------------------------------------- | :------: | :--------------: |
| Test Data | 99.90% | 99.90% |
| [S1M0N38/chess-cv-openboard](https://huggingface.co/datasets/S1M0N38/chess-cv-openboard) \* | - | 98.56% |
| [S1M0N38/chess-cv-chessvision](https://huggingface.co/datasets/S1M0N38/chess-cv-chessvision) \* | - | 92.28% |
\* *Dataset with unbalanced class distribution (e.g. many more samples for empty square class), so accuracy is not representative.*
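For FEN generation, per-square predictions can be assembled into the piece-placement field of a FEN string. A minimal sketch, assuming the 64 squares are predicted in a8→h1 reading order and using the class labels listed above (this helper is illustrative, not part of the chess-cv API):

```python
# Map the 13 class labels to FEN symbols ("xx" marks an empty square)
FEN_SYMBOLS = {
    "wP": "P", "wN": "N", "wB": "B", "wR": "R", "wQ": "Q", "wK": "K",
    "bP": "p", "bN": "n", "bB": "b", "bR": "r", "bQ": "q", "bK": "k",
}

def labels_to_fen(labels: list[str]) -> str:
    """Build the FEN piece-placement field from 64 per-square labels."""
    assert len(labels) == 64, "expected one label per square"
    rows = []
    for r in range(8):
        row, empty = "", 0
        for label in labels[r * 8:(r + 1) * 8]:
            if label == "xx":
                empty += 1           # count consecutive empty squares
            else:
                if empty:
                    row += str(empty)
                    empty = 0
                row += FEN_SYMBOLS[label]
        if empty:
            row += str(empty)
        rows.append(row)
    return "/".join(rows)

# Starting position as a quick sanity check
start = (["bR", "bN", "bB", "bQ", "bK", "bB", "bN", "bR"] + ["bP"] * 8
         + ["xx"] * 32 + ["wP"] * 8
         + ["wR", "wN", "wB", "wQ", "wK", "wB", "wN", "wR"])
print(labels_to_fen(start))  # rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR
```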
---
### ↗ Arrows Model (`arrows.safetensors`)
**Overview:**
The arrows model classifies chess square images into 49 classes representing different arrow overlay patterns: 20 arrow heads, 12 arrow tails, 8 middle segments (for straight and diagonal arrows), 4 corner pieces (for knight-move arrows), and empty squares (xx). This model enables detection and reconstruction of arrow annotations commonly used in chess analysis interfaces. The NSEW naming convention (North/South/East/West) indicates arrow orientation and direction.
**Training:**
- **Architecture**: SimpleCNN (156k parameters, same as pieces model)
- **Input**: 32×32px RGB square images
- **Data**: ~4.5M synthetic images from 55 board styles × arrow overlays (~3.14M train, ~672K val, ~672K test)
- **Augmentation**: Conservative augmentation with highlight overlays (25%), move overlays (50%), random crops, and minimal color jitter/noise. No horizontal flips to preserve arrow directionality
- **Optimizer**: AdamW (lr=0.0005, weight_decay=0.00005)
- **Training**: 20 epochs, batch size 128
**Performance:**
| Dataset               | Accuracy | F1-Score (Macro) |
| --------------------- | :------: | :--------------: |
| Test Data (synthetic) |  99.99%  |      99.99%      |
The arrows model is optimized for detecting directional annotations while maintaining spatial consistency across the board.
**Limitation:** Classification accuracy degrades when multiple arrow components overlap in a single square.
---
### 📐 Snap Model (`snap.safetensors`)
**Overview:**
The snap model classifies chess square images into 2 classes: centered ("ok") and off-centered ("bad") pieces. This model is designed for automated board analysis and piece positioning validation, helping ensure proper piece placement in digital chess interfaces and automated analysis systems.
**Training:**
- **Architecture**: SimpleCNN (156k parameters)
- **Input**: 32×32px RGB square images
- **Data**: ~1.4M synthetic images from centered and off-centered piece positions (~985,960 train, ~211,574 validation, ~210,466 test)
- **Augmentation**: Conservative augmentation with arrow overlays (50%), highlight overlays (20%), move overlays (50%), mouse overlays (80%), horizontal flips (50%), color jitter, and Gaussian noise. No rotation or geometric transformations to preserve centering semantics
- **Optimizer**: AdamW (weight_decay=0.001) with LR scheduler (warmup + cosine decay: 0→0.001→1e-5)
- **Training**: 200 epochs, batch size 64
**Performance:**
| Dataset | Accuracy | F1-Score (Macro) |
| --------------------- | :------: | :--------------: |
| Test Data (synthetic) | 99.93% | 99.93% |
The snap model is optimized for detecting piece centering issues while maintaining robustness to various board styles and visual conditions.
**Use Cases:**
- Automated board state validation
- Piece positioning quality control
- Chess interface usability testing
- Digital chess board quality assurance
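As an illustration of the validation use case, per-square snap predictions can be turned into a list of squares needing attention. A hypothetical sketch, assuming predictions arrive as "ok"/"bad" labels in a8→h1 reading order (the coordinate convention and helper are assumptions for illustration, not part of the chess-cv API):

```python
def off_center_squares(labels: list[str]) -> list[str]:
    """Return algebraic coordinates of squares flagged as off-centered."""
    assert len(labels) == 64, "expected one label per square"
    flagged = []
    for i, label in enumerate(labels):
        file = "abcdefgh"[i % 8]  # column a..h
        rank = 8 - i // 8         # row 8..1, reading top to bottom
        if label == "bad":
            flagged.append(f"{file}{rank}")
    return flagged

labels = ["ok"] * 64
labels[0] = "bad"   # a8
labels[63] = "bad"  # h1
print(off_center_squares(labels))  # ['a8', 'h1']
```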
## Training Your Own Model
To train or evaluate a model yourself:
```bash
git clone https://github.com/S1M0N38/chess-cv.git
cd chess-cv
uv sync --all-extras
# Generate training data for a specific model
chess-cv preprocessing pieces # or 'arrows' or 'snap'
# Train model
chess-cv train pieces # or 'arrows' or 'snap'
# Evaluate model
chess-cv test pieces # or 'arrows' or 'snap'
```
See the [Setup Guide](https://s1m0n38.github.io/chess-cv/setup/) and [Train and Evaluate](https://s1m0n38.github.io/chess-cv/train-and-eval/) for detailed instructions on data generation, training configuration, and evaluation.
## Limitations
- Requires precisely cropped 32×32 pixel square images (no board detection)
- Trained on synthetic data; may not generalize to real-world photos
- Not suitable for non-standard piece designs
- Optimized for Apple Silicon (slower on CPU)
For detailed documentation, architecture details, and advanced usage, see the [full documentation](https://s1m0n38.github.io/chess-cv/).
## Citation
```bibtex
@software{bertolotto2025chesscv,
author = {Bertolotto, Simone},
title = {{Chess CV}},
url = {https://github.com/S1M0N38/chess-cv},
year = {2025}
}
```
<div align="center">
**Repo:** [github.com/S1M0N38/chess-cv](https://github.com/S1M0N38/chess-cv) • **PyPI:** [pypi.org/project/chess-cv](https://pypi.org/project/chess-cv/)
</div>