---
language: en
license: mit
tags:
- chess
- reinforcement-learning
- alphazero
- pytorch
library_name: pytorch
---


# Chess Bot Model - TinyPCN

A chess-playing neural network trained on expert games from the Lichess Elite Database.

## Model Description

This is a policy-value network inspired by AlphaZero, designed to evaluate chess positions and suggest moves.

### Architecture

- **Input**: 18-plane board representation (12 piece planes + 6 metadata planes)
- **Convolutional backbone**: 32 filters, 1 residual block; 9,611,202 parameters total
- **Policy head**: 4,672-dimensional output (one logit per encodable move)
- **Value head**: single tanh output in [-1, +1], evaluating the position from the side to move's perspective
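The layout above can be sketched as a minimal policy-value network. This is an illustrative reconstruction, not the shipped `model.py`: the internal layer structure (residual block shape, value-head width) is assumed, so parameter counts will not exactly match the released weights.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """One conv residual block with batch norm, AlphaZero-style."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # skip connection


class PolicyValueNet(nn.Module):
    """Illustrative net: 18 input planes, 32 filters, 1 residual block."""

    def __init__(self, board_channels=18, filters=32, policy_size=4672):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(board_channels, filters, 3, padding=1, bias=False),
            nn.BatchNorm2d(filters),
            nn.ReLU(),
        )
        self.res = ResidualBlock(filters)
        # Policy head: flatten the 8x8 feature map into move logits.
        # Almost all of the ~9.6M parameters live in this linear layer.
        self.policy = nn.Linear(filters * 8 * 8, policy_size)
        # Value head: scalar squashed to (-1, +1) by tanh.
        self.value = nn.Sequential(
            nn.Linear(filters * 8 * 8, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Tanh(),
        )

    def forward(self, x):
        h = self.res(self.stem(x)).flatten(1)
        return self.policy(h), self.value(h)
```

Note that with only one residual block, the policy head's fully connected layer dominates the parameter budget; deeper AlphaZero-style nets instead use convolutional policy heads to keep that cost down.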

### Training Data

- **Dataset**: Lichess Elite Database (games between players rated 2200+ Elo)
- **Positions trained**: 16,800,000
- **Epochs**: 10

### Performance

- **Final Policy Loss**: 2.8000
- **Final Value Loss**: 0.8500

### Usage

```python
import torch
import chess

from model import TinyPCN, encode_board

# Load model (map_location lets the weights load without a GPU)
model = TinyPCN(board_channels=18, policy_size=4672)
model.load_state_dict(torch.load("chess_model.pth", map_location="cpu"))
model.eval()

# Evaluate a position
board = chess.Board()  # or chess.Board("fen string")
board_tensor = encode_board(board).unsqueeze(0)

with torch.no_grad():
    policy_logits, value = model(board_tensor)

# Value interpretation:
# +1.0 = winning for current player
#  0.0 = drawn/equal position
# -1.0 = losing for current player

print(f"Position evaluation: {value.item():.4f}")
```
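The 4,672-dimensional policy covers every encodable move, legal or not, so in practice the illegal entries are masked out before taking a softmax. A minimal sketch of that masking step follows; it assumes you can build a boolean legal-move mask for the current position (e.g. from a move-to-index mapping, which this repository would provide in `model.py` — an assumption, not a documented API).

```python
import torch


def masked_policy(policy_logits: torch.Tensor, legal_mask: torch.Tensor) -> torch.Tensor:
    """Softmax over legal moves only.

    policy_logits: (policy_size,) raw network output
    legal_mask:    (policy_size,) bool, True where the move is legal
    """
    # Illegal moves get -inf logits, so softmax assigns them probability 0.
    masked = policy_logits.masked_fill(~legal_mask, float("-inf"))
    return torch.softmax(masked, dim=-1)


# Toy example with an 8-dim policy and 3 legal moves:
logits = torch.randn(8)
mask = torch.zeros(8, dtype=torch.bool)
mask[[0, 3, 5]] = True
probs = masked_policy(logits, mask)
best = probs.argmax().item()  # index of the most probable legal move
```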

### Model Files

- `chess_model.pth` - PyTorch model weights
- `model.py` - Model architecture and board encoding
- `mcts.py` - Monte Carlo Tree Search implementation
- `requirements.txt` - Python dependencies

### Limitations

- Trained on expert games only (no self-play yet)
- Lightweight architecture for educational purposes
- May not handle unusual openings or endgames well

### Training Details

- **Framework**: PyTorch
- **Optimizer**: Adam
- **Learning Rate**: 0.001
- **Batch Size**: 256
- **Loss Functions**: CrossEntropyLoss (policy) + MSELoss (value)
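The two loss terms listed above can be combined as in the sketch below. The equal weighting between policy and value loss is an assumption (the card does not state the weights), and `target_move` / `target_outcome` are hypothetical names for the expert move index and game result.

```python
import torch
import torch.nn as nn

policy_criterion = nn.CrossEntropyLoss()  # raw logits vs. expert move index
value_criterion = nn.MSELoss()            # tanh value vs. game outcome in {-1, 0, +1}


def training_loss(policy_logits, value, target_move, target_outcome):
    """Combined loss; equal weighting of the two terms is an assumption."""
    policy_loss = policy_criterion(policy_logits, target_move)
    value_loss = value_criterion(value.squeeze(-1), target_outcome)
    return policy_loss + value_loss


# Dummy batch of 4 positions:
logits = torch.randn(4, 4672)
value = torch.tanh(torch.randn(4, 1))
target_move = torch.randint(0, 4672, (4,))
target_outcome = torch.tensor([1.0, -1.0, 0.0, 1.0])
loss = training_loss(logits, value, target_move, target_outcome)
```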

### Authors

Created as part of an AlphaZero-style chess engine project.

### License

MIT License