---
license: cc-by-nc-4.0
---

# **GChess**

### **Model Description:**

The GChess model is a deep neural network designed specifically for the game of chess. Its architecture is heavily inspired by the principles of AlphaZero, using a single neural network to simultaneously predict the optimal move and evaluate the position.

### **Architecture Details:**

The core of the network is a Deep Residual Network (ResNet), a structure well-suited to processing the spatial data of an 8x8 chessboard.

### **Torso:**

The network employs a torso composed of 20 Residual Blocks. Each block contains convolutional layers with skip connections, allowing the network to learn deep, hierarchical features while keeping training stable.
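To make the skip-connection idea concrete, here is a minimal numpy sketch of one residual block's data flow. This is an illustration only, not the model's actual implementation: batch normalization is omitted, the channel count is reduced from 512 to 8, and the weights are random.

```python
import numpy as np

def conv3x3(x, w):
    """Naive 3x3 'same' convolution over a (C, 8, 8) board tensor."""
    c_out = w.shape[0]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))        # zero-pad the 8x8 board
    out = np.zeros((c_out, 8, 8), dtype=x.dtype)
    for i in range(8):
        for j in range(8):
            patch = xp[:, i:i + 3, j:j + 3]          # (C_in, 3, 3) window
            out[:, i, j] = np.tensordot(w, patch, axes=3)
    return out

def residual_block(x, w1, w2):
    h = np.maximum(conv3x3(x, w1), 0.0)              # conv + ReLU
    # The identity "skip connection": the input is added back before the
    # final ReLU, letting gradients bypass the convolutions entirely.
    return np.maximum(conv3x3(h, w2) + x, 0.0)

rng = np.random.default_rng(0)
c = 8                                                # 512 in the real model
x = rng.standard_normal((c, 8, 8)).astype(np.float32)
w1 = (rng.standard_normal((c, c, 3, 3)) * 0.1).astype(np.float32)
w2 = (rng.standard_normal((c, c, 3, 3)) * 0.1).astype(np.float32)
y = residual_block(x, w1, w2)
```

With zero weights the block reduces to `relu(x)`, which shows why the skip path keeps a 20-block stack trainable: the identity signal always gets through.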

### **Feature Processing:**

The network uses a wide representation throughout, with 512 filters in its main convolutional layers.

### **Input Representation:**

The current board state and history are encoded into a multi-plane tensor with 128 input channels, typically covering piece locations, the side to move, castling rights, and repetition history, a common input format for state-of-the-art chess AI.
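As a sketch of how such planes are built, the following encodes just the piece-placement and side-to-move fields of a FEN string into 13 of the channels (12 one-hot piece planes plus one side-to-move plane). The plane layout here is an assumption for illustration; the model's actual 128-channel encoder is not shown in this README.

```python
import numpy as np

# Plane order assumed for this sketch: white P N B R Q K, then black p n b r q k.
PIECES = "PNBRQKpnbrqk"

def fen_to_planes(fen: str) -> np.ndarray:
    placement, side = fen.split()[0], fen.split()[1]
    planes = np.zeros((13, 8, 8), dtype=np.float32)
    for rank, row in enumerate(placement.split("/")):
        file = 0
        for ch in row:
            if ch.isdigit():
                file += int(ch)          # digit = run of empty squares
            else:
                planes[PIECES.index(ch), rank, file] = 1.0
                file += 1
    if side == "w":
        planes[12, :, :] = 1.0           # constant side-to-move plane
    return planes

planes = fen_to_planes("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
```

The remaining channels in a 128-plane format typically repeat this piece layout for past positions and add constant planes for castling rights and repetition counts.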

### **Dual Output Heads:**

The shared ResNet torso branches into two specialized heads:

### **Policy Head (p_logits):**

Predicts a probability distribution over the 4672 possible moves (actions). This output guides the Monte Carlo Tree Search (MCTS).
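The figure 4672 matches the AlphaZero move encoding (an assumption here, since the README does not spell out the encoding): 73 move "planes" for each of the 64 origin squares.

```python
# Decomposition of the assumed AlphaZero-style action space.
queen_moves = 8 * 7        # 8 directions x up to 7 squares
knight_moves = 8           # the 8 knight jumps
underpromotions = 3 * 3    # promote to N, B, or R x 3 pawn directions
planes_per_square = queen_moves + knight_moves + underpromotions  # 73
action_space = 64 * planes_per_square                             # 4672
```

Queen-style promotions are folded into the queen-move planes, which is why only underpromotions need their own entries.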

### **Value Head (v):**

Outputs a single scalar value between -1.0 (Black is winning) and +1.0 (White is winning), representing the network's prediction of the final game outcome from the current position.
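A shape-level illustration of the two heads, using numpy and made-up weights (not the model's code): the policy head produces a softmax distribution over 4672 moves, while the value head squashes a scalar through tanh into (-1, 1).

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pooled torso features; the size is illustrative only.
features = rng.standard_normal(512).astype(np.float32)

# Hypothetical head weights, for shape illustration only.
W_policy = (rng.standard_normal((4672, 512)) * 0.01).astype(np.float32)
W_value = (rng.standard_normal((1, 512)) * 0.01).astype(np.float32)

p_logits = W_policy @ features                 # raw scores over 4672 moves
policy = np.exp(p_logits - p_logits.max())
policy /= policy.sum()                         # softmax -> probabilities
v = float(np.tanh(W_value @ features)[0])      # scalar in (-1, 1)
```

MCTS consumes both outputs: `policy` sets the search priors and `v` replaces a random rollout as the leaf evaluation.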

### **Training:**

The model is trained on a small dataset of high-quality PGN games: roughly 50,000 games over about 50 hours on an RTX 4060. Its current playing strength is estimated at around 1000 Elo. Training uses the PyTorch framework with a OneCycleLR learning rate scheduler for accelerated convergence and a large batch size of 1024.
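For readers unfamiliar with the scheduler, here is a pure-Python sketch of the learning-rate shape that `torch.optim.lr_scheduler.OneCycleLR` implements: a warmup ramp to `max_lr`, then cosine annealing down to a tiny final rate. The hyperparameters mirror PyTorch's defaults but are illustrative, not the author's actual settings.

```python
import math

def one_cycle_lr(step, total_steps, max_lr=1e-3, pct_start=0.3,
                 div_factor=25.0, final_div_factor=1e4):
    """One-cycle schedule: linear warmup, then cosine decay."""
    initial_lr = max_lr / div_factor          # starting rate
    min_lr = initial_lr / final_div_factor    # final rate
    warmup_steps = int(total_steps * pct_start)
    if step < warmup_steps:
        t = step / warmup_steps               # linear ramp up to max_lr
        return initial_lr + (max_lr - initial_lr) * t
    t = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + (max_lr - min_lr) * (1 + math.cos(math.pi * t)) / 2

lrs = [one_cycle_lr(s, 1000) for s in range(1000)]
```

The single large peak lets training take aggressive steps mid-run while the long cosine tail settles the weights, which is why the schedule tends to converge faster than a flat rate.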

### **Usage:**

```python
# Define Input State (FEN)
# Example: The initial position of a game
fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

# ... (model loading, encoding, and inference steps go here) ...

print(f"Move Probability: {best_probability:.4f}")
print(f"Position Evaluation (Value): {expected_value:.4f}")
print("\nInterpretation: Value close to +1.0 means White is winning, -1.0 means Black is winning.")
```

Developed by PENEAUX Benjamin