---
language:
- en
datasets:
- Leon-LLM/Leon-Chess-Dataset-270k-All-Moves-BOS
---

# Chess Language Model

This model is a GPT-2-based language model trained on chess game sequences encoded in our [xLAN](https://github.zhaw.ch/schmila7/leon-llm#xlan-format) notation, with a focus on predicting legal chess moves and modeling game dynamics.
For more information, see the project's [GitHub repository](https://github.zhaw.ch/schmila7/leon-llm).

## Model Details

### Model Description

- **Developed by:** Schmid Lars, Maag Jerome
- **Model type:** GPT-2 adaptation for chess language understanding
- **Language(s) (NLP):** xLAN (Chess Notation)

### Model Sources

- **Repository:** [Leon LLM Chess Research](https://github.zhaw.ch/schmila7/leon-llm)

## Uses

### Direct Use

The model is intended for predicting chess moves, analyzing game positions, and studying chess strategies.

### Out-of-Scope Use

This model is not designed for general language understanding or tasks unrelated to chess.

## Bias, Risks, and Limitations

The model reflects the strategies and styles present in the training dataset and may not encompass all possible chess scenarios.

## How to Get Started with the Model

To get started with the model, use the notebooks provided on [GitHub](https://github.zhaw.ch/schmila7/leon-llm).

## Training Details

### Training Data

The model was trained on the "Leon-LLM/Leon-Chess-Dataset-270k-All-Moves-BOS" dataset, sourced from the Lichess database (September 2023).

### Training Procedure 

#### Preprocessing

During preprocessing, chess games from Lichess were converted from PGN into [xLAN](https://github.zhaw.ch/schmila7/leon-llm#xlan-format).
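
The core idea of such a conversion can be sketched in plain Python. This is an illustration only, not the project's actual converter: it assumes moves are already in a long-algebraic form with an explicit piece letter (e.g. `Pe2e4`) and splits each move into piece and square tokens, prepending a BOS token. The token names here are hypothetical; the real xLAN vocabulary is defined in the Leon-LLM repository.

```python
# Illustrative sketch only: the actual xLAN format and vocabulary are
# defined in the Leon-LLM repository. Here, each move in a long-algebraic
# form with an explicit piece letter (e.g. "Pe2e4") is split into
# piece / from-square / to-square tokens, with a BOS token per game.

def moves_to_tokens(moves):
    """Split long-algebraic moves into piece and square tokens."""
    tokens = ["<BOS>"]  # hypothetical BOS token name
    for move in moves:
        piece, frm, to = move[0], move[1:3], move[3:5]
        tokens.extend([piece, frm, to])
    return tokens

game = ["Pe2e4", "Pe7e5", "Ng1f3"]
print(moves_to_tokens(game))
# ['<BOS>', 'P', 'e2', 'e4', 'P', 'e7', 'e5', 'N', 'g1', 'f3']
```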

#### Training Hyperparameters

- Batch size: 25
- Epochs: 4
- Learning rate: 0.0001
- Trained with a BOS token
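
These hyperparameters could be expressed with Hugging Face `transformers` roughly as follows. This is a sketch, not the authors' actual training script (which lives in the Leon-LLM repository); the output directory name is hypothetical.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="leon-chess-gpt2",     # hypothetical output path
    per_device_train_batch_size=25,
    num_train_epochs=4,
    learning_rate=1e-4,
)
```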

## Evaluation

### Metrics

The model was evaluated on three metrics:

1. **Average Number of Correct Plies:** Measures the model's ability to simulate chess games, evaluating the average number of correctly generated plies in 100 games.

2. **Hard Position Accuracy:** Assesses the model's handling of 67 challenging chess positions, including unusual castling, pawn promotions, and checkmate scenarios. Success is measured by the model's ability to generate legal moves in these complex positions.

3. **Legal Piece Moves Accuracy:** Examines the model's proficiency in tracking the board state across various situations, including checks, pinned pieces, and pawn promotions, measuring how well the model maintains an accurate internal representation of the position.


### Results

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b81dff25b0493d515e317c/Onl5QLqVQuNg8n-LeZQpx.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b81dff25b0493d515e317c/Q7hCPdFZQskCysiFq3QSD.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b81dff25b0493d515e317c/FSkKSdtsUReT-LRXCecvP.png)

## Model Architecture

The model uses the standard GPT-2 configuration with the following changes:

- VOCAB_SIZE = 76
- N_POSITION = 512
- PAD_TOKEN_ID = 0
- EOS_TOKEN_ID = 74
- BOS_TOKEN_ID = 75
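
The listed changes correspond to a `GPT2Config` from Hugging Face `transformers` along these lines (a sketch; all other settings are assumed to keep the `GPT2Config` defaults):

```python
from transformers import GPT2Config

# Sketch of the modified GPT-2 configuration described above;
# settings not listed are assumed to keep the GPT2Config defaults.
config = GPT2Config(
    vocab_size=76,
    n_positions=512,
    pad_token_id=0,
    eos_token_id=74,
    bos_token_id=75,
)
```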