Kevin-Wardakhan committed on
Commit fea238c · verified · 1 Parent(s): 06b432a

Chess Challenge submission by Kevin-Wardakhan

Files changed (7)
  1. README.md +26 -0
  2. config.json +20 -0
  3. model.safetensors +3 -0
  4. special_tokens_map.json +6 -0
  5. tokenizer.py +421 -0
  6. tokenizer_config.json +50 -0
  7. vocab.json +86 -0
README.md ADDED
@@ -0,0 +1,26 @@
+ ---
+ library_name: transformers
+ tags:
+ - chess
+ - llm-course
+ - chess-challenge
+ license: mit
+ ---
+
+ # chess-leg-v2
+
+ Chess model submitted to the LLM Course Chess Challenge.
+
+ ## Submission Info
+
+ - **Submitted by**: [Kevin-Wardakhan](https://huggingface.co/Kevin-Wardakhan)
+ - **Parameters**: 965,952
+ - **Organization**: LLM-course
+
+ ## Model Details
+
+ - **Architecture**: Chess Transformer (GPT-style)
+ - **Vocab size**: 84
+ - **Embedding dim**: 96
+ - **Layers**: 10
+ - **Heads**: 8
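Because the tokenizer is a custom class exposed through `auto_map` (see `tokenizer_config.json` further down), loading it with `AutoTokenizer` needs `trust_remote_code=True`. A minimal loading sketch; the repo id `LLM-course/chess-leg-v2` is an assumption based on the organization and model name above:

```python
# Sketch only: the repo id below is assumed from the card ("LLM-course" org, "chess-leg-v2" model).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "LLM-course/chess-leg-v2",   # assumed repo id
    trust_remote_code=True,      # required so auto_map can load tokenizer.ChessTokenizer
)

ids = tok("WPe2e4 BPe7e5")["input_ids"]
print(tok.convert_ids_to_tokens(ids))  # ['W', 'P', 'e2', 'e4', 'B', 'P', 'e7', 'e5']
```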
config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "architectures": [
+     "ChessForCausalLM"
+   ],
+   "bos_token_id": 1,
+   "dropout": 0.1,
+   "dtype": "float32",
+   "eos_token_id": 2,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "chess_transformer",
+   "n_ctx": 256,
+   "n_embd": 96,
+   "n_head": 8,
+   "n_inner": 288,
+   "n_layer": 10,
+   "pad_token_id": 0,
+   "tie_weights": true,
+   "transformers_version": "4.57.6",
+   "vocab_size": 84
+ }
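The parameter count reported in the model card (965,952) is consistent with these hyperparameters under a standard GPT-style layout. The sketch below is a back-of-the-envelope check, not the repo's modeling code (which is not part of this commit); it assumes learned positional embeddings over `n_ctx`, biased attention and MLP projections, two LayerNorms per block, a final LayerNorm, and a tied LM head.

```python
# Rough parameter count from config.json; the block structure is an assumed
# standard GPT-style layer, since the modeling code is not included in this commit.
n_embd, n_inner, n_layer = 96, 288, 10
n_ctx, vocab_size = 256, 84

embeddings = vocab_size * n_embd + n_ctx * n_embd                        # token + learned position tables
attn = (n_embd * 3 * n_embd + 3 * n_embd) + (n_embd * n_embd + n_embd)   # QKV + output projection, with biases
mlp = (n_embd * n_inner + n_inner) + (n_inner * n_embd + n_embd)         # up/down projections, with biases
per_layer = attn + mlp + 2 * 2 * n_embd                                  # plus two LayerNorms (weight + bias)
total = embeddings + n_layer * per_layer + 2 * n_embd                    # final LayerNorm; tied LM head adds nothing
print(total)  # 965952 -- matches the parameter count in the model card
```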
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39a81efaed9ce8e9ffc45c6cb1712a59305505223224b22e349643fe84f510e7
+ size 3874216
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token": "[BOS]",
+   "eos_token": "[EOS]",
+   "pad_token": "[PAD]",
+   "unk_token": "[UNK]"
+ }
tokenizer.py ADDED
@@ -0,0 +1,421 @@
+ """
+ Custom Chess Tokenizer for the Chess Challenge.
+
+ This tokenizer uses a STRUCTURED approach to tokenize chess moves, breaking down
+ each move into its components to help the model learn legal chess patterns.
+
+ The dataset format uses extended UCI notation:
+ - W/B prefix for White/Black
+ - Piece letter: P=Pawn, N=Knight, B=Bishop, R=Rook, Q=Queen, K=King
+ - Source and destination squares (e.g., e2e4)
+ - Special suffixes: (x)=capture, (+)=check, (+*)=checkmate, (o)/(O)=castling
+
+ Instead of treating each move as a single token (which creates thousands of tokens),
+ we tokenize the COMPONENTS:
+ - Color tokens: W, B
+ - Piece tokens: P, N, B, R, Q, K
+ - Square tokens: a1, a2, ..., h8 (64 squares)
+ - Suffix tokens: (x), (+), (+*), (o), (O), =Q, =R, =B, =N
+
+ This gives ~80 tokens total, helping the model learn:
+ 1. Valid squares on the board
+ 2. Which pieces can make which types of moves
+ 3. The structure of legal chess moves
+ """
+
+ from __future__ import annotations
+
+ import json
+ import os
+ import re
+ from pathlib import Path
+ from typing import Dict, List, Optional, Tuple
+
+ from transformers import PreTrainedTokenizer
+
+
+ class ChessTokenizer(PreTrainedTokenizer):
+     """
+     A structured tokenizer for chess moves using component-based tokenization.
+
+     Instead of treating each move as a single token, this tokenizer breaks moves
+     into their structural components (color, piece, from-square, to-square, suffix).
+     This smaller vocabulary helps the model learn valid chess patterns.
+
+     Vocabulary (~80 tokens):
+     - Special: [PAD], [BOS], [EOS], [UNK]
+     - Colors: W, B
+     - Pieces: P, N, B, R, Q, K
+     - Squares: a1-h8 (64 tokens)
+     - Suffixes: (x), (+), (+*), (o), (O), =Q, =R, =B, =N
+
+     Example:
+         >>> tokenizer = ChessTokenizer()
+         >>> tokens = tokenizer.tokenize("WPe2e4 BPe7e5")
+         >>> print(tokens)
+         ['W', 'P', 'e2', 'e4', 'B', 'P', 'e7', 'e5']
+     """
+
+     model_input_names = ["input_ids", "attention_mask"]
+     vocab_files_names = {"vocab_file": "vocab.json"}
+
+     # Special tokens
+     PAD_TOKEN = "[PAD]"
+     BOS_TOKEN = "[BOS]"
+     EOS_TOKEN = "[EOS]"
+     UNK_TOKEN = "[UNK]"
+
+     # Chess components
+     COLORS = ["W", "B"]
+     PIECES = ["P", "N", "B", "R", "Q", "K"]
+     FILES = ["a", "b", "c", "d", "e", "f", "g", "h"]
+     RANKS = ["1", "2", "3", "4", "5", "6", "7", "8"]
+     SQUARES = [f + r for f in ["a", "b", "c", "d", "e", "f", "g", "h"]
+                for r in ["1", "2", "3", "4", "5", "6", "7", "8"]] # a1, a2, ..., h8
+     SUFFIXES = ["(x)", "(+)", "(+*)", "(o)", "(O)", "=Q", "=R", "=B", "=N"]
+
+     # Regex pattern to parse extended UCI moves
+     # Format: [W|B][Piece][from_sq][to_sq][optional: =PromoPiece][optional: suffix]
+     MOVE_PATTERN = re.compile(
+         r'^([WB])([PNBRQK])([a-h][1-8])([a-h][1-8])(=[QRBN])?(\([xo+*O]+\))?$'
+     )
+
+     def __init__(
+         self,
+         vocab_file: Optional[str] = None,
+         vocab: Optional[Dict[str, int]] = None,
+         **kwargs,
+     ):
+         """
+         Initialize the chess tokenizer.
+
+         Args:
+             vocab_file: Path to a JSON file containing the vocabulary mapping.
+             vocab: Dictionary mapping tokens to IDs (alternative to vocab_file).
+             **kwargs: Additional arguments passed to PreTrainedTokenizer.
+         """
+         # Initialize special tokens
+         self._pad_token = self.PAD_TOKEN
+         self._bos_token = self.BOS_TOKEN
+         self._eos_token = self.EOS_TOKEN
+         self._unk_token = self.UNK_TOKEN
+
+         # Remove any duplicate special-token entries passed through kwargs
+         # to avoid "multiple values for keyword" errors when loading from disk.
+         kwargs.pop("pad_token", None)
+         kwargs.pop("bos_token", None)
+         kwargs.pop("eos_token", None)
+         kwargs.pop("unk_token", None)
+
+         # Load or create vocabulary
+         if vocab is not None:
+             self._vocab = vocab
+         elif vocab_file is not None and os.path.exists(vocab_file):
+             with open(vocab_file, "r", encoding="utf-8") as f:
+                 self._vocab = json.load(f)
+         else:
+             # Create the structured vocabulary
+             self._vocab = self._create_structured_vocab()
+
+         # Create reverse mapping
+         self._ids_to_tokens = {v: k for k, v in self._vocab.items()}
+
+         # Call parent init AFTER setting up vocab
+         super().__init__(
+             pad_token=self._pad_token,
+             bos_token=self._bos_token,
+             eos_token=self._eos_token,
+             unk_token=self._unk_token,
+             **kwargs,
+         )
+
+     def _create_structured_vocab(self) -> Dict[str, int]:
+         """
+         Create the structured vocabulary with all chess components.
+
+         This creates a fixed vocabulary of ~85 tokens covering all possible
+         chess move components.
+         """
+         tokens = []
+
+         # Special tokens first
+         tokens.extend([self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN])
+
+         # Colors
+         tokens.extend(self.COLORS)
+
+         # Pieces
+         tokens.extend(self.PIECES)
+
+         # Squares (64 tokens)
+         tokens.extend(self.SQUARES)
+
+         # Suffixes
+         tokens.extend(self.SUFFIXES)
+
+         # Build vocabulary
+         vocab = {token: idx for idx, token in enumerate(tokens)}
+         return vocab
+
+     def _create_default_vocab(self) -> Dict[str, int]:
+         """Alias for _create_structured_vocab for compatibility."""
+         return self._create_structured_vocab()
+
+     def _parse_move(self, move: str) -> List[str]:
+         """
+         Parse a single move into its component tokens.
+
+         Args:
+             move: A move in extended UCI format (e.g., "WPe2e4", "BNg8f6(x)").
+
+         Returns:
+             List of component tokens.
+         """
+         move = move.strip()
+         if not move:
+             return []
+
+         # Handle special tokens
+         if move in [self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN]:
+             return [move]
+
+         # Try to match the move pattern
+         match = self.MOVE_PATTERN.match(move)
+         if match:
+             color, piece, from_sq, to_sq, promotion, suffix = match.groups()
+             tokens = [color, piece, from_sq, to_sq]
+             if promotion:
+                 tokens.append(promotion)
+             if suffix:
+                 tokens.append(suffix)
+             return tokens
+
+         # If pattern doesn't match, try to extract what we can
+         # This handles edge cases and malformed moves gracefully
+         tokens = []
+         i = 0
+
+         # Color (W or B)
+         if i < len(move) and move[i] in self.COLORS:
+             tokens.append(move[i])
+             i += 1
+
+         # Piece (P, N, B, R, Q, K)
+         if i < len(move) and move[i] in self.PIECES:
+             tokens.append(move[i])
+             i += 1
+
+         # From square (e.g., e2)
+         if i + 1 < len(move) and move[i:i+2] in self.SQUARES:
+             tokens.append(move[i:i+2])
+             i += 2
+
+         # To square (e.g., e4)
+         if i + 1 < len(move) and move[i:i+2] in self.SQUARES:
+             tokens.append(move[i:i+2])
+             i += 2
+
+         # Promotion (e.g., =Q)
+         if i + 1 < len(move) and move[i:i+2] in self.SUFFIXES:
+             tokens.append(move[i:i+2])
+             i += 2
+
+         # Suffix (e.g., (x), (+), (+*), (o), (O))
+         remaining = move[i:]
+         if remaining in self.SUFFIXES:
+             tokens.append(remaining)
+         elif remaining:
+             # Try to find a matching suffix
+             for suffix in self.SUFFIXES:
+                 if remaining.startswith(suffix):
+                     tokens.append(suffix)
+                     break
+
+         # If we couldn't parse anything, return UNK
+         if not tokens:
+             return [self.UNK_TOKEN]
+
+         return tokens
+
+     @classmethod
+     def build_vocab_from_iterator(
+         cls,
+         iterator,
+         min_frequency: int = 1,
+     ) -> "ChessTokenizer":
+         """
+         Build a tokenizer (for compatibility - vocab is fixed).
+
+         The structured tokenizer has a fixed vocabulary, so this method
+         simply returns a new tokenizer instance.
+
+         Args:
+             iterator: An iterator yielding game strings (ignored for structured vocab).
+             min_frequency: Minimum frequency (ignored for structured vocab).
+
+         Returns:
+             A ChessTokenizer with the structured vocabulary.
+         """
+         return cls()
+
+     @classmethod
+     def build_vocab_from_dataset(
+         cls,
+         dataset_name: str = "dlouapre/lichess_2025-01_1M",
+         split: str = "train",
+         column: str = "text",
+         min_frequency: int = 500,
+         max_samples: Optional[int] = 100000,
+     ) -> "ChessTokenizer":
+         """
+         Build a tokenizer (for compatibility - vocab is fixed).
+
+         The structured tokenizer has a fixed vocabulary covering all valid
+         chess move components, so no dataset scanning is needed.
+
+         Args:
+             dataset_name: Name of the dataset (ignored).
+             split: Dataset split (ignored).
+             column: Column name (ignored).
+             min_frequency: Minimum frequency (ignored).
+             max_samples: Maximum samples (ignored).
+
+         Returns:
+             A ChessTokenizer with the structured vocabulary.
+         """
+         return cls()
+
+     @property
+     def vocab_size(self) -> int:
+         """Return the size of the vocabulary."""
+         return len(self._vocab)
+
+     def get_vocab(self) -> Dict[str, int]:
+         """Return the vocabulary as a dictionary."""
+         return dict(self._vocab)
+
+     def _tokenize(self, text: str) -> List[str]:
+         """
+         Tokenize a string of moves into component tokens.
+
+         Args:
+             text: A string of space-separated moves.
+
+         Returns:
+             List of component tokens.
+         """
+         tokens = []
+         moves = text.strip().split()
+
+         for move in moves:
+             move_tokens = self._parse_move(move)
+             tokens.extend(move_tokens)
+
+         return tokens
+
+     def _convert_token_to_id(self, token: str) -> int:
+         """Convert a token to its ID."""
+         return self._vocab.get(token, self._vocab.get(self.UNK_TOKEN, 0))
+
+     def _convert_id_to_token(self, index: int) -> str:
+         """Convert an ID to its token."""
+         return self._ids_to_tokens.get(index, self.UNK_TOKEN)
+
+     def convert_tokens_to_string(self, tokens: List[str]) -> str:
+         """
+         Convert a list of tokens back to a move string.
+
+         Reconstructs moves from component tokens by grouping them appropriately.
+         """
+         # Filter out special tokens
+         special = {self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN}
+         tokens = [t for t in tokens if t not in special]
+
+         if not tokens:
+             return ""
+
+         # Reconstruct moves from components
+         result = []
+         current_move = []
+
+         for token in tokens:
+             # Start of a new move (color token)
+             if token in self.COLORS:
+                 if current_move:
+                     result.append("".join(current_move))
+                 current_move = [token]
+             else:
+                 current_move.append(token)
+
+         # Don't forget the last move
+         if current_move:
+             result.append("".join(current_move))
+
+         return " ".join(result)
+
+     def save_vocabulary(
+         self,
+         save_directory: str,
+         filename_prefix: Optional[str] = None,
+     ) -> tuple:
+         """
+         Save the vocabulary to a JSON file.
+
+         Args:
+             save_directory: Directory to save the vocabulary.
+             filename_prefix: Optional prefix for the filename.
+
+         Returns:
+             Tuple containing the path to the saved vocabulary file.
+         """
+         if not os.path.isdir(save_directory):
+             os.makedirs(save_directory, exist_ok=True)
+
+         vocab_file = os.path.join(
+             save_directory,
+             (filename_prefix + "-" if filename_prefix else "") + "vocab.json",
+         )
+
+         with open(vocab_file, "w", encoding="utf-8") as f:
+             json.dump(self._vocab, f, ensure_ascii=False, indent=2)
+
+         return (vocab_file,)
+
+
+ def count_vocab_from_dataset(
+     dataset_name: str = "dlouapre/lichess_2025-01_1M",
+     split: str = "train",
+     column: str = "text",
+     max_samples: Optional[int] = 10000,
+ ) -> Dict[str, int]:
+     """
+     Count token frequencies in a dataset (useful for vocabulary analysis).
+
+     With the structured tokenizer, this counts component frequencies.
+
+     Args:
+         dataset_name: Name of the dataset on Hugging Face Hub.
+         split: Dataset split to use.
+         column: Column containing the game strings.
+         max_samples: Maximum number of samples to process.
+
+     Returns:
+         Dictionary mapping tokens to their frequencies.
+     """
+     from collections import Counter
+     from datasets import load_dataset
+
+     tokenizer = ChessTokenizer()
+
+     dataset = load_dataset(dataset_name, split=split)
+
+     if max_samples is not None:
+         dataset = dataset.select(range(min(max_samples, len(dataset))))
+
+     token_counts = Counter()
+
+     for example in dataset:
+         tokens = tokenizer._tokenize(example[column])
+         token_counts.update(tokens)
+
+     return dict(token_counts)
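A short usage sketch of the class defined above, instantiated directly from this file. The expected outputs follow from the regex-based move parsing and the fixed vocabulary; `convert_tokens_to_string` reassembles moves by starting a new move at each color token.

```python
# Usage sketch for ChessTokenizer as defined in tokenizer.py (this commit).
from tokenizer import ChessTokenizer

tok = ChessTokenizer()

tokens = tok.tokenize("WPe2e4 BPe7e5 WNg1f3")
print(tokens)
# ['W', 'P', 'e2', 'e4', 'B', 'P', 'e7', 'e5', 'W', 'N', 'g1', 'f3']

# Round-trip through ids and back to a move string.
ids = tok.convert_tokens_to_ids(tokens)
print(tok.convert_tokens_to_string(tok.convert_ids_to_tokens(ids)))
# WPe2e4 BPe7e5 WNg1f3
```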
tokenizer_config.json ADDED
@@ -0,0 +1,50 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "[BOS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "[EOS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "auto_map": {
+     "AutoTokenizer": [
+       "tokenizer.ChessTokenizer",
+       null
+     ]
+   },
+   "bos_token": "[BOS]",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "[EOS]",
+   "extra_special_tokens": {},
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "[PAD]",
+   "tokenizer_class": "ChessTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.json ADDED
@@ -0,0 +1,86 @@
+ {
+   "[PAD]": 0,
+   "[BOS]": 1,
+   "[EOS]": 2,
+   "[UNK]": 3,
+   "W": 4,
+   "B": 8,
+   "P": 6,
+   "N": 7,
+   "R": 9,
+   "Q": 10,
+   "K": 11,
+   "a1": 12,
+   "a2": 13,
+   "a3": 14,
+   "a4": 15,
+   "a5": 16,
+   "a6": 17,
+   "a7": 18,
+   "a8": 19,
+   "b1": 20,
+   "b2": 21,
+   "b3": 22,
+   "b4": 23,
+   "b5": 24,
+   "b6": 25,
+   "b7": 26,
+   "b8": 27,
+   "c1": 28,
+   "c2": 29,
+   "c3": 30,
+   "c4": 31,
+   "c5": 32,
+   "c6": 33,
+   "c7": 34,
+   "c8": 35,
+   "d1": 36,
+   "d2": 37,
+   "d3": 38,
+   "d4": 39,
+   "d5": 40,
+   "d6": 41,
+   "d7": 42,
+   "d8": 43,
+   "e1": 44,
+   "e2": 45,
+   "e3": 46,
+   "e4": 47,
+   "e5": 48,
+   "e6": 49,
+   "e7": 50,
+   "e8": 51,
+   "f1": 52,
+   "f2": 53,
+   "f3": 54,
+   "f4": 55,
+   "f5": 56,
+   "f6": 57,
+   "f7": 58,
+   "f8": 59,
+   "g1": 60,
+   "g2": 61,
+   "g3": 62,
+   "g4": 63,
+   "g5": 64,
+   "g6": 65,
+   "g7": 66,
+   "g8": 67,
+   "h1": 68,
+   "h2": 69,
+   "h3": 70,
+   "h4": 71,
+   "h5": 72,
+   "h6": 73,
+   "h7": 74,
+   "h8": 75,
+   "(x)": 76,
+   "(+)": 77,
+   "(+*)": 78,
+   "(o)": 79,
+   "(O)": 80,
+   "=Q": 81,
+   "=R": 82,
+   "=B": 83,
+   "=N": 84
+ }
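The committed table holds 84 entries: the token `B` is shared between the Black color and the Bishop piece, so id 5 is unused. A small sanity-check sketch, assuming `vocab.json` and `tokenizer.py` sit in the working directory; it compares the committed file against the tokenizer's built-in structured vocabulary.

```python
# Sanity-check sketch: compare the committed vocab.json with the vocabulary
# that ChessTokenizer builds by default (file paths are assumed local).
import json
from tokenizer import ChessTokenizer

with open("vocab.json", encoding="utf-8") as f:
    committed = json.load(f)

generated = ChessTokenizer().get_vocab()

print(len(committed))          # 84 entries ("B" covers both the Black color and the Bishop piece)
print(committed == generated)  # expected True if vocab.json was produced by this tokenizer code
```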