VZ22 committed on
Commit 5de41cb · verified · 1 Parent(s): df3c905

Chess Challenge submission by VZ22

Files changed (7)
  1. README.md +26 -0
  2. config.json +20 -0
  3. model.safetensors +3 -0
  4. special_tokens_map.json +6 -0
  5. tokenizer.py +748 -0
  6. tokenizer_config.json +50 -0
  7. vocab.json +1182 -0
README.md ADDED
@@ -0,0 +1,26 @@
+ ---
+ library_name: transformers
+ tags:
+ - chess
+ - llm-course
+ - chess-challenge
+ license: mit
+ ---
+
+ # chess-vz-token-vanilla
+
+ Chess model submitted to the LLM Course Chess Challenge.
+
+ ## Submission Info
+
+ - **Submitted by**: [VZ22](https://huggingface.co/VZ22)
+ - **Parameters**: 845,568
+ - **Organization**: LLM-course
+
+ ## Model Details
+
+ - **Architecture**: Chess Transformer (GPT-style)
+ - **Vocab size**: 1180
+ - **Embedding dim**: 128
+ - **Layers**: 4
+ - **Heads**: 4
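The model details above imply a tight context budget: with `n_ctx: 256` (see config.json) and the component tokenizer in tokenizer.py emitting four tokens per plain move (color, piece, source square, destination square), a game has to fit in roughly 60 half-moves. A quick sketch of that arithmetic; the 4-tokens-per-move figure is my assumption for moves without capture/check suffixes, and the single reserved slot for `[BOS]` is likewise an assumption:

```python
# Back-of-the-envelope context budget for this submission.
N_CTX = 256          # context length, from config.json
TOKENS_PER_MOVE = 4  # [W]/[B] + piece + source square + destination square

# Reserve one slot for [BOS]; each half-move costs at least 4 tokens
# (suffixes like [x] or [+] add one more).
max_plies = (N_CTX - 1) // TOKENS_PER_MOVE
print(max_plies)  # 63 half-moves, i.e. roughly 30 full moves
```

Suffix tokens shorten this further, so in practice long games get truncated before the endgame.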
config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "architectures": [
+     "ChessForCausalLM"
+   ],
+   "bos_token_id": 1,
+   "dropout": 0.1,
+   "dtype": "float32",
+   "eos_token_id": 2,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "chess_transformer",
+   "n_ctx": 256,
+   "n_embd": 128,
+   "n_head": 4,
+   "n_inner": 384,
+   "n_layer": 4,
+   "pad_token_id": 0,
+   "tie_weights": true,
+   "transformers_version": "4.57.6",
+   "vocab_size": 1180
+ }
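The README's 845,568 parameter figure is consistent with this config under a standard GPT-2-style block layout. The layout details here are my assumptions (fused QKV projection with biases, learned position embeddings, a final LayerNorm, and the LM head tied to the token embedding per `"tie_weights": true`); the actual `ChessForCausalLM` implementation is not part of this commit:

```python
# Reproduce the README's parameter count from config.json, assuming a
# GPT-2-style block (fused QKV + bias, learned positions, tied LM head).
n_vocab, n_embd, n_ctx, n_layer, n_inner = 1180, 128, 256, 4, 384

emb = n_vocab * n_embd + n_ctx * n_embd  # token + position embeddings
ln = 2 * n_embd                          # LayerNorm weight + bias
attn = (n_embd * 3 * n_embd + 3 * n_embd) + (n_embd * n_embd + n_embd)
mlp = (n_embd * n_inner + n_inner) + (n_inner * n_embd + n_embd)
block = ln + attn + ln + mlp             # pre-attn LN, attention, pre-MLP LN, MLP
total = emb + n_layer * block + ln       # plus final LayerNorm; LM head is tied

print(total)  # 845568 -- matches the README's parameter count
```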
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc960badf748a7292f1320de89d264703f5f68498d731268b4659e912581f4e9
+ size 3386672
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token": "[BOS]",
+   "eos_token": "[EOS]",
+   "pad_token": "[PAD]",
+   "unk_token": "[UNK]"
+ }
tokenizer.py ADDED
@@ -0,0 +1,748 @@
+ """
+ Custom Chess Tokenizer for the Chess Challenge.
+
+ This tokenizer treats each move as a single token using the extended UCI notation
+ from the Lichess dataset (e.g., WPe2e4, BNg8f6).
+
+ The dataset format uses:
+ - W/B prefix for White/Black
+ - Piece letter: P=Pawn, N=Knight, B=Bishop, R=Rook, Q=Queen, K=King
+ - Source and destination squares (e.g., e2e4)
+ - Special suffixes: (x)=capture, (+)=check, (+*)=checkmate, (o)/(O)=castling
+ """
+
+ from __future__ import annotations
+
+ import json
+ import os
+ from typing import Dict, List, Optional
+
+ from transformers import PreTrainedTokenizer
+
+
+ class ChessTokenizerOld(PreTrainedTokenizer):
+     """
+     A custom tokenizer for chess moves using extended UCI notation.
+
+     This tokenizer maps each possible chess move to a unique token ID.
+     The vocabulary is built from the training dataset to ensure all moves
+     encountered during training have a corresponding token.
+
+     Example:
+         >>> tokenizer = ChessTokenizerOld()
+         >>> tokenizer.encode("WPe2e4 BPe7e5")
+         [1, 42, 87, 2]  # [BOS, WPe2e4, BPe7e5, EOS] (illustrative IDs)
+     """
+
+     model_input_names = ["input_ids", "attention_mask"]
+     vocab_files_names = {"vocab_file": "vocab.json"}
+
+     # Special tokens
+     PAD_TOKEN = "[PAD]"
+     BOS_TOKEN = "[BOS]"
+     EOS_TOKEN = "[EOS]"
+     UNK_TOKEN = "[UNK]"
+
+     def __init__(
+         self,
+         vocab_file: Optional[str] = None,
+         vocab: Optional[Dict[str, int]] = None,
+         **kwargs,
+     ):
+         """
+         Initialize the chess tokenizer.
+
+         Args:
+             vocab_file: Path to a JSON file containing the vocabulary mapping.
+             vocab: Dictionary mapping tokens to IDs (alternative to vocab_file).
+             **kwargs: Additional arguments passed to PreTrainedTokenizer.
+         """
+         # Initialize special tokens
+         self._pad_token = self.PAD_TOKEN
+         self._bos_token = self.BOS_TOKEN
+         self._eos_token = self.EOS_TOKEN
+         self._unk_token = self.UNK_TOKEN
+
+         # Remove any duplicate special-token entries passed through kwargs
+         # to avoid "multiple values for keyword" errors when loading from disk.
+         kwargs.pop("pad_token", None)
+         kwargs.pop("bos_token", None)
+         kwargs.pop("eos_token", None)
+         kwargs.pop("unk_token", None)
+
+         # Load or create vocabulary
+         if vocab is not None:
+             self._vocab = vocab
+         elif vocab_file is not None and os.path.exists(vocab_file):
+             with open(vocab_file, "r", encoding="utf-8") as f:
+                 self._vocab = json.load(f)
+         else:
+             # Create a minimal vocabulary with just special tokens.
+             # The full vocabulary should be built from the dataset.
+             self._vocab = self._create_default_vocab()
+
+         # Create reverse mapping
+         self._ids_to_tokens = {v: k for k, v in self._vocab.items()}
+
+         # Call parent init AFTER setting up vocab
+         super().__init__(
+             pad_token=self._pad_token,
+             bos_token=self._bos_token,
+             eos_token=self._eos_token,
+             unk_token=self._unk_token,
+             **kwargs,
+         )
+
+     def _create_default_vocab(self) -> Dict[str, int]:
+         """
+         Create a minimal default vocabulary with just the special tokens.
+
+         For the full vocabulary, use `build_vocab_from_dataset()`;
+         this minimal vocab is only a placeholder.
+         """
+         special_tokens = [self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN]
+         return {token: idx for idx, token in enumerate(special_tokens)}
+
+     @classmethod
+     def build_vocab_from_iterator(
+         cls,
+         iterator,
+         min_frequency: int = 1,
+     ) -> "ChessTokenizerOld":
+         """
+         Build a tokenizer vocabulary from an iterator of game strings.
+
+         Args:
+             iterator: An iterator yielding game strings (space-separated moves).
+             min_frequency: Minimum frequency for a token to be included.
+
+         Returns:
+             A ChessTokenizerOld with the built vocabulary.
+         """
+         from collections import Counter
+
+         token_counts = Counter()
+
+         for game in iterator:
+             token_counts.update(game.strip().split())
+
+         # Filter by frequency, then sort for reproducibility
+         tokens = sorted(
+             token for token, count in token_counts.items()
+             if count >= min_frequency
+         )
+
+         # Build vocabulary: special tokens first, then moves
+         special_tokens = [cls.PAD_TOKEN, cls.BOS_TOKEN, cls.EOS_TOKEN, cls.UNK_TOKEN]
+         vocab = {token: idx for idx, token in enumerate(special_tokens + tokens)}
+
+         return cls(vocab=vocab)
+
+     @classmethod
+     def build_vocab_from_dataset(
+         cls,
+         dataset_name: str = "dlouapre/lichess_2025-01_1M",
+         split: str = "train",
+         column: str = "text",
+         min_frequency: int = 500,
+         max_samples: Optional[int] = 100000,
+     ) -> "ChessTokenizerOld":
+         """
+         Build a tokenizer vocabulary from a Hugging Face dataset.
+
+         Args:
+             dataset_name: Name of the dataset on Hugging Face Hub.
+             split: Dataset split to use.
+             column: Column containing the game strings.
+             min_frequency: Minimum frequency for a token to be included (default: 500).
+             max_samples: Maximum number of samples to process (default: 100k).
+
+         Returns:
+             A ChessTokenizerOld with the built vocabulary.
+         """
+         from datasets import load_dataset
+
+         dataset = load_dataset(dataset_name, split=split)
+
+         if max_samples is not None:
+             dataset = dataset.select(range(min(max_samples, len(dataset))))
+
+         def game_iterator():
+             for example in dataset:
+                 yield example[column]
+
+         return cls.build_vocab_from_iterator(game_iterator(), min_frequency=min_frequency)
+
+     @property
+     def vocab_size(self) -> int:
+         """Return the size of the vocabulary."""
+         return len(self._vocab)
+
+     def get_vocab(self) -> Dict[str, int]:
+         """Return the vocabulary as a dictionary."""
+         return dict(self._vocab)
+
+     def _tokenize(self, text: str) -> List[str]:
+         """Tokenize a string of space-separated moves into a list of move tokens."""
+         return text.strip().split()
+
+     def _convert_token_to_id(self, token: str) -> int:
+         """Convert a token to its ID."""
+         return self._vocab.get(token, self._vocab.get(self.UNK_TOKEN, 0))
+
+     def _convert_id_to_token(self, index: int) -> str:
+         """Convert an ID to its token."""
+         return self._ids_to_tokens.get(index, self.UNK_TOKEN)
+
+     def convert_tokens_to_string(self, tokens: List[str]) -> str:
+         """Convert a list of tokens back to a string, dropping special tokens."""
+         special = {self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN}
+         return " ".join(t for t in tokens if t not in special)
+
+     def save_vocabulary(
+         self,
+         save_directory: str,
+         filename_prefix: Optional[str] = None,
+     ) -> tuple:
+         """
+         Save the vocabulary to a JSON file.
+
+         Args:
+             save_directory: Directory to save the vocabulary.
+             filename_prefix: Optional prefix for the filename.
+
+         Returns:
+             Tuple containing the path to the saved vocabulary file.
+         """
+         os.makedirs(save_directory, exist_ok=True)
+
+         vocab_file = os.path.join(
+             save_directory,
+             (filename_prefix + "-" if filename_prefix else "") + "vocab.json",
+         )
+
+         with open(vocab_file, "w", encoding="utf-8") as f:
+             json.dump(self._vocab, f, ensure_ascii=False, indent=2)
+
+         return (vocab_file,)
+
+
+ def count_vocab_from_dataset(
+     dataset_name: str = "dlouapre/lichess_2025-01_1M",
+     split: str = "train",
+     column: str = "text",
+     max_samples: Optional[int] = 10000,
+ ) -> Dict[str, int]:
+     """
+     Count token frequencies in a dataset (useful for vocabulary analysis).
+
+     Args:
+         dataset_name: Name of the dataset on Hugging Face Hub.
+         split: Dataset split to use.
+         column: Column containing the game strings.
+         max_samples: Maximum number of samples to process.
+
+     Returns:
+         Dictionary mapping tokens to their frequencies.
+     """
+     from collections import Counter
+     from datasets import load_dataset
+
+     dataset = load_dataset(dataset_name, split=split)
+
+     if max_samples is not None:
+         dataset = dataset.select(range(min(max_samples, len(dataset))))
+
+     token_counts = Counter()
+
+     for example in dataset:
+         token_counts.update(example[column].strip().split())
+
+     return dict(token_counts)
+
+
+ class ChessTokenizer(PreTrainedTokenizer):
+     """
+     A chess tokenizer that decomposes moves into components.
+
+     Instead of treating each move as a single token (a 1600+ entry vocabulary),
+     this tokenizer breaks moves down into smaller, reusable components:
+     - Color (White/Black)
+     - Piece type (Pawn, Knight, Bishop, Rook, Queen, King)
+     - Source square (a1-h8)
+     - Destination square (a1-h8)
+     - Special notation (capture, check, checkmate, castling)
+
+     This compositional approach keeps the vocabulary around 1200 tokens
+     while maintaining full expressiveness.
+
+     Example:
+         >>> tokenizer = ChessTokenizer()
+         >>> # "WPe2e4" becomes tokens for [White, Pawn, e2, e4]
+         >>> tokenizer.encode("WPe2e4 BPe7e5")
+         [1, 4, 6, 45, 47, 8, 6, 50, 48, 2]  # [BOS, W, P, e2, e4, B, P, e7, e5, EOS] (illustrative)
+     """
+
+     model_input_names = ["input_ids", "attention_mask"]
+     vocab_files_names = {"vocab_file": "vocab.json"}
+
+     # Special tokens
+     PAD_TOKEN = "[PAD]"
+     BOS_TOKEN = "[BOS]"
+     EOS_TOKEN = "[EOS]"
+     UNK_TOKEN = "[UNK]"
+
+     # Component tokens - these are fixed.
+     # NOTE: "[B]" is used for both the Black color and the Bishop piece, so the
+     # two collide: the piece entry overwrites the color entry and ID 5 is left
+     # unused (this matches the shipped vocab.json, where "[B]" maps to 8).
+     # Kept as-is for compatibility with the released checkpoint.
+     COLOR_TOKENS = ["[W]", "[B]"]  # White, Black
+     PIECE_TOKENS = ["[P]", "[N]", "[B]", "[R]", "[Q]", "[K]"]  # Pawn, Knight, Bishop, Rook, Queen, King
+     SQUARE_TOKENS = [f"[{file}{rank}]" for file in "abcdefgh" for rank in "12345678"]  # 64 squares
+     SPECIAL_TOKENS_MOVE = [
+         "[x]",   # Capture
+         "[+]",   # Check
+         "[#+]",  # Checkmate
+         "[o]",   # Kingside castling (short)
+         "[O]",   # Queenside castling (long)
+     ]
+
+     def __init__(
+         self,
+         vocab_file: Optional[str] = None,
+         vocab: Optional[Dict[str, int]] = None,
+         **kwargs,
+     ):
+         """
+         Initialize the component-based chess tokenizer.
+
+         Args:
+             vocab_file: Path to a JSON file containing the vocabulary mapping.
+             vocab: Dictionary mapping tokens to IDs (alternative to vocab_file).
+             **kwargs: Additional arguments passed to PreTrainedTokenizer.
+         """
+         # Initialize special tokens
+         self._pad_token = self.PAD_TOKEN
+         self._bos_token = self.BOS_TOKEN
+         self._eos_token = self.EOS_TOKEN
+         self._unk_token = self.UNK_TOKEN
+
+         # Remove any duplicate special-token entries passed through kwargs
+         kwargs.pop("pad_token", None)
+         kwargs.pop("bos_token", None)
+         kwargs.pop("eos_token", None)
+         kwargs.pop("unk_token", None)
+
+         # Load or create vocabulary
+         if vocab is not None:
+             self._vocab = vocab
+         elif vocab_file is not None and os.path.exists(vocab_file):
+             with open(vocab_file, "r", encoding="utf-8") as f:
+                 self._vocab = json.load(f)
+         else:
+             self._vocab = self._create_component_vocab()
+
+         # Create reverse mapping
+         self._ids_to_tokens = {v: k for k, v in self._vocab.items()}
+
+         # Call parent init AFTER setting up vocab
+         super().__init__(
+             pad_token=self._pad_token,
+             bos_token=self._bos_token,
+             eos_token=self._eos_token,
+             unk_token=self._unk_token,
+             **kwargs,
+         )
+
+     def _create_component_vocab(self) -> Dict[str, int]:
+         """
+         Create a vocabulary from pre-defined components.
+
+         Structure:
+         - Special tokens (4)
+         - Color tokens (2)
+         - Piece tokens (6, one of which collides with the Black color token)
+         - Square tokens (64)
+         - Move notation tokens (5)
+
+         That gives 80 unique base tokens for complete coverage; common move
+         patterns are then added to reach the target vocab size of 1180.
+         """
+         vocab = {}
+         idx = 0
+
+         # Special tokens
+         for token in [self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN]:
+             vocab[token] = idx
+             idx += 1
+
+         # Color tokens
+         for token in self.COLOR_TOKENS:
+             vocab[token] = idx
+             idx += 1
+
+         # Piece tokens ("[B]" overwrites the Black color entry here)
+         for token in self.PIECE_TOKENS:
+             vocab[token] = idx
+             idx += 1
+
+         # Square tokens
+         for token in self.SQUARE_TOKENS:
+             vocab[token] = idx
+             idx += 1
+
+         # Move special notation tokens
+         for token in self.SPECIAL_TOKENS_MOVE:
+             vocab[token] = idx
+             idx += 1
+
+         # Add common move patterns for efficiency: frequent square-pair
+         # sequences get dedicated IDs, filling the vocab to the target size.
+         for pattern in self._get_common_move_patterns():
+             if pattern not in vocab:
+                 vocab[pattern] = idx
+                 idx += 1
+
+         return vocab
+
+     def _get_common_move_patterns(self) -> List[str]:
+         """
+         Generate common move patterns to populate the vocabulary.
+
+         These are frequently occurring square pairs that can be pre-tokenized
+         for efficiency while keeping the total vocabulary manageable.
+         """
+         patterns = []
+
+         # Square pairs within two files and two ranks of each other
+         # (e.g., "e2-e4"); longer jumps are excluded to avoid bloat.
+         for file1 in "abcdefgh":
+             for rank1 in "12345678":
+                 for file2 in "abcdefgh":
+                     for rank2 in "12345678":
+                         if abs(ord(file1) - ord(file2)) <= 2 and abs(int(rank1) - int(rank2)) <= 2:
+                             patterns.append(f"[{file1}{rank1}-{file2}{rank2}]")
+
+         return patterns[:1100]  # Cap at 1100 patterns to keep the total vocab at 1180
+
+     def _parse_move(self, move: str) -> List[str]:
+         """
+         Parse a move string into components.
+
+         Examples:
+             "WPe2e4"  -> ["[W]", "[P]", "[e2]", "[e4]"]
+             "BNg8f6x" -> ["[B]", "[N]", "[g8]", "[f6]", "[x]"]
+             "WKe1g1o" -> ["[W]", "[K]", "[e1]", "[g1]", "[o]"]
+
+         Args:
+             move: A move string in extended UCI format.
+
+         Returns:
+             List of component tokens.
+         """
+         if not move or len(move) < 4:
+             return [self.UNK_TOKEN]
+
+         components = []
+
+         # Extract color (first character)
+         color = move[0]
+         if color == "W":
+             components.append("[W]")
+         elif color == "B":
+             components.append("[B]")
+         else:
+             return [self.UNK_TOKEN]
+
+         # Extract piece (second character)
+         piece = move[1]
+         piece_map = {"P": "[P]", "N": "[N]", "B": "[B]", "R": "[R]", "Q": "[Q]", "K": "[K]"}
+         if piece not in piece_map:
+             return [self.UNK_TOKEN]
+         components.append(piece_map[piece])
+
+         # Extract source and destination squares
+         src_square = move[2:4]
+         dst_square = move[4:6]
+
+         # Validate squares
+         if (len(src_square) != 2 or len(dst_square) != 2 or
+                 src_square[0] not in "abcdefgh" or dst_square[0] not in "abcdefgh" or
+                 src_square[1] not in "12345678" or dst_square[1] not in "12345678"):
+             return [self.UNK_TOKEN]
+
+         components.append(f"[{src_square}]")
+         components.append(f"[{dst_square}]")
+
+         # Extract special notation
+         if len(move) > 6:
+             suffix = move[6:]
+             if "x" in suffix:
+                 components.append("[x]")
+             if "+*" in suffix:
+                 components.append("[#+]")
+             elif "+" in suffix:
+                 components.append("[+]")
+             if "o" in suffix.lower():
+                 if "O" in move:
+                     components.append("[O]")  # Queenside castling
+                 else:
+                     components.append("[o]")  # Kingside castling
+
+         return components
+
+     def _tokenize(self, text: str) -> List[str]:
+         """
+         Tokenize a string of space-separated moves into component tokens.
+
+         Args:
+             text: A string of space-separated moves.
+
+         Returns:
+             List of component tokens.
+         """
+         tokens = []
+         for move in text.strip().split():
+             tokens.extend(self._parse_move(move))
+         return tokens
+
+     def _convert_token_to_id(self, token: str) -> int:
+         """Convert a token to its ID."""
+         return self._vocab.get(token, self._vocab.get(self.UNK_TOKEN, 0))
+
+     def _convert_id_to_token(self, index: int) -> str:
+         """Convert an ID to its token."""
+         return self._ids_to_tokens.get(index, self.UNK_TOKEN)
+
+     def convert_tokens_to_string(self, tokens: List[str]) -> str:
+         """Convert a list of tokens back to a string representation."""
+         # Filter out special tokens and strip brackets for cleaner output
+         cleaned = []
+         for t in tokens:
+             if t not in {self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN}:
+                 t = t.strip("[]")
+                 if t:
+                     cleaned.append(t)
+         return " ".join(cleaned)
+
+     def save_vocabulary(
+         self,
+         save_directory: str,
+         filename_prefix: Optional[str] = None,
+     ) -> tuple:
+         """
+         Save the vocabulary to a JSON file.
+
+         Args:
+             save_directory: Directory to save the vocabulary.
+             filename_prefix: Optional prefix for the filename.
+
+         Returns:
+             Tuple containing the path to the saved vocabulary file.
+         """
+         os.makedirs(save_directory, exist_ok=True)
+
+         vocab_file = os.path.join(
+             save_directory,
+             (filename_prefix + "-" if filename_prefix else "") + "vocab.json",
+         )
+
+         with open(vocab_file, "w", encoding="utf-8") as f:
+             json.dump(self._vocab, f, ensure_ascii=False, indent=2)
+
+         return (vocab_file,)
+
+     @classmethod
+     def build_vocab_from_iterator(
+         cls,
+         iterator,
+         min_frequency: int = 1,
+     ) -> "ChessTokenizer":
+         """
+         Build a tokenizer vocabulary from an iterator of game strings.
+
+         This method decomposes moves into components and builds the vocabulary
+         from the component tokens.
+
+         Args:
+             iterator: An iterator yielding game strings (space-separated moves).
+             min_frequency: Minimum frequency for a component token to be included.
+
+         Returns:
+             A ChessTokenizer with the built vocabulary.
+         """
+         from collections import Counter
+
+         component_counts = Counter()
+
+         # Create a temporary tokenizer to parse moves
+         temp_tokenizer = cls()
+
+         for game in iterator:
+             for move in game.strip().split():
+                 component_counts.update(temp_tokenizer._parse_move(move))
+
+         # Filter by frequency, then sort for reproducibility
+         components = sorted(
+             token for token, count in component_counts.items()
+             if count >= min_frequency
+         )
+
+         # Start from the fixed component vocabulary and extend it with any
+         # frequently occurring components not already present
+         tokenizer = cls()
+         current_vocab = dict(tokenizer._vocab)
+         # Use max ID + 1 rather than len(), since the IDs are not contiguous
+         # (ID 5 is unused because of the "[B]" collision)
+         idx = max(current_vocab.values()) + 1
+
+         for component in components:
+             if component not in current_vocab:
+                 current_vocab[component] = idx
+                 idx += 1
+
+         return cls(vocab=current_vocab)
+
+     @classmethod
+     def build_vocab_from_dataset(
+         cls,
+         dataset_name: str = "dlouapre/lichess_2025-01_1M",
+         split: str = "train",
+         column: str = "text",
+         min_frequency: int = 500,
+         max_samples: Optional[int] = 100000,
+     ) -> "ChessTokenizer":
+         """
+         Build a tokenizer vocabulary from a Hugging Face dataset.
+
+         This method decomposes moves into components and builds the vocabulary
+         from the component tokens found in the dataset.
+
+         Args:
+             dataset_name: Name of the dataset on Hugging Face Hub.
+             split: Dataset split to use.
+             column: Column containing the game strings.
+             min_frequency: Minimum frequency for a component token to be included (default: 500).
+             max_samples: Maximum number of samples to process (default: 100k).
+
+         Returns:
+             A ChessTokenizer with the built vocabulary.
+         """
+         from datasets import load_dataset
+
+         dataset = load_dataset(dataset_name, split=split)
+
+         if max_samples is not None:
+             dataset = dataset.select(range(min(max_samples, len(dataset))))
+
+         def game_iterator():
+             for example in dataset:
+                 yield example[column]
+
+         return cls.build_vocab_from_iterator(game_iterator(), min_frequency=min_frequency)
+
+     @property
+     def vocab_size(self) -> int:
+         """Return the size of the vocabulary."""
+         return len(self._vocab)
+
+     def get_vocab(self) -> Dict[str, int]:
+         """Return the vocabulary as a dictionary."""
+         return dict(self._vocab)
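For quick inspection without pulling in `transformers`, the component split performed by `ChessTokenizer._parse_move` above can be sketched standalone. `split_move` is a hypothetical helper of mine, not part of the submission, and the castling suffixes are omitted for brevity:

```python
# Dependency-free sketch of the component split in ChessTokenizer._parse_move:
# color + piece + source square + destination square, plus suffix tokens.
def split_move(move: str) -> list:
    # A well-formed move needs at least color, piece, and two squares.
    if len(move) < 6 or move[0] not in "WB" or move[1] not in "PNBRQK":
        return ["[UNK]"]
    src, dst, suffix = move[2:4], move[4:6], move[6:]
    for sq in (src, dst):
        if sq[0] not in "abcdefgh" or sq[1] not in "12345678":
            return ["[UNK]"]
    parts = [f"[{move[0]}]", f"[{move[1]}]", f"[{src}]", f"[{dst}]"]
    if "x" in suffix:            # capture, e.g. "(x)"
        parts.append("[x]")
    if "+*" in suffix:           # checkmate
        parts.append("[#+]")
    elif "+" in suffix:          # check
        parts.append("[+]")
    return parts

print(split_move("WPe2e4"))     # ['[W]', '[P]', '[e2]', '[e4]']
print(split_move("BNg8f6(x)"))  # ['[B]', '[N]', '[g8]', '[f6]', '[x]']
```

Note how the Black color and the Bishop piece both surface as `[B]`, which is why those two concepts share one ID in the shipped vocab.json.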
tokenizer_config.json ADDED
@@ -0,0 +1,50 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "[BOS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "[EOS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "auto_map": {
+     "AutoTokenizer": [
+       "tokenizer.ChessTokenizer",
+       null
+     ]
+   },
+   "bos_token": "[BOS]",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "[EOS]",
+   "extra_special_tokens": {},
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "[PAD]",
+   "tokenizer_class": "ChessTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.json ADDED
@@ -0,0 +1,1182 @@
+ {
+   "[PAD]": 0,
+   "[BOS]": 1,
+   "[EOS]": 2,
+   "[UNK]": 3,
+   "[W]": 4,
+   "[B]": 8,
+   "[P]": 6,
+   "[N]": 7,
+   "[R]": 9,
+   "[Q]": 10,
+   "[K]": 11,
+   "[a1]": 12,
+   "[a2]": 13,
+   "[a3]": 14,
+   "[a4]": 15,
+   "[a5]": 16,
+   "[a6]": 17,
+   "[a7]": 18,
+   "[a8]": 19,
+   "[b1]": 20,
+   "[b2]": 21,
+   "[b3]": 22,
+   "[b4]": 23,
+   "[b5]": 24,
+   "[b6]": 25,
+   "[b7]": 26,
+   "[b8]": 27,
+   "[c1]": 28,
+   "[c2]": 29,
+   "[c3]": 30,
+   "[c4]": 31,
+   "[c5]": 32,
+   "[c6]": 33,
+   "[c7]": 34,
+   "[c8]": 35,
+   "[d1]": 36,
+   "[d2]": 37,
+   "[d3]": 38,
+   "[d4]": 39,
+   "[d5]": 40,
+   "[d6]": 41,
+   "[d7]": 42,
+   "[d8]": 43,
+   "[e1]": 44,
+   "[e2]": 45,
+   "[e3]": 46,
+   "[e4]": 47,
+   "[e5]": 48,
+   "[e6]": 49,
+   "[e7]": 50,
+   "[e8]": 51,
+   "[f1]": 52,
+   "[f2]": 53,
+   "[f3]": 54,
+   "[f4]": 55,
+   "[f5]": 56,
+   "[f6]": 57,
+   "[f7]": 58,
+   "[f8]": 59,
+   "[g1]": 60,
+   "[g2]": 61,
+   "[g3]": 62,
+   "[g4]": 63,
+   "[g5]": 64,
+   "[g6]": 65,
+   "[g7]": 66,
+   "[g8]": 67,
+   "[h1]": 68,
+   "[h2]": 69,
+   "[h3]": 70,
+   "[h4]": 71,
+   "[h5]": 72,
+   "[h6]": 73,
+   "[h7]": 74,
+   "[h8]": 75,
+   "[x]": 76,
+   "[+]": 77,
+   "[#+]": 78,
+   "[o]": 79,
+   "[O]": 80,
+   "[a1-a1]": 81,
+   "[a1-a2]": 82,
+   "[a1-a3]": 83,
+   "[a1-b1]": 84,
+   "[a1-b2]": 85,
+   "[a1-b3]": 86,
+   "[a1-c1]": 87,
+   "[a1-c2]": 88,
+   "[a1-c3]": 89,
+   "[a2-a1]": 90,
+   "[a2-a2]": 91,
+   "[a2-a3]": 92,
+   "[a2-a4]": 93,
+   "[a2-b1]": 94,
+   "[a2-b2]": 95,
+   "[a2-b3]": 96,
+   "[a2-b4]": 97,
+   "[a2-c1]": 98,
+   "[a2-c2]": 99,
+   "[a2-c3]": 100,
+   "[a2-c4]": 101,
+   "[a3-a1]": 102,
+   "[a3-a2]": 103,
+   "[a3-a3]": 104,
+   "[a3-a4]": 105,
+   "[a3-a5]": 106,
+   "[a3-b1]": 107,
+   "[a3-b2]": 108,
+   "[a3-b3]": 109,
+   "[a3-b4]": 110,
+   "[a3-b5]": 111,
+   "[a3-c1]": 112,
+   "[a3-c2]": 113,
+   "[a3-c3]": 114,
+   "[a3-c4]": 115,
+   "[a3-c5]": 116,
+   "[a4-a2]": 117,
+   "[a4-a3]": 118,
+   "[a4-a4]": 119,
+   "[a4-a5]": 120,
+   "[a4-a6]": 121,
+   "[a4-b2]": 122,
+   "[a4-b3]": 123,
+   "[a4-b4]": 124,
+   "[a4-b5]": 125,
+   "[a4-b6]": 126,
+   "[a4-c2]": 127,
+   "[a4-c3]": 128,
+   "[a4-c4]": 129,
+   "[a4-c5]": 130,
+   "[a4-c6]": 131,
+   "[a5-a3]": 132,
+   "[a5-a4]": 133,
+   "[a5-a5]": 134,
+   "[a5-a6]": 135,
+   "[a5-a7]": 136,
+   "[a5-b3]": 137,
+   "[a5-b4]": 138,
+   "[a5-b5]": 139,
+   "[a5-b6]": 140,
+   "[a5-b7]": 141,
+   "[a5-c3]": 142,
+   "[a5-c4]": 143,
+   "[a5-c5]": 144,
+   "[a5-c6]": 145,
+   "[a5-c7]": 146,
+   "[a6-a4]": 147,
+   "[a6-a5]": 148,
+   "[a6-a6]": 149,
+   "[a6-a7]": 150,
+   "[a6-a8]": 151,
+   "[a6-b4]": 152,
+   "[a6-b5]": 153,
+   "[a6-b6]": 154,
+   "[a6-b7]": 155,
+   "[a6-b8]": 156,
+   "[a6-c4]": 157,
+   "[a6-c5]": 158,
+   "[a6-c6]": 159,
+   "[a6-c7]": 160,
+   "[a6-c8]": 161,
+   "[a7-a5]": 162,
+   "[a7-a6]": 163,
+   "[a7-a7]": 164,
+   "[a7-a8]": 165,
+   "[a7-b5]": 166,
+   "[a7-b6]": 167,
+   "[a7-b7]": 168,
+   "[a7-b8]": 169,
+   "[a7-c5]": 170,
+   "[a7-c6]": 171,
+   "[a7-c7]": 172,
+   "[a7-c8]": 173,
+   "[a8-a6]": 174,
+   "[a8-a7]": 175,
177
+ "[a8-a8]": 176,
178
+ "[a8-b6]": 177,
179
+ "[a8-b7]": 178,
180
+ "[a8-b8]": 179,
181
+ "[a8-c6]": 180,
182
+ "[a8-c7]": 181,
183
+ "[a8-c8]": 182,
184
+ "[b1-a1]": 183,
185
+ "[b1-a2]": 184,
186
+ "[b1-a3]": 185,
187
+ "[b1-b1]": 186,
188
+ "[b1-b2]": 187,
189
+ "[b1-b3]": 188,
190
+ "[b1-c1]": 189,
191
+ "[b1-c2]": 190,
192
+ "[b1-c3]": 191,
193
+ "[b1-d1]": 192,
194
+ "[b1-d2]": 193,
195
+ "[b1-d3]": 194,
196
+ "[b2-a1]": 195,
197
+ "[b2-a2]": 196,
198
+ "[b2-a3]": 197,
199
+ "[b2-a4]": 198,
200
+ "[b2-b1]": 199,
201
+ "[b2-b2]": 200,
202
+ "[b2-b3]": 201,
203
+ "[b2-b4]": 202,
204
+ "[b2-c1]": 203,
205
+ "[b2-c2]": 204,
206
+ "[b2-c3]": 205,
207
+ "[b2-c4]": 206,
208
+ "[b2-d1]": 207,
209
+ "[b2-d2]": 208,
210
+ "[b2-d3]": 209,
211
+ "[b2-d4]": 210,
212
+ "[b3-a1]": 211,
213
+ "[b3-a2]": 212,
214
+ "[b3-a3]": 213,
215
+ "[b3-a4]": 214,
216
+ "[b3-a5]": 215,
217
+ "[b3-b1]": 216,
218
+ "[b3-b2]": 217,
219
+ "[b3-b3]": 218,
220
+ "[b3-b4]": 219,
221
+ "[b3-b5]": 220,
222
+ "[b3-c1]": 221,
223
+ "[b3-c2]": 222,
224
+ "[b3-c3]": 223,
225
+ "[b3-c4]": 224,
226
+ "[b3-c5]": 225,
227
+ "[b3-d1]": 226,
228
+ "[b3-d2]": 227,
229
+ "[b3-d3]": 228,
230
+ "[b3-d4]": 229,
231
+ "[b3-d5]": 230,
232
+ "[b4-a2]": 231,
233
+ "[b4-a3]": 232,
234
+ "[b4-a4]": 233,
235
+ "[b4-a5]": 234,
236
+ "[b4-a6]": 235,
237
+ "[b4-b2]": 236,
238
+ "[b4-b3]": 237,
239
+ "[b4-b4]": 238,
240
+ "[b4-b5]": 239,
241
+ "[b4-b6]": 240,
242
+ "[b4-c2]": 241,
243
+ "[b4-c3]": 242,
244
+ "[b4-c4]": 243,
245
+ "[b4-c5]": 244,
246
+ "[b4-c6]": 245,
247
+ "[b4-d2]": 246,
248
+ "[b4-d3]": 247,
249
+ "[b4-d4]": 248,
250
+ "[b4-d5]": 249,
251
+ "[b4-d6]": 250,
252
+ "[b5-a3]": 251,
253
+ "[b5-a4]": 252,
254
+ "[b5-a5]": 253,
255
+ "[b5-a6]": 254,
256
+ "[b5-a7]": 255,
257
+ "[b5-b3]": 256,
258
+ "[b5-b4]": 257,
259
+ "[b5-b5]": 258,
260
+ "[b5-b6]": 259,
261
+ "[b5-b7]": 260,
262
+ "[b5-c3]": 261,
263
+ "[b5-c4]": 262,
264
+ "[b5-c5]": 263,
265
+ "[b5-c6]": 264,
266
+ "[b5-c7]": 265,
267
+ "[b5-d3]": 266,
268
+ "[b5-d4]": 267,
269
+ "[b5-d5]": 268,
270
+ "[b5-d6]": 269,
271
+ "[b5-d7]": 270,
272
+ "[b6-a4]": 271,
273
+ "[b6-a5]": 272,
274
+ "[b6-a6]": 273,
275
+ "[b6-a7]": 274,
276
+ "[b6-a8]": 275,
277
+ "[b6-b4]": 276,
278
+ "[b6-b5]": 277,
279
+ "[b6-b6]": 278,
280
+ "[b6-b7]": 279,
281
+ "[b6-b8]": 280,
282
+ "[b6-c4]": 281,
283
+ "[b6-c5]": 282,
284
+ "[b6-c6]": 283,
285
+ "[b6-c7]": 284,
286
+ "[b6-c8]": 285,
287
+ "[b6-d4]": 286,
288
+ "[b6-d5]": 287,
289
+ "[b6-d6]": 288,
290
+ "[b6-d7]": 289,
291
+ "[b6-d8]": 290,
292
+ "[b7-a5]": 291,
293
+ "[b7-a6]": 292,
294
+ "[b7-a7]": 293,
295
+ "[b7-a8]": 294,
296
+ "[b7-b5]": 295,
297
+ "[b7-b6]": 296,
298
+ "[b7-b7]": 297,
299
+ "[b7-b8]": 298,
300
+ "[b7-c5]": 299,
301
+ "[b7-c6]": 300,
302
+ "[b7-c7]": 301,
303
+ "[b7-c8]": 302,
304
+ "[b7-d5]": 303,
305
+ "[b7-d6]": 304,
306
+ "[b7-d7]": 305,
307
+ "[b7-d8]": 306,
308
+ "[b8-a6]": 307,
309
+ "[b8-a7]": 308,
310
+ "[b8-a8]": 309,
311
+ "[b8-b6]": 310,
312
+ "[b8-b7]": 311,
313
+ "[b8-b8]": 312,
314
+ "[b8-c6]": 313,
315
+ "[b8-c7]": 314,
316
+ "[b8-c8]": 315,
317
+ "[b8-d6]": 316,
318
+ "[b8-d7]": 317,
319
+ "[b8-d8]": 318,
320
+ "[c1-a1]": 319,
321
+ "[c1-a2]": 320,
322
+ "[c1-a3]": 321,
323
+ "[c1-b1]": 322,
324
+ "[c1-b2]": 323,
325
+ "[c1-b3]": 324,
326
+ "[c1-c1]": 325,
327
+ "[c1-c2]": 326,
328
+ "[c1-c3]": 327,
329
+ "[c1-d1]": 328,
330
+ "[c1-d2]": 329,
331
+ "[c1-d3]": 330,
332
+ "[c1-e1]": 331,
333
+ "[c1-e2]": 332,
334
+ "[c1-e3]": 333,
335
+ "[c2-a1]": 334,
336
+ "[c2-a2]": 335,
337
+ "[c2-a3]": 336,
338
+ "[c2-a4]": 337,
339
+ "[c2-b1]": 338,
340
+ "[c2-b2]": 339,
341
+ "[c2-b3]": 340,
342
+ "[c2-b4]": 341,
343
+ "[c2-c1]": 342,
344
+ "[c2-c2]": 343,
345
+ "[c2-c3]": 344,
346
+ "[c2-c4]": 345,
347
+ "[c2-d1]": 346,
348
+ "[c2-d2]": 347,
349
+ "[c2-d3]": 348,
350
+ "[c2-d4]": 349,
351
+ "[c2-e1]": 350,
352
+ "[c2-e2]": 351,
353
+ "[c2-e3]": 352,
354
+ "[c2-e4]": 353,
355
+ "[c3-a1]": 354,
356
+ "[c3-a2]": 355,
357
+ "[c3-a3]": 356,
358
+ "[c3-a4]": 357,
359
+ "[c3-a5]": 358,
360
+ "[c3-b1]": 359,
361
+ "[c3-b2]": 360,
362
+ "[c3-b3]": 361,
363
+ "[c3-b4]": 362,
364
+ "[c3-b5]": 363,
365
+ "[c3-c1]": 364,
366
+ "[c3-c2]": 365,
367
+ "[c3-c3]": 366,
368
+ "[c3-c4]": 367,
369
+ "[c3-c5]": 368,
370
+ "[c3-d1]": 369,
371
+ "[c3-d2]": 370,
372
+ "[c3-d3]": 371,
373
+ "[c3-d4]": 372,
374
+ "[c3-d5]": 373,
375
+ "[c3-e1]": 374,
376
+ "[c3-e2]": 375,
377
+ "[c3-e3]": 376,
378
+ "[c3-e4]": 377,
379
+ "[c3-e5]": 378,
380
+ "[c4-a2]": 379,
381
+ "[c4-a3]": 380,
382
+ "[c4-a4]": 381,
383
+ "[c4-a5]": 382,
384
+ "[c4-a6]": 383,
385
+ "[c4-b2]": 384,
386
+ "[c4-b3]": 385,
387
+ "[c4-b4]": 386,
388
+ "[c4-b5]": 387,
389
+ "[c4-b6]": 388,
390
+ "[c4-c2]": 389,
391
+ "[c4-c3]": 390,
392
+ "[c4-c4]": 391,
393
+ "[c4-c5]": 392,
394
+ "[c4-c6]": 393,
395
+ "[c4-d2]": 394,
396
+ "[c4-d3]": 395,
397
+ "[c4-d4]": 396,
398
+ "[c4-d5]": 397,
399
+ "[c4-d6]": 398,
400
+ "[c4-e2]": 399,
401
+ "[c4-e3]": 400,
402
+ "[c4-e4]": 401,
403
+ "[c4-e5]": 402,
404
+ "[c4-e6]": 403,
405
+ "[c5-a3]": 404,
406
+ "[c5-a4]": 405,
407
+ "[c5-a5]": 406,
408
+ "[c5-a6]": 407,
409
+ "[c5-a7]": 408,
410
+ "[c5-b3]": 409,
411
+ "[c5-b4]": 410,
412
+ "[c5-b5]": 411,
413
+ "[c5-b6]": 412,
414
+ "[c5-b7]": 413,
415
+ "[c5-c3]": 414,
416
+ "[c5-c4]": 415,
417
+ "[c5-c5]": 416,
418
+ "[c5-c6]": 417,
419
+ "[c5-c7]": 418,
420
+ "[c5-d3]": 419,
421
+ "[c5-d4]": 420,
422
+ "[c5-d5]": 421,
423
+ "[c5-d6]": 422,
424
+ "[c5-d7]": 423,
425
+ "[c5-e3]": 424,
426
+ "[c5-e4]": 425,
427
+ "[c5-e5]": 426,
428
+ "[c5-e6]": 427,
429
+ "[c5-e7]": 428,
430
+ "[c6-a4]": 429,
431
+ "[c6-a5]": 430,
432
+ "[c6-a6]": 431,
433
+ "[c6-a7]": 432,
434
+ "[c6-a8]": 433,
435
+ "[c6-b4]": 434,
436
+ "[c6-b5]": 435,
437
+ "[c6-b6]": 436,
438
+ "[c6-b7]": 437,
439
+ "[c6-b8]": 438,
440
+ "[c6-c4]": 439,
441
+ "[c6-c5]": 440,
442
+ "[c6-c6]": 441,
443
+ "[c6-c7]": 442,
444
+ "[c6-c8]": 443,
445
+ "[c6-d4]": 444,
446
+ "[c6-d5]": 445,
447
+ "[c6-d6]": 446,
448
+ "[c6-d7]": 447,
449
+ "[c6-d8]": 448,
450
+ "[c6-e4]": 449,
451
+ "[c6-e5]": 450,
452
+ "[c6-e6]": 451,
453
+ "[c6-e7]": 452,
454
+ "[c6-e8]": 453,
455
+ "[c7-a5]": 454,
456
+ "[c7-a6]": 455,
457
+ "[c7-a7]": 456,
458
+ "[c7-a8]": 457,
459
+ "[c7-b5]": 458,
460
+ "[c7-b6]": 459,
461
+ "[c7-b7]": 460,
462
+ "[c7-b8]": 461,
463
+ "[c7-c5]": 462,
464
+ "[c7-c6]": 463,
465
+ "[c7-c7]": 464,
466
+ "[c7-c8]": 465,
467
+ "[c7-d5]": 466,
468
+ "[c7-d6]": 467,
469
+ "[c7-d7]": 468,
470
+ "[c7-d8]": 469,
471
+ "[c7-e5]": 470,
472
+ "[c7-e6]": 471,
473
+ "[c7-e7]": 472,
474
+ "[c7-e8]": 473,
475
+ "[c8-a6]": 474,
476
+ "[c8-a7]": 475,
477
+ "[c8-a8]": 476,
478
+ "[c8-b6]": 477,
479
+ "[c8-b7]": 478,
480
+ "[c8-b8]": 479,
481
+ "[c8-c6]": 480,
482
+ "[c8-c7]": 481,
483
+ "[c8-c8]": 482,
484
+ "[c8-d6]": 483,
485
+ "[c8-d7]": 484,
486
+ "[c8-d8]": 485,
487
+ "[c8-e6]": 486,
488
+ "[c8-e7]": 487,
489
+ "[c8-e8]": 488,
490
+ "[d1-b1]": 489,
491
+ "[d1-b2]": 490,
492
+ "[d1-b3]": 491,
493
+ "[d1-c1]": 492,
494
+ "[d1-c2]": 493,
495
+ "[d1-c3]": 494,
496
+ "[d1-d1]": 495,
497
+ "[d1-d2]": 496,
498
+ "[d1-d3]": 497,
499
+ "[d1-e1]": 498,
500
+ "[d1-e2]": 499,
501
+ "[d1-e3]": 500,
502
+ "[d1-f1]": 501,
503
+ "[d1-f2]": 502,
504
+ "[d1-f3]": 503,
505
+ "[d2-b1]": 504,
506
+ "[d2-b2]": 505,
507
+ "[d2-b3]": 506,
508
+ "[d2-b4]": 507,
509
+ "[d2-c1]": 508,
510
+ "[d2-c2]": 509,
511
+ "[d2-c3]": 510,
512
+ "[d2-c4]": 511,
513
+ "[d2-d1]": 512,
514
+ "[d2-d2]": 513,
515
+ "[d2-d3]": 514,
516
+ "[d2-d4]": 515,
517
+ "[d2-e1]": 516,
518
+ "[d2-e2]": 517,
519
+ "[d2-e3]": 518,
520
+ "[d2-e4]": 519,
521
+ "[d2-f1]": 520,
522
+ "[d2-f2]": 521,
523
+ "[d2-f3]": 522,
524
+ "[d2-f4]": 523,
525
+ "[d3-b1]": 524,
526
+ "[d3-b2]": 525,
527
+ "[d3-b3]": 526,
528
+ "[d3-b4]": 527,
529
+ "[d3-b5]": 528,
530
+ "[d3-c1]": 529,
531
+ "[d3-c2]": 530,
532
+ "[d3-c3]": 531,
533
+ "[d3-c4]": 532,
534
+ "[d3-c5]": 533,
535
+ "[d3-d1]": 534,
536
+ "[d3-d2]": 535,
537
+ "[d3-d3]": 536,
538
+ "[d3-d4]": 537,
539
+ "[d3-d5]": 538,
540
+ "[d3-e1]": 539,
541
+ "[d3-e2]": 540,
542
+ "[d3-e3]": 541,
543
+ "[d3-e4]": 542,
544
+ "[d3-e5]": 543,
545
+ "[d3-f1]": 544,
546
+ "[d3-f2]": 545,
547
+ "[d3-f3]": 546,
548
+ "[d3-f4]": 547,
549
+ "[d3-f5]": 548,
550
+ "[d4-b2]": 549,
551
+ "[d4-b3]": 550,
552
+ "[d4-b4]": 551,
553
+ "[d4-b5]": 552,
554
+ "[d4-b6]": 553,
555
+ "[d4-c2]": 554,
556
+ "[d4-c3]": 555,
557
+ "[d4-c4]": 556,
558
+ "[d4-c5]": 557,
559
+ "[d4-c6]": 558,
560
+ "[d4-d2]": 559,
561
+ "[d4-d3]": 560,
562
+ "[d4-d4]": 561,
563
+ "[d4-d5]": 562,
564
+ "[d4-d6]": 563,
565
+ "[d4-e2]": 564,
566
+ "[d4-e3]": 565,
567
+ "[d4-e4]": 566,
568
+ "[d4-e5]": 567,
569
+ "[d4-e6]": 568,
570
+ "[d4-f2]": 569,
571
+ "[d4-f3]": 570,
572
+ "[d4-f4]": 571,
573
+ "[d4-f5]": 572,
574
+ "[d4-f6]": 573,
575
+ "[d5-b3]": 574,
576
+ "[d5-b4]": 575,
577
+ "[d5-b5]": 576,
578
+ "[d5-b6]": 577,
579
+ "[d5-b7]": 578,
580
+ "[d5-c3]": 579,
581
+ "[d5-c4]": 580,
582
+ "[d5-c5]": 581,
583
+ "[d5-c6]": 582,
584
+ "[d5-c7]": 583,
585
+ "[d5-d3]": 584,
586
+ "[d5-d4]": 585,
587
+ "[d5-d5]": 586,
588
+ "[d5-d6]": 587,
589
+ "[d5-d7]": 588,
590
+ "[d5-e3]": 589,
591
+ "[d5-e4]": 590,
592
+ "[d5-e5]": 591,
593
+ "[d5-e6]": 592,
594
+ "[d5-e7]": 593,
595
+ "[d5-f3]": 594,
596
+ "[d5-f4]": 595,
597
+ "[d5-f5]": 596,
598
+ "[d5-f6]": 597,
599
+ "[d5-f7]": 598,
600
+ "[d6-b4]": 599,
601
+ "[d6-b5]": 600,
602
+ "[d6-b6]": 601,
603
+ "[d6-b7]": 602,
604
+ "[d6-b8]": 603,
605
+ "[d6-c4]": 604,
606
+ "[d6-c5]": 605,
607
+ "[d6-c6]": 606,
608
+ "[d6-c7]": 607,
609
+ "[d6-c8]": 608,
610
+ "[d6-d4]": 609,
611
+ "[d6-d5]": 610,
612
+ "[d6-d6]": 611,
613
+ "[d6-d7]": 612,
614
+ "[d6-d8]": 613,
615
+ "[d6-e4]": 614,
616
+ "[d6-e5]": 615,
617
+ "[d6-e6]": 616,
618
+ "[d6-e7]": 617,
619
+ "[d6-e8]": 618,
620
+ "[d6-f4]": 619,
621
+ "[d6-f5]": 620,
622
+ "[d6-f6]": 621,
623
+ "[d6-f7]": 622,
624
+ "[d6-f8]": 623,
625
+ "[d7-b5]": 624,
626
+ "[d7-b6]": 625,
627
+ "[d7-b7]": 626,
628
+ "[d7-b8]": 627,
629
+ "[d7-c5]": 628,
630
+ "[d7-c6]": 629,
631
+ "[d7-c7]": 630,
632
+ "[d7-c8]": 631,
633
+ "[d7-d5]": 632,
634
+ "[d7-d6]": 633,
635
+ "[d7-d7]": 634,
636
+ "[d7-d8]": 635,
637
+ "[d7-e5]": 636,
638
+ "[d7-e6]": 637,
639
+ "[d7-e7]": 638,
640
+ "[d7-e8]": 639,
641
+ "[d7-f5]": 640,
642
+ "[d7-f6]": 641,
643
+ "[d7-f7]": 642,
644
+ "[d7-f8]": 643,
645
+ "[d8-b6]": 644,
646
+ "[d8-b7]": 645,
647
+ "[d8-b8]": 646,
648
+ "[d8-c6]": 647,
649
+ "[d8-c7]": 648,
650
+ "[d8-c8]": 649,
651
+ "[d8-d6]": 650,
652
+ "[d8-d7]": 651,
653
+ "[d8-d8]": 652,
654
+ "[d8-e6]": 653,
655
+ "[d8-e7]": 654,
656
+ "[d8-e8]": 655,
657
+ "[d8-f6]": 656,
658
+ "[d8-f7]": 657,
659
+ "[d8-f8]": 658,
660
+ "[e1-c1]": 659,
661
+ "[e1-c2]": 660,
662
+ "[e1-c3]": 661,
663
+ "[e1-d1]": 662,
664
+ "[e1-d2]": 663,
665
+ "[e1-d3]": 664,
666
+ "[e1-e1]": 665,
667
+ "[e1-e2]": 666,
668
+ "[e1-e3]": 667,
669
+ "[e1-f1]": 668,
670
+ "[e1-f2]": 669,
671
+ "[e1-f3]": 670,
672
+ "[e1-g1]": 671,
673
+ "[e1-g2]": 672,
674
+ "[e1-g3]": 673,
675
+ "[e2-c1]": 674,
676
+ "[e2-c2]": 675,
677
+ "[e2-c3]": 676,
678
+ "[e2-c4]": 677,
679
+ "[e2-d1]": 678,
680
+ "[e2-d2]": 679,
681
+ "[e2-d3]": 680,
682
+ "[e2-d4]": 681,
683
+ "[e2-e1]": 682,
684
+ "[e2-e2]": 683,
685
+ "[e2-e3]": 684,
686
+ "[e2-e4]": 685,
687
+ "[e2-f1]": 686,
688
+ "[e2-f2]": 687,
689
+ "[e2-f3]": 688,
690
+ "[e2-f4]": 689,
691
+ "[e2-g1]": 690,
692
+ "[e2-g2]": 691,
693
+ "[e2-g3]": 692,
694
+ "[e2-g4]": 693,
695
+ "[e3-c1]": 694,
696
+ "[e3-c2]": 695,
697
+ "[e3-c3]": 696,
698
+ "[e3-c4]": 697,
699
+ "[e3-c5]": 698,
700
+ "[e3-d1]": 699,
701
+ "[e3-d2]": 700,
702
+ "[e3-d3]": 701,
703
+ "[e3-d4]": 702,
704
+ "[e3-d5]": 703,
705
+ "[e3-e1]": 704,
706
+ "[e3-e2]": 705,
707
+ "[e3-e3]": 706,
708
+ "[e3-e4]": 707,
709
+ "[e3-e5]": 708,
710
+ "[e3-f1]": 709,
711
+ "[e3-f2]": 710,
712
+ "[e3-f3]": 711,
713
+ "[e3-f4]": 712,
714
+ "[e3-f5]": 713,
715
+ "[e3-g1]": 714,
716
+ "[e3-g2]": 715,
717
+ "[e3-g3]": 716,
718
+ "[e3-g4]": 717,
719
+ "[e3-g5]": 718,
720
+ "[e4-c2]": 719,
721
+ "[e4-c3]": 720,
722
+ "[e4-c4]": 721,
723
+ "[e4-c5]": 722,
724
+ "[e4-c6]": 723,
725
+ "[e4-d2]": 724,
726
+ "[e4-d3]": 725,
727
+ "[e4-d4]": 726,
728
+ "[e4-d5]": 727,
729
+ "[e4-d6]": 728,
730
+ "[e4-e2]": 729,
731
+ "[e4-e3]": 730,
732
+ "[e4-e4]": 731,
733
+ "[e4-e5]": 732,
734
+ "[e4-e6]": 733,
735
+ "[e4-f2]": 734,
736
+ "[e4-f3]": 735,
737
+ "[e4-f4]": 736,
738
+ "[e4-f5]": 737,
739
+ "[e4-f6]": 738,
740
+ "[e4-g2]": 739,
741
+ "[e4-g3]": 740,
742
+ "[e4-g4]": 741,
743
+ "[e4-g5]": 742,
744
+ "[e4-g6]": 743,
745
+ "[e5-c3]": 744,
746
+ "[e5-c4]": 745,
747
+ "[e5-c5]": 746,
748
+ "[e5-c6]": 747,
749
+ "[e5-c7]": 748,
750
+ "[e5-d3]": 749,
751
+ "[e5-d4]": 750,
752
+ "[e5-d5]": 751,
753
+ "[e5-d6]": 752,
754
+ "[e5-d7]": 753,
755
+ "[e5-e3]": 754,
756
+ "[e5-e4]": 755,
757
+ "[e5-e5]": 756,
758
+ "[e5-e6]": 757,
759
+ "[e5-e7]": 758,
760
+ "[e5-f3]": 759,
761
+ "[e5-f4]": 760,
762
+ "[e5-f5]": 761,
763
+ "[e5-f6]": 762,
764
+ "[e5-f7]": 763,
765
+ "[e5-g3]": 764,
766
+ "[e5-g4]": 765,
767
+ "[e5-g5]": 766,
768
+ "[e5-g6]": 767,
769
+ "[e5-g7]": 768,
770
+ "[e6-c4]": 769,
771
+ "[e6-c5]": 770,
772
+ "[e6-c6]": 771,
773
+ "[e6-c7]": 772,
774
+ "[e6-c8]": 773,
775
+ "[e6-d4]": 774,
776
+ "[e6-d5]": 775,
777
+ "[e6-d6]": 776,
778
+ "[e6-d7]": 777,
779
+ "[e6-d8]": 778,
780
+ "[e6-e4]": 779,
781
+ "[e6-e5]": 780,
782
+ "[e6-e6]": 781,
783
+ "[e6-e7]": 782,
784
+ "[e6-e8]": 783,
785
+ "[e6-f4]": 784,
786
+ "[e6-f5]": 785,
787
+ "[e6-f6]": 786,
788
+ "[e6-f7]": 787,
789
+ "[e6-f8]": 788,
790
+ "[e6-g4]": 789,
791
+ "[e6-g5]": 790,
792
+ "[e6-g6]": 791,
793
+ "[e6-g7]": 792,
794
+ "[e6-g8]": 793,
795
+ "[e7-c5]": 794,
796
+ "[e7-c6]": 795,
797
+ "[e7-c7]": 796,
798
+ "[e7-c8]": 797,
799
+ "[e7-d5]": 798,
800
+ "[e7-d6]": 799,
801
+ "[e7-d7]": 800,
802
+ "[e7-d8]": 801,
803
+ "[e7-e5]": 802,
804
+ "[e7-e6]": 803,
805
+ "[e7-e7]": 804,
806
+ "[e7-e8]": 805,
807
+ "[e7-f5]": 806,
808
+ "[e7-f6]": 807,
809
+ "[e7-f7]": 808,
810
+ "[e7-f8]": 809,
811
+ "[e7-g5]": 810,
812
+ "[e7-g6]": 811,
813
+ "[e7-g7]": 812,
814
+ "[e7-g8]": 813,
815
+ "[e8-c6]": 814,
816
+ "[e8-c7]": 815,
817
+ "[e8-c8]": 816,
818
+ "[e8-d6]": 817,
819
+ "[e8-d7]": 818,
820
+ "[e8-d8]": 819,
821
+ "[e8-e6]": 820,
822
+ "[e8-e7]": 821,
823
+ "[e8-e8]": 822,
824
+ "[e8-f6]": 823,
825
+ "[e8-f7]": 824,
826
+ "[e8-f8]": 825,
827
+ "[e8-g6]": 826,
828
+ "[e8-g7]": 827,
829
+ "[e8-g8]": 828,
830
+ "[f1-d1]": 829,
831
+ "[f1-d2]": 830,
832
+ "[f1-d3]": 831,
833
+ "[f1-e1]": 832,
834
+ "[f1-e2]": 833,
835
+ "[f1-e3]": 834,
836
+ "[f1-f1]": 835,
837
+ "[f1-f2]": 836,
838
+ "[f1-f3]": 837,
839
+ "[f1-g1]": 838,
840
+ "[f1-g2]": 839,
841
+ "[f1-g3]": 840,
842
+ "[f1-h1]": 841,
843
+ "[f1-h2]": 842,
844
+ "[f1-h3]": 843,
845
+ "[f2-d1]": 844,
846
+ "[f2-d2]": 845,
847
+ "[f2-d3]": 846,
848
+ "[f2-d4]": 847,
849
+ "[f2-e1]": 848,
850
+ "[f2-e2]": 849,
851
+ "[f2-e3]": 850,
852
+ "[f2-e4]": 851,
853
+ "[f2-f1]": 852,
854
+ "[f2-f2]": 853,
855
+ "[f2-f3]": 854,
856
+ "[f2-f4]": 855,
857
+ "[f2-g1]": 856,
858
+ "[f2-g2]": 857,
859
+ "[f2-g3]": 858,
860
+ "[f2-g4]": 859,
861
+ "[f2-h1]": 860,
862
+ "[f2-h2]": 861,
863
+ "[f2-h3]": 862,
864
+ "[f2-h4]": 863,
865
+ "[f3-d1]": 864,
866
+ "[f3-d2]": 865,
867
+ "[f3-d3]": 866,
868
+ "[f3-d4]": 867,
869
+ "[f3-d5]": 868,
870
+ "[f3-e1]": 869,
871
+ "[f3-e2]": 870,
872
+ "[f3-e3]": 871,
873
+ "[f3-e4]": 872,
874
+ "[f3-e5]": 873,
875
+ "[f3-f1]": 874,
876
+ "[f3-f2]": 875,
877
+ "[f3-f3]": 876,
878
+ "[f3-f4]": 877,
879
+ "[f3-f5]": 878,
880
+ "[f3-g1]": 879,
881
+ "[f3-g2]": 880,
882
+ "[f3-g3]": 881,
883
+ "[f3-g4]": 882,
884
+ "[f3-g5]": 883,
885
+ "[f3-h1]": 884,
886
+ "[f3-h2]": 885,
887
+ "[f3-h3]": 886,
888
+ "[f3-h4]": 887,
889
+ "[f3-h5]": 888,
890
+ "[f4-d2]": 889,
891
+ "[f4-d3]": 890,
892
+ "[f4-d4]": 891,
893
+ "[f4-d5]": 892,
894
+ "[f4-d6]": 893,
895
+ "[f4-e2]": 894,
896
+ "[f4-e3]": 895,
897
+ "[f4-e4]": 896,
898
+ "[f4-e5]": 897,
899
+ "[f4-e6]": 898,
900
+ "[f4-f2]": 899,
901
+ "[f4-f3]": 900,
902
+ "[f4-f4]": 901,
903
+ "[f4-f5]": 902,
904
+ "[f4-f6]": 903,
905
+ "[f4-g2]": 904,
906
+ "[f4-g3]": 905,
907
+ "[f4-g4]": 906,
908
+ "[f4-g5]": 907,
909
+ "[f4-g6]": 908,
910
+ "[f4-h2]": 909,
911
+ "[f4-h3]": 910,
912
+ "[f4-h4]": 911,
913
+ "[f4-h5]": 912,
914
+ "[f4-h6]": 913,
915
+ "[f5-d3]": 914,
916
+ "[f5-d4]": 915,
917
+ "[f5-d5]": 916,
918
+ "[f5-d6]": 917,
919
+ "[f5-d7]": 918,
920
+ "[f5-e3]": 919,
921
+ "[f5-e4]": 920,
922
+ "[f5-e5]": 921,
923
+ "[f5-e6]": 922,
924
+ "[f5-e7]": 923,
925
+ "[f5-f3]": 924,
926
+ "[f5-f4]": 925,
927
+ "[f5-f5]": 926,
928
+ "[f5-f6]": 927,
929
+ "[f5-f7]": 928,
930
+ "[f5-g3]": 929,
931
+ "[f5-g4]": 930,
932
+ "[f5-g5]": 931,
933
+ "[f5-g6]": 932,
934
+ "[f5-g7]": 933,
935
+ "[f5-h3]": 934,
936
+ "[f5-h4]": 935,
937
+ "[f5-h5]": 936,
938
+ "[f5-h6]": 937,
939
+ "[f5-h7]": 938,
940
+ "[f6-d4]": 939,
941
+ "[f6-d5]": 940,
942
+ "[f6-d6]": 941,
943
+ "[f6-d7]": 942,
944
+ "[f6-d8]": 943,
945
+ "[f6-e4]": 944,
946
+ "[f6-e5]": 945,
947
+ "[f6-e6]": 946,
948
+ "[f6-e7]": 947,
949
+ "[f6-e8]": 948,
950
+ "[f6-f4]": 949,
951
+ "[f6-f5]": 950,
952
+ "[f6-f6]": 951,
953
+ "[f6-f7]": 952,
954
+ "[f6-f8]": 953,
955
+ "[f6-g4]": 954,
956
+ "[f6-g5]": 955,
957
+ "[f6-g6]": 956,
958
+ "[f6-g7]": 957,
959
+ "[f6-g8]": 958,
960
+ "[f6-h4]": 959,
961
+ "[f6-h5]": 960,
962
+ "[f6-h6]": 961,
963
+ "[f6-h7]": 962,
964
+ "[f6-h8]": 963,
965
+ "[f7-d5]": 964,
966
+ "[f7-d6]": 965,
967
+ "[f7-d7]": 966,
968
+ "[f7-d8]": 967,
969
+ "[f7-e5]": 968,
970
+ "[f7-e6]": 969,
971
+ "[f7-e7]": 970,
972
+ "[f7-e8]": 971,
973
+ "[f7-f5]": 972,
974
+ "[f7-f6]": 973,
975
+ "[f7-f7]": 974,
976
+ "[f7-f8]": 975,
977
+ "[f7-g5]": 976,
978
+ "[f7-g6]": 977,
979
+ "[f7-g7]": 978,
980
+ "[f7-g8]": 979,
981
+ "[f7-h5]": 980,
982
+ "[f7-h6]": 981,
983
+ "[f7-h7]": 982,
984
+ "[f7-h8]": 983,
985
+ "[f8-d6]": 984,
986
+ "[f8-d7]": 985,
987
+ "[f8-d8]": 986,
988
+ "[f8-e6]": 987,
989
+ "[f8-e7]": 988,
990
+ "[f8-e8]": 989,
991
+ "[f8-f6]": 990,
992
+ "[f8-f7]": 991,
993
+ "[f8-f8]": 992,
994
+ "[f8-g6]": 993,
995
+ "[f8-g7]": 994,
996
+ "[f8-g8]": 995,
997
+ "[f8-h6]": 996,
998
+ "[f8-h7]": 997,
999
+ "[f8-h8]": 998,
1000
+ "[g1-e1]": 999,
1001
+ "[g1-e2]": 1000,
1002
+ "[g1-e3]": 1001,
1003
+ "[g1-f1]": 1002,
1004
+ "[g1-f2]": 1003,
1005
+ "[g1-f3]": 1004,
1006
+ "[g1-g1]": 1005,
1007
+ "[g1-g2]": 1006,
1008
+ "[g1-g3]": 1007,
1009
+ "[g1-h1]": 1008,
1010
+ "[g1-h2]": 1009,
1011
+ "[g1-h3]": 1010,
1012
+ "[g2-e1]": 1011,
1013
+ "[g2-e2]": 1012,
1014
+ "[g2-e3]": 1013,
1015
+ "[g2-e4]": 1014,
1016
+ "[g2-f1]": 1015,
1017
+ "[g2-f2]": 1016,
1018
+ "[g2-f3]": 1017,
1019
+ "[g2-f4]": 1018,
1020
+ "[g2-g1]": 1019,
1021
+ "[g2-g2]": 1020,
1022
+ "[g2-g3]": 1021,
1023
+ "[g2-g4]": 1022,
1024
+ "[g2-h1]": 1023,
1025
+ "[g2-h2]": 1024,
1026
+ "[g2-h3]": 1025,
1027
+ "[g2-h4]": 1026,
1028
+ "[g3-e1]": 1027,
1029
+ "[g3-e2]": 1028,
1030
+ "[g3-e3]": 1029,
1031
+ "[g3-e4]": 1030,
1032
+ "[g3-e5]": 1031,
1033
+ "[g3-f1]": 1032,
1034
+ "[g3-f2]": 1033,
1035
+ "[g3-f3]": 1034,
1036
+ "[g3-f4]": 1035,
1037
+ "[g3-f5]": 1036,
1038
+ "[g3-g1]": 1037,
1039
+ "[g3-g2]": 1038,
1040
+ "[g3-g3]": 1039,
1041
+ "[g3-g4]": 1040,
1042
+ "[g3-g5]": 1041,
1043
+ "[g3-h1]": 1042,
1044
+ "[g3-h2]": 1043,
1045
+ "[g3-h3]": 1044,
1046
+ "[g3-h4]": 1045,
1047
+ "[g3-h5]": 1046,
1048
+ "[g4-e2]": 1047,
1049
+ "[g4-e3]": 1048,
1050
+ "[g4-e4]": 1049,
1051
+ "[g4-e5]": 1050,
1052
+ "[g4-e6]": 1051,
1053
+ "[g4-f2]": 1052,
1054
+ "[g4-f3]": 1053,
1055
+ "[g4-f4]": 1054,
1056
+ "[g4-f5]": 1055,
1057
+ "[g4-f6]": 1056,
1058
+ "[g4-g2]": 1057,
1059
+ "[g4-g3]": 1058,
1060
+ "[g4-g4]": 1059,
1061
+ "[g4-g5]": 1060,
1062
+ "[g4-g6]": 1061,
1063
+ "[g4-h2]": 1062,
1064
+ "[g4-h3]": 1063,
1065
+ "[g4-h4]": 1064,
1066
+ "[g4-h5]": 1065,
1067
+ "[g4-h6]": 1066,
1068
+ "[g5-e3]": 1067,
1069
+ "[g5-e4]": 1068,
1070
+ "[g5-e5]": 1069,
1071
+ "[g5-e6]": 1070,
1072
+ "[g5-e7]": 1071,
1073
+ "[g5-f3]": 1072,
1074
+ "[g5-f4]": 1073,
1075
+ "[g5-f5]": 1074,
1076
+ "[g5-f6]": 1075,
1077
+ "[g5-f7]": 1076,
1078
+ "[g5-g3]": 1077,
1079
+ "[g5-g4]": 1078,
1080
+ "[g5-g5]": 1079,
1081
+ "[g5-g6]": 1080,
1082
+ "[g5-g7]": 1081,
1083
+ "[g5-h3]": 1082,
1084
+ "[g5-h4]": 1083,
1085
+ "[g5-h5]": 1084,
1086
+ "[g5-h6]": 1085,
1087
+ "[g5-h7]": 1086,
1088
+ "[g6-e4]": 1087,
1089
+ "[g6-e5]": 1088,
1090
+ "[g6-e6]": 1089,
1091
+ "[g6-e7]": 1090,
1092
+ "[g6-e8]": 1091,
1093
+ "[g6-f4]": 1092,
1094
+ "[g6-f5]": 1093,
1095
+ "[g6-f6]": 1094,
1096
+ "[g6-f7]": 1095,
1097
+ "[g6-f8]": 1096,
1098
+ "[g6-g4]": 1097,
1099
+ "[g6-g5]": 1098,
1100
+ "[g6-g6]": 1099,
1101
+ "[g6-g7]": 1100,
1102
+ "[g6-g8]": 1101,
1103
+ "[g6-h4]": 1102,
1104
+ "[g6-h5]": 1103,
1105
+ "[g6-h6]": 1104,
1106
+ "[g6-h7]": 1105,
1107
+ "[g6-h8]": 1106,
1108
+ "[g7-e5]": 1107,
1109
+ "[g7-e6]": 1108,
1110
+ "[g7-e7]": 1109,
1111
+ "[g7-e8]": 1110,
1112
+ "[g7-f5]": 1111,
1113
+ "[g7-f6]": 1112,
1114
+ "[g7-f7]": 1113,
1115
+ "[g7-f8]": 1114,
1116
+ "[g7-g5]": 1115,
1117
+ "[g7-g6]": 1116,
1118
+ "[g7-g7]": 1117,
1119
+ "[g7-g8]": 1118,
1120
+ "[g7-h5]": 1119,
1121
+ "[g7-h6]": 1120,
1122
+ "[g7-h7]": 1121,
1123
+ "[g7-h8]": 1122,
1124
+ "[g8-e6]": 1123,
1125
+ "[g8-e7]": 1124,
1126
+ "[g8-e8]": 1125,
1127
+ "[g8-f6]": 1126,
1128
+ "[g8-f7]": 1127,
1129
+ "[g8-f8]": 1128,
1130
+ "[g8-g6]": 1129,
1131
+ "[g8-g7]": 1130,
1132
+ "[g8-g8]": 1131,
1133
+ "[g8-h6]": 1132,
1134
+ "[g8-h7]": 1133,
1135
+ "[g8-h8]": 1134,
1136
+ "[h1-f1]": 1135,
1137
+ "[h1-f2]": 1136,
1138
+ "[h1-f3]": 1137,
1139
+ "[h1-g1]": 1138,
1140
+ "[h1-g2]": 1139,
1141
+ "[h1-g3]": 1140,
1142
+ "[h1-h1]": 1141,
1143
+ "[h1-h2]": 1142,
1144
+ "[h1-h3]": 1143,
1145
+ "[h2-f1]": 1144,
1146
+ "[h2-f2]": 1145,
1147
+ "[h2-f3]": 1146,
1148
+ "[h2-f4]": 1147,
1149
+ "[h2-g1]": 1148,
1150
+ "[h2-g2]": 1149,
1151
+ "[h2-g3]": 1150,
1152
+ "[h2-g4]": 1151,
1153
+ "[h2-h1]": 1152,
1154
+ "[h2-h2]": 1153,
1155
+ "[h2-h3]": 1154,
1156
+ "[h2-h4]": 1155,
1157
+ "[h3-f1]": 1156,
1158
+ "[h3-f2]": 1157,
1159
+ "[h3-f3]": 1158,
1160
+ "[h3-f4]": 1159,
1161
+ "[h3-f5]": 1160,
1162
+ "[h3-g1]": 1161,
1163
+ "[h3-g2]": 1162,
1164
+ "[h3-g3]": 1163,
1165
+ "[h3-g4]": 1164,
1166
+ "[h3-g5]": 1165,
1167
+ "[h3-h1]": 1166,
1168
+ "[h3-h2]": 1167,
1169
+ "[h3-h3]": 1168,
1170
+ "[h3-h4]": 1169,
1171
+ "[h3-h5]": 1170,
1172
+ "[h4-f2]": 1171,
1173
+ "[h4-f3]": 1172,
1174
+ "[h4-f4]": 1173,
1175
+ "[h4-f5]": 1174,
1176
+ "[h4-f6]": 1175,
1177
+ "[h4-g2]": 1176,
1178
+ "[h4-g3]": 1177,
1179
+ "[h4-g4]": 1178,
1180
+ "[h4-g5]": 1179,
1181
+ "[h4-g6]": 1180
1182
+ }
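
The id layout in the vocab above is regular: square tokens `[a1]`..`[h8]` occupy ids 12-75 in file-major order, and move tokens `[src-dst]` enumerate, for each source square, every destination within two files and two ranks (self-moves like `[a1-a1]` included), starting at id 81. The sketch below reconstructs that scheme by inference from the listing; it is not code shipped with this repository, and the shipped file stops at id 1180, so the final source squares (`[h4-h2]` through `[h8-h8]`) are truncated.

```python
FILES = "abcdefgh"

def square_ids():
    # Square tokens [a1]..[h8] take ids 12-75 in file-major order
    # (a1..a8, b1..b8, ..., h1..h8), matching the listing above.
    vocab = {}
    idx = 12
    for f in FILES:
        for r in "12345678":
            vocab[f"[{f}{r}]"] = idx
            idx += 1
    return vocab

def move_ids(start=81, cap=1180):
    # Move tokens [src-dst]: for each source square in file-major order,
    # every destination inside a 2-file / 2-rank box clipped to the
    # board, destinations themselves in file-major order.
    vocab = {}
    idx = start
    for sf in range(8):
        for sr in range(8):
            for df in range(max(sf - 2, 0), min(sf + 2, 7) + 1):
                for dr in range(max(sr - 2, 0), min(sr + 2, 7) + 1):
                    if idx > cap:  # the shipped vocab stops at id 1180
                        return vocab
                    vocab[f"[{FILES[sf]}{sr + 1}-{FILES[df]}{dr + 1}]"] = idx
                    idx += 1
    return vocab

vocab = {**square_ids(), **move_ids()}
print(vocab["[e2-e4]"])  # 685, matching the listing
```

Note also that the special-token block lists `"[B]": 8` with no token at id 5; since JSON objects cannot hold two `"[B]"` keys (color and bishop), the color entry appears to have been overwritten, which is why the file covers 1180 distinct ids despite a maximum id of 1180.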