MarcosElFlamenco committed
Commit 50a09eb · verified · 1 Parent(s): d7ccaf4

Chess Challenge submission by MarcosElFlamenco

README.md ADDED
@@ -0,0 +1,22 @@
1
+ ---
2
+ library_name: transformers
3
+ tags:
4
+ - chess
5
+ - llm-course
6
+ - chess-challenge
7
+ license: mit
8
+ ---
9
+
10
+ ## Chess model submitted to the LLM Course Chess Challenge.
11
+
12
+ ### Submission Info
13
+ - **Submitted by**: [MarcosElFlamenco](https://huggingface.co/MarcosElFlamenco)
14
+ - **Parameters**: 964,534
15
+ - **Organization**: LLM-course
16
+
17
+ ### Model Details
18
+ - **Architecture**: Chess Transformer (GPT-style)
19
+ - **Vocab size**: 83 (model embedding size, per `config.json`)
20
+ - **Embedding dim**: 124
21
+ - **Layers**: 7
22
+ - **Heads**: 4
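
Because `config.json` registers the custom classes through `auto_map`, the checkpoint can be loaded with the Auto classes and `trust_remote_code=True`. A minimal loading sketch follows; the repo id is a placeholder for this repository's actual path.

```python
from transformers import AutoModelForCausalLM

# Hypothetical repo id; substitute the actual "<user>/<repo>" path of this submission.
repo_id = "MarcosElFlamenco/chess-challenge"

# trust_remote_code is required because the architecture lives in model.py (see auto_map).
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# With the lm_head tied to the embeddings, this should match the 964,534 figure above.
print(sum(p.numel() for p in model.parameters()))
```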
config.json ADDED
@@ -0,0 +1,26 @@
1
+ {
2
+ "architectures": [
3
+ "ChessForCausalLM"
4
+ ],
5
+ "auto_map": {
6
+ "AutoConfig": "model.ChessConfig",
7
+ "AutoModelForCausalLM": "model.ChessForCausalLM"
8
+ },
9
+ "bos_token_id": 1,
10
+ "dropout": 0.1,
11
+ "dtype": "float32",
12
+ "eos_token_id": 2,
13
+ "group_size": 4,
14
+ "layer_norm_epsilon": 1e-05,
15
+ "model_type": "chess_transformer",
16
+ "n_ctx": 256,
17
+ "n_embd": 124,
18
+ "n_head": 4,
19
+ "n_inner": 372,
20
+ "n_layer": 7,
21
+ "pad_token_id": 0,
22
+ "rms_Norm": true,
23
+ "tie_weights": true,
24
+ "transformers_version": "4.57.5",
25
+ "vocab_size": 83
26
+ }
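
As a sanity check, the hyperparameters above reproduce the 964,534-parameter figure from the model card. The sketch below recomputes the count by hand, assuming grouped-query attention with `n_head // group_size` KV heads, two weight-only RMSNorms per block, a final LayerNorm, and an LM head tied to the token embeddings (all of which match `model.py`).

```python
# Back-of-the-envelope parameter count for the values in config.json.
n_embd, n_head, group_size = 124, 4, 4
n_layer, n_inner, vocab, n_ctx = 7, 372, 83, 256

head_dim = n_embd // n_head            # 31
n_kv_head = n_head // group_size       # 1 KV head shared by all 4 query heads

embeddings = vocab * n_embd + n_ctx * n_embd                            # wte + wpe
attn = n_embd * n_embd + n_embd                                         # q_proj
attn += n_embd * 2 * n_kv_head * head_dim + 2 * n_kv_head * head_dim    # kv_proj
attn += n_embd * n_embd + n_embd                                        # c_proj
mlp = (n_embd * n_inner + n_inner) + (n_inner * n_embd + n_embd)        # c_fc + c_proj
norms = 2 * n_embd                                                      # two RMSNorms (weight only)

total = embeddings + n_layer * (attn + mlp + norms) + 2 * n_embd        # + final LayerNorm
print(total)  # 964534 (the tied lm_head adds nothing)
```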
model.py ADDED
@@ -0,0 +1,531 @@
1
+ """
2
+ Chess Transformer Model for the Chess Challenge.
3
+
4
+ This module provides a simple GPT-style transformer architecture
5
+ designed to fit within the 1M parameter constraint.
6
+
7
+ Key components:
8
+ - ChessConfig: Configuration class for model hyperparameters
9
+ - ChessForCausalLM: The main model class for next-move prediction
10
+ """
11
+
12
+ from __future__ import annotations
13
+
14
+ import math
15
+ from dataclasses import dataclass
16
+ from typing import Optional, Tuple, Union
17
+
18
+ import torch
19
+ import torch.nn as nn
20
+ import torch.nn.functional as F
21
+ from transformers import PretrainedConfig, PreTrainedModel
22
+ from transformers.modeling_outputs import CausalLMOutputWithPast
23
+
24
+
25
+ class ChessConfig(PretrainedConfig):
26
+ """
27
+ Configuration class for the Chess Transformer model.
28
+
29
+ This configuration is designed for a ~1M parameter model.
30
+ Students can adjust these values to explore different architectures.
31
+
32
+ Parameter budget breakdown (with default values):
33
+ - Embeddings (vocab): 1200 x 128 = 153,600
34
+ - Position Embeddings: 256 x 128 = 32,768
35
+ - Transformer Layers: 6 x ~165,000 = ~992,000
36
+ - LM Head (with weight tying): 0 (shared with embeddings)
37
+ - Total: ~1,179,000 parameters with these defaults (the submitted config uses smaller dimensions to stay under 1M)
38
+
39
+ Attributes:
40
+ vocab_size: Size of the vocabulary (number of unique moves).
41
+ n_embd: Embedding dimension (d_model).
42
+ n_layer: Number of transformer layers.
43
+ n_head: Number of attention heads.
44
+ n_ctx: Maximum sequence length (context window).
45
+ n_inner: Feed-forward inner dimension (default: 3 * n_embd).
46
+ dropout: Dropout probability.
47
+ layer_norm_epsilon: Epsilon for layer normalization.
48
+ tie_weights: Whether to tie embedding and output weights.
49
+ rms_Norm: Whether to use RMSNorm instead of LayerNorm.
50
+
51
+ """
52
+ model_type = "chess_transformer"
53
+
54
+ def __init__(
55
+ self,
56
+ vocab_size: int = 1200,
57
+ n_embd: int = 128,
58
+ n_layer: int = 6,
59
+ n_head: int = 4,
60
+ n_ctx: int = 256,
61
+ n_inner: Optional[int] = None,
62
+ group_size: Optional[int] = None,
63
+ dropout: float = 0.1,
64
+ layer_norm_epsilon: float = 1e-5,
65
+ tie_weights: bool = True,
66
+ rms_Norm: bool = False,
67
+ pad_token_id: int = 0,
68
+ bos_token_id: int = 1,
69
+ eos_token_id: int = 2,
70
+ **kwargs,
71
+ ):
72
+ super().__init__(
73
+ pad_token_id=pad_token_id,
74
+ bos_token_id=bos_token_id,
75
+ eos_token_id=eos_token_id,
76
+ **kwargs,
77
+ )
78
+
79
+ self.vocab_size = vocab_size
80
+ self.n_embd = n_embd
81
+ self.n_layer = n_layer
82
+ self.n_head = n_head
83
+ self.n_ctx = n_ctx
84
+ self.group_size = group_size
85
+ self.n_inner = n_inner if n_inner is not None else 3 * n_embd # Reduced from 4x to 3x
86
+ self.dropout = dropout
87
+ self.layer_norm_epsilon = layer_norm_epsilon
88
+ self.tie_weights = tie_weights
89
+ self.rms_Norm = rms_Norm
90
+ # Inform HF base class about tying behavior
91
+ self.tie_word_embeddings = bool(tie_weights)
92
+
93
+
94
+ class GroupedQueryAttention(nn.Module):
95
+
96
+ def __init__(self, config: ChessConfig):
97
+ super().__init__()
98
+
99
+ assert config.n_head % config.group_size == 0, "n_head must be divisible by group_size"
100
+ print(f"Using Grouped Query Attention with group_size={config.group_size}")
101
+ self.n_head = config.n_head # Total Query heads
102
+ self.group_size = config.group_size
103
+ self.n_kv_head = self.n_head // config.group_size # Number of KV heads
104
+
105
+ self.n_embd = config.n_embd
106
+ self.head_dim = config.n_embd // config.n_head
107
+
108
+ # Q projection stays the same, but K and V projections are smaller
109
+ # Total output: n_embd (for Q) + 2 * (n_kv_head * head_dim) (for K and V)
110
+ self.q_proj = nn.Linear(config.n_embd, config.n_embd)
111
+ self.kv_proj = nn.Linear(config.n_embd, 2 * self.n_kv_head * self.head_dim)
112
+
113
+ self.c_proj = nn.Linear(config.n_embd, config.n_embd)
114
+ self.dropout = nn.Dropout(config.dropout)
115
+
116
+ self.register_buffer("bias", torch.tril(torch.ones(config.n_ctx, config.n_ctx))
117
+ .view(1, 1, config.n_ctx, config.n_ctx), persistent=False)
118
+
119
+
120
+ def forward(self, x: torch.Tensor, attention_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
121
+ batch_size, seq_len, _ = x.size()
122
+
123
+ # 1. Project Q, K, V
124
+ q = self.q_proj(x) # (B, T, n_head * head_dim)
125
+ kv = self.kv_proj(x) # (B, T, 2 * n_kv_head * head_dim)
126
+ k, v = kv.split(self.n_kv_head * self.head_dim, dim=2)
127
+
128
+ # 2. Reshape Q normally
129
+ q = q.view(batch_size, seq_len, self.n_head, self.head_dim).transpose(1, 2)
130
+
131
+ # 3. Reshape K, V and REPEAT them to match Q
132
+ k = k.view(batch_size, seq_len, self.n_kv_head, self.head_dim).transpose(1, 2)
133
+ v = v.view(batch_size, seq_len, self.n_kv_head, self.head_dim).transpose(1, 2)
134
+
135
+ # Repeat KV heads 'group_size' times to match n_head
136
+ # We use .repeat_interleave to ensure head 0 of KV is used by the first 'group_size' Q heads
137
+ k = k.repeat_interleave(self.group_size, dim=1) # (B, n_head, T, head_dim)
138
+ v = v.repeat_interleave(self.group_size, dim=1) # (B, n_head, T, head_dim)
139
+
140
+ # 4. Standard Scaled Dot-Product Attention
141
+ attn_weights = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.head_dim)
142
+
143
+ causal_mask = self.bias[:, :, :seq_len, :seq_len]
144
+ attn_weights = attn_weights.masked_fill(causal_mask == 0, float("-inf"))
145
+
146
+ if attention_mask is not None:
147
+ attn_weights = attn_weights.masked_fill(attention_mask.unsqueeze(1).unsqueeze(2) == 0, float("-inf"))
148
+
149
+ attn_weights = F.softmax(attn_weights, dim=-1)
150
+ attn_output = torch.matmul(self.dropout(attn_weights), v)
151
+
152
+ # 5. Recombine
153
+ attn_output = attn_output.transpose(1, 2).contiguous().view(batch_size, seq_len, self.n_embd)
154
+ return self.c_proj(attn_output)
155
+
156
+
157
+
158
+ class MultiHeadAttention(nn.Module):
159
+ """
160
+ Multi-head self-attention module.
161
+
162
+ This is a standard scaled dot-product attention implementation
163
+ with causal masking for autoregressive generation.
164
+ """
165
+
166
+ def __init__(self, config: ChessConfig):
167
+ super().__init__()
168
+
169
+ assert config.n_embd % config.n_head == 0, \
170
+ f"n_embd ({config.n_embd}) must be divisible by n_head ({config.n_head})"
171
+
172
+ print(f"Using Regular Attention with group_size={config.group_size}")
173
+ self.n_head = config.n_head
174
+ self.n_embd = config.n_embd
175
+ self.head_dim = config.n_embd // config.n_head
176
+
177
+ # Combined QKV projection for efficiency
178
+ self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd)
179
+ self.c_proj = nn.Linear(config.n_embd, config.n_embd)
180
+
181
+ self.dropout = nn.Dropout(config.dropout)
182
+
183
+ # Causal mask (will be created on first forward pass)
184
+ self.register_buffer(
185
+ "bias",
186
+ torch.tril(torch.ones(config.n_ctx, config.n_ctx)).view(
187
+ 1, 1, config.n_ctx, config.n_ctx
188
+ ),
189
+ persistent=False,
190
+ )
191
+
192
+ def forward(
193
+ self,
194
+ x: torch.Tensor,
195
+ attention_mask: Optional[torch.Tensor] = None,
196
+ ) -> torch.Tensor:
197
+ batch_size, seq_len, _ = x.size()
198
+
199
+ # Compute Q, K, V
200
+ qkv = self.c_attn(x)
201
+ q, k, v = qkv.split(self.n_embd, dim=2)
202
+
203
+ # Reshape for multi-head attention
204
+ q = q.view(batch_size, seq_len, self.n_head, self.head_dim).transpose(1, 2)
205
+ k = k.view(batch_size, seq_len, self.n_head, self.head_dim).transpose(1, 2)
206
+ v = v.view(batch_size, seq_len, self.n_head, self.head_dim).transpose(1, 2)
207
+
208
+ # Scaled dot-product attention
209
+ attn_weights = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.head_dim)
210
+
211
+ # Apply causal mask
212
+ causal_mask = self.bias[:, :, :seq_len, :seq_len]
213
+ attn_weights = attn_weights.masked_fill(causal_mask == 0, float("-inf"))
214
+
215
+ # Apply attention mask (for padding)
216
+ if attention_mask is not None:
217
+ # attention_mask shape: (batch_size, seq_len) -> (batch_size, 1, 1, seq_len)
218
+ attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
219
+ attn_weights = attn_weights.masked_fill(attention_mask == 0, float("-inf"))
220
+
221
+ attn_weights = F.softmax(attn_weights, dim=-1)
222
+ attn_weights = self.dropout(attn_weights)
223
+
224
+ # Apply attention to values
225
+ attn_output = torch.matmul(attn_weights, v)
226
+
227
+ # Reshape back
228
+ attn_output = attn_output.transpose(1, 2).contiguous().view(
229
+ batch_size, seq_len, self.n_embd
230
+ )
231
+
232
+ # Output projection
233
+ attn_output = self.c_proj(attn_output)
234
+
235
+ return attn_output
236
+
237
+
238
+ class FeedForward(nn.Module):
239
+ """
240
+ Feed-forward network (MLP) module.
241
+
242
+ Standard two-layer MLP with GELU activation.
243
+ """
244
+
245
+ def __init__(self, config: ChessConfig):
246
+ super().__init__()
247
+
248
+ self.c_fc = nn.Linear(config.n_embd, config.n_inner)
249
+ self.c_proj = nn.Linear(config.n_inner, config.n_embd)
250
+ self.dropout = nn.Dropout(config.dropout)
251
+
252
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
253
+ x = self.c_fc(x)
254
+ x = F.gelu(x)
255
+ x = self.c_proj(x)
256
+ x = self.dropout(x)
257
+ return x
258
+
259
+
260
+ class TransformerBlock(nn.Module):
261
+ """
262
+ A single transformer block with attention and feed-forward layers.
263
+
264
+ Uses pre-normalization (LayerNorm before attention/FFN) for better
265
+ training stability.
266
+ """
267
+
268
+ def __init__(self, config: ChessConfig,group_size: int = None):
269
+ super().__init__()
270
+ #nn.modules.normalization.RMSNorm
271
+ if config.rms_Norm == True:
272
+ print(f"using RMSNorm")
273
+ self.ln_1 = nn.RMSNorm(config.n_embd, eps=config.layer_norm_epsilon)
274
+ else:
275
+ print(f"using LayerNorm")
276
+ self.ln_1 = nn.LayerNorm(config.n_embd, eps=config.layer_norm_epsilon)
277
+
278
+ if config.group_size is not None:
279
+ self.attn = GroupedQueryAttention(config)
280
+ else:
281
+ self.attn = MultiHeadAttention(config)
282
+ if config.rms_Norm == True:
283
+ self.ln_2 = nn.RMSNorm(config.n_embd, eps=config.layer_norm_epsilon)
284
+ else:
285
+ self.ln_2 = nn.LayerNorm(config.n_embd, eps=config.layer_norm_epsilon)
286
+ self.mlp = FeedForward(config)
287
+
288
+ def forward(
289
+ self,
290
+ x: torch.Tensor,
291
+ attention_mask: Optional[torch.Tensor] = None,
292
+ ) -> torch.Tensor:
293
+ # Pre-norm attention
294
+ x = x + self.attn(self.ln_1(x), attention_mask=attention_mask)
295
+ # Pre-norm FFN
296
+ x = x + self.mlp(self.ln_2(x))
297
+ return x
298
+
299
+
300
+ class ChessForCausalLM(PreTrainedModel):
301
+ """
302
+ Chess Transformer for Causal Language Modeling (next-move prediction).
303
+
304
+ This model is designed to predict the next chess move given a sequence
305
+ of previous moves. It uses a GPT-style architecture with:
306
+ - Token embeddings for chess moves
307
+ - Learned positional embeddings
308
+ - Stacked transformer blocks
309
+ - Linear head for next-token prediction
310
+
311
+ The model supports weight tying between the embedding layer and the
312
+ output projection to save parameters.
313
+
314
+ Example:
315
+ >>> config = ChessConfig(vocab_size=1200, n_embd=128, n_layer=6)
316
+ >>> model = ChessForCausalLM(config)
317
+ >>> inputs = {"input_ids": torch.tensor([[1, 42, 87]])}
318
+ >>> outputs = model(**inputs)
319
+ >>> next_move_logits = outputs.logits[:, -1, :]
320
+ """
321
+
322
+ config_class = ChessConfig
323
+ base_model_prefix = "transformer"
324
+ supports_gradient_checkpointing = True
325
+ # Suppress missing-key warning for tied lm_head when loading
326
+ keys_to_ignore_on_load_missing = ["lm_head.weight"]
327
+
328
+ def __init__(self, config: ChessConfig):
329
+ super().__init__(config)
330
+
331
+ # Token and position embeddings
332
+ self.wte = nn.Embedding(config.vocab_size, config.n_embd)
333
+ self.wpe = nn.Embedding(config.n_ctx, config.n_embd)
334
+
335
+ self.drop = nn.Dropout(config.dropout)
336
+
337
+ # Transformer blocks
338
+ self.h = nn.ModuleList([
339
+ TransformerBlock(config) for _ in range(config.n_layer)
340
+ ])
341
+
342
+
343
+
344
+ # Final layer norm
345
+ self.ln_f = nn.LayerNorm(config.n_embd, eps=config.layer_norm_epsilon)
346
+
347
+ # Output head
348
+ self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
349
+
350
+ # Declare tied weights for proper serialization
351
+ if config.tie_weights:
352
+ self._tied_weights_keys = ["lm_head.weight"]
353
+
354
+ # Initialize weights
355
+ self.post_init()
356
+
357
+ # Tie weights if configured
358
+ if config.tie_weights:
359
+ self.tie_weights()
360
+
361
+ def get_input_embeddings(self) -> nn.Module:
362
+ return self.wte
363
+
364
+ def set_input_embeddings(self, new_embeddings: nn.Module):
365
+ self.wte = new_embeddings
366
+ if getattr(self.config, "tie_weights", False):
367
+ self.tie_weights()
368
+
369
+ def get_output_embeddings(self) -> nn.Module:
370
+ return self.lm_head
371
+
372
+ def set_output_embeddings(self, new_embeddings: nn.Module):
373
+ self.lm_head = new_embeddings
374
+
375
+ def tie_weights(self):
376
+ # Use HF helper to tie or clone depending on config
377
+ if getattr(self.config, "tie_weights", False) or getattr(self.config, "tie_word_embeddings", False):
378
+ self._tie_or_clone_weights(self.lm_head, self.wte)
379
+
380
+ def _init_weights(self, module: nn.Module):
381
+ """Initialize weights following GPT-2 style."""
382
+ if isinstance(module, nn.Linear):
383
+ torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)
384
+ if module.bias is not None:
385
+ torch.nn.init.zeros_(module.bias)
386
+ elif isinstance(module, nn.Embedding):
387
+ torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)
388
+ elif isinstance(module, nn.LayerNorm):
389
+ torch.nn.init.ones_(module.weight)
390
+ torch.nn.init.zeros_(module.bias)
391
+
392
+ def forward(
393
+ self,
394
+ input_ids: torch.LongTensor,
395
+ attention_mask: Optional[torch.Tensor] = None,
396
+ position_ids: Optional[torch.LongTensor] = None,
397
+ labels: Optional[torch.LongTensor] = None,
398
+ return_dict: Optional[bool] = None,
399
+ **kwargs,
400
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
401
+ """
402
+ Forward pass of the model.
403
+
404
+ Args:
405
+ input_ids: Token IDs of shape (batch_size, seq_len).
406
+ attention_mask: Attention mask of shape (batch_size, seq_len).
407
+ position_ids: Position IDs of shape (batch_size, seq_len).
408
+ labels: Labels for language modeling loss.
409
+ return_dict: Whether to return a ModelOutput object.
410
+
411
+ Returns:
412
+ CausalLMOutputWithPast containing loss (if labels provided) and logits.
413
+ """
414
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
415
+
416
+ batch_size, seq_len = input_ids.size()
417
+ device = input_ids.device
418
+
419
+ # Create position IDs if not provided
420
+ if position_ids is None:
421
+ position_ids = torch.arange(seq_len, device=device).unsqueeze(0).expand(batch_size, -1)
422
+
423
+ # Get embeddings
424
+ token_embeds = self.wte(input_ids)
425
+ position_embeds = self.wpe(position_ids)
426
+ hidden_states = self.drop(token_embeds + position_embeds)
427
+
428
+ # Pass through transformer blocks
429
+ for block in self.h:
430
+ hidden_states = block(hidden_states, attention_mask=attention_mask)
431
+
432
+ # Final layer norm
433
+ hidden_states = self.ln_f(hidden_states)
434
+
435
+ # Get logits
436
+ logits = self.lm_head(hidden_states)
437
+
438
+ # Compute loss if labels are provided
439
+ loss = None
440
+ if labels is not None:
441
+ # Shift logits and labels for next-token prediction
442
+ shift_logits = logits[..., :-1, :].contiguous()
443
+ shift_labels = labels[..., 1:].contiguous()
444
+
445
+ # Flatten for cross-entropy
446
+ loss_fct = nn.CrossEntropyLoss(ignore_index=-100)
447
+ # loss_fct = nn.CrossEntropyLoss(ignore_index=self.config.pad_token_id)
448
+ loss = loss_fct(
449
+ shift_logits.view(-1, shift_logits.size(-1)),
450
+ shift_labels.view(-1),
451
+ )
452
+
453
+ if not return_dict:
454
+ output = (logits,)
455
+ return ((loss,) + output) if loss is not None else output
456
+
457
+ return CausalLMOutputWithPast(
458
+ loss=loss,
459
+ logits=logits,
460
+ past_key_values=None,
461
+ hidden_states=None,
462
+ attentions=None,
463
+ )
464
+
465
+ @torch.no_grad()
466
+ def generate_move(
467
+ self,
468
+ input_ids: torch.LongTensor,
469
+ temperature: float = 1.0,
470
+ top_k: Optional[int] = None,
471
+ top_p: Optional[float] = None,
472
+ ) -> int:
473
+ """
474
+ Generate the next move given a sequence of moves.
475
+
476
+ Applies structural constraints enforcing:
477
+ ColoredPiece [SOURCE] source [DEST] dest [modifiers]*
478
+
479
+ Args:
480
+ input_ids: Token IDs of shape (1, seq_len).
481
+ temperature: Sampling temperature (1.0 = no change).
482
+ top_k: If set, only sample from top k tokens.
483
+ top_p: If set, use nucleus sampling with this threshold.
484
+
485
+ Returns:
486
+ The token ID of the predicted next move.
487
+ """
488
+ self.eval()
489
+
490
+ # Get logits for the last position
491
+ outputs = self(input_ids)
492
+ logits = outputs.logits[:, -1, :].clone() / temperature
493
+
494
+ # Apply structural constraints (hardcoded to ChessTokenizer structure)
495
+ from tokenizer import ChessLogitsProcessor  # defined in tokenizer.py at the repo root
496
+ processor = ChessLogitsProcessor()
497
+ logits = processor.constrain_logits(input_ids, logits)
498
+
499
+ # Apply top-k filtering
500
+ if top_k is not None:
501
+ indices_to_remove = logits < torch.topk(logits, min(top_k, logits.size(-1)))[0][..., -1, None]
502
+ logits[indices_to_remove] = float("-inf")
503
+
504
+ # Apply top-p (nucleus) filtering
505
+ if top_p is not None:
506
+ sorted_logits, sorted_indices = torch.sort(logits, descending=True)
507
+ cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
508
+
509
+ # Remove tokens with cumulative probability above the threshold
510
+ sorted_indices_to_remove = cumulative_probs > top_p
511
+ sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
512
+ sorted_indices_to_remove[..., 0] = 0
513
+
514
+ indices_to_remove = sorted_indices_to_remove.scatter(
515
+ dim=-1, index=sorted_indices, src=sorted_indices_to_remove
516
+ )
517
+ logits[indices_to_remove] = float("-inf")
518
+
519
+ # Sample from the distribution
520
+ probs = F.softmax(logits, dim=-1)
521
+ next_token = torch.multinomial(probs, num_samples=1)
522
+ return next_token.item()
523
+
524
+
525
+
526
+
527
+ # Register the model with Auto classes for easy loading
528
+ from transformers import AutoConfig, AutoModelForCausalLM
529
+
530
+ AutoConfig.register("chess_transformer", ChessConfig)
531
+ AutoModelForCausalLM.register(ChessConfig, ChessForCausalLM)
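
For reference, a minimal usage sketch of the classes above: it instantiates `ChessForCausalLM` with the submitted hyperparameters (taken from `config.json`) and runs a teacher-forced forward pass. It assumes a PyTorch build that provides `nn.RMSNorm`, which is needed because `rms_Norm=True`.

```python
import torch
from model import ChessConfig, ChessForCausalLM

config = ChessConfig(
    vocab_size=83, n_embd=124, n_layer=7, n_head=4,
    n_inner=372, group_size=4, rms_Norm=True,
)
model = ChessForCausalLM(config)

# Dummy batch of token ids drawn from the non-special part of the vocabulary.
input_ids = torch.randint(4, 83, (2, 16))
out = model(input_ids=input_ids, labels=input_ids)

print(out.logits.shape)   # torch.Size([2, 16, 83])
print(out.loss.item())    # next-token cross-entropy on the dummy batch
```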
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c22bdb64b6359302e943b73365d3ad45efc1b513c1fad1a5cc2937de7866676a
3
+ size 3865736
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "bos_token": "[BOS]",
3
+ "eos_token": "[EOS]",
4
+ "pad_token": "[PAD]",
5
+ "unk_token": "[UNK]"
6
+ }
tokenizer.py ADDED
@@ -0,0 +1,906 @@
1
+ """
2
+ Custom Chess Tokenizer for the Chess Challenge.
3
+
4
+ This tokenizer treats each move as a single token using the extended UCI notation
5
+ from the Lichess dataset (e.g., WPe2e4, BNg8f6).
6
+
7
+ The dataset format uses:
8
+ - W/B prefix for White/Black
9
+ - Piece letter: P=Pawn, N=Knight, B=Bishop, R=Rook, Q=Queen, K=King
10
+ - Source and destination squares (e.g., e2e4)
11
+ - Special suffixes: (x)=capture, (+)=check, (+*)=checkmate, (o)/(O)=castling
12
+ """
13
+
14
+ from __future__ import annotations
15
+
16
+ import json
17
+ import os
18
+ from pathlib import Path
19
+ from typing import Dict, List, Optional
20
+
21
+ from transformers import PreTrainedTokenizer
22
+
23
+
24
+ class FrequencyChessTokenizer(PreTrainedTokenizer):
25
+ """
26
+ A frequency-based tokenizer for chess moves using extended UCI notation.
27
+
28
+ This tokenizer maps each possible chess move to a unique token ID.
29
+ The vocabulary is built from the training dataset to ensure all moves
30
+ encountered during training have a corresponding token.
31
+
32
+ Only includes moves that appear at least `min_frequency` times in the dataset.
33
+ Rare moves become [UNK] tokens.
34
+
35
+ Example:
36
+ >>> tokenizer = FrequencyChessTokenizer()
37
+ >>> tokenizer.encode("WPe2e4 BPe7e5")
38
+ [1, 42, 87, 2] # [BOS, e2e4, e7e5, EOS]
39
+ """
40
+
41
+ model_input_names = ["input_ids", "attention_mask"]
42
+ vocab_files_names = {"vocab_file": "vocab.json"}
43
+
44
+ # Special tokens
45
+ PAD_TOKEN = "[PAD]"
46
+ BOS_TOKEN = "[BOS]"
47
+ EOS_TOKEN = "[EOS]"
48
+ UNK_TOKEN = "[UNK]"
49
+
50
+ def __init__(
51
+ self,
52
+ vocab_file: Optional[str] = None,
53
+ vocab: Optional[Dict[str, int]] = None,
54
+ **kwargs,
55
+ ):
56
+ """
57
+ Initialize the chess tokenizer.
58
+
59
+ Args:
60
+ vocab_file: Path to a JSON file containing the vocabulary mapping.
61
+ vocab: Dictionary mapping tokens to IDs (alternative to vocab_file).
62
+ **kwargs: Additional arguments passed to PreTrainedTokenizer.
63
+ """
64
+ # Initialize special tokens
65
+ self._pad_token = self.PAD_TOKEN
66
+ self._bos_token = self.BOS_TOKEN
67
+ self._eos_token = self.EOS_TOKEN
68
+ self._unk_token = self.UNK_TOKEN
69
+
70
+ # Remove any duplicate special-token entries passed through kwargs
71
+ # to avoid "multiple values for keyword" errors when loading from disk.
72
+ kwargs.pop("pad_token", None)
73
+ kwargs.pop("bos_token", None)
74
+ kwargs.pop("eos_token", None)
75
+ kwargs.pop("unk_token", None)
76
+
77
+ # Load or create vocabulary
78
+ if vocab is not None:
79
+ self._vocab = vocab
80
+ elif vocab_file is not None and os.path.exists(vocab_file):
81
+ with open(vocab_file, "r", encoding="utf-8") as f:
82
+ self._vocab = json.load(f)
83
+ else:
84
+ # Create a minimal vocabulary with just special tokens
85
+ # The full vocabulary should be built from the dataset
86
+ self._vocab = self._create_default_vocab()
87
+
88
+ # Create reverse mapping
89
+ self._ids_to_tokens = {v: k for k, v in self._vocab.items()}
90
+
91
+ # Call parent init AFTER setting up vocab
92
+ super().__init__(
93
+ pad_token=self._pad_token,
94
+ bos_token=self._bos_token,
95
+ eos_token=self._eos_token,
96
+ unk_token=self._unk_token,
97
+ **kwargs,
98
+ )
99
+
100
+ def _create_default_vocab(self) -> Dict[str, int]:
101
+ """
102
+ Create a minimal default vocabulary with just special tokens.
103
+
104
+ For the full vocabulary, use `build_vocab_from_dataset()`.
105
+ This minimal vocab is just a placeholder - you should build from data.
106
+ """
107
+ special_tokens = [self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN]
108
+ vocab = {token: idx for idx, token in enumerate(special_tokens)}
109
+ return vocab
110
+
111
+ @classmethod
112
+ def build_vocab_from_iterator(
113
+ cls,
114
+ iterator,
115
+ min_frequency: int = 1,
116
+ ) -> "FrequencyChessTokenizer":
117
+ """
118
+ Build a tokenizer vocabulary from an iterator of game strings.
119
+
120
+ Args:
121
+ iterator: An iterator yielding game strings (space-separated moves).
122
+ min_frequency: Minimum frequency for a token to be included.
123
+
124
+ Returns:
125
+ A FrequencyChessTokenizer with the built vocabulary.
126
+ """
127
+ from collections import Counter
128
+
129
+ token_counts = Counter()
130
+
131
+ for game in iterator:
132
+ moves = game.strip().split()
133
+ token_counts.update(moves)
134
+
135
+ # Filter by frequency
136
+ tokens = [
137
+ token for token, count in token_counts.items()
138
+ if count >= min_frequency
139
+ ]
140
+
141
+ # Sort for reproducibility
142
+ tokens = sorted(tokens)
143
+
144
+ # Build vocabulary
145
+ special_tokens = [cls.PAD_TOKEN, cls.BOS_TOKEN, cls.EOS_TOKEN, cls.UNK_TOKEN]
146
+ vocab = {token: idx for idx, token in enumerate(special_tokens + tokens)}
147
+
148
+ return cls(vocab=vocab)
149
+
150
+ @classmethod
151
+ def build_vocab_from_dataset(
152
+ cls,
153
+ dataset_name: str = "dlouapre/lichess_2025-01_1M",
154
+ split: str = "train",
155
+ column: str = "text",
156
+ min_frequency: int = 500,
157
+ max_samples: Optional[int] = 100000,
158
+ ) -> "FrequencyChessTokenizer":
159
+ """
160
+ Build a tokenizer vocabulary from a Hugging Face dataset.
161
+
162
+ Args:
163
+ dataset_name: Name of the dataset on Hugging Face Hub.
164
+ split: Dataset split to use.
165
+ column: Column containing the game strings.
166
+ min_frequency: Minimum frequency for a token to be included (default: 500).
167
+ max_samples: Maximum number of samples to process (default: 100k).
168
+
169
+ Returns:
170
+ A FrequencyChessTokenizer with the built vocabulary.
171
+ """
172
+ from datasets import load_dataset
173
+
174
+ dataset = load_dataset(dataset_name, split=split)
175
+
176
+ if max_samples is not None:
177
+ dataset = dataset.select(range(min(max_samples, len(dataset))))
178
+
179
+ def game_iterator():
180
+ for example in dataset:
181
+ yield example[column]
182
+
183
+ return cls.build_vocab_from_iterator(game_iterator(), min_frequency=min_frequency)
184
+
185
+ @property
186
+ def vocab_size(self) -> int:
187
+ """Return the size of the vocabulary."""
188
+ return len(self._vocab)
189
+
190
+ def get_vocab(self) -> Dict[str, int]:
191
+ """Return the vocabulary as a dictionary."""
192
+ return dict(self._vocab)
193
+
194
+ def _tokenize(self, text: str) -> List[str]:
195
+ """
196
+ Tokenize a string of moves into a list of tokens.
197
+
198
+ Args:
199
+ text: A string of space-separated moves.
200
+
201
+ Returns:
202
+ List of move tokens.
203
+ """
204
+ return text.strip().split()
205
+
206
+ def _convert_token_to_id(self, token: str) -> int:
207
+ """Convert a token to its ID."""
208
+ return self._vocab.get(token, self._vocab.get(self.UNK_TOKEN, 0))
209
+
210
+ def _convert_id_to_token(self, index: int) -> str:
211
+ """Convert an ID to its token."""
212
+ return self._ids_to_tokens.get(index, self.UNK_TOKEN)
213
+
214
+ def convert_tokens_to_string(self, tokens: List[str]) -> str:
215
+ """Convert a list of tokens back to a string."""
216
+ # Filter out special tokens for cleaner output
217
+ special = {self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN}
218
+ return " ".join(t for t in tokens if t not in special)
219
+
220
+ def save_vocabulary(
221
+ self,
222
+ save_directory: str,
223
+ filename_prefix: Optional[str] = None,
224
+ ) -> tuple:
225
+ """
226
+ Save the vocabulary to a JSON file.
227
+
228
+ Args:
229
+ save_directory: Directory to save the vocabulary.
230
+ filename_prefix: Optional prefix for the filename.
231
+
232
+ Returns:
233
+ Tuple containing the path to the saved vocabulary file.
234
+ """
235
+ if not os.path.isdir(save_directory):
236
+ os.makedirs(save_directory, exist_ok=True)
237
+
238
+ vocab_file = os.path.join(
239
+ save_directory,
240
+ (filename_prefix + "-" if filename_prefix else "") + "vocab.json",
241
+ )
242
+
243
+ with open(vocab_file, "w", encoding="utf-8") as f:
244
+ json.dump(self._vocab, f, ensure_ascii=False, indent=2)
245
+
246
+ return (vocab_file,)
247
+
248
+
249
+ def count_vocab_from_dataset(
250
+ dataset_name: str = "dlouapre/lichess_2025-01_1M",
251
+ split: str = "train",
252
+ column: str = "text",
253
+ max_samples: Optional[int] = 10000,
254
+ ) -> Dict[str, int]:
255
+ """
256
+ Count token frequencies in a dataset (useful for vocabulary analysis).
257
+
258
+ Args:
259
+ dataset_name: Name of the dataset on Hugging Face Hub.
260
+ split: Dataset split to use.
261
+ column: Column containing the game strings.
262
+ max_samples: Maximum number of samples to process.
263
+
264
+ Returns:
265
+ Dictionary mapping tokens to their frequencies.
266
+ """
267
+ from collections import Counter
268
+ from datasets import load_dataset
269
+
270
+ dataset = load_dataset(dataset_name, split=split)
271
+
272
+ if max_samples is not None:
273
+ dataset = dataset.select(range(min(max_samples, len(dataset))))
274
+
275
+ token_counts = Counter()
276
+
277
+ for example in dataset:
278
+ moves = example[column].strip().split()
279
+ token_counts.update(moves)
280
+
281
+ return dict(token_counts)
282
+
283
+
284
+ class ChessTokenizer(FrequencyChessTokenizer):
285
+ """
286
+ A compositional tokenizer for chess moves using split color/piece tokens.
287
+
288
+ This tokenizer breaks each move into 6 core components with explicit structure:
289
+ 1. Color: W or B (makes turn information explicit!)
290
+ 2. Piece: P, N, B, R, Q, K
291
+ 3. SOURCE marker: [SOURCE]
292
+ 4. Source square: a1, a2, ..., h8
293
+ 5. DEST marker: [DEST]
294
+ 6. Destination square: a1, a2, ..., h8
295
+
296
+ Optional modifier tokens for captures, checks, checkmate, and castling.
297
+
298
+ Example:
299
+ >>> tokenizer = ChessTokenizer()
300
+ >>> tokenizer.encode("WPe2e4 BPe7e5")
301
+ [1, W_id, P_id, SRC_id, e2_id, DST_id, e4_id, B_id, P_id, SRC_id, e7_id, DST_id, e5_id, 2]
302
+
303
+ Vocabulary:
304
+ - Colors (2): W, B [makes turn alternation explicit]
305
+ - Pieces (6): P, N, B, R, Q, K
306
+ - Position markers (2): [SOURCE], [DEST]
307
+ - Squares (64): a1-h8
308
+ - Modifiers (5): [CAPTURE], [CHECK], [CHECKMATE], [CASTLING_KS], [CASTLING_QS]
309
+ - Special (4): [PAD], [BOS], [EOS], [UNK]
310
+ Total: ~83 tokens (deterministic, 4 fewer than before)
311
+
312
+ Key advantage: Color is now EXPLICIT, making turn alternation obvious to the model!
313
+ """
314
+
315
+ # Color tokens (split for explicit turn information)
316
+ COLORS = ['W', 'B']
317
+
318
+ # Piece tokens
319
+ PIECES = ['P', 'N', 'B', 'R', 'Q', 'K']
320
+
321
+ # Position markers
322
+ POSITION_MARKERS = ['[SOURCE]', '[DEST]']
323
+
324
+ # Board squares (standard chess notation)
325
+ SQUARES = [f"{file}{rank}" for rank in range(1, 9) for file in "abcdefgh"]
326
+
327
+ # Move modifiers
328
+ MODIFIERS = ['[CAPTURE]', '[CHECK]', '[CHECKMATE]', '[CASTLING_KS]', '[CASTLING_QS]']
329
+
330
+ def __init__(self, **kwargs):
331
+ """
332
+ Initialize the compositional chess tokenizer.
333
+
334
+ Vocabulary is built deterministically from pieces and squares.
335
+ No vocab_file or dataset scanning needed.
336
+ """
337
+ # Remove vocab-related kwargs to avoid conflicts
338
+ kwargs.pop("vocab_file", None)
339
+ kwargs.pop("vocab", None)
340
+
341
+ # Build deterministic vocabulary
342
+ vocab = self._build_deterministic_vocab()
343
+
344
+ # Initialize parent with the built vocab
345
+ super().__init__(vocab=vocab, **kwargs)
346
+
347
+ @property
348
+ def vocab_size(self) -> int:
349
+ """
350
+ Return the vocabulary size.
351
+
352
+ Tokens: [PAD]=0, [BOS]=1, [EOS]=2, [UNK]=3, W=4, B=5, P-K=6-11,
353
+ [SOURCE]=12, [DEST]=13, squares=14-77, modifiers=78-82
354
+
355
+ Total: 83 tokens (indices 0-82)
356
+ """
357
+ return 4 + 2 + 6 + 2 + 64 + 5 # special + colors + pieces + markers + squares + modifiers
358
+
359
+ def _build_deterministic_vocab(self) -> Dict[str, int]:
360
+ """
361
+ Build vocabulary deterministically from colored pieces, squares, and modifiers.
362
+
363
+ Returns:
364
+ Dictionary mapping token strings to IDs.
365
+ """
366
+ vocab = {}
367
+ idx = 0
368
+
369
+ # Special tokens first (matching parent class order)
370
+ special_tokens = [self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN]
371
+ for token in special_tokens:
372
+ vocab[token] = idx
373
+ idx += 1
374
+
375
+ # Color tokens (W, B)
376
+ for color in self.COLORS:
377
+ vocab[color] = idx
378
+ idx += 1
379
+
380
+ # Piece tokens (P, N, B, R, Q, K)
381
+ for piece in self.PIECES:
382
+ vocab[piece] = idx
383
+ idx += 1
384
+
385
+ # Position marker tokens
386
+ for marker in self.POSITION_MARKERS:
387
+ vocab[marker] = idx
388
+ idx += 1
389
+
390
+ # Square tokens
391
+ for square in self.SQUARES:
392
+ vocab[square] = idx
393
+ idx += 1
394
+
395
+ # Modifier tokens
396
+ for modifier in self.MODIFIERS:
397
+ vocab[modifier] = idx
398
+ idx += 1
399
+
400
+ return vocab
401
+
402
+ def _parse_move(self, move_str: str) -> Dict:
403
+ """
404
+ Parse a move string in extended UCI notation.
405
+
406
+ Args:
407
+ move_str: Move string like "WPe2e4" or "BNg8f6(x)" or "We1g1(o)"
408
+
409
+ Returns:
410
+ Dictionary with keys: piece, color, src, dest, modifiers
411
+ """
412
+ import re
413
+
414
+ # Pattern: [WB][PNBRQK]<square><square>(<modifiers>)
415
+ pattern = r'([WB])([PNBRQK])([a-h][1-8])([a-h][1-8])((?:\([^)]*\))?)'
416
+ match = re.match(pattern, move_str.strip())
417
+
418
+ if not match:
419
+ raise ValueError(f"Invalid move format: {move_str}")
420
+
421
+ color, piece, src, dest, modifier_str = match.groups()
422
+
423
+ # Parse modifiers
424
+ modifiers = []
425
+ if modifier_str:
426
+ # Remove parentheses and split by lowercase letters/symbols
427
+ mod_content = modifier_str.strip('()')
428
+
429
+ if 'x' in mod_content:
430
+ modifiers.append('[CAPTURE]')
431
+ if '+*' in mod_content:
432
+ modifiers.append('[CHECKMATE]')
433
+ elif '+' in mod_content:
434
+ modifiers.append('[CHECK]')
435
+ if 'o' in mod_content or 'O' in mod_content:
436
+ # Determine kingside vs queenside based on destination
437
+ if dest == 'g1' or dest == 'g8':
438
+ modifiers.append('[CASTLING_KS]')
439
+ elif dest == 'c1' or dest == 'c8':
440
+ modifiers.append('[CASTLING_QS]')
441
+
442
+ return {
443
+ 'piece': piece,
444
+ 'color': color,
445
+ 'src': src,
446
+ 'dest': dest,
447
+ 'modifiers': modifiers,
448
+ }
449
+
450
+ def _tokenize(self, text: str) -> List[str]:
451
+ """
452
+ Tokenize a string of moves into component tokens with positional markers.
453
+
454
+ Each move becomes: [ColoredPiece, [SOURCE], source, [DEST], dest, *modifiers]
455
+
456
+ Args:
457
+ text: String of space-separated moves (e.g., "WPe2e4 BPe7e5")
458
+
459
+ Returns:
460
+ List of component tokens with structure markers.
461
+ """
462
+ move_strings = text.strip().split()
463
+ tokens = []
464
+
465
+ for move_str in move_strings:
466
+ parsed = self._parse_move(move_str)
467
+
468
+ # Add color and piece as SEPARATE tokens (now explicit!)
469
+ tokens.append(parsed['color']) # W or B
470
+ tokens.append(parsed['piece']) # P, N, B, R, Q, K
471
+
472
+ # Add positional markers and squares
473
+ tokens.append('[SOURCE]')
474
+ tokens.append(parsed['src'])
475
+ tokens.append('[DEST]')
476
+ tokens.append(parsed['dest'])
477
+
478
+ # Add modifier tokens if any
479
+ tokens.extend(parsed['modifiers'])
480
+
481
+ return tokens
482
+
483
+ def convert_tokens_to_string(self, tokens: List[str]) -> str:
484
+ """
485
+ Reconstruct moves from component tokens with positional markers.
486
+
487
+ Expects structure: Color, Piece, [SOURCE], source, [DEST], dest, *modifiers
488
+
489
+ Args:
490
+ tokens: List of component tokens
491
+
492
+ Returns:
493
+ Space-separated move string.
494
+ """
495
+ moves = []
496
+ token_idx = 0
497
+
498
+ while token_idx < len(tokens):
499
+ token = tokens[token_idx]
500
+
501
+ # Skip special tokens
502
+ special = {self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN}
503
+ if token in special:
504
+ token_idx += 1
505
+ continue
506
+
507
+ # Expect: Color token (W or B)
508
+ if token not in self.COLORS:
509
+ break
510
+
511
+ color = token
512
+
513
+ # Expect: Piece token (P, N, B, R, Q, K)
514
+ if token_idx + 1 >= len(tokens) or tokens[token_idx + 1] not in self.PIECES:
515
+ break
516
+
517
+ piece = tokens[token_idx + 1]
518
+ colored_piece = color + piece
519
+
520
+ # Expect: [SOURCE] marker
521
+ if token_idx + 2 >= len(tokens) or tokens[token_idx + 2] != '[SOURCE]':
522
+ break
523
+
524
+ # Expect: source square
525
+ if token_idx + 3 >= len(tokens):
526
+ break
527
+ src = tokens[token_idx + 3]
528
+ if src not in self.SQUARES:
529
+ break
530
+
531
+ # Expect: [DEST] marker
532
+ if token_idx + 4 >= len(tokens) or tokens[token_idx + 4] != '[DEST]':
533
+ break
534
+
535
+ # Expect: dest square
536
+ if token_idx + 5 >= len(tokens):
537
+ break
538
+ dest = tokens[token_idx + 5]
539
+ if dest not in self.SQUARES:
540
+ break
541
+
542
+ # Build move string
543
+ move_str = f"{color}{piece}{src}{dest}"
544
+
545
+ # Collect modifiers (next tokens until we hit another color token or end)
546
+ token_idx += 6
547
+ modifiers_list = []
548
+
549
+ while token_idx < len(tokens) and tokens[token_idx] in self.MODIFIERS:
550
+ modifier = tokens[token_idx]
551
+ modifiers_list.append(modifier)
552
+ token_idx += 1
553
+
554
+ # Append modifier suffixes
555
+ if modifiers_list:
556
+ modifier_str = ""
557
+ if '[CAPTURE]' in modifiers_list:
558
+ modifier_str += "x"
559
+ if '[CHECKMATE]' in modifiers_list:
560
+ modifier_str += "+*"
561
+ elif '[CHECK]' in modifiers_list:
562
+ modifier_str += "+"
563
+ if '[CASTLING_KS]' in modifiers_list:
564
+ modifier_str += "o"
565
+ elif '[CASTLING_QS]' in modifiers_list:
566
+ modifier_str += "o"
567
+
568
+ move_str += f"({modifier_str})"
569
+
570
+ moves.append(move_str)
571
+
572
+ return " ".join(moves)
573
+
574
+ def decode(self, token_ids, skip_special_tokens=False, **kwargs):
575
+ """
576
+ Decode token IDs back to string representation.
577
+
578
+ Properly handles individual tokens by converting each ID to its token string.
579
+ For single tokens or incomplete move sequences, returns the raw token strings.
580
+ For complete move sequences, reconstructs the move format.
581
+
582
+ Args:
583
+ token_ids: List or tensor of token IDs
584
+ skip_special_tokens: Whether to skip special tokens in output
585
+ **kwargs: Additional arguments (for compatibility)
586
+
587
+ Returns:
588
+ String representation of the tokens
589
+ """
590
+ # Convert tensor to list if needed
591
+ if hasattr(token_ids, 'tolist'):
592
+ token_ids = token_ids.tolist()
593
+
594
+ # Handle 2D tensor/list (batch)
595
+ if isinstance(token_ids, list) and len(token_ids) > 0 and isinstance(token_ids[0], list):
596
+ return [self.decode(ids, skip_special_tokens=skip_special_tokens) for ids in token_ids]
597
+
598
+ # Convert IDs to tokens
599
+ tokens = []
600
+ for token_id in token_ids:
601
+ if isinstance(token_id, int):
602
+ token = self._convert_id_to_token(token_id)
603
+ else:
604
+ token = str(token_id)
605
+
606
+ tokens.append(token)
607
+
608
+ # Try to reconstruct moves from tokens
609
+ # If successful, return the reconstructed moves
610
+ reconstructed = self._try_reconstruct_moves(tokens, skip_special_tokens)
611
+ if reconstructed is not None:
612
+ return reconstructed
613
+
614
+ # Fallback: return tokens joined with spaces, filtering special tokens if requested
615
+ if skip_special_tokens:
616
+ special = {self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN}
617
+ tokens = [t for t in tokens if t not in special]
618
+
619
+ return " ".join(tokens)
620
+
621
+ def _try_reconstruct_moves(self, tokens: List[str], skip_special_tokens: bool = False) -> Optional[str]:
622
+ """
623
+ Try to reconstruct complete moves from tokens.
624
+
625
+ Returns the reconstructed move string if tokens form valid move(s),
626
+ None if tokens don't form a complete move structure.
627
+
628
+ Args:
629
+ tokens: List of token strings
630
+ skip_special_tokens: Whether to skip special tokens
631
+
632
+ Returns:
633
+ Reconstructed move string or None
634
+ """
635
+ moves = []
636
+ token_idx = 0
637
+ found_moves = False
638
+
639
+ while token_idx < len(tokens):
640
+ token = tokens[token_idx]
641
+
642
+ # Skip special tokens
643
+ special = {self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN}
644
+ if token in special:
645
+ token_idx += 1
646
+ continue
647
+
648
+ # Check if this starts a move (color token)
649
+ if token not in self.COLORS:
650
+ # No more complete moves
651
+ break
652
+
653
+ color = token
654
+
655
+ # Need at least 6 more tokens for a complete move
656
+ if token_idx + 5 >= len(tokens):
657
+ break
658
+
659
+ # Expect: Piece token (P, N, B, R, Q, K)
660
+ if tokens[token_idx + 1] not in self.PIECES:
661
+ break
662
+
663
+ piece = tokens[token_idx + 1]
664
+
665
+ # Expect: [SOURCE] marker
666
+ if tokens[token_idx + 2] != '[SOURCE]':
667
+ break
668
+
669
+ # Expect: source square
670
+ src = tokens[token_idx + 3]
671
+ if src not in self.SQUARES:
672
+ break
673
+
674
+ # Expect: [DEST] marker
675
+ if tokens[token_idx + 4] != '[DEST]':
676
+ break
677
+
678
+ # Expect: dest square
679
+ dest = tokens[token_idx + 5]
680
+ if dest not in self.SQUARES:
681
+ break
682
+
683
+ # Build move string
684
+ move_str = f"{color}{piece}{src}{dest}"
685
+
686
+ # Collect modifiers
687
+ token_idx += 6
688
+ modifiers_list = []
689
+
690
+ while token_idx < len(tokens) and tokens[token_idx] in self.MODIFIERS:
691
+ modifiers_list.append(tokens[token_idx])
692
+ token_idx += 1
693
+
694
+ # Append modifier suffixes
695
+ if modifiers_list:
696
+ modifier_str = ""
697
+ if '[CAPTURE]' in modifiers_list:
698
+ modifier_str += "x"
699
+ if '[CHECKMATE]' in modifiers_list:
700
+ modifier_str += "+*"
701
+ elif '[CHECK]' in modifiers_list:
702
+ modifier_str += "+"
703
+ if '[CASTLING_KS]' in modifiers_list:
704
+ modifier_str += "o"
705
+ elif '[CASTLING_QS]' in modifiers_list:
706
+ modifier_str += "o"
707
+
708
+ move_str += f"({modifier_str})"
709
+
710
+ moves.append(move_str)
711
+ found_moves = True
712
+
713
+ if found_moves:
714
+ return " ".join(moves)
715
+
716
+ return None
717
+
718
+
719
+ class ChessLogitsProcessor:
720
+ """
721
+ Logits processor for enforcing chess move structure during generation.
722
+
723
+ Enforces the token sequence pattern:
724
+ Color Piece [SOURCE] source [DEST] dest [modifiers]*
725
+
726
+ Uses a state machine with 7 states:
727
+ - State 0: Expect color (W, B)
728
+ - State 1: Expect piece (P, N, B, R, Q, K)
729
+ - State 2: Expect [SOURCE] marker
730
+ - State 3: Expect source square (a1-h8)
731
+ - State 4: Expect [DEST] marker
732
+ - State 5: Expect dest square (a1-h8)
733
+ - State 6: Expect modifiers or next color token
734
+
735
+ Token structure is hardcoded to match ChessTokenizer:
736
+ - Colors: W, B (EXPLICIT for turn alternation)
737
+ - Pieces: P, N, B, R, Q, K
738
+ - Position markers: [SOURCE], [DEST]
739
+ - Squares: a1-h8 (64 total)
740
+ - Modifiers: [CAPTURE], [CHECK], [CHECKMATE], [CASTLING_KS], [CASTLING_QS]
741
+ """
742
+
743
+ # Token vocabulary indices (hardcoded to match ChessTokenizer vocab order)
744
+ # Special tokens: [PAD]=0, [BOS]=1, [EOS]=2, [UNK]=3
745
+ # Colors (4-5)
746
+ COLOR_IDS = {'W': 4, 'B': 5}
747
+ # Pieces (6-11)
748
+ PIECE_IDS = {'P': 6, 'N': 7, 'B': 8, 'R': 9, 'Q': 10, 'K': 11}
749
+ # Position markers (12-13)
750
+ POSITION_MARKER_IDS = {'[SOURCE]': 12, '[DEST]': 13}
751
+ # Squares (14-77): a1=14, a2=15, ..., h8=77
752
+ SQUARE_IDS = {f"{file}{rank}": 14 + (rank - 1) * 8 + ord(file) - ord('a')
753
+ for rank in range(1, 9) for file in "abcdefgh"}
754
+ # Modifiers (78-82)
755
+ MODIFIER_IDS = {
756
+ '[CAPTURE]': 78, '[CHECK]': 79, '[CHECKMATE]': 80,
757
+ '[CASTLING_KS]': 81, '[CASTLING_QS]': 82
758
+ }
759
+
760
+ def __init__(self):
761
+ """
762
+ Initialize the logits processor with hardcoded ChessTokenizer structure.
763
+ """
764
+ import torch
765
+ self.torch = torch
766
+
767
+ # Convert to sets for membership testing
768
+ self.color_ids = set(self.COLOR_IDS.values())
769
+ self.piece_ids = set(self.PIECE_IDS.values())
770
+ self.square_ids = set(self.SQUARE_IDS.values())
771
+ self.modifier_ids = set(self.MODIFIER_IDS.values())
772
+
773
+ def _get_state(self, input_ids):
774
+ """
775
+ Determine current state in move sequence based on recent tokens.
776
+
777
+ Returns state (0-6) indicating what token type is expected next.
778
+ """
779
+ if input_ids.numel() == 0:
780
+ return 0 # Start: expect color
781
+
782
+ # Get the sequence of tokens
783
+ seq = input_ids[0].tolist()
784
+
785
+ # Work backwards to find the last color token (marks start of move)
786
+ last_move_idx = -1
787
+ for i in range(len(seq) - 1, -1, -1):
788
+ if seq[i] in self.color_ids:
789
+ last_move_idx = i
790
+ break
791
+
792
+ if last_move_idx == -1:
793
+ return 0 # No color found, expect color
794
+
795
+ # Count tokens since last color
796
+ tokens_since_color = len(seq) - 1 - last_move_idx
797
+
798
+ # Pattern: Color, Piece, [SOURCE], source, [DEST], dest, ...modifiers
799
+ if tokens_since_color == 0:
800
+ return 1 # Expect piece after color
801
+ elif tokens_since_color == 1:
802
+ # Should have: color, piece
803
+ if seq[-1] in self.piece_ids:
804
+ return 2 # Expect [SOURCE]
805
+ else:
806
+ return 1 # Unexpected, reset
807
+ elif tokens_since_color == 2:
808
+ # Should have: color, piece, [SOURCE]
809
+ if (seq[-2] in self.piece_ids and
810
+ seq[-1] in [self.POSITION_MARKER_IDS['[SOURCE]']]):
811
+ return 3 # Expect source square
812
+ else:
813
+ return 1 # Reset
814
+ elif tokens_since_color == 3:
815
+ # Should have: color, piece, [SOURCE], source
816
+ if (seq[-3] in self.piece_ids and
817
+ seq[-2] in [self.POSITION_MARKER_IDS['[SOURCE]']] and
818
+ seq[-1] in self.square_ids):
819
+ return 4 # Expect [DEST]
820
+ else:
821
+ return 1 # Reset
822
+ elif tokens_since_color == 4:
823
+ # Should have: color, piece, [SOURCE], source, [DEST]
824
+ if (seq[-2] in self.square_ids and
825
+ seq[-1] in [self.POSITION_MARKER_IDS['[DEST]']]):
826
+ return 5 # Expect dest square
827
+ else:
828
+ return 1 # Reset
829
+ elif tokens_since_color == 5:
830
+ # Should have: color, piece, [SOURCE], source, [DEST], dest
831
+ if seq[-1] in self.square_ids:
832
+ return 6 # Expect modifiers or next color (move complete)
833
+ else:
834
+ return 1 # Reset
835
+ else:
836
+ # tokens_since_color >= 6: We're in modifiers or expecting next move
837
+ # If last token is a modifier, still expect more modifiers or next color
838
+ # If last token is not a modifier, we should expect next color
839
+ if seq[-1] not in self.modifier_ids:
840
+ return 0 # Expect next move (next color)
841
+ else:
842
+ return 6 # Could be more modifiers or next color
843
+
844
+ def constrain_logits(self, input_ids, logits):
845
+ """
846
+ Mask invalid tokens in logits based on move structure.
847
+
848
+ Sets logits to -inf for tokens that violate move structure.
849
+
850
+ Args:
851
+ input_ids: Model input token IDs of shape (batch_size, seq_len)
852
+ logits: Model output logits of shape (batch_size, vocab_size)
853
+
854
+ Returns:
855
+ Modified logits with invalid tokens masked to -inf
856
+ """
857
+ state = self._get_state(input_ids)
858
+
859
+ # Create a mask for valid tokens (all ones initially)
860
+ valid_mask = self.torch.ones(logits.shape[-1], dtype=self.torch.bool)
861
+ valid_mask[:] = False # Start by forbidding all
862
+
863
+ # Allow tokens based on current state
864
+ if state == 0:
865
+ # Expect color (W or B)
866
+ for color_id in self.color_ids:
867
+ valid_mask[color_id] = True
868
+
869
+ elif state == 1:
870
+ # Expect piece (P, N, B, R, Q, K)
871
+ for piece_id in self.piece_ids:
872
+ valid_mask[piece_id] = True
873
+
874
+ elif state == 2:
875
+ # Expect [SOURCE]
876
+ valid_mask[self.POSITION_MARKER_IDS['[SOURCE]']] = True
877
+
878
+ elif state == 3:
879
+ # Expect source square
880
+ for square_id in self.square_ids:
881
+ valid_mask[square_id] = True
882
+
883
+ elif state == 4:
884
+ # Expect [DEST]
885
+ valid_mask[self.POSITION_MARKER_IDS['[DEST]']] = True
886
+
887
+ elif state == 5:
888
+ # Expect dest square
889
+ for square_id in self.square_ids:
890
+ valid_mask[square_id] = True
891
+
892
+ elif state == 6:
893
+ # Expect modifiers or next color token
894
+ # Allow: modifiers + colors + EOS
895
+ for modifier_id in self.modifier_ids:
896
+ valid_mask[modifier_id] = True
897
+ for color_id in self.color_ids:
898
+ valid_mask[color_id] = True
899
+ valid_mask[2] = True # Allow EOS to end sequence
900
+
901
+ # Apply mask
902
+ logits = logits.clone()
903
+ logits[0, ~valid_mask] = float('-inf')
904
+
905
+ return logits
906
+
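
A short sketch of how the two pieces above fit together: `ChessTokenizer` decomposes each extended-UCI move into structural tokens, and `ChessLogitsProcessor` masks logits so that only the token type expected by its state machine can be sampled. The values in the comments follow the hardcoded vocabulary indices listed above.

```python
import torch
from tokenizer import ChessTokenizer, ChessLogitsProcessor

tok = ChessTokenizer()
print(tok.vocab_size)  # 83

print(tok.tokenize("WPe2e4 BPe7e5(x)"))
# ['W', 'P', '[SOURCE]', 'e2', '[DEST]', 'e4',
#  'B', 'P', '[SOURCE]', 'e7', '[DEST]', 'e5', '[CAPTURE]']

proc = ChessLogitsProcessor()
input_ids = torch.tensor([[1]])              # just [BOS]: state 0, a color must come next
logits = torch.zeros(1, tok.vocab_size)
masked = proc.constrain_logits(input_ids, logits)

print(torch.isfinite(masked[0]).nonzero().flatten().tolist())  # [4, 5] -> W or B
```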
tokenizer_config.json ADDED
@@ -0,0 +1,50 @@
1
+ {
2
+ "added_tokens_decoder": {
3
+ "0": {
4
+ "content": "[PAD]",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false,
9
+ "special": true
10
+ },
11
+ "1": {
12
+ "content": "[BOS]",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false,
17
+ "special": true
18
+ },
19
+ "2": {
20
+ "content": "[EOS]",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false,
25
+ "special": true
26
+ },
27
+ "3": {
28
+ "content": "[UNK]",
29
+ "lstrip": false,
30
+ "normalized": false,
31
+ "rstrip": false,
32
+ "single_word": false,
33
+ "special": true
34
+ }
35
+ },
36
+ "auto_map": {
37
+ "AutoTokenizer": [
38
+ "tokenizer_decomposed.ChessDecomposedTokenizer",
39
+ null
40
+ ]
41
+ },
42
+ "bos_token": "[BOS]",
43
+ "clean_up_tokenization_spaces": false,
44
+ "eos_token": "[EOS]",
45
+ "extra_special_tokens": {},
46
+ "model_max_length": 1000000000000000019884624838656,
47
+ "pad_token": "[PAD]",
48
+ "tokenizer_class": "ChessDecomposedTokenizer",
49
+ "unk_token": "[UNK]"
50
+ }
tokenizer_decomposed.py ADDED
@@ -0,0 +1,156 @@
1
+ """
2
+ Decomposed Chess Tokenizer.
3
+
4
+ This tokenizer decomposes each move into 3-4 tokens:
5
+ - color+piece token (e.g., "WP", "BN")
6
+ - from-square token with suffix "_f" (e.g., "e2_f")
7
+ - to-square token with suffix "_t" (e.g., "e4_t")
8
+ - optional promotion token (one of "q", "r", "b", "n")
9
+
10
+ This avoids UNKs for rare moves and makes legality learning easier because the model
11
+ always emits explicit squares.
12
+ """
13
+
14
+ from __future__ import annotations
15
+
16
+ import json
17
+ import os
18
+ import re
19
+ from typing import Dict, List, Optional
20
+
21
+ from transformers import PreTrainedTokenizer
22
+
23
+
24
+ class ChessDecomposedTokenizer(PreTrainedTokenizer):
25
+ model_input_names = ["input_ids", "attention_mask"]
26
+ vocab_files_names = {"vocab_file": "vocab.json"}
27
+
28
+ PAD_TOKEN = "[PAD]"
29
+ BOS_TOKEN = "[BOS]"
30
+ EOS_TOKEN = "[EOS]"
31
+ UNK_TOKEN = "[UNK]"
32
+
33
+ _MOVE_RE = re.compile(r"^[WB][PNBRQK][a-h][1-8][a-h][1-8].*$")
34
+
35
+ def __init__(
36
+ self,
37
+ vocab_file: Optional[str] = None,
38
+ vocab: Optional[Dict[str, int]] = None,
39
+ **kwargs,
40
+ ):
41
+ self._pad_token = self.PAD_TOKEN
42
+ self._bos_token = self.BOS_TOKEN
43
+ self._eos_token = self.EOS_TOKEN
44
+ self._unk_token = self.UNK_TOKEN
45
+
46
+ kwargs.pop("pad_token", None)
47
+ kwargs.pop("bos_token", None)
48
+ kwargs.pop("eos_token", None)
49
+ kwargs.pop("unk_token", None)
50
+
51
+ if vocab is not None:
52
+ self._vocab = vocab
53
+ elif vocab_file is not None and os.path.exists(vocab_file):
54
+ with open(vocab_file, "r", encoding="utf-8") as f:
55
+ self._vocab = json.load(f)
56
+ else:
57
+ self._vocab = self._create_full_vocab()
58
+
59
+ self._ids_to_tokens = {v: k for k, v in self._vocab.items()}
60
+
61
+ super().__init__(
62
+ pad_token=self._pad_token,
63
+ bos_token=self._bos_token,
64
+ eos_token=self._eos_token,
65
+ unk_token=self._unk_token,
66
+ **kwargs,
67
+ )
68
+
69
+ @staticmethod
70
+ def _create_full_vocab() -> Dict[str, int]:
71
+ special_tokens = [
72
+ ChessDecomposedTokenizer.PAD_TOKEN,
73
+ ChessDecomposedTokenizer.BOS_TOKEN,
74
+ ChessDecomposedTokenizer.EOS_TOKEN,
75
+ ChessDecomposedTokenizer.UNK_TOKEN,
76
+ ]
77
+
78
+ pieces = ["P", "N", "B", "R", "Q", "K"]
79
+ colors = ["W", "B"]
80
+ piece_tokens = [f"{c}{p}" for c in colors for p in pieces]
81
+
82
+ files = "abcdefgh"
83
+ ranks = "12345678"
84
+ squares = [f"{f}{r}" for f in files for r in ranks]
85
+ from_tokens = [f"{sq}_f" for sq in squares]
86
+ to_tokens = [f"{sq}_t" for sq in squares]
87
+
88
+ promo_tokens = ["q", "r", "b", "n"]
89
+
90
+ tokens = special_tokens + piece_tokens + from_tokens + to_tokens + promo_tokens
91
+ return {tok: idx for idx, tok in enumerate(tokens)}
92
+
93
+ @property
94
+ def vocab_size(self) -> int:
95
+ return len(self._vocab)
96
+
97
+ def get_vocab(self) -> Dict[str, int]:
98
+ return dict(self._vocab)
99
+
100
+ def _tokenize(self, text: str) -> List[str]:
101
+ raw = text.strip()
102
+ if not raw:
103
+ return []
104
+
105
+ parts = raw.split()
106
+ out: List[str] = []
107
+
108
+ for part in parts:
109
+ if part in {self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN}:
110
+ out.append(part)
111
+ continue
112
+
113
+ if not self._MOVE_RE.match(part):
114
+ out.append(self.UNK_TOKEN)
115
+ continue
116
+
117
+ color = part[0]
118
+ piece = part[1]
119
+ from_sq = part[2:4]
120
+ to_sq = part[4:6]
121
+ out.append(f"{color}{piece}")
122
+ out.append(f"{from_sq}_f")
123
+ out.append(f"{to_sq}_t")
124
+
125
+ if "=" in part:
126
+ promo_idx = part.find("=")
127
+ if promo_idx != -1 and promo_idx + 1 < len(part):
128
+ promo = part[promo_idx + 1].lower()
129
+ if promo in {"q", "r", "b", "n"}:
130
+ out.append(promo)
131
+
132
+ return out
133
+
134
+ def _convert_token_to_id(self, token: str) -> int:
135
+ return self._vocab.get(token, self._vocab.get(self.UNK_TOKEN, 0))
136
+
137
+ def _convert_id_to_token(self, index: int) -> str:
138
+ return self._ids_to_tokens.get(index, self.UNK_TOKEN)
139
+
140
+ def convert_tokens_to_string(self, tokens: List[str]) -> str:
141
+ special = {self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN}
142
+ return " ".join(t for t in tokens if t not in special)
143
+
144
+ def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> tuple:
145
+ if not os.path.isdir(save_directory):
146
+ os.makedirs(save_directory, exist_ok=True)
147
+
148
+ vocab_file = os.path.join(
149
+ save_directory,
150
+ (filename_prefix + "-" if filename_prefix else "") + "vocab.json",
151
+ )
152
+
153
+ with open(vocab_file, "w", encoding="utf-8") as f:
154
+ json.dump(self._vocab, f, ensure_ascii=False, indent=2)
155
+
156
+ return (vocab_file,)
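
A small example of the decomposition scheme described in the module docstring: each move yields a color+piece token, a `_f` source square, a `_t` destination square, and an optional promotion letter. The promotion spelling `=Q` in the last move is illustrative of the `=`-suffix the tokenizer looks for.

```python
from tokenizer_decomposed import ChessDecomposedTokenizer

tok = ChessDecomposedTokenizer()
print(tok.vocab_size)  # 148 = 4 special + 12 color/piece + 64 "_f" + 64 "_t" + 4 promotions

print(tok.tokenize("WPe2e4 BNg8f6 WPe7e8=Q"))
# ['WP', 'e2_f', 'e4_t', 'BN', 'g8_f', 'f6_t', 'WP', 'e7_f', 'e8_t', 'q']
```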
vocab.json ADDED
@@ -0,0 +1,84 @@
1
+ {
2
+ "[PAD]": 0,
3
+ "[BOS]": 1,
4
+ "[EOS]": 2,
5
+ "[UNK]": 3,
6
+ "W": 4,
7
+ "B": 8,
8
+ "P": 6,
9
+ "N": 7,
10
+ "R": 9,
11
+ "Q": 10,
12
+ "K": 11,
13
+ "[SOURCE]": 12,
14
+ "[DEST]": 13,
15
+ "a1": 14,
16
+ "b1": 15,
17
+ "c1": 16,
18
+ "d1": 17,
19
+ "e1": 18,
20
+ "f1": 19,
21
+ "g1": 20,
22
+ "h1": 21,
23
+ "a2": 22,
24
+ "b2": 23,
25
+ "c2": 24,
26
+ "d2": 25,
27
+ "e2": 26,
28
+ "f2": 27,
29
+ "g2": 28,
30
+ "h2": 29,
31
+ "a3": 30,
32
+ "b3": 31,
33
+ "c3": 32,
34
+ "d3": 33,
35
+ "e3": 34,
36
+ "f3": 35,
37
+ "g3": 36,
38
+ "h3": 37,
39
+ "a4": 38,
40
+ "b4": 39,
41
+ "c4": 40,
42
+ "d4": 41,
43
+ "e4": 42,
44
+ "f4": 43,
45
+ "g4": 44,
46
+ "h4": 45,
47
+ "a5": 46,
48
+ "b5": 47,
49
+ "c5": 48,
50
+ "d5": 49,
51
+ "e5": 50,
52
+ "f5": 51,
53
+ "g5": 52,
54
+ "h5": 53,
55
+ "a6": 54,
56
+ "b6": 55,
57
+ "c6": 56,
58
+ "d6": 57,
59
+ "e6": 58,
60
+ "f6": 59,
61
+ "g6": 60,
62
+ "h6": 61,
63
+ "a7": 62,
64
+ "b7": 63,
65
+ "c7": 64,
66
+ "d7": 65,
67
+ "e7": 66,
68
+ "f7": 67,
69
+ "g7": 68,
70
+ "h7": 69,
71
+ "a8": 70,
72
+ "b8": 71,
73
+ "c8": 72,
74
+ "d8": 73,
75
+ "e8": 74,
76
+ "f8": 75,
77
+ "g8": 76,
78
+ "h8": 77,
79
+ "[CAPTURE]": 78,
80
+ "[CHECK]": 79,
81
+ "[CHECKMATE]": 80,
82
+ "[CASTLING_KS]": 81,
83
+ "[CASTLING_QS]": 82
84
+ }