Commit a792c67 (verified) · committed by iliasslasri · 1 Parent(s): 40a2d52

Chess Challenge submission by iliasslasri

Files changed (4)
  1. README.md +3 -3
  2. config.json +10 -4
  3. model.py +518 -0
  4. model.safetensors +2 -2
README.md CHANGED
@@ -14,13 +14,13 @@ Chess model submitted to the LLM Course Chess Challenge.
  ## Submission Info
 
  - **Submitted by**: [iliasslasri](https://huggingface.co/iliasslasri)
- - **Parameters**: 980,720
+ - **Parameters**: 997,136
  - **Organization**: LLM-course
 
  ## Model Details
 
  - **Architecture**: Chess Transformer (GPT-style)
  - **Vocab size**: 75
- - **Embedding dim**: 92
+ - **Embedding dim**: 96
  - **Layers**: 11
- - **Heads**: 4
+ - **Heads**: 8
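For reference, a minimal sketch of how the updated architecture reaches the parameter count reported above, assuming the module shapes defined in model.py and the n_inner/num_groups values from the config.json change below:

# Rough parameter count for the new config: n_embd=96, n_head=8,
# num_groups=4 (GQA), n_inner=304, n_layer=11, vocab_size=75, n_ctx=256,
# tie_weights=false. Shapes follow the Linear/Embedding/LayerNorm modules in model.py.
n_embd, n_head, n_groups, n_inner, n_layer, vocab, n_ctx = 96, 8, 4, 304, 11, 75, 256
head_dim = n_embd // n_head                                   # 12
kv_dim = n_groups * head_dim                                  # 48

embeddings = vocab * n_embd + n_ctx * n_embd                  # wte + wpe
attn = 2 * (n_embd * n_embd + n_embd)                         # q_proj, out_proj (weights + bias)
attn += 2 * (n_embd * kv_dim + kv_dim)                        # k_proj, v_proj (weights + bias)
mlp = (n_embd * n_inner + n_inner) + (n_inner * n_embd + n_embd)
norms = 2 * (2 * n_embd)                                      # ln_1, ln_2
per_layer = attn + mlp + norms                                # 87,088
total = embeddings + n_layer * per_layer + 2 * n_embd + n_embd * vocab  # + ln_f + untied lm_head
print(f"{total:,}")                                           # 997,136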
config.json CHANGED
@@ -1,18 +1,24 @@
 {
-  "_name_or_path": "./11_4_92/checkpoint-100197/",
+  "_name_or_path": "./gqa_1/checkpoint-267192/",
   "architectures": [
     "ChessForCausalLM"
   ],
+  "attn": "GQA",
+  "auto_map": {
+    "AutoConfig": "model.ChessConfig",
+    "AutoModelForCausalLM": "model.ChessForCausalLM"
+  },
   "bos_token_id": 1,
   "dropout": 0.1,
   "eos_token_id": 2,
   "layer_norm_epsilon": 1e-05,
   "model_type": "chess_transformer",
   "n_ctx": 256,
-  "n_embd": 92,
-  "n_head": 4,
-  "n_inner": 276,
+  "n_embd": 96,
+  "n_head": 8,
+  "n_inner": 304,
   "n_layer": 11,
+  "num_groups": 4,
   "pad_token_id": 0,
   "tie_weights": false,
   "tie_word_embeddings": false,
model.py ADDED
@@ -0,0 +1,518 @@
"""
Chess Transformer Model for the Chess Challenge.

This module provides a simple GPT-style transformer architecture
designed to fit within the 1M parameter constraint.

Key components:
- ChessConfig: Configuration class for model hyperparameters
- ChessForCausalLM: The main model class for next-move prediction
"""

from __future__ import annotations

import math
from dataclasses import dataclass
from typing import Optional, Tuple, Union

import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import PretrainedConfig, PreTrainedModel
from transformers.modeling_outputs import CausalLMOutputWithPast


class ChessConfig(PretrainedConfig):
    """
    Configuration class for the Chess Transformer model.

    This configuration is designed for a ~1M parameter model.
    Students can adjust these values to explore different architectures.

    Parameter budget breakdown (with default values):
    - Embeddings (vocab): 1200 x 128 = 153,600
    - Position Embeddings: 256 x 128 = 32,768
    - Transformer Layers: 6 x ~120,000 = ~720,000
    - LM Head (with weight tying): 0 (shared with embeddings)
    - Total: ~906,000 parameters

    Attributes:
        vocab_size: Size of the vocabulary (number of unique moves).
        n_embd: Embedding dimension (d_model).
        n_layer: Number of transformer layers.
        n_head: Number of attention heads.
        n_ctx: Maximum sequence length (context window).
        n_inner: Feed-forward inner dimension (default: 3 * n_embd).
        dropout: Dropout probability.
        layer_norm_epsilon: Epsilon for layer normalization.
        tie_weights: Whether to tie embedding and output weights.
    """

    model_type = "chess_transformer"

    def __init__(
        self,
        vocab_size: int = 1200,
        n_embd: int = 128,
        n_layer: int = 6,
        n_head: int = 4,
        n_ctx: int = 256,
        n_inner: Optional[int] = None,
        dropout: float = 0.1,
        layer_norm_epsilon: float = 1e-5,
        tie_weights: bool = True,
        pad_token_id: int = 0,
        bos_token_id: int = 1,
        eos_token_id: int = 2,
        attn: str = "MHA",
        num_groups: int = 2,
        **kwargs,
    ):
        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            **kwargs,
        )

        self.vocab_size = vocab_size
        self.n_embd = n_embd
        self.n_layer = n_layer
        self.n_head = n_head
        self.n_ctx = n_ctx
        self.n_inner = n_inner if n_inner is not None else 3 * n_embd  # Reduced from 4x to 3x
        self.dropout = dropout
        self.layer_norm_epsilon = layer_norm_epsilon
        self.tie_weights = tie_weights
        # Inform HF base class about tying behavior
        self.tie_word_embeddings = bool(tie_weights)

        # For GQA
        self.attn = attn
        self.num_groups = num_groups


class MultiHeadAttention(nn.Module):
    """
    Multi-head self-attention module.

    This is a standard scaled dot-product attention implementation
    with causal masking for autoregressive generation.
    """

    def __init__(self, config: ChessConfig):
        super().__init__()

        assert config.n_embd % config.n_head == 0, \
            f"n_embd ({config.n_embd}) must be divisible by n_head ({config.n_head})"

        self.n_head = config.n_head
        self.n_embd = config.n_embd
        self.head_dim = config.n_embd // config.n_head

        # Combined QKV projection for efficiency
        self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd)
        self.c_proj = nn.Linear(config.n_embd, config.n_embd)

        self.dropout = nn.Dropout(config.dropout)

        # Causal mask (registered as a non-persistent buffer)
        self.register_buffer(
            "bias",
            torch.tril(torch.ones(config.n_ctx, config.n_ctx)).view(
                1, 1, config.n_ctx, config.n_ctx
            ),
            persistent=False,
        )

    def forward(
        self,
        x: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
    ) -> torch.Tensor:
        batch_size, seq_len, _ = x.size()

        # Compute Q, K, V
        qkv = self.c_attn(x)
        q, k, v = qkv.split(self.n_embd, dim=2)

        # Reshape for multi-head attention
        q = q.view(batch_size, seq_len, self.n_head, self.head_dim).transpose(1, 2)
        k = k.view(batch_size, seq_len, self.n_head, self.head_dim).transpose(1, 2)
        v = v.view(batch_size, seq_len, self.n_head, self.head_dim).transpose(1, 2)

        # Scaled dot-product attention
        attn_weights = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.head_dim)

        # Apply causal mask
        causal_mask = self.bias[:, :, :seq_len, :seq_len]
        attn_weights = attn_weights.masked_fill(causal_mask == 0, float("-inf"))

        # Apply attention mask (for padding)
        if attention_mask is not None:
            # attention_mask shape: (batch_size, seq_len) -> (batch_size, 1, 1, seq_len)
            attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
            attn_weights = attn_weights.masked_fill(attention_mask == 0, float("-inf"))

        attn_weights = F.softmax(attn_weights, dim=-1)
        attn_weights = self.dropout(attn_weights)

        # Apply attention to values
        attn_output = torch.matmul(attn_weights, v)

        # Reshape back
        attn_output = attn_output.transpose(1, 2).contiguous().view(
            batch_size, seq_len, self.n_embd
        )

        # Output projection
        attn_output = self.c_proj(attn_output)

        return attn_output


class FeedForward(nn.Module):
    """
    Feed-forward network (MLP) module.

    Standard two-layer MLP with GELU activation.
    """

    def __init__(self, config: ChessConfig):
        super().__init__()

        self.c_fc = nn.Linear(config.n_embd, config.n_inner)
        self.c_proj = nn.Linear(config.n_inner, config.n_embd)
        self.dropout = nn.Dropout(config.dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.c_fc(x)
        x = F.gelu(x)
        x = self.c_proj(x)
        x = self.dropout(x)
        return x


class GroupedQueryAttention(nn.Module):
    """
    Standard grouped-query attention: the n_head query heads share
    num_groups key/value heads, shrinking the K/V projections.
    """

    def __init__(self, config: ChessConfig):
        super().__init__()

        assert config.n_embd % config.n_head == 0, \
            f"n_embd ({config.n_embd}) must be divisible by n_head ({config.n_head})"

        self.n_head = config.n_head  # number of query heads
        self.n_embd = config.n_embd
        self.head_dim = config.n_embd // config.n_head

        self.num_groups = config.num_groups
        self.group_size = config.n_head // config.num_groups

        self.q_proj = nn.Linear(self.n_embd, self.n_head * self.head_dim)
        self.k_proj = nn.Linear(self.n_embd, self.num_groups * self.head_dim)
        self.v_proj = nn.Linear(self.n_embd, self.num_groups * self.head_dim)
        self.out_proj = nn.Linear(self.n_head * self.head_dim, self.n_embd)

        # Causal mask (registered as a non-persistent buffer)
        self.register_buffer(
            "bias",
            torch.tril(torch.ones(config.n_ctx, config.n_ctx)).view(
                1, 1, config.n_ctx, config.n_ctx
            ),
            persistent=False,
        )

    def forward(
        self,
        x: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
    ) -> torch.Tensor:
        batch_size, seq_len, _ = x.size()

        # Project queries, keys, and values
        q = self.q_proj(x).view(batch_size, seq_len, self.n_head, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).view(batch_size, seq_len, self.num_groups, self.head_dim).transpose(1, 2)
        v = self.v_proj(x).view(batch_size, seq_len, self.num_groups, self.head_dim).transpose(1, 2)

        # Expand k and v to match the number of query heads
        k = k.repeat_interleave(self.group_size, dim=1)
        v = v.repeat_interleave(self.group_size, dim=1)

        # Compute attention weights
        attn_weights = torch.matmul(q, k.transpose(2, 3)) / math.sqrt(self.head_dim)

        # Apply causal mask
        causal_mask = self.bias[:, :, :seq_len, :seq_len]
        attn_weights = attn_weights.masked_fill(causal_mask == 0, float("-inf"))

        # Apply attention mask (for padding)
        if attention_mask is not None:
            # attention_mask shape: (batch_size, seq_len) -> (batch_size, 1, 1, seq_len)
            attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
            attn_weights = attn_weights.masked_fill(attention_mask == 0, float("-inf"))

        attn_weights = F.softmax(attn_weights, dim=-1)
        # Only apply dropout during training (F.dropout defaults to training=True)
        attn_weights = F.dropout(attn_weights, p=0.1, training=self.training)

        # Apply attention weights to values
        context = torch.matmul(attn_weights, v).transpose(1, 2).contiguous()
        context = context.view(batch_size, seq_len, self.n_embd)

        return self.out_proj(context)


class TransformerBlock(nn.Module):
    """
    A single transformer block with attention and feed-forward layers.

    Uses pre-normalization (LayerNorm before attention/FFN) for better
    training stability.
    """

    def __init__(self, config: ChessConfig):
        super().__init__()

        self.ln_1 = nn.LayerNorm(config.n_embd, eps=config.layer_norm_epsilon)
        if config.attn == "MHA":
            self.attn = MultiHeadAttention(config)
        elif config.attn == "GQA":
            self.attn = GroupedQueryAttention(config)
        else:
            raise ValueError(f"config.attn expected either MHA or GQA, got {config.attn}")

        self.ln_2 = nn.LayerNorm(config.n_embd, eps=config.layer_norm_epsilon)
        self.mlp = FeedForward(config)

    def forward(
        self,
        x: torch.Tensor,
        attention_mask: Optional[torch.Tensor] = None,
    ) -> torch.Tensor:
        # Pre-norm attention
        x = x + self.attn(self.ln_1(x), attention_mask=attention_mask)
        # Pre-norm FFN
        x = x + self.mlp(self.ln_2(x))
        return x


class ChessForCausalLM(PreTrainedModel):
    """
    Chess Transformer for Causal Language Modeling (next-move prediction).

    This model is designed to predict the next chess move given a sequence
    of previous moves. It uses a GPT-style architecture with:
    - Token embeddings for chess moves
    - Learned positional embeddings
    - Stacked transformer blocks
    - Linear head for next-token prediction

    The model supports weight tying between the embedding layer and the
    output projection to save parameters.

    Example:
        >>> config = ChessConfig(vocab_size=1200, n_embd=128, n_layer=6)
        >>> model = ChessForCausalLM(config)
        >>> inputs = {"input_ids": torch.tensor([[1, 42, 87]])}
        >>> outputs = model(**inputs)
        >>> next_move_logits = outputs.logits[:, -1, :]
    """

    config_class = ChessConfig
    base_model_prefix = "transformer"
    supports_gradient_checkpointing = True
    # Suppress missing-key warning for tied lm_head when loading
    keys_to_ignore_on_load_missing = ["lm_head.weight"]

    def __init__(self, config: ChessConfig):
        super().__init__(config)

        # Token and position embeddings
        self.wte = nn.Embedding(config.vocab_size, config.n_embd)
        self.wpe = nn.Embedding(config.n_ctx, config.n_embd)

        self.drop = nn.Dropout(config.dropout)

        # Transformer blocks
        self.h = nn.ModuleList([
            TransformerBlock(config) for _ in range(config.n_layer)
        ])

        # Final layer norm
        self.ln_f = nn.LayerNorm(config.n_embd, eps=config.layer_norm_epsilon)

        # Output head
        self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)

        # Declare tied weights for proper serialization
        if config.tie_weights:
            self._tied_weights_keys = ["lm_head.weight"]

        # Initialize weights
        self.post_init()

        # Tie weights if configured
        if config.tie_weights:
            self.tie_weights()

    def get_input_embeddings(self) -> nn.Module:
        return self.wte

    def set_input_embeddings(self, new_embeddings: nn.Module):
        self.wte = new_embeddings
        if getattr(self.config, "tie_weights", False):
            self.tie_weights()

    def get_output_embeddings(self) -> nn.Module:
        return self.lm_head

    def set_output_embeddings(self, new_embeddings: nn.Module):
        self.lm_head = new_embeddings

    def tie_weights(self):
        # Use the HF helper to tie or clone depending on config
        if getattr(self.config, "tie_weights", False) or getattr(self.config, "tie_word_embeddings", False):
            self._tie_or_clone_weights(self.lm_head, self.wte)

    def _init_weights(self, module: nn.Module):
        """Initialize weights following GPT-2 style."""
        if isinstance(module, nn.Linear):
            torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)
            if module.bias is not None:
                torch.nn.init.zeros_(module.bias)
        elif isinstance(module, nn.Embedding):
            torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)
        elif isinstance(module, nn.LayerNorm):
            torch.nn.init.ones_(module.weight)
            torch.nn.init.zeros_(module.bias)

    def forward(
        self,
        input_ids: torch.LongTensor,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        return_dict: Optional[bool] = None,
        **kwargs,
    ) -> Union[Tuple, CausalLMOutputWithPast]:
        """
        Forward pass of the model.

        Args:
            input_ids: Token IDs of shape (batch_size, seq_len).
            attention_mask: Attention mask of shape (batch_size, seq_len).
            position_ids: Position IDs of shape (batch_size, seq_len).
            labels: Labels for language modeling loss.
            return_dict: Whether to return a ModelOutput object.

        Returns:
            CausalLMOutputWithPast containing loss (if labels provided) and logits.
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        batch_size, seq_len = input_ids.size()
        device = input_ids.device

        # Create position IDs if not provided
        if position_ids is None:
            position_ids = torch.arange(seq_len, device=device).unsqueeze(0).expand(batch_size, -1)

        # Get embeddings
        token_embeds = self.wte(input_ids)
        position_embeds = self.wpe(position_ids)
        hidden_states = self.drop(token_embeds + position_embeds)

        # Pass through transformer blocks
        for block in self.h:
            hidden_states = block(hidden_states, attention_mask=attention_mask)

        # Final layer norm
        hidden_states = self.ln_f(hidden_states)

        # Get logits
        logits = self.lm_head(hidden_states)

        # Compute loss if labels are provided
        loss = None
        if labels is not None:
            # Shift logits and labels for next-token prediction
            shift_logits = logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()

            # Flatten for cross-entropy
            loss_fct = nn.CrossEntropyLoss(ignore_index=-100)
            # loss_fct = nn.CrossEntropyLoss(ignore_index=self.config.pad_token_id)
            loss = loss_fct(
                shift_logits.view(-1, shift_logits.size(-1)),
                shift_labels.view(-1),
            )

        if not return_dict:
            output = (logits,)
            return ((loss,) + output) if loss is not None else output

        return CausalLMOutputWithPast(
            loss=loss,
            logits=logits,
            past_key_values=None,
            hidden_states=None,
            attentions=None,
        )

    @torch.no_grad()
    def generate_move(
        self,
        input_ids: torch.LongTensor,
        temperature: float = 1.0,
        top_k: Optional[int] = None,
        top_p: Optional[float] = None,
    ) -> int:
        """
        Generate the next move given a sequence of moves.

        Args:
            input_ids: Token IDs of shape (1, seq_len).
            temperature: Sampling temperature (1.0 = no change).
            top_k: If set, only sample from top k tokens.
            top_p: If set, use nucleus sampling with this threshold.

        Returns:
            The token ID of the predicted next move.
        """
        self.eval()

        # Get logits for the last position
        outputs = self(input_ids)
        logits = outputs.logits[:, -1, :] / temperature

        # Apply top-k filtering
        if top_k is not None:
            indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
            logits[indices_to_remove] = float("-inf")

        # Apply top-p (nucleus) filtering
        if top_p is not None:
            sorted_logits, sorted_indices = torch.sort(logits, descending=True)
            cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)

            # Remove tokens with cumulative probability above the threshold
            sorted_indices_to_remove = cumulative_probs > top_p
            sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
            sorted_indices_to_remove[..., 0] = 0

            indices_to_remove = sorted_indices_to_remove.scatter(
                dim=-1, index=sorted_indices, src=sorted_indices_to_remove
            )
            logits[indices_to_remove] = float("-inf")

        # Sample from the distribution
        probs = F.softmax(logits, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)

        return next_token.item()


# Register the model with Auto classes for easy loading
from transformers import AutoConfig, AutoModelForCausalLM

AutoConfig.register("chess_transformer", ChessConfig)
AutoModelForCausalLM.register(ChessConfig, ChessForCausalLM)
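Once this commit is on the Hub, the auto_map entries added to config.json let the checkpoint be loaded through the Auto classes with remote code enabled. A hypothetical usage sketch (the repo id and move-token ids below are placeholders, not values from this submission):

import torch
from transformers import AutoConfig, AutoModelForCausalLM

repo_id = "LLM-course/your-chess-model"   # placeholder repo id, not the actual repository
config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
model.eval()

# Feed a (1, seq_len) tensor of move-token ids, starting with BOS (id 1),
# and sample the next move with the generate_move helper defined above.
moves = torch.tensor([[1, 42, 17]])       # placeholder token ids
next_move_id = model.generate_move(moves, temperature=1.0, top_k=10)
print(next_move_id)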
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b949bf10b181977fa25e45d6d1a03712c6b4651e5cb0c6f6a591166cfb17de4f
-size 3934384
+oid sha256:f769e93b5387cc533a9a3d66b861b3466b8886a29b9b72a30783c222b3e58d70
+size 4003888