"""
Custom Chess Tokenizer for the Chess Challenge.

This tokenizer uses sub-structural tokenization: each move is decomposed into
its components (piece, source square, destination square, suffix) instead of
treating the whole move as a single token.

Example: WPe2e4 -> [P, e2, e4]  (color is implicit from move number)
         BNg8f6(x) -> [N, g8, f6, (x)]

This approach:
- Reduces vocabulary from ~1200 to ~80 tokens
- Enables generalization across similar moves
- Eliminates [UNK] tokens for rare moves
- Saves parameters in the embedding layer

The dataset format uses:
- W/B prefix for White/Black (ignored - implicit from position)
- Piece letter: P=Pawn, N=Knight, B=Bishop, R=Rook, Q=Queen, K=King
- Source and destination squares (e.g., e2e4)
- Special suffixes: (x)=capture, (+)=check, (+*)=checkmate, (o)/(O)=castling
"""

from __future__ import annotations

import json
import os
import re
from typing import Dict, List, Optional, Tuple

from transformers import PreTrainedTokenizer


# Regex pattern to parse extended UCI notation
# Matches: (W|B)(Piece)(src_file)(src_rank)(dst_file)(dst_rank)(suffix?)
MOVE_PATTERN = re.compile(
    r'^([WB])([PNBRQK])([a-h])([1-8])([a-h])([1-8])(\([^)]+\))?$'
)
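
# Illustrative match (derived from the pattern above):
#   MOVE_PATTERN.match("BNg8f6(x)").groups()
#   -> ('B', 'N', 'g', '8', 'f', '6', '(x)')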


class ChessTokenizer(PreTrainedTokenizer):
    """
    A custom tokenizer for chess moves using sub-structural tokenization.

    Each move is decomposed into components:
    - Piece type (P, N, B, R, Q, K)
    - Source square (e2, d7, etc.)
    - Destination square (e4, f6, etc.)
    - Optional suffix for captures/checks ((x), (+), (+*), (o), (O))

    The color (W/B) is NOT tokenized as it's implicit from the move order.

    Example:
        >>> tokenizer = ChessTokenizer.build_vocab()
        >>> tokenizer.encode("WPe2e4 BPe7e5")
        [1, 4, 43, 45, 4, 48, 46, 2]  # [BOS, P, e2, e4, P, e7, e5, EOS]
    """
    
    model_input_names = ["input_ids", "attention_mask"]
    vocab_files_names = {"vocab_file": "vocab.json"}
    
    # Special tokens
    PAD_TOKEN = "[PAD]"
    BOS_TOKEN = "[BOS]"
    EOS_TOKEN = "[EOS]"
    UNK_TOKEN = "[UNK]"
    
    def __init__(
        self,
        vocab_file: Optional[str] = None,
        vocab: Optional[Dict[str, int]] = None,
        **kwargs,
    ):
        """
        Initialize the chess tokenizer.
        
        Args:
            vocab_file: Path to a JSON file containing the vocabulary mapping.
            vocab: Dictionary mapping tokens to IDs (alternative to vocab_file).
            **kwargs: Additional arguments passed to PreTrainedTokenizer.
        """
        # Initialize special tokens
        self._pad_token = self.PAD_TOKEN
        self._bos_token = self.BOS_TOKEN
        self._eos_token = self.EOS_TOKEN
        self._unk_token = self.UNK_TOKEN

        # Remove any duplicate special-token entries passed through kwargs
        # to avoid "multiple values for keyword" errors when loading from disk.
        kwargs.pop("pad_token", None)
        kwargs.pop("bos_token", None)
        kwargs.pop("eos_token", None)
        kwargs.pop("unk_token", None)
        
        # Load or create vocabulary
        if vocab is not None:
            self._vocab = vocab
        elif vocab_file is not None and os.path.exists(vocab_file):
            with open(vocab_file, "r", encoding="utf-8") as f:
                self._vocab = json.load(f)
        else:
            # Create a minimal vocabulary with just special tokens
            # The full vocabulary should be built from the dataset
            self._vocab = self._create_default_vocab()
        
        # Create reverse mapping
        self._ids_to_tokens = {v: k for k, v in self._vocab.items()}
        
        # Call parent init AFTER setting up vocab
        super().__init__(
            pad_token=self._pad_token,
            bos_token=self._bos_token,
            eos_token=self._eos_token,
            unk_token=self._unk_token,
            **kwargs,
        )
    
    def _create_default_vocab(self) -> Dict[str, int]:
        """
        Create the full sub-structural vocabulary.

        The vocabulary contains:
        - 4 special tokens: [PAD], [BOS], [EOS], [UNK]
        - 6 piece tokens: P, N, B, R, Q, K
        - 64 square tokens: a1, a2, ..., h8
        - 5 suffix tokens: (x), (+), (+*), (o), (O)
        - 4 promotion tokens: =Q, =R, =B, =N

        Total: 83 tokens (vs ~1200 for move-level tokenization)
        """
        tokens = []

        # Special tokens first
        special_tokens = [self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN]
        tokens.extend(special_tokens)

        # Piece tokens
        pieces = ['P', 'N', 'B', 'R', 'Q', 'K']
        tokens.extend(pieces)

        # Square tokens (a1-h8)
        files = 'abcdefgh'
        ranks = '12345678'
        for f in files:
            for r in ranks:
                tokens.append(f + r)

        # Suffix tokens for special moves
        suffixes = ['(x)', '(+)', '(+*)', '(o)', '(O)']
        tokens.extend(suffixes)

        # Promotion tokens (pawn promotion to piece)
        # Format in dataset might be like WPe7e8Q for promotion
        promotion_pieces = ['=Q', '=R', '=B', '=N']
        tokens.extend(promotion_pieces)
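        # Resulting ID layout, in enumeration order: 0-3 specials, 4-9 pieces,
        # 10-73 squares, 74-78 suffixes, 79-82 promotions (83 tokens total).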

        vocab = {token: idx for idx, token in enumerate(tokens)}
        return vocab
    
    @classmethod
    def build_vocab(cls) -> "ChessTokenizer":
        """
        Build a tokenizer with the pre-defined sub-structural vocabulary.

        This is the recommended way to create a tokenizer for the chess challenge.
        The vocabulary is deterministic and covers all possible moves.

        Returns:
            A ChessTokenizer with the full sub-structural vocabulary (83 tokens).
        """
        return cls()

    @classmethod
    def build_vocab_from_iterator(
        cls,
        iterator,
        min_frequency: int = 1,
    ) -> "ChessTokenizer":
        """
        Build a tokenizer vocabulary from an iterator of game strings.

        Note: With sub-structural tokenization, this method is mainly useful
        for analyzing token frequencies. The default vocabulary already covers
        all possible moves.

        Args:
            iterator: An iterator yielding game strings (space-separated moves).
            min_frequency: Minimum frequency for a token to be included.

        Returns:
            A ChessTokenizer with the default sub-structural vocabulary
            (the iterator and min_frequency arguments are accepted for API
            compatibility but ignored).
        """
        # With sub-structural tokenization, we use the default vocab
        # which already contains all possible sub-tokens
        return cls()

    @classmethod
    def build_vocab_from_dataset(
        cls,
        dataset_name: str = "dlouapre/lichess_2025-01_1M",
        split: str = "train",
        column: str = "text",
        min_frequency: int = 500,
        max_samples: Optional[int] = 100000,
    ) -> "ChessTokenizer":
        """
        Build a tokenizer vocabulary from a Hugging Face dataset.

        Note: With sub-structural tokenization, the vocabulary is pre-defined
        and doesn't need to be built from data. This method is kept for
        compatibility but simply returns a tokenizer with the default vocab.

        Args:
            dataset_name: Name of the dataset on Hugging Face Hub.
            split: Dataset split to use.
            column: Column containing the game strings.
            min_frequency: Minimum frequency for a token to be included.
            max_samples: Maximum number of samples to process.

        Returns:
            A ChessTokenizer with the full sub-structural vocabulary.
        """
        # With sub-structural tokenization, we don't need to scan the dataset
        return cls()
    
    @property
    def vocab_size(self) -> int:
        """Return the size of the vocabulary."""
        return len(self._vocab)
    
    def get_vocab(self) -> Dict[str, int]:
        """Return the vocabulary as a dictionary."""
        return dict(self._vocab)
    
    def _parse_move(self, move: str) -> List[str]:
        """
        Parse a single move into its sub-components.

        Args:
            move: A move in extended UCI notation (e.g., WPe2e4, BNg8f6(x))

        Returns:
            List of tokens: [piece, src_square, dst_square, suffix?]
            Color (W/B) is ignored as it's implicit from move order.
        """
        # Try standard move pattern
        match = MOVE_PATTERN.match(move)
        if match:
            _, piece, src_file, src_rank, dst_file, dst_rank, suffix = match.groups()
            tokens = [piece, src_file + src_rank, dst_file + dst_rank]
            if suffix:
                tokens.append(suffix)
            return tokens

        # Try promotion pattern: WPe7e8Q or WPe7e8Q(+)
        promo_pattern = re.match(
            r'^([WB])P([a-h])([1-8])([a-h])([1-8])([QRBN])(\([^)]+\))?$',
            move
        )
        if promo_pattern:
            _, src_file, src_rank, dst_file, dst_rank, promo_piece, suffix = promo_pattern.groups()
            tokens = ['P', src_file + src_rank, dst_file + dst_rank, '=' + promo_piece]
            if suffix:
                tokens.append(suffix)
            return tokens

        # Fallback: return as single token (will likely be UNK)
        return [move]

    def _tokenize(self, text: str) -> List[str]:
        """
        Tokenize a string of moves into sub-structural tokens.

        Each move is decomposed into:
        - Piece type (P, N, B, R, Q, K)
        - Source square (e2, d7, etc.)
        - Destination square (e4, f6, etc.)
        - Optional suffix ((x), (+), etc.)

        Args:
            text: A string of space-separated moves.

        Returns:
            List of sub-tokens.

        Example:
            "WPe2e4 BPe7e5" -> ['P', 'e2', 'e4', 'P', 'e7', 'e5']
        """
        tokens = []
        moves = text.strip().split()
        for move in moves:
            tokens.extend(self._parse_move(move))
        return tokens
    
    def _convert_token_to_id(self, token: str) -> int:
        """Convert a token to its ID."""
        return self._vocab.get(token, self._vocab.get(self.UNK_TOKEN, 0))
    
    def _convert_id_to_token(self, index: int) -> str:
        """Convert an ID to its token."""
        return self._ids_to_tokens.get(index, self.UNK_TOKEN)
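
    # The encode() example in the class docstring shows [BOS] ... [EOS]
    # wrapping, but PreTrainedTokenizer's default
    # build_inputs_with_special_tokens() adds no special tokens. This minimal
    # override (a sketch of one reasonable choice) makes
    # encode(..., add_special_tokens=True) match the documented behavior.
    def build_inputs_with_special_tokens(
        self,
        token_ids_0: List[int],
        token_ids_1: Optional[List[int]] = None,
    ) -> List[int]:
        """Wrap sequences as [BOS] seq [EOS] (pairs: [BOS] a [EOS] b [EOS])."""
        bos = [self._convert_token_to_id(self.BOS_TOKEN)]
        eos = [self._convert_token_to_id(self.EOS_TOKEN)]
        if token_ids_1 is None:
            return bos + token_ids_0 + eos
        return bos + token_ids_0 + eos + token_ids_1 + eos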
    
    def convert_tokens_to_string(self, tokens: List[str]) -> str:
        """
        Convert a list of sub-tokens back to a string of moves.

        Reconstructs moves from their components. Each move consists of:
        - Piece token (P, N, B, R, Q, K)
        - Source square (e2, d7, etc.)
        - Destination square (e4, f6, etc.)
        - Optional suffix ((x), (+), etc.) or promotion (=Q, =R, etc.)

        Args:
            tokens: List of sub-tokens.

        Returns:
            Space-separated string of reconstructed moves. Note that the W/B
            color prefixes are not reconstructed (they are never tokenized)
            and promotions are emitted with an '=' separator, so the output
            is the tokenizer's internal format rather than the exact dataset
            format.
        """
        special = {self.PAD_TOKEN, self.BOS_TOKEN, self.EOS_TOKEN, self.UNK_TOKEN}
        pieces = {'P', 'N', 'B', 'R', 'Q', 'K'}
        suffixes = {'(x)', '(+)', '(+*)', '(o)', '(O)'}
        promotions = {'=Q', '=R', '=B', '=N'}

        moves = []
        current_move = []

        for token in tokens:
            if token in special:
                continue

            if token in pieces:
                # Start of a new move - save previous if exists
                if current_move:
                    moves.append(''.join(current_move))
                current_move = [token]
            elif token in suffixes or token in promotions:
                # End of move with suffix/promotion
                current_move.append(token)
            else:
                # Square token
                current_move.append(token)

        # Don't forget the last move
        if current_move:
            moves.append(''.join(current_move))

        return " ".join(moves)
    
    def save_vocabulary(
        self,
        save_directory: str,
        filename_prefix: Optional[str] = None,
    ) -> Tuple[str]:
        """
        Save the vocabulary to a JSON file.
        
        Args:
            save_directory: Directory to save the vocabulary.
            filename_prefix: Optional prefix for the filename.
        
        Returns:
            Tuple containing the path to the saved vocabulary file.
        """
        if not os.path.isdir(save_directory):
            os.makedirs(save_directory, exist_ok=True)
        
        vocab_file = os.path.join(
            save_directory,
            (filename_prefix + "-" if filename_prefix else "") + "vocab.json",
        )
        
        with open(vocab_file, "w", encoding="utf-8") as f:
            json.dump(self._vocab, f, ensure_ascii=False, indent=2)
        
        return (vocab_file,)


def count_vocab_from_dataset(
    dataset_name: str = "dlouapre/lichess_2025-01_1M",
    split: str = "train",
    column: str = "text",
    max_samples: Optional[int] = 10000,
) -> Dict[str, int]:
    """
    Count sub-token frequencies in a dataset (useful for vocabulary analysis).

    Args:
        dataset_name: Name of the dataset on Hugging Face Hub.
        split: Dataset split to use.
        column: Column containing the game strings.
        max_samples: Maximum number of samples to process.

    Returns:
        Dictionary mapping sub-tokens to their frequencies.
    """
    from collections import Counter
    from datasets import load_dataset

    dataset = load_dataset(dataset_name, split=split)

    if max_samples is not None:
        dataset = dataset.select(range(min(max_samples, len(dataset))))

    # Use a tokenizer instance to parse moves into sub-tokens
    tokenizer = ChessTokenizer()
    token_counts = Counter()

    for example in dataset:
        sub_tokens = tokenizer._tokenize(example[column])
        token_counts.update(sub_tokens)

    return dict(token_counts)
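

if __name__ == "__main__":
    # Usage sketch (an illustrative smoke test, not part of the public API):
    # tokenize a short opening, inspect the sub-tokens, and round-trip.
    tok = ChessTokenizer.build_vocab()
    game = "WPe2e4 BPe7e5 WNg1f3 BNb8c6"
    sub_tokens = tok.tokenize(game)
    print("sub-tokens:", sub_tokens)
    print("ids:", tok.convert_tokens_to_ids(sub_tokens))
    # Reconstruction drops the W/B prefixes by design (they are never tokenized).
    print("round-trip:", tok.convert_tokens_to_string(sub_tokens))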