mjbommar committed on
Commit 9409ac8 · verified · 1 Parent(s): 734dbe7

Upload binary-tokenizer-001-64k tokenizer

Files changed (3):
  1. README.md +55 -144
  2. analysis_results.json +3 -141
  3. tokenizer.json +0 -0
README.md CHANGED
@@ -1,27 +1,24 @@
  ---
  language:
  - code
- license: apache-2.0
  tags:
  - tokenizer
  - binary-analysis
  - binary-tokenization
  - bpe
  - byte-pair-encoding
- - malware-analysis
  - reverse-engineering
- - security
- - x86-64
- - arm64
- - elf
- - pe
- library_name: tokenizers
+ - malware-analysis
+ - cybersecurity
+ - executable-analysis
+ license: mit
  pipeline_tag: feature-extraction
+ library_name: tokenizers
  ---

  # binary-tokenizer-001-64k

- A cross-platform BPE tokenizer for binary executables and machine code. Trained using advanced chunked training with deduplication on 23 GB of diverse binaries spanning Linux and Windows platforms.
+ A cross-platform BPE tokenizer for binary executables and machine code. Trained on 13 GB of diverse binaries spanning Linux, Windows, macOS, and Android platforms.

  **🔗 Model**: [`mjbommar/binary-tokenizer-001-64k`](https://huggingface.co/mjbommar/binary-tokenizer-001-64k)
  **📊 Dataset**: [`mjbommar/binary-30k-tokenized`](https://huggingface.co/datasets/mjbommar/binary-30k-tokenized)
@@ -31,9 +28,9 @@ A cross-platform BPE tokenizer for binary executables and machine code. Trained

  - **Vocabulary Size**: 65,536 tokens (2^16)
  - **Token Composition**: 256 base bytes + 65,273 learned merges + 7 special tokens
- - **Average Token Length**: 3.749 bytes
- - **3-byte Instructions**: 16.5% of vocabulary (10,800 tokens)
- - **Compression Ratio**: ~2.6 bytes/token on typical binaries
+ - **Average Token Length**: 4.173 bytes
+ - **3-byte Instructions**: 17.9% of vocabulary (11,729 tokens)
+ - **Compression Ratio**: ~3.0 bytes/token on typical binaries

  ---

@@ -41,18 +38,17 @@ A cross-platform BPE tokenizer for binary executables and machine code. Trained

  **Training Corpus**:
  - Source: [`mjbommar/binary-30k-tokenized`](https://huggingface.co/datasets/mjbommar/binary-30k-tokenized)
- - Size: ~23 GB (24.7 billion bytes)
- - Processed Chunks: 40,574 total (37,083 unique + 8,454 duplicates reused)
- - Platforms: Linux (Alpine, Debian, Ubuntu - ELF), Windows (8, 10, 11 - PE)
- - Architectures: x86-64, x86-32, ARM64
+ - Size: ~13 GB
+ - Files: 30,738 binary files
+ - Platforms: Linux (ELF), Windows (PE), macOS (Mach-O), Android (APK)
+ - Architectures: x86-64, x86, ARM64, ARM, MIPS, RISC-V

  **Training Parameters**:
  - Vocabulary size: 65,536 (including 7 special tokens)
- - Min frequency: 4
- - Chunk size: 4,194,304 bytes (4 MB)
- - Training method: Chunked BPE with deduplication and support-based merge combination
+ - Min frequency: 10
+ - Chunk size: 8,192 bytes
  - Allowed lengths: DEFAULT (1-16 bytes)
- - Training duration: ~8-9 hours
+ - Training duration: ~12-14 hours

  ---
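The training runs behind both versions of this card used the repository's bbpe tool, which is not shown here. As an illustration of the same setup (byte-level BPE, a 65,536-token target, minimum pair frequency 10, 8,192-byte chunks fed as latin-1 strings), here is a minimal sketch with the Hugging Face `tokenizers` training API; the special tokens and corpus path are placeholders, not the actual configuration:

```python
from tokenizers import Tokenizer, models, trainers

# Byte-level BPE over raw binaries: bytes 0-255 form the initial alphabet.
tokenizer = Tokenizer(models.BPE())
trainer = trainers.BpeTrainer(
    vocab_size=65536,                              # target from the card
    min_frequency=10,                              # "Min frequency: 10"
    initial_alphabet=[chr(i) for i in range(256)],
    special_tokens=["<pad>", "<unk>"],             # placeholders; real model has 7
)

def chunks(path, size=8192):                       # "Chunk size: 8,192 bytes"
    """Yield latin-1 strings so every byte value maps to one character."""
    with open(path, "rb") as f:
        while block := f.read(size):
            yield block.decode("latin-1")

tokenizer.train_from_iterator(chunks("/usr/bin/ls"), trainer=trainer)
```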
 
@@ -76,33 +72,33 @@ A cross-platform BPE tokenizer for binary executables and machine code. Trained

  | Length | Count | Percentage | Description |
  |--------|-------|------------|-------------|
  | 1 byte | 256 | 0.4% | Base bytes |
- | 2 bytes | 28,561 | 43.6% | Byte pairs (most common patterns) |
- | 3 bytes | 10,800 | 16.5% | Complete x86-64 instructions |
- | 4 bytes | 14,376 | 21.9% | Instructions with operands |
- | 5 bytes | 2,780 | 4.2% | Complex patterns |
- | 6 bytes | 2,213 | 3.4% | Complex patterns |
- | 7 bytes | 1,167 | 1.8% | Complex patterns |
- | 8 bytes | 2,329 | 3.6% | Multi-byte sequences |
- | 9+ bytes | 3,045 | 4.6% | Long patterns |
+ | 2 bytes | 24,943 | 38.1% | Byte pairs (most common) |
+ | 3 bytes | 11,729 | 17.9% | Complete x86-64 instructions |
+ | 4 bytes | 13,189 | 20.1% | Instructions with operands |
+ | 5 bytes | 3,737 | 5.7% | Complex patterns |
+ | 6 bytes | 3,109 | 4.7% | Complex patterns |
+ | 7 bytes | 1,564 | 2.4% | Complex patterns |
+ | 8 bytes | 2,498 | 3.8% | Multi-byte sequences |
+ | 9+ bytes | 3,302 | 5.0% | Long patterns |

- **Average Token Length**: 3.749 bytes
+ **Average Token Length**: 4.173 bytes

  ---

  ## Byte Content Analysis

  **Content Categories**:
- - Contains NULL byte (0x00): 17,418 tokens (26.6%)
- - ASCII printable (0x20-0x7E): 9,478 tokens (14.5%)
- - All ASCII (<0x80): 20,816 tokens (31.8%)
- - High bytes (≥0x80): 44,711 tokens (68.2%)
+ - Contains NULL byte (0x00): 16,988 tokens (25.9%)
+ - ASCII printable (0x20-0x7E): 11,552 tokens (17.6%)
+ - All ASCII (<0x80): 24,706 tokens (37.7%)
+ - High bytes (≥0x80): 40,821 tokens (62.3%)

  **Most Common Bytes in Tokens**:
- - `0x00` (NULL): 34,482 occurrences - Padding and alignment
- - `0xFF`: 6,545 occurrences - Sentinel values
- - `0x48` (REX.W): 3,419 occurrences - x86-64 REX prefix
- - `0x8B` (MOV): 2,486 occurrences - x86-64 MOV opcode
- - `0x40` (@): 4,538 occurrences - ASCII and instruction patterns
+ - `0x00` (NULL): 42,537 occurrences - Padding and alignment
+ - `0xFF`: 7,204 occurrences - Sentinel values
+ - `0x48` (REX.W): 6,105 occurrences - x86-64 REX prefix
+ - `0x8B` (MOV): 4,016 occurrences - x86-64 MOV opcode
+ - `0x20` (space): 4,087 occurrences - ASCII strings

  ---

@@ -112,19 +108,18 @@ A cross-platform BPE tokenizer for binary executables and machine code. Trained

  | Length | Learned Tokens | Possible Sequences | Coverage |
  |--------|----------------|-------------------|----------|
  | 1-byte | 256 | 256 | 100.00% |
- | 2-byte | 28,561 | 65,536 | 43.58% |
- | 3-byte | 10,800 | 16,777,216 | 0.064% |
- | 4-byte | 14,376 | 4,294,967,296 | 0.00034% |
-
- **Notable Achievement**: 43.6% coverage of all possible 2-byte sequences - excellent for pattern recognition.
+ | 2-byte | 24,943 | 65,536 | 38.06% |
+ | 3-byte | 11,729 | 16,777,216 | 0.070% |
+ | 4-byte | 13,189 | 4,294,967,296 | 0.00031% |

  ---

  ## Files

- - `tokenizer-65536.json` - Trained tokenizer model (2.4 MB)
+ - `tokenizer-65536.json` - Trained tokenizer model (5.0 MB)
  - `analysis_results.json` - Detailed analysis statistics
- - `original_README.md` - Original README from HuggingFace
+ - `training.log` - Training output log (if available)
+ - `training_stats.txt` - Training summary (if available)

  ---
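The length-distribution and coverage tables in both the old and new cards can be recomputed by scanning the vocabulary directly. A sketch of that scan, assuming the card's latin-1 byte convention (the 7 special tokens are not filtered out here, so counts can differ slightly from the published figures):

```python
from collections import Counter
from tokenizers import Tokenizer

tok = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-64k")

# Token strings use one character per byte value (latin-1), so string length
# in bytes equals token length. Special tokens are counted at their literal
# string lengths; the card's analysis excludes them.
lengths = Counter(len(t.encode("latin-1", "replace")) for t in tok.get_vocab())

total = sum(lengths.values())
for n in sorted(lengths):
    print(f"{n:2d} bytes: {lengths[n]:6d} ({lengths[n] / total:5.1%})")

# Coverage of all 65,536 possible 2-byte sequences (38.06% per the new card)
print(f"2-byte coverage: {lengths[2] / 65536:.2%}")
```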
 
@@ -135,7 +130,7 @@ A cross-platform BPE tokenizer for binary executables and machine code. Trained
  from tokenizers import Tokenizer

  # Load directly from HuggingFace
- tokenizer = Tokenizer.from_pretrained("mjbommar/glaurung-binary-tokenizer-002")
+ tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-64k")
  ```

  **Load from local file**:
@@ -150,7 +145,7 @@ bbpe info tokenizer-65536.json
  from tokenizers import Tokenizer

  # Load from HuggingFace or local file
- tokenizer = Tokenizer.from_pretrained("mjbommar/glaurung-binary-tokenizer-002")
+ tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-64k")
  # OR: tokenizer = Tokenizer.from_file("tokenizer-65536.json")

  # Read binary file and decode as latin-1 (preserves all byte values 0-255)
@@ -178,20 +173,20 @@ assert decoded_bytes == data # Perfect reconstruction
  **Example output for `/usr/bin/ls` (142,312 bytes)**:
  ```
  File size: 142312 bytes
- Total tokens: 54537
- Compression: 2.609 bytes/token
+ Total tokens: 47993
+ Compression: 2.965 bytes/token

  First 10 tokens:
- Token 0: ID= 127 hex=7f (1 bytes)
- Token 1: ID= 2382 hex=454c (2 bytes)
- Token 2: ID= 5923 hex=4602 (2 bytes)
- Token 3: ID= 394 hex=0101 (2 bytes)
- Token 4: ID= 268 hex=000000000000 (6 bytes)
- Token 5: ID= 259 hex=000000 (3 bytes)
- Token 6: ID= 295 hex=0300 (2 bytes)
- Token 7: ID= 2124 hex=3e00 (2 bytes)
- Token 8: ID= 271 hex=01000000 (4 bytes)
- Token 9: ID=59106 hex=306d (2 bytes)
+ Token 0: ID=45813 hex=7f454c46020101 (7 bytes)
+ Token 1: ID= 662 hex=000000000000000000 (9 bytes)
+ Token 2: ID= 265 hex=0300 (2 bytes)
+ Token 3: ID= 1369 hex=3e00 (2 bytes)
+ Token 4: ID= 279 hex=01000000 (4 bytes)
+ Token 5: ID=41250 hex=306d (2 bytes)
+ Token 6: ID= 288 hex=000000000000 (6 bytes)
+ Token 7: ID= 5908 hex=4000000000000000 (8 bytes)
+ Token 8: ID= 8377 hex=2824 (2 bytes)
+ Token 9: ID=14325 hex=02000000000000000000 (10 bytes)

  Decoded: 7f454c4602010100000000000000000003003e0001000000306d...
  (ELF header: 7f 45 4c 46 = ELF magic bytes)
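The example output above is produced by an encode/decode round trip like the one excerpted in the usage section. A self-contained version of it (output formatting is approximate):

```python
from tokenizers import Tokenizer

tok = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-64k")

# latin-1 maps bytes 0-255 one-to-one onto code points 0-255
data = open("/usr/bin/ls", "rb").read()
enc = tok.encode(data.decode("latin-1"), add_special_tokens=False)

print(f"File size: {len(data)} bytes")
print(f"Total tokens: {len(enc.ids)}")
print(f"Compression: {len(data) / len(enc.ids):.3f} bytes/token")

for i, (tid, tstr) in enumerate(zip(enc.ids[:10], enc.tokens[:10])):
    raw = tstr.encode("latin-1")
    print(f"Token {i}: ID={tid:5d} hex={raw.hex()} ({len(raw)} bytes)")

# Perfect reconstruction: concatenated token strings give back the file
assert "".join(enc.tokens).encode("latin-1") == data
```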
@@ -199,68 +194,6 @@ Decoded: 7f454c4602010100000000000000000003003e0001000000306d...

  ---

- ## Performance Characteristics
-
- **Compression on Real-World Binaries**:
-
- | Binary | Size | Tokens | bytes/token |
- |--------|------|--------|-------------|
- | bash | 1.38 MB | 602,719 | 2.399 |
- | python3.12 | 7.65 MB | 2,997,303 | 2.676 |
- | gcc-13 | 0.98 MB | 375,331 | 2.726 |
- | ls | 0.14 MB | 54,537 | 2.609 |
- | grep | 0.18 MB | 73,500 | 2.542 |
-
- **Average**: 2.590 bytes/token
-
- **Information-Theoretic Efficiency**:
- - Binary entropy: ~6.5 bits/byte
- - Theoretical optimal: 2.46 bytes/token
- - Actual performance: 2.590 bytes/token
- - **Efficiency: 95.0%** of theoretical optimum
-
- ---
-
- ## Key Features
-
- **Instruction-Aware Patterns**:
- - REX prefixes: `0x48`, `0x4c`, `0x4d` (x86-64 64-bit operands)
- - Common opcodes: `0x8b` (MOV), `0x89` (MOV), `0xe8` (CALL)
- - ModR/M patterns: `0xc0`, `0x45`, `0x5d`
-
- **Common Binary Patterns**:
- - Padding: `0xcc 0xcc` (INT3 debug breakpoints), `0x90 0x90` (NOP sleds)
- - Alignment: `0x00 0x00 0x00 0x00` (NULL padding)
- - String terminators: `0x00` at word boundaries
-
- **String-Rich Vocabulary**:
- - 11.81% of vocabulary contains function names, paths, and library references
- - Better semantic understanding than standard BPE
-
- ---
-
- ## Comparison with Other Tokenizers
-
- **vs. binary-tokenizer-001 Series** (this repository):
-
- | Metric | 4K | 8K | 16K | 64K (this) | Improvement |
- |--------|----|----|-----|------------|-------------|
- | Vocab size | 4,096 | 8,192 | 16,384 | 65,536 | 4-16x larger |
- | Avg token length | 3.000 | 3.312 | 3.498 | 3.749 | +25% vs 4K |
- | 3-byte tokens % | 20.6% | 21.7% | 20.5% | 16.5% | Different focus |
- | 2-byte coverage | 3.0% | 5.6% | 10.9% | 43.6% | 14x better |
- | Compression (ls) | 2.00 | 2.17 | 2.39 | 2.61 | +30% vs 4K |
- | Training method | Standard | Standard | Standard | Chunked+dedup | Advanced |
-
- **Key Advantages of 64K Vocabulary**:
- - **43.6% 2-byte coverage**: Captures nearly half of all possible byte pairs
- - **Chunked training**: Deduplication-aware training improves merge quality
- - **Better compression**: 2.609 bytes/token vs 2.0 bytes/token (4K)
- - **Longer patterns**: 3.749 byte average vs 3.0 bytes (4K)
- - **String-rich**: 11.81% vocabulary contains semantic strings
-
- ---
-
  ## Citation

  If you use this tokenizer in your research, please cite:
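For readers of the Performance Characteristics section removed above: its "theoretical optimal" and "efficiency" figures follow directly from the vocabulary size and the measured entropy it states.

```latex
% A 65,536-token vocabulary carries at most log2(65536) = 16 bits per token.
% At the section's measured binary entropy of ~6.5 bits/byte:
\[
\text{optimal} = \frac{\log_2 65536}{6.5\ \text{bits/byte}} \approx 2.46\ \text{bytes/token},
\qquad
\text{efficiency} = \frac{2.46}{2.590} \approx 95\%.
\]
```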
@@ -275,32 +208,10 @@ If you use this tokenizer in your research, please cite:
  }
  ```

- **Relation to Glaurung Tokenizer**:
-
- This tokenizer is part of the binary-tokenizer-001 series and is identical to [`mjbommar/glaurung-binary-tokenizer-002`](https://huggingface.co/mjbommar/glaurung-binary-tokenizer-002), provided here with consistent naming and formatting. The original Glaurung tokenizer was trained using:
-
- ```
- Glaurung Binary Tokenizer 002
- Training: October-November 2025
- Training Method: Chunked BPE with deduplication (bbpe v0.3.2)
- Dataset: 23GB binaries-small (40,574 chunks, 8,454 duplicates)
- Performance: 2.590 bytes/token (95% of theoretical optimum)
- ```
-
  **Author**: Michael J. Bommarito II ([michael.bommarito@gmail.com](mailto:michael.bommarito@gmail.com))

  ---

- ## License
-
- **Apache License 2.0**
-
- This tokenizer is part of the [Glaurung](https://github.com/mjbommar/glaurung) project.
-
- ---
-
  **Generated**: November 13, 2025
- **Part of**: binary-tokenizer-001 series (4K, 8K, 16K, 64K)
- **Original Model**: `mjbommar/glaurung-binary-tokenizer-002`
- **Training Tool**: bbpe v0.3.2
+ **Training Script**: `train_tokenizers.sh`
  **Analysis Script**: `analyze_tokenizer.py`
 
analysis_results.json CHANGED
@@ -1,141 +1,3 @@
- {
-   "vocab_size": {
-     "total": 65529,
-     "total_with_special": 65536,
-     "base": 256,
-     "merges": 65273,
-     "special": 7,
-     "is_power_of_2": true,
-     "power": 16,
-     "matches_expected": true
-   },
-   "reachability": {
-     "valid_merges": 65273,
-     "invalid_merges": 0,
-     "reachable": 65529,
-     "unreachable": 0,
-     "all_reachable": true
-   },
-   "length_dist": {
-     "distribution": {
-       "1": 256,
-       "2": 28561,
-       "3": 10800,
-       "4": 14376,
-       "5": 2780,
-       "6": 2213,
-       "7": 1167,
-       "8": 2329,
-       "9": 295,
-       "10": 368,
-       "11": 244,
-       "12": 711,
-       "13": 101,
-       "14": 177,
-       "15": 112,
-       "16": 340,
-       "17": 28,
-       "18": 79,
-       "19": 57,
-       "20": 136,
-       "21": 15,
-       "22": 34,
-       "23": 49,
-       "24": 121,
-       "25": 16,
-       "26": 17,
-       "27": 16,
-       "28": 51,
-       "29": 8,
-       "31": 7,
-       "32": 47,
-       "30": 16
-     },
-     "avg_length": 3.748882140186488,
-     "min_length": 1,
-     "max_length": 32,
-     "length_3_count": 10800,
-     "length_3_percent": 16.481755612190394
-   },
-   "byte_content": {
-     "null_tokens": 17418,
-     "ascii_printable": 9478,
-     "ascii_only": 20816,
-     "high_byte": 44711,
-     "mixed": 25785,
-     "byte_distribution": {
-       "0": 34482,
-       "255": 6545,
-       "3": 6081,
-       "64": 4538,
-       "1": 4447,
-       "249": 3888,
-       "2": 3436,
-       "72": 3419,
-       "128": 3195,
-       "32": 3026,
-       "4": 2593,
-       "169": 2534,
-       "139": 2486,
-       "145": 2423,
-       "224": 2182,
-       "170": 2122,
-       "84": 2106,
-       "116": 1953,
-       "101": 1855,
-       "65": 1852,
-       "151": 1838,
-       "185": 1805,
-       "253": 1708,
-       "97": 1703,
-       "36": 1700,
-       "99": 1687,
-       "115": 1669,
-       "8": 1615,
-       "6": 1503,
-       "5": 1479,
-       "96": 1471,
-       "192": 1436,
-       "23": 1429,
-       "114": 1424,
-       "204": 1414,
-       "31": 1371,
-       "7": 1368,
-       "82": 1350,
-       "33": 1307,
-       "105": 1260,
-       "137": 1176,
-       "131": 1172,
-       "67": 1162,
-       "19": 1155,
-       "111": 1153,
-       "110": 1143,
-       "68": 1130,
-       "15": 1124,
-       "66": 1123,
-       "113": 1116
-     }
-   },
-   "diversity": {
-     "1": {
-       "learned": 256,
-       "possible": 256,
-       "coverage": 100.0
-     },
-     "2": {
-       "learned": 28561,
-       "possible": 65536,
-       "coverage": 43.58062744140625
-     },
-     "3": {
-       "learned": 10800,
-       "possible": 16777216,
-       "coverage": 0.06437301635742188
-     },
-     "4": {
-       "learned": 14376,
-       "possible": 4294967296,
-       "coverage": 0.000334717333316803
-     }
-   }
- }
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9c1c3c4b4ffb26b54bca3602e86c21a88281ba1e8ecf26fc5504703c267414f4
+ size 2622
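Since the statistics now live in a Git LFS object rather than inline JSON, the file must be resolved before reading. A sketch using `huggingface_hub`, assuming the new file keeps the top-level keys of the inline version removed above:

```python
import json
from huggingface_hub import hf_hub_download

# hf_hub_download resolves the LFS pointer to the actual 2,622-byte JSON
path = hf_hub_download("mjbommar/binary-tokenizer-001-64k", "analysis_results.json")
with open(path) as f:
    stats = json.load(f)

print(stats["vocab_size"])                  # assumed key, as in the old inline file
print(stats["length_dist"]["avg_length"])   # assumed key, as in the old inline file
```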
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff