---
language:
- code
tags:
- tokenizer
- binary-analysis
- binary-tokenization
- bpe
- byte-pair-encoding
- reverse-engineering
- malware-analysis
- cybersecurity
- executable-analysis
license: mit
pipeline_tag: feature-extraction
library_name: tokenizers
---
# binary-tokenizer-001-16k
A cross-platform BPE tokenizer for binary executables and machine code. Trained on 13 GB of diverse binaries spanning Linux, Windows, macOS, and Android platforms.
**πŸ”— Model**: [`mjbommar/binary-tokenizer-001-16k`](https://huggingface.co/mjbommar/binary-tokenizer-001-16k)
**πŸ“Š Dataset**: [`mjbommar/binary-30k-tokenized`](https://huggingface.co/datasets/mjbommar/binary-30k-tokenized)
**πŸ“„ Paper**: *Binary BPE: Cross-Platform Tokenization for Binary Analysis* (arXiv preprint coming soon)
## Overview
- **Vocabulary Size**: 16,384 tokens (2^14)
- **Token Composition**: 256 base bytes + 16,121 learned merges + 7 special tokens
- **Average Token Length**: 3.498 bytes
- **3-byte Instructions**: 20.5% of vocabulary (3,360 tokens)
- **Compression Ratio**: ~2.4 bytes/token on typical binaries
---
## Training Configuration
**Training Corpus**:
- Source: [`mjbommar/binary-30k-tokenized`](https://huggingface.co/datasets/mjbommar/binary-30k-tokenized)
- Size: ~13 GB
- Files: 30,738 binary files
- Platforms: Linux (ELF), Windows (PE), macOS (Mach-O), Android (APK)
- Architectures: x86-64, x86, ARM64, ARM, MIPS, RISC-V
**Training Parameters**:
- Vocabulary size: 16,384 (including 7 special tokens)
- Min frequency: 10
- Chunk size: 8,192 bytes
- Allowed lengths: DEFAULT (1-16 bytes)
- Training duration: ~3-4 hours
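The actual training uses the standard BPE trainer, but the core step it repeats can be sketched in plain Python (a toy illustration under the parameters above, not the released training code): split the input into 8,192-byte chunks, count adjacent token pairs, and merge the most frequent pair that meets the minimum frequency.

```python
from collections import Counter

def most_frequent_pair(chunks: list[list[int]], min_frequency: int = 10):
    """Count adjacent token pairs across all chunks; return the most
    frequent one, or None if no pair meets the frequency threshold."""
    counts = Counter()
    for chunk in chunks:
        counts.update(zip(chunk, chunk[1:]))
    if not counts:
        return None
    pair, freq = counts.most_common(1)[0]
    return pair if freq >= min_frequency else None

def merge_pair(chunk: list[int], pair, new_id: int) -> list[int]:
    """Replace every non-overlapping occurrence of `pair` with `new_id`."""
    out, i = [], 0
    while i < len(chunk):
        if i + 1 < len(chunk) and (chunk[i], chunk[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(chunk[i])
            i += 1
    return out

# Toy corpus: a repeated x86-64 function prologue (push rbp; mov rbp, rsp).
data = bytes.fromhex("554889e5") * 16
chunks = [list(data[i:i + 8192]) for i in range(0, len(data), 8192)]

pair = most_frequent_pair(chunks)          # a frequent adjacent byte pair
chunks = [merge_pair(c, pair, 256) for c in chunks]  # next token ID after the 256 base bytes
```

A real trainer repeats this loop until the vocabulary reaches 16,384, tracking pair counts incrementally rather than recounting each round.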
---
## Vocabulary Statistics
**Composition**:
- Base bytes (0-255): 256 tokens
- Learned merges: 16,121 tokens
- Special tokens: 7 tokens (`<|start|>`, `<|end|>`, `<|pad|>`, `<|unk|>`, `<|cls|>`, `<|sep|>`, `<|mask|>`)
- **Total**: 16,384 tokens
**Quality Metrics**:
- All tokens reachable: βœ“ Yes
- Valid merges: 16,121 / 16,121
- Power-of-2 size: βœ“ Yes (2^14)
---
## Token Length Distribution
| Length | Count | Percentage | Description |
|--------|-------|------------|-------------|
| 1 byte | 256 | 1.6% | Base bytes |
| 2 bytes | 7,149 | 43.7% | Byte pairs |
| 3 bytes | 3,360 | 20.5% | Complete x86-64 instructions |
| 4 bytes | 3,082 | 18.8% | Instructions with operands |
| 5 bytes | 719 | 4.4% | Complex patterns |
| 6 bytes | 606 | 3.7% | Complex patterns |
| 7 bytes | 228 | 1.4% | Complex patterns |
| 8 bytes | 377 | 2.3% | Complex patterns |
| 9+ bytes | 607 | 3.7% | Long patterns |
**Average Token Length**: 3.498 bytes
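A back-of-the-envelope check on the table: given the per-length counts and the 3.498-byte overall average, the 607 tokens in the 9+ bucket must average roughly 14 bytes each (approximate, since the published average is itself rounded):

```python
# Per-length token counts from the table above (the 9+ bucket's exact lengths are unknown).
counts = {1: 256, 2: 7149, 3: 3360, 4: 3082, 5: 719, 6: 606, 7: 228, 8: 377}
long_count = 607
total = sum(counts.values()) + long_count           # 16,384 tokens
known_bytes = sum(n * c for n, c in counts.items())

# Solve for the implied average length of the 9+ bucket from the overall mean.
implied = (3.498 * total - known_bytes) / long_count
print(f"9+ bucket averages ~{implied:.1f} bytes/token")
```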
---
## Byte Content Analysis
**Content Categories**:
- Contains NULL byte (0x00): 4,128 tokens (25.2%)
- ASCII printable (0x20-0x7E): 3,513 tokens (21.5%)
- All ASCII (<0x80): 7,256 tokens (44.3%)
- High bytes (β‰₯0x80): 9,121 tokens (55.7%)
**Most Common Bytes in Tokens**:
- `0x00` (NULL): 9,741 occurrences - Padding and alignment
- `0xFF`: 1,718 occurrences - Sentinel values
- `0x48` (REX.W): 1,352 occurrences - x86-64 REX prefix
- `0x8B` (MOV): 955 occurrences - x86-64 MOV opcode
- `0xCC` (INT3): 751 occurrences - Debug breakpoint padding
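These categories reduce to simple byte-range predicates; a minimal sketch (the `classify` helper is hypothetical, not part of the released tooling):

```python
def classify(token: bytes) -> dict[str, bool]:
    """Byte-content flags matching the categories above (hypothetical helper)."""
    return {
        "contains_null": 0x00 in token,
        "ascii_printable": all(0x20 <= b <= 0x7E for b in token),
        "all_ascii": all(b < 0x80 for b in token),
        "has_high_bytes": any(b >= 0x80 for b in token),
    }

# REX.W + MOV r64, r/m64 (48 89 e5): a common x86-64 token shape.
print(classify(bytes.fromhex("4889e5")))
```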
---
## Sequence Coverage
**N-byte Sequence Diversity**:
| Length | Learned Tokens | Possible Sequences | Coverage |
|--------|----------------|-------------------|----------|
| 1-byte | 256 | 256 | 100.00% |
| 2-byte | 7,149 | 65,536 | 10.91% |
| 3-byte | 3,360 | 16,777,216 | 0.020% |
| 4-byte | 3,082 | 4,294,967,296 | 0.000072% |
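The coverage column is simply the learned-token count divided by the 256^n possible sequences of that length, which can be reproduced directly:

```python
# Learned-token counts per length, from the table above.
learned = {1: 256, 2: 7149, 3: 3360, 4: 3082}
for n, count in learned.items():
    possible = 256 ** n  # all possible n-byte sequences
    print(f"{n}-byte: {count:>5} / {possible:>13,} = {100 * count / possible:.6f}%")
```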
---
## Files
- `tokenizer-16384.json` - Trained tokenizer model (1.2 MB)
- `analysis_results.json` - Detailed analysis statistics
- `training.log` - Training output log (if available)
- `training_stats.txt` - Training summary (if available)
---
## Usage
**Load from HuggingFace Hub**:
```python
from tokenizers import Tokenizer
# Load directly from HuggingFace
tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-16k")
```
**Load from local file**:
```bash
# With bbpe CLI
bbpe encode --tokenizer tokenizer-16384.json /path/to/binary
bbpe info tokenizer-16384.json
```
**Complete Python Example**:
```python
from tokenizers import Tokenizer

# Load from HuggingFace or local file
tokenizer = Tokenizer.from_pretrained("mjbommar/binary-tokenizer-001-16k")
# OR: tokenizer = Tokenizer.from_file("tokenizer-16384.json")

# Read a binary file and decode as latin-1 (preserves all byte values 0-255)
with open("/usr/bin/ls", "rb") as f:
    data = f.read()
data_str = data.decode("latin-1")

# Encode the binary data
encoding = tokenizer.encode(data_str)
print(f"File size: {len(data)} bytes")
print(f"Total tokens: {len(encoding.ids)}")
print(f"Compression: {len(data) / len(encoding.ids):.3f} bytes/token")

# Inspect the first 10 tokens
for i, (token_id, token) in enumerate(zip(encoding.ids[:10], encoding.tokens[:10])):
    token_bytes = token.encode("latin-1")
    print(f"  Token {i}: ID={token_id:5d} hex={token_bytes.hex():20s} ({len(token_bytes)} bytes)")

# Decode tokens back to bytes
decoded_str = tokenizer.decode(encoding.ids)
decoded_bytes = decoded_str.encode("latin-1")
assert decoded_bytes == data  # Perfect reconstruction
```
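The latin-1 round trip above is lossless because Latin-1 maps each byte value 0-255 to the Unicode codepoint with the same number, so decode/encode is a bijection on bytes; a quick check:

```python
# Every byte value 0-255 survives a latin-1 decode/encode round trip.
all_bytes = bytes(range(256))
assert all_bytes.decode("latin-1").encode("latin-1") == all_bytes

# UTF-8, by contrast, cannot decode arbitrary binary data.
try:
    bytes([0xFF]).decode("utf-8")
except UnicodeDecodeError:
    print("utf-8 rejects raw binary; latin-1 does not")
```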
**Example output for `/usr/bin/ls` (142,312 bytes)**:
```
File size: 142312 bytes
Total tokens: 59531
Compression: 2.391 bytes/token
First 10 tokens:
Token 0: ID= 127 hex=7f (1 bytes)
Token 1: ID=15580 hex=454c (2 bytes)
Token 2: ID= 70 hex=46 (1 bytes)
Token 3: ID= 2 hex=02 (1 bytes)
Token 4: ID= 1516 hex=0101 (2 bytes)
Token 5: ID= 2624 hex=000000000000000000 (9 bytes)
Token 6: ID= 1046 hex=0300 (2 bytes)
Token 7: ID= 5675 hex=3e00 (2 bytes)
Token 8: ID= 1099 hex=01000000 (4 bytes)
Token 9: ID= 48 hex=30 (1 bytes)
Decoded: 7f454c4602010100000000000000000003003e000100000030...
(ELF header: 7f 45 4c 46 = ELF magic bytes)
```
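The reported compression ratio follows directly from the two counts in the output above:

```python
# File size and token count from the /usr/bin/ls example output.
file_size, total_tokens = 142312, 59531
print(f"{file_size / total_tokens:.3f} bytes/token")
```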
---
## Citation
If you use this tokenizer in your research, please cite:
```bibtex
@article{bommarito2025binarybpe,
title={Binary BPE: Cross-Platform Tokenization for Binary Analysis},
author={Bommarito II, Michael J.},
journal={arXiv preprint},
year={2025},
note={Preprint coming soon}
}
```
**Author**: Michael J. Bommarito II ([michael.bommarito@gmail.com](mailto:michael.bommarito@gmail.com))
---
**Generated**: November 12, 2025
**Training Script**: `train_tokenizers.sh`
**Analysis Script**: `analyze_tokenizer.py`