---
license: mit
language:
- und
tags:
- tokenizer
- bpe
- flexitok
- fineweb2
datasets:
- flexitok/mod-arithmetic
---

# Byte-Level BPE Tokenizer: numeric (1K)

A **Byte-Level BPE** tokenizer trained on **numeric** data from Fineweb-2-HQ.

## Training Details

| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Language | `numeric` |
| Target Vocab Size | 1,106 |
| Final Vocab Size | 1,102 |
| Pre-tokenizer | byte_level |
| Number handling | ltr_3digit |
| Contraction handling | False |
| Normalizer | NONE |
| Special Tokens | ``, ``, ``, `` |
| Training Shards | 1 |

## Usage

```python
from transformers import AutoTokenizer

# "<repo-id>" is a placeholder; replace it with this repository's Hub ID.
tokenizer = AutoTokenizer.from_pretrained("<repo-id>")
tokens = tokenizer.encode("103500109 mod 67")
```

## Files

- `tokenizer.json` — Full HuggingFace tokenizer
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules

## Sample Encoding

| Text | Tokens | Token IDs |
|------|--------|-----------|
| `103500109 mod 67` | `103, 500, 109, , mod, , 67` | `452, 749, 458, 6, 4, 6, 53` |
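
## Reproducing the Sample Encoding

The `tokenizer.json` file can also be loaded directly with the `tokenizers` library, which is a quick way to check the table above. The repo ID below is a placeholder, not a confirmed path; substitute this repository's actual Hub ID.

```python
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

# "<repo-id>" is a placeholder for this repository's Hub ID.
path = hf_hub_download("<repo-id>", "tokenizer.json")
tok = Tokenizer.from_file(path)

enc = tok.encode("103500109 mod 67")
print(enc.tokens)  # should match the Tokens column above
print(enc.ids)     # should match the Token IDs column above: [452, 749, 458, 6, 4, 6, 53]
```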
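
## Note on `ltr_3digit` Number Handling

The `ltr_3digit` setting is not documented here. A plausible reading, consistent with `103500109` tokenizing as `103 / 500 / 109` in the table above, is that each run of digits is split into 3-digit chunks from the left before BPE merges apply. The helper below is an illustrative sketch of that assumption, not the actual training pre-tokenizer.

```python
import re

def split_digits_ltr(text: str, chunk: int = 3) -> list[str]:
    """Split each digit run into left-to-right chunks (assumed ltr_3digit behavior)."""
    parts = []
    for piece in re.split(r"(\d+)", text):
        if piece.isdigit():
            # Left-to-right chunking: only the final chunk may be shorter than `chunk`.
            parts.extend(piece[i:i + chunk] for i in range(0, len(piece), chunk))
        elif piece:
            parts.append(piece)
    return parts

print(split_digits_ltr("103500109 mod 67"))
# ['103', '500', '109', ' mod ', '67']
```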