---
license: mit
language:
- dig
tags:
- tokenizer
- bpe
- flexitok
- fineweb2
---

# Byte-Level BPE Tokenizer: digit (2K)

A **Byte-Level BPE** tokenizer trained on **digit** data from the multilingual-addition dataset.

## Training Details

| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Language | `digit` |
| Target Vocab Size | 2,000 |
| Final Vocab Size | 1,249 |
| Pre-tokenizer | custom:addition |
| Number handling | ltr_3digit (sketched at the end of this card) |
| Contraction handling | False |
| Normalizer | NFC |
| Special Tokens | ``, ``, ``, `` |
| Training Shards | 2 |

## Usage

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/maddition_digit_2000")
tokens = tokenizer.encode("Hello, world!")
```

## Files

- `tokenizer.json` — Full HuggingFace tokenizer
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules

These files can also be loaded without `transformers`; see the sketch at the end of this card.

## Sample Encoding

First 20 tokens of the training test string (a script that reproduces this table appears at the end of this card):

| Text | Tokens | Token IDs |
|------|--------|-----------|
| `yirmi iki+dokuz=otuz bir\ntwenty two+nine=thirty one` | `y, i, r, m, i, Ġ, i, k, i, +, d, o, k, u, z, =, o, t, u, z` | `91, 75, 84, 79, 75, 223, 75, 77, 75, 13, 70, 81, 77, 87, 92, 31, 81, 86, 87, 92` |

Command used to create this tokenizer:

```bash
python /home/gsa/tokenizers2/flexitok/tokenizer_training/train_tokenizers.py \
  algorithm=bpe \
  vocab_size=2000 \
  'langs=[digit]' \
  data_dir=/scratch/gsa/data/multilingual-addition/ \
  output_dir=/scratch/gsa/trained_tokenizers/multilingual_addition \
  pretokenizer=custom:addition \
  number_handling=ltr_3digit \
  add_numbers=false \
  handle_contractions=false \
  unicode_normalization=nfc \
  use_byte_level_regex=false \
  byte_fallback=false \
  strip_zero_width=false \
  cjk_char_split=false \
  add_cjk_chars=false \
  max_lines=-1 \
  'test_string=yirmi iki+dokuz=otuz bir\ntwenty two+nine=thirty one' \
  hf.publish_to_hf=true \
  hf_repo_prefix=flexitok/ \
  hf.hf_repo_id=flexitok/maddition_digit_2000 \
  'hf.collections=[flexitok/multilingual_addition_tokenizers]'
```
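
## What `ltr_3digit` Means

The `ltr_3digit` value is only named in the training command, so the following is a sketch of the assumed behavior, not the script's authoritative definition: digit runs are split into groups of up to three digits, scanning left to right. The helper name below is hypothetical.

```python
import re

def ltr_3digit_split(digits: str) -> list[str]:
    # Assumed behavior: chunk a digit run into groups of up to
    # three digits, scanning left to right.
    return re.findall(r"\d{1,3}", digits)

print(ltr_3digit_split("1234567"))  # ['123', '456', '7']
print(ltr_3digit_split("29"))       # ['29']
```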
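
## Reproducing the Sample Encoding

A short way to check the Sample Encoding table against the published tokenizer. This uses only standard `transformers` calls; the repo id comes from the Usage section.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/maddition_digit_2000")

text = "yirmi iki+dokuz=otuz bir\ntwenty two+nine=thirty one"
tokens = tokenizer.tokenize(text)
ids = tokenizer.convert_tokens_to_ids(tokens)

# The Sample Encoding table lists the first 20 tokens and ids.
print(tokens[:20])
print(ids[:20])
```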
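
## Loading the Raw Files

`tokenizer.json` can also be loaded directly with the `tokenizers` library, bypassing `transformers`. A minimal sketch, assuming the file is fetched with `huggingface_hub`; the example input mirrors the training test string:

```python
from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

# Download tokenizer.json from the repo and load it directly.
path = hf_hub_download("flexitok/maddition_digit_2000", "tokenizer.json")
tok = Tokenizer.from_file(path)

enc = tok.encode("dokuz+iki=on bir\nnine+two=eleven")
print(enc.tokens)
print(enc.ids)
```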