---
license: mit
language:
- und
tags:
- tokenizer
- bpe
- flexitok
- fineweb2
---

# Byte-Level BPE Tokenizer: numeric (0K)

A **Byte-Level BPE** tokenizer trained on **numeric** data from Fineweb-2-HQ.

## Training Details

| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Language | `numeric` |
| Target Vocab Size | 107 |
| Final Vocab Size | 107 |
| Pre-tokenizer | byte_level |
| Number handling | rtl_2digit |
| Contraction handling | False |
| Normalizer | NONE |
| Special Tokens | `<s>`, `</s>`, `<pad>`, `<unk>` |
| Training Shards | 1 |
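
The `rtl_2digit` setting appears to group digit runs into two-digit chunks from the right before BPE is applied, which matches the Sample Encoding below. A minimal sketch of that behavior; the function names here are illustrative, not part of this repo:

```python
import re

def rtl_two_digit_chunks(digits: str) -> list[str]:
    """Split a digit run into two-digit groups, aligned from the right."""
    rev = digits[::-1]
    groups = [rev[i:i + 2][::-1] for i in range(0, len(rev), 2)]
    return groups[::-1]

def pretokenize_numbers(text: str) -> list[str]:
    """Apply right-to-left two-digit chunking to every digit run in the text."""
    pieces = []
    for piece in re.split(r"(\d+)", text):
        if piece.isdigit():
            pieces.extend(rtl_two_digit_chunks(piece))
        elif piece:
            pieces.append(piece)
    return pieces

print(pretokenize_numbers("123500119 mod 67"))
# ['1', '23', '50', '01', '19', ' mod ', '67']
```

In the Sample Encoding below, `01` is further split into `0` and `1`, presumably because the 107-token vocabulary contains no merged `01` token.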
|
## Usage

| | ```python |
| | from transformers import AutoTokenizer |
| | |
| | tokenizer = AutoTokenizer.from_pretrained("None") |
| | tokens = tokenizer.encode("Hello, world!") |
| | ``` |
| |
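
Continuing from the snippet above, decoding is the standard `transformers` round trip; since the tokenizer is byte-level, arbitrary input should survive encode/decode intact:

```python
# Decode the ids produced above back to text.
text = tokenizer.decode(tokens, skip_special_tokens=True)
print(text)  # "Hello, world!"
```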
|
## Files

- `tokenizer.json` — Full HuggingFace tokenizer
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules
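
To inspect the raw artifacts directly (assuming the files have been downloaded locally):

```python
import json

# vocab.json maps token strings to integer ids.
with open("vocab.json", encoding="utf-8") as f:
    vocab = json.load(f)
print(len(vocab))  # 107

# merges.txt lists BPE merge rules, one pair per line.
with open("merges.txt", encoding="utf-8") as f:
    merges = [line.rstrip("\n") for line in f if not line.startswith("#")]
print(merges[:5])
```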
|
## Sample Encoding

| Text | Tokens | Token IDs |
|------|--------|-----------|
| `123500119 mod 67` | `1, 23, 50, 0, 1, 19, , mod, , 67` | `8, 30, 57, 7, 8, 26, 6, 4, 6, 74` |

The empty entries in the Tokens column are the whitespace token (id 6).