A Byte-Level BPE tokenizer trained on deu_Latn data from Fineweb-2-HQ.
| Parameter | Value |
|---|---|
| Algorithm | Byte-Level BPE |
| Language | deu_Latn |
| Target Vocab Size | 16,000 |
| Final Vocab Size | 16,000 |
| Pre-tokenizer | gpt4 |
| Number handling | individual |
| Contraction handling | True |
| Normalizer | NFC |
| Special Tokens | <s>, </s>, <pad>, <unk> |
| Training Shards | 2 |
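
The training script is not included in this card. As a rough sketch, a tokenizer with the configuration above could be trained with the Hugging Face `tokenizers` library along these lines; the shard file names and the exact GPT-4-style split regex are assumptions, not values taken from this repo:

```python
from tokenizers import Regex, Tokenizer, decoders, models, normalizers, pre_tokenizers, trainers

# Byte-Level BPE with NFC normalization, mirroring the parameter table above.
tokenizer = Tokenizer(models.BPE(unk_token="<unk>"))
tokenizer.normalizer = normalizers.NFC()

# "gpt4" pre-tokenizer: assumed here to be a GPT-4 (cl100k)-style split regex
# followed by byte-level encoding. The pattern below is an approximation.
gpt4_pattern = Regex(
    r"(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}"
    r"| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+"
)
tokenizer.pre_tokenizer = pre_tokenizers.Sequence([
    pre_tokenizers.Split(gpt4_pattern, behavior="isolated"),
    pre_tokenizers.ByteLevel(add_prefix_space=False, use_regex=False),
])
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(
    vocab_size=16_000,  # target vocab size; the final size here matched exactly
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],
)
# Hypothetical shard paths; the card only states that 2 training shards were used.
tokenizer.train(["shard_0.txt", "shard_1.txt"], trainer=trainer)
tokenizer.save("tokenizer.json")
```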
Usage with the `transformers` library:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/-bpe_deu_Latn_16000")
tokens = tokenizer.encode("Hello, world!")  # list of token IDs
```
Files:

- `tokenizer.json` — Full HuggingFace tokenizer
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules

Example tokenization (`Ġ` marks a token that begins with a space; the multi-byte characters of こんにちは are decomposed into byte-level symbols such as `ãģ`):

| Text | Tokens | Token IDs |
|---|---|---|
| Hello, world! 12345 This is a test. こんにちは | H, ello, ,, Ġwor, ld, !, Ġ, 123, 45, ĠThis, Ġis, Ġa, Ġtest, ., Ġ, ãģ, ĵ, ãĤ, ĵ, ãģ | 43, 13747, 15, 4888, 4904, 4, 178, 15741, 3646, 14449, 1823, 228, 14560, 17, 178, 3012, 198, 4577, 198, 3012 |
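
The table row above can be reproduced with the loaded tokenizer; this is a minimal sketch, assuming the repo id from the usage snippet:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/-bpe_deu_Latn_16000")

text = "Hello, world! 12345 This is a test. こんにちは"
ids = tokenizer.encode(text)

print(tokenizer.convert_ids_to_tokens(ids))  # surface tokens, Ġ = leading space
print(ids)                                   # the Token IDs column
print(tokenizer.decode(ids))                 # byte-level BPE round-trips losslessly
```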