---
license: mit
language:
- dan
- deu
- nld
- swe
tags:
- tokenizer
- bpe
- flexitok
- fineweb2
---
# Byte-Level BPE Tokenizer: Danish, German, Dutch, Swedish (16K)
A **Byte-Level BPE** tokenizer trained on **Danish, German, Dutch, and Swedish** (`dan_Latn`, `deu_Latn`, `nld_Latn`, `swe_Latn`) data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Languages | `dan_Latn`, `deu_Latn`, `nld_Latn`, `swe_Latn` |
| Target Vocab Size | 16,000 |
| Final Vocab Size | 16,953 |
| Pre-tokenizer | custom:dan_Latn |
| Number handling | ltr_3digit |
| Contraction handling | Enabled |
| Normalizer | NFC |
| Special Tokens | `<s>`, `</s>`, `<pad>`, `<unk>` |
| Training Shards | 8 |
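These settings map onto the Hugging Face `tokenizers` training API in a straightforward way. Below is a minimal sketch of how a comparable tokenizer could be trained, assuming stock `ByteLevel` components; the actual flexitok pipeline, its `custom:dan_Latn` pre-tokenizer, and the `ltr_3digit` number handling are not reproduced here, and the shard path is hypothetical.
```python
from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, trainers

# Byte-level BPE with NFC normalization, mirroring the table above.
tokenizer = Tokenizer(models.BPE())
tokenizer.normalizer = normalizers.NFC()
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)

trainer = trainers.BpeTrainer(
    vocab_size=16_000,  # target size; the released vocab ended up at 16,953
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),
)
tokenizer.train(["shard_0.txt"], trainer)  # hypothetical training shard
tokenizer.save("tokenizer.json")
```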
## Usage
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_script_Germ_16000")

# encode() returns token IDs; decode() round-trips them back to text.
ids = tokenizer.encode("Hello, world!")
text = tokenizer.decode(ids)
```
## Files
- `tokenizer.json` — Full HuggingFace tokenizer
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules
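If you want the raw tokenizer without the `transformers` wrapper, `tokenizer.json` is self-contained and can be loaded directly with the `tokenizers` library:
```python
from tokenizers import Tokenizer

# tokenizer.json bundles the vocab, merges, normalizer, and pre-tokenizer.
tok = Tokenizer.from_file("tokenizer.json")
enc = tok.encode("Hej världen!")
print(enc.tokens)
print(enc.ids)
```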
## Sample Encoding
| Text | Tokens | Token IDs |
|------|--------|-----------|
| `Hello, world! 12345 This is a test. こんにちは` | `H, ello, ,, Ġw, orld, !, Ġ, 123, 45, ĠTh, is, Ġis, Ġa, Ġtest, ., Ġ, ãģ, ĵ, ãĤ, ĵ` | `42, 13486, 14, 275, 5150, 3, 223, 16446, 3832, 1249, 289, 516, 270, 5190, 16, 223, 3768, 244, 5986, 244` |
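The token column above can be reproduced with `convert_ids_to_tokens`; a short sketch, reusing the model ID from the Usage section:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_script_Germ_16000")

text = "Hello, world! 12345 This is a test. こんにちは"
ids = tokenizer.encode(text)
# `Ġ` marks a leading space; characters outside the learned merges
# (such as こんにちは) fall back to their UTF-8 byte tokens.
for tok, i in zip(tokenizer.convert_ids_to_tokens(ids), ids):
    print(f"{tok!r:12} {i}")
```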