---
license: mit
language:
- ind
- vie
tags:
- tokenizer
- bpe
- flexitok
- fineweb2
---
# Byte-Level BPE Tokenizer: ind_Latn + vie_Latn (16K)
A **Byte-Level BPE** tokenizer trained on Indonesian (`ind_Latn`) and Vietnamese (`vie_Latn`) data from FineWeb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Languages | `ind_Latn`, `vie_Latn` |
| Target Vocab Size | 16,000 |
| Final Vocab Size | 16,959 |
| Pre-tokenizer | custom:ind_Latn |
| Number handling | ltr_3digit |
| Contraction handling | True |
| Normalizer | NFC |
| Special Tokens | ``, ``, ``, `` |
| Training Shards | 4 |
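For reference, below is a minimal sketch of how a tokenizer with these settings could be reproduced with the 🤗 `tokenizers` library. The `custom:ind_Latn` pre-tokenizer and `ltr_3digit` number handling are project-specific components not shown here (a plain byte-level pre-tokenizer stands in for them), and the special-token names and shard paths are placeholders.

```python
from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, trainers, decoders

# Byte-level BPE with NFC normalization, matching the table above.
tokenizer = Tokenizer(models.BPE())
tokenizer.normalizer = normalizers.NFC()
# Stand-in for the project-specific `custom:ind_Latn` pre-tokenizer.
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=True)
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(
    vocab_size=16_000,  # target size; added tokens can push the final count higher
    special_tokens=["<unk>", "<s>", "</s>", "<pad>"],  # placeholder names
)

# Placeholder paths for the 4 training shards.
files = [f"shard_{i}.txt" for i in range(4)]
tokenizer.train(files, trainer)
tokenizer.save("tokenizer.json")
```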
## Usage
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_script_SEAS_16000")

# encode() returns token IDs (integers), not string pieces
tokens = tokenizer.encode("Hello, world!")
```
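To inspect the string pieces and map IDs back to text (the sample sentences below are just illustrative Indonesian and Vietnamese input):

```python
text = "Halo dunia! Xin chào thế giới!"
ids = tokenizer.encode(text)

print(tokenizer.convert_ids_to_tokens(ids))             # byte-level pieces, e.g. "Ġdunia"
print(tokenizer.decode(ids, skip_special_tokens=True))  # back to the original text
```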
## Files
- `tokenizer.json` — Full HuggingFace tokenizer (self-contained; loadable standalone, see below)
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules
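Because `tokenizer.json` is self-contained, it can also be loaded without `transformers`, using only the `tokenizers` package:

```python
from tokenizers import Tokenizer

tok = Tokenizer.from_file("tokenizer.json")
enc = tok.encode("Halo dunia!")
print(enc.tokens, enc.ids)
```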
## Sample Encoding
| Text | Tokens | Token IDs |
|------|--------|-----------|
| `Hello, world! 12345 This is a test. こんにちは` | `H, el, lo, ,, Ġw, orld, !, Ġ, 123, 45, ĠThis, Ġis, Ġa, Ġtest, ., Ġ, ãģ, ĵ, ã, Ĥ` | `42, 324, 2155, 14, 505, 4659, 3, 223, 16876, 4702, 15780, 1555, 1333, 8184, 16, 223, 11148, 244, 162, 227` |
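The `Ġ` pieces encode a leading space, and the trailing Japanese text breaks apart into raw byte pieces (`ãģ`, `ĵ`, ...) because the tokenizer saw no Japanese during training; byte-level BPE guarantees coverage by falling back to single bytes. Below is a small sketch of the GPT-2-style byte-to-unicode table that produces this rendering, assuming this tokenizer uses the standard `ByteLevel` mapping:

```python
def bytes_to_unicode() -> dict[int, str]:
    """GPT-2's byte -> printable-character table used by ByteLevel BPE."""
    # Bytes that already render as printable characters keep their codepoint.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:  # remaining bytes are shifted above U+0100
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return {b: chr(c) for b, c in zip(bs, cs)}

table = bytes_to_unicode()
# A space plus こ (UTF-8 bytes 0x20 0xE3 0x81 0x93) renders as Ġãģĵ;
# how those characters group into tokens depends on the learned merges.
print("".join(table[b] for b in " こ".encode("utf-8")))
```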