---
license: mit
language:
- arb
- fas
tags:
- tokenizer
- bpe
- flexitok
- fineweb2
---
# Byte-Level BPE Tokenizer: arb_Arab, fas_Arab (16K)
A **Byte-Level BPE** tokenizer trained on Arabic (`arb_Arab`) and Persian (`fas_Arab`) data from FineWeb-2-HQ.
## Training Details
| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Languages | `arb_Arab`, `fas_Arab` |
| Target Vocab Size | 16,000 |
| Final Vocab Size | 16,960 |
| Pre-tokenizer | custom:arb_Arab |
| Number handling | ltr_3digit |
| Contraction handling | True |
| Normalizer | NONE |
| Special Tokens | `<s>`, `</s>`, `<pad>`, `<unk>` |
| Training Shards | 4 |
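The vocabulary size and special tokens can be sanity-checked after loading. A minimal sketch; the expected values come from the table above:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_script_Arab_16000")

# Final vocab size reported in the table above
assert len(tokenizer) == 16960

# Should include <s>, </s>, <pad>, <unk>
print(tokenizer.all_special_tokens)
```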
## Usage
```python
from transformers import AutoTokenizer

# Load the tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_script_Arab_16000")

# Encode a string into token IDs
tokens = tokenizer.encode("Hello, world!")
```
## Files
- `tokenizer.json` — Full HuggingFace tokenizer
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules
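If you use the standalone `tokenizers` library instead of `transformers`, `tokenizer.json` can be loaded directly. A minimal sketch; the local path assumes the file has been downloaded from this repo:
```python
from tokenizers import Tokenizer

# Load the serialized tokenizer (assumes tokenizer.json was
# downloaded from this repo to the working directory)
tok = Tokenizer.from_file("tokenizer.json")

encoding = tok.encode("Hello, world!")
print(encoding.tokens)  # byte-level subwords (Ġ marks a leading space)
print(encoding.ids)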
## Sample Encoding
| Text | Tokens | Token IDs |
|------|--------|-----------|
| `Hello, world! 12345 This is a test. こんにちは` | `H, ell, o, ,, Ġ, w, orld, !, Ġ, 123, 45, Ġ, Th, is, Ġ, is, Ġ, a, Ġ, t` | `42, 5027, 81, 14, 223, 89, 12762, 3, 223, 16853, 5208, 223, 5728, 1147, 223, 1147, 223, 67, 223, 86` |
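The table above appears to show the first 20 tokens of the encoding. A sketch for reproducing such a row, assuming the tokenizer loads as in the Usage section:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_script_Arab_16000")

text = "Hello, world! 12345 This is a test. こんにちは"
ids = tokenizer.encode(text)

# The table lists the first 20 tokens and their IDs
print(tokenizer.convert_ids_to_tokens(ids)[:20])
print(ids[:20])
```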