---
license: mit
language:
- fra
- ita
- por
- spa
# fra_Latn, ita_Latn, por_Latn, spa_Latn
tags:
- tokenizer
- bpe
- flexitok
- fineweb2
---

# Byte-Level BPE Tokenizer: fra_Latn, ita_Latn, por_Latn, spa_Latn (32K)

A **Byte-Level BPE** tokenizer trained on **French, Italian, Portuguese, and Spanish** (`fra_Latn`, `ita_Latn`, `por_Latn`, `spa_Latn`) data from FineWeb-2-HQ.

## Training Details

| Parameter | Value |
|-----------|-------|
| Algorithm | Byte-Level BPE |
| Languages | `fra_Latn`, `ita_Latn`, `por_Latn`, `spa_Latn` |
| Target Vocab Size | 32,000 |
| Final Vocab Size | 32,871 |
| Pre-tokenizer | `custom:fra_Latn` |
| Number handling | `ltr_3digit` |
| Contraction handling | Enabled |
| Normalizer | NFC |
| Special Tokens | `<unk>`, `<s>`, `</s>`, `<pad>` |
| Training Shards | 8 |

## Usage

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_script_Roma_32000")

# Encode to token IDs, then decode back to text
token_ids = tokenizer.encode("Hello, world!")
print(token_ids)
print(tokenizer.decode(token_ids))
```

## Files

- `tokenizer.json` — Full Hugging Face tokenizer
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules

## Sample Encoding

| Text | Tokens | Token IDs |
|------|--------|-----------|
| `Hello, world! 12345 This is a test. こんにちは` | `H, ello, ,, Ġworld, !, Ġ, 123, 45, ĠThis, Ġis, Ġa, Ġtest, ., Ġ, ãģ, ĵ, ãĤ, ĵ, ãģ, «` | `42, 2110, 14, 25291, 3, 223, 22415, 4328, 17636, 1008, 267, 3037, 16, 223, 8090, 244, 14187, 244, 8090, 107` |
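The token strings above use the standard byte-level BPE display convention: `Ġ` marks a token that begins with a space, and pieces such as `ãģ` are the remapped UTF-8 bytes of multi-byte characters like こ, not corruption. Decoding reverses the byte map, so text round-trips exactly. A quick way to reproduce the table, assuming the tokenizer loads as in the Usage section:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_script_Roma_32000")

ids = tokenizer.encode("Hello, world! 12345 This is a test. こんにちは")

# Token strings in the GPT-2 byte-to-unicode convention:
# "Ġworld" carries a leading space; "ãģ" etc. are mapped UTF-8 bytes.
print(tokenizer.convert_ids_to_tokens(ids))

# Decoding reverses the byte map and recovers the original text.
print(tokenizer.decode(ids))
```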
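The `ltr_3digit` setting under Training Details matches the sample, where `12345` is split into `123` and `45`: digit runs appear to be broken left-to-right into groups of at most three digits before BPE merges apply. Below is a minimal illustrative sketch of that splitting rule only; the shipped pre-tokenizer is the custom `custom:fra_Latn` component inside `tokenizer.json`, not this function.

```python
import re

def split_digits_ltr(text: str, group: int = 3) -> str:
    """Illustrative only: split each digit run left-to-right into
    groups of at most `group` digits (12345 -> "123 45")."""
    def split_run(match: re.Match) -> str:
        run = match.group(0)
        parts = [run[i:i + group] for i in range(0, len(run), group)]
        return " ".join(parts)  # space separator is just for display
    return re.sub(r"\d+", split_run, text)

print(split_digits_ltr("Hello, world! 12345"))  # Hello, world! 123 45
print(split_digits_ltr("1234567"))              # 123 456 7
```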
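The files listed above can also be used without `transformers`: `tokenizer.json` loads directly into the standalone `tokenizers` library, and `vocab.json` is a plain JSON map from token string to ID. A sketch assuming the files are fetched with `huggingface_hub`:

```python
import json

from huggingface_hub import hf_hub_download
from tokenizers import Tokenizer

# tokenizer.json bundles the full pipeline: normalizer, pre-tokenizer, BPE model
path = hf_hub_download("flexitok/bpe_script_Roma_32000", "tokenizer.json")
tok = Tokenizer.from_file(path)

enc = tok.encode("Hello, world!")
print(enc.tokens)  # ['H', 'ello', ',', 'Ġworld', '!'] per the sample table
print(enc.ids)

# vocab.json maps token string -> ID
vocab_path = hf_hub_download("flexitok/bpe_script_Roma_32000", "vocab.json")
with open(vocab_path, encoding="utf-8") as f:
    vocab = json.load(f)
print(len(vocab))
```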