Byte-Level BPE Tokenizer: vie_Latn (4K)

A Byte-Level BPE tokenizer trained on vie_Latn data from Fineweb-2-HQ.

Training Details

Parameter             Value
---------             -----
Algorithm             Byte-Level BPE
Language              vie_Latn
Target Vocab Size     4,000
Final Vocab Size      4,000
Pre-tokenizer         gpt4
Number handling       individual
Contraction handling  True
Normalizer            NFC
Special Tokens        <s>, </s>, <pad>, <unk>
Training Shards       2
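
A configuration like the one above can be reproduced with the Hugging Face tokenizers library. The following is a minimal sketch, not the exact training script: corpus.txt is a hypothetical path standing in for the two training shards, and the card's "gpt4" pre-tokenizer (presumably the GPT-4 split pattern) is approximated here with a plain ByteLevel pre-tokenizer.

from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, trainers, decoders

# Byte-Level BPE model with NFC normalization, matching the table above
tokenizer = Tokenizer(models.BPE())
tokenizer.normalizer = normalizers.NFC()
# Approximation: plain byte-level splitting instead of the exact "gpt4" pattern
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(
    vocab_size=4000,
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),
)
tokenizer.train(["corpus.txt"], trainer)  # corpus.txt: hypothetical training shard
tokenizer.save("tokenizer.json")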

Usage

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_vie_Latn_4000")
ids = tokenizer.encode("Hello, world!")  # returns a list of token IDs
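
The IDs round-trip back to text with decode. A quick check on an arbitrary Vietnamese sample:

ids = tokenizer.encode("Xin chào thế giới!")
print(tokenizer.decode(ids, skip_special_tokens=True))  # "Xin chào thế giới!"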

Files

  • tokenizer.json — Full HuggingFace tokenizer
  • vocab.json — Vocabulary mapping
  • merges.txt — BPE merge rules
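
tokenizer.json can also be loaded without transformers, using the standalone tokenizers library. A minimal sketch, assuming the file has been downloaded locally:

from tokenizers import Tokenizer

tok = Tokenizer.from_file("tokenizer.json")  # path to the downloaded file
enc = tok.encode("Xin chào!")
print(enc.tokens)  # token strings
print(enc.ids)     # token IDs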

Sample Encoding

Text       Hello, world! 12345 This is a test. こんにちは
Tokens     H, el, lo, ,, Ġw, orld, !, Ġ, 12, 3, 45, ĠTh, is, Ġis, Ġa, Ġt, est, ., Ġ, ã (listing truncated)
Token IDs  43, 674, 1542, 15, 1212, 2553, 4, 173, 1147, 22, 2698, 365, 733, 2982, 1626, 211, 1790, 17, 173, 159 (listing truncated)
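
The row above can be regenerated from the tokenizer loaded in the Usage section; a short sketch pairing each token string with its ID:

text = "Hello, world! 12345 This is a test. こんにちは"
ids = tokenizer.encode(text, add_special_tokens=False)
for tok, tok_id in zip(tokenizer.convert_ids_to_tokens(ids), ids):
    print(tok, tok_id)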