Byte-Level BPE Tokenizer: rus_Cyrl (16K)

A Byte-Level BPE tokenizer trained on rus_Cyrl data from Fineweb-2-HQ.

Training Details

Algorithm: Byte-Level BPE
Language: rus_Cyrl
Target Vocab Size: 16,000
Final Vocab Size: 16,000
Pre-tokenizer: gpt4
Number handling: individual
Contraction handling: True
Normalizer: NFC
Special Tokens: <s>, </s>, <pad>, <unk>
Training Shards: 2
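
For reference, a comparable tokenizer can be trained with the Hugging Face tokenizers library. The sketch below is illustrative, not the exact training script: the shard filenames are assumptions, and the card's gpt4-style pre-tokenization is approximated with the library's built-in ByteLevel pre-tokenizer.

from tokenizers import Tokenizer, decoders, models, normalizers, pre_tokenizers, trainers

# Byte-level BPE with NFC normalization, matching the settings above.
tokenizer = Tokenizer(models.BPE())
tokenizer.normalizer = normalizers.NFC()
# Approximation: the "gpt4" pre-tokenizer is regex-based; ByteLevel is the
# closest built-in stand-in here.
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(
    vocab_size=16000,
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),
)
tokenizer.train(["shard_0.txt", "shard_1.txt"], trainer)  # hypothetical shard files
tokenizer.save("tokenizer.json")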

Usage

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_rus_Cyrl_16000")
tokens = tokenizer.encode("Hello, world!")
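
A quick round-trip check using standard transformers calls (the Russian sample string is just an illustration):

ids = tokenizer.encode("Привет, мир!")
print(tokenizer.convert_ids_to_tokens(ids))  # byte-level token strings
print(tokenizer.decode(ids))                 # back to the original text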

Files

  • tokenizer.json — Full HuggingFace tokenizer
  • vocab.json — Vocabulary mapping
  • merges.txt — BPE merge rules
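
tokenizer.json can also be loaded without transformers via the tokenizers library; a minimal sketch, assuming the file has been downloaded locally:

from tokenizers import Tokenizer

tok = Tokenizer.from_file("tokenizer.json")
enc = tok.encode("Привет, мир!")
print(enc.tokens, enc.ids)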

Sample Encoding

Text: Hello, world! 12345 This is a test. こんにちは
Tokens: H, ell, o, ,, Ġw, orld, !, Ġ, 12, 3, 45, ĠTh, is, Ġis, Ġa, Ġt, est, ., Ġ, ãģ
Token IDs: 43, 4733, 82, 15, 1507, 10018, 4, 178, 1437, 22, 4486, 7340, 1077, 4024, 1632, 796, 3387, 17, 178, 9601
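
This encoding can be reproduced with the tokenizer loaded in the Usage section (tokenize does not add special tokens, so the output should match the row above):

text = "Hello, world! 12345 This is a test. こんにちは"
tokens = tokenizer.tokenize(text)
ids = tokenizer.convert_tokens_to_ids(tokens)
print(tokens, ids)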
