Byte-Level BPE Tokenizer: jpn_Jpan (16K)

A byte-level BPE tokenizer with a 16,000-token vocabulary, trained on Japanese (jpn_Jpan) data from Fineweb-2-HQ.

Training Details

Parameter              Value
--------------------   -----------------------
Algorithm              Byte-Level BPE
Language               jpn_Jpan
Target Vocab Size      16,000
Final Vocab Size       16,000
Pre-tokenizer          gpt4
Number handling        individual
Contraction handling   True
Normalizer             NFC
Special Tokens         <s>, </s>, <pad>, <unk>
Training Shards        2
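
This repo ships only the trained artifacts, but the settings above map directly onto the Hugging Face `tokenizers` API. Below is a minimal training sketch under those settings; the shard file names are placeholders, and the actual pipeline details (the gpt4 pre-tokenization regex, number and contraction handling) are not reproduced here.

from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, decoders, trainers

tokenizer = Tokenizer(models.BPE(unk_token="<unk>"))
tokenizer.normalizer = normalizers.NFC()                   # NFC, per the table above
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(
    vocab_size=16_000,                                     # target vocab size
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),  # seed with all 256 byte symbols
)
# "shard0.txt" and "shard1.txt" are placeholder names for the two training shards.
tokenizer.train(["shard0.txt", "shard1.txt"], trainer)
tokenizer.save("tokenizer.json")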

Usage

from transformers import AutoTokenizer

# Load the tokenizer from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_jpn_Jpan_16000")
tokens = tokenizer.encode("Hello, world!")  # -> list of token IDs
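
encode returns token IDs; decode maps them back to text. A quick round-trip sketch (skip_special_tokens guards against any special tokens the saved config may add):

ids = tokenizer.encode("こんにちは")
print(tokenizer.decode(ids, skip_special_tokens=True))  # -> こんにちは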

Files

  • tokenizer.json — Full HuggingFace tokenizer
  • vocab.json — Vocabulary mapping
  • merges.txt — BPE merge rules
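
These files can also be loaded without `transformers`, using the `tokenizers` library directly. A sketch, assuming the files have been downloaded to the working directory:

from tokenizers import Tokenizer

tok = Tokenizer.from_file("tokenizer.json")  # the full tokenizer in one file
print(tok.encode("こんにちは").ids)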

Sample Encoding

Text:      Hello, world! 12345 This is a test. こんにちは
Tokens:    Hello, ,, Ġworld, !, Ġ, 123, 45, ĠThis, Ġis, Ġa, Ġtest, ., Ġ, ãģĵ, ãĤĵ, ãģ«, ãģ¡, ãģ¯
Token IDs: 11344, 15, 6176, 4, 178, 4767, 2220, 6501, 2252, 1540, 7485, 17, 178, 240, 272, 219, 332, 225

Ġ marks a leading space. The Japanese characters are shown in their byte-level form, where each token covers the UTF-8 bytes of one character (e.g. ãģĵ is こ).
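
The table can be reproduced with the tokenizer loaded in the Usage section; add_special_tokens=False keeps the output aligned with the rows above:

text = "Hello, world! 12345 This is a test. こんにちは"
ids = tokenizer.encode(text, add_special_tokens=False)
print(tokenizer.convert_ids_to_tokens(ids))  # ['Hello', ',', 'Ġworld', '!', ...]
print(ids)                                   # [11344, 15, 6176, 4, ...]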