---
license: mit
language:
  - ell
tags:
  - tokenizer
  - bpe
  - flexitok
  - fineweb2
---

# Byte-Level BPE Tokenizer: ell_Grek (4K)

A byte-level BPE tokenizer trained on Greek (ell_Grek) data from Fineweb-2-HQ.

## Training Details

| Parameter | Value |
|---|---|
| Algorithm | Byte-Level BPE |
| Language | ell_Grek |
| Target Vocab Size | 4,000 |
| Final Vocab Size | 4,000 |
| Pre-tokenizer | gpt4 |
| Number handling | individual |
| Contraction handling | True |
| Normalizer | NFC |
| Special Tokens | `<s>`, `</s>`, `<pad>`, `<unk>` |
| Training Shards | 2 |
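
To make the "Byte-Level BPE" row above concrete, here is a minimal, pure-Python sketch of the core BPE training loop: repeatedly count adjacent token pairs and merge the most frequent one. This is an illustration of the algorithm only, not the actual training code behind this tokenizer (which uses far larger data, the gpt4 pre-tokenizer, and NFC normalization); all names here are illustrative.

```python
from collections import Counter

def train_bpe(corpus, num_merges):
    """Toy byte-level BPE: repeatedly merge the most frequent adjacent pair."""
    # Start from raw UTF-8 bytes, so any input (including Greek text) is covered.
    words = [tuple(bytes([b]) for b in text.encode("utf-8")) for text in corpus]
    merges = []
    for _ in range(num_merges):
        # Count how often each adjacent pair of tokens occurs.
        pairs = Counter()
        for word in words:
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + best[1]
        # Rewrite every word, replacing occurrences of the best pair.
        new_words = []
        for word in words:
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_words.append(tuple(out))
        words = new_words
    return merges

merges = train_bpe(["καλημέρα κόσμε", "καλημέρα σας"], num_merges=10)
```

In the real tokenizer the learned merge rules are what `merges.txt` stores, and the vocabulary grows by one entry per merge until the 4,000-token target is reached.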

## Usage

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/-bpe_ell_Grek_4000")
tokens = tokenizer.encode("Hello, world!")
```

## Files

- `tokenizer.json` — Full HuggingFace tokenizer
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules

## Sample Encoding

| Text | Tokens (first 20 shown) | Token IDs |
|---|---|---|
| Hello, world! 12345 This is a test. こんにちは | H, ell, o, ,, Ġw, or, l, d, !, Ġ, 12, 3, 4, 5, ĠT, h, is, Ġ, is, Ġa | 43, 2571, 82, 15, 1793, 650, 79, 71, 4, 177, 1549, 22, 23, 24, 753, 75, 901, 177, 901, 2841 |
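
The `Ġ` prefix in the token column comes from the GPT-2-style byte-to-unicode mapping used by byte-level BPE: every raw byte is displayed as a printable character, and the space byte (0x20) is shifted past U+00FF and rendered as `Ġ` (U+0120), so `Ġw` means "w preceded by a space". A sketch of that mapping, adapted from the well-known GPT-2 `bytes_to_unicode` helper (not code shipped in this repository):

```python
def bytes_to_unicode():
    """Map every byte 0-255 to a single printable unicode character (GPT-2 style)."""
    # Printable, "safe" bytes keep their own codepoint.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            # Unsafe bytes (space, control chars, ...) are shifted past U+00FF.
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, map(chr, cs)))

mapping = bytes_to_unicode()
print(mapping[0x20])  # 'Ġ'
```

Because the mapping is a bijection over all 256 byte values, any UTF-8 input (Greek included) can be represented and losslessly decoded.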