UnigramLM Tokenizer: fas_Arab (16K)

A UnigramLM tokenizer trained on fas_Arab (Persian, Arabic script) data from Fineweb-2-HQ.

Training Details

Parameter          Value
Algorithm          UnigramLM
Language           fas_Arab
Target Vocab Size  16,000
Final Vocab Size   0
Pre-tokenizer      ByteLevel
Normalizer         NFC
Special Tokens     <s>, </s>, <pad>, <unk>
Training Shards    2
Data Source        /scratch/gsa/data/flexitok/fas_Arab/
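
The configuration above maps directly onto the Hugging Face tokenizers training API. The following is a minimal sketch of how an equivalent tokenizer could be trained under these settings; it is not the actual training script, and the input file names are placeholders (the real run used 2 shards from the data source above).

from tokenizers import Tokenizer, decoders, models, normalizers, pre_tokenizers, trainers

# Unigram model with the normalizer and pre-tokenizer listed in the table.
tokenizer = Tokenizer(models.Unigram())
tokenizer.normalizer = normalizers.NFC()
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.UnigramTrainer(
    vocab_size=16000,  # Target Vocab Size
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],
    unk_token="<unk>",
    # Commonly paired with ByteLevel so every byte is covered.
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),
)

# Placeholder shard paths.
tokenizer.train(files=["shard_0.txt", "shard_1.txt"], trainer=trainer)
tokenizer.save("tokenizer.json")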

Usage

from transformers import AutoTokenizer

# Replace <repo_id> with the published repository ID.
tokenizer = AutoTokenizer.from_pretrained("<repo_id>")
tokens = tokenizer.encode("Hello, world!")
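
Because the vocabulary was learned from Persian text, a Persian input exercises it more realistically than the English string above. A short sketch (the sentence is an arbitrary example):

text = "سلام دنیا!"  # "Hello, world!" in Persian
ids = tokenizer.encode(text)
print(tokenizer.convert_ids_to_tokens(ids))  # inspect the learned subword pieces
print(tokenizer.decode(ids))                 # round-trip back to text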

Files

  • tokenizer.json - Full Hugging Face tokenizer
  • vocab.json - Vocabulary mapping
  • tokenizer.model - SentencePiece protobuf (if available)
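
For use outside transformers, tokenizer.json can also be loaded directly with the tokenizers library. A minimal sketch, assuming the file has been downloaded locally:

from tokenizers import Tokenizer

tok = Tokenizer.from_file("tokenizer.json")
print(tok.encode("سلام دنیا!").tokens)  # subword pieces for the sample sentence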