# UTF32-LM-tiny
This model is a fine-tuned version of sign/utf8-lm-tiny on the HuggingFaceFW/fineweb dataset.
It was trained with the training script from utf8-tokenizer; the full command is listed under Training procedure below.
Unlike the base model, which is trained directly on UTF-8 bytes, this model is trained on characters (UTF-32 blocks): each character is decomposed into a fixed group of four bytes, which are encoded independently and then concatenated.
| Character | UTF-8 | UTF-32 | UTF-32 Decomposed (bytes) |
|---|---|---|---|
| A | \x41 | U+00000041 | [0, 0, 0, 65] |
| é | \xC3\xA9 | U+000000E9 | [0, 0, 0, 233] |
| € | \xE2\x82\xAC | U+000020AC | [0, 0, 32, 172] |
| 😀 | \xF0\x9F\x98\x80 | U+0001F600 | [0, 1, 246, 0] |
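For reference, the decomposition in the table above can be reproduced with plain Python. This is an illustrative sketch of the byte layout, not the tokenizer's actual implementation:

```python
# Illustrative sketch: split each character's code point into four big-endian
# bytes, matching the "UTF-32 Decomposed (bytes)" column above.
for ch in ["A", "é", "€", "😀"]:
    code_point = ord(ch)
    utf32_bytes = list(code_point.to_bytes(4, byteorder="big"))
    print(f"{ch}  U+{code_point:08X}  {utf32_bytes}")
# A  U+00000041  [0, 0, 0, 65]
# é  U+000000E9  [0, 0, 0, 233]
# €  U+000020AC  [0, 0, 32, 172]
# 😀  U+0001F600  [0, 1, 246, 0]
```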
This effectively switches from variable-width canonical UTF-8 byte sequences to fixed-size character blocks, making training and inference up to 4x more efficient for complex scripts (which need three or four bytes per character in UTF-8).
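As a rough illustration of that claim (assuming one sequence position per character here, versus one position per byte for the UTF-8 base model):

```python
# Hypothetical length comparison: positions used by a UTF-8 byte model
# vs. this UTF-32 character-block model for a non-Latin string.
text = "こんにちは世界"                       # 7 characters, 3 UTF-8 bytes each
utf8_positions = len(text.encode("utf-8"))   # 21 positions (one per byte)
utf32_positions = len(text)                  # 7 positions (one per character)
print(utf8_positions / utf32_positions)      # 3.0; up to 4.0 for emoji and other supplementary-plane characters
```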
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, LogitsProcessorList

from utf8_tokenizer import UTF8Tokenizer
from utf8_tokenizer.char_causal_lm import CharacterCausalLMWrapper
from utf8_tokenizer.logits_processor import UTF8ValidationLogitsProcessor

model_id = "sign/utf32-lm-tiny"

tokenizer = UTF8Tokenizer()
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "My name is"
inputs = tokenizer([prompt], return_tensors="pt",
                   padding=True,
                   add_special_tokens=True)

# The tokenizer appends an EOS token; drop it so generation continues the prompt.
inputs["input_ids"] = inputs["input_ids"][:, :-1]
inputs["attention_mask"] = inputs["attention_mask"][:, :-1]

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=256,
    )

print(tokenizer.decode(out[0], skip_special_tokens=False))
```
## Training procedure
```bash
python run_clm.py \
  --use_bit_embeddings True \
  --encoding utf32 \
  --output_dir ./output-tiny-lm-fineweb-groups \
  --dataset_name HuggingFaceFW/fineweb \
  --streaming True \
  --dataloader_num_workers 1 \
  --dataloader_prefetch_factor 4 \
  --dataloader_pin_memory True \
  --dataloader_persistent_workers True \
  --do_train True \
  --save_strategy steps \
  --max_steps 100000 \
  --save_steps 1000 \
  --save_total_limit 1 \
  --logging_steps 100 \
  --logging_strategy steps \
  --model_name_or_path sbintuitions/tiny-lm \
  --per_device_train_batch_size 256 \
  --block_size 256 \
  --optim adamw_torch_fused \
  --learning_rate 3e-4 \
  --lr_scheduler_type cosine \
  --warmup_ratio 0.01 \
  --weight_decay 0.1 \
  --adam_beta1 0.9 \
  --adam_beta2 0.95 \
  --max_grad_norm 1.0 \
  --gradient_checkpointing True \
  --bf16 True \
  --seed 42 \
  --report_to wandb \
  --include_num_input_tokens_seen True
```
### Framework versions
- Transformers 4.57.3
- Pytorch 2.9.1+cu130
- Datasets 4.4.1
- Tokenizers 0.22.1