Qwen3-32B-KoTokenizer

Qwen3-32B with 3,682 Korean colloquial tokens added to the vocabulary.

Qwen3-32B's BPE tokenizer over-segments common Korean endings and particles (์–ด๋ฏธ/์กฐ์‚ฌ) into 2-4 sub-tokens each. This model adds 3,682 of them to the vocabulary as single tokens and fine-tunes the new embeddings via QLoRA so the model uses them natively during generation.

What was done

Before → After
  • ํ–ˆ + ์ž– + ์•„ (3 tokens) → ํ–ˆ์ž–์•„ (1 token)
  • ๋ดค + ๋Š”๋ฐ (2 tokens) → ๋ดค๋Š”๋ฐ (1 token)
  • ์ฃ„ + ์†ก + ํ•˜์ง€๋งŒ (3 tokens) → ์ฃ„์†กํ•˜์ง€๋งŒ (1 token)
  • Vocab size: 151,669 → 155,351 (+3,682)

The 3,682 tokens were extracted from HyperCLOVA's Korean-optimized vocabulary โ€” specifically endings (์–ด๋ฏธ) and particles (์กฐ์‚ฌ) that Qwen's BPE consistently fragments.

Training

  • Method: QLoRA (r=64, alpha=128) on Colab A100
  • Key technique: Old embedding freeze โ€” gradient hook zeros out gradients for the original 151K token embeddings, forcing the optimizer to only update the 3,682 new token rows
  • Data: ~77K Korean samples filtered for high new-token density (โ‰ฅ5 target tokens per sample), sourced from KoAlpaca, alpaca-gpt4-korean, KULLM-v2
  • Epochs: 1 (with high-density data + freeze, convergence is fast)
  • New token initialization: Mean pooling of constituent sub-token embeddings
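The initialization and freeze steps above can be sketched with a toy embedding (sizes and the constituent map are illustrative; the real model resizes from 151,669 to 155,351 rows):

```python
import torch
import torch.nn as nn

OLD_VOCAB, NEW_TOKENS = 100, 4  # illustrative; real values: 151,669 and 3,682
emb = nn.Embedding(OLD_VOCAB + NEW_TOKENS, 16)

# 1) Initialize each new row as the mean of its constituent sub-token
#    embeddings. `constituents` maps new id -> old sub-token ids (toy data).
constituents = {100: [3, 41, 7], 101: [9, 12], 102: [5, 88], 103: [2, 60]}
with torch.no_grad():
    for new_id, sub_ids in constituents.items():
        emb.weight[new_id] = emb.weight[torch.tensor(sub_ids)].mean(dim=0)

# 2) Freeze the old rows: a gradient hook zeros the gradient for all ids
#    below OLD_VOCAB, so the optimizer only ever updates the new rows.
def zero_old_grads(grad):
    grad = grad.clone()
    grad[:OLD_VOCAB] = 0
    return grad

emb.weight.register_hook(zero_old_grads)

# Sanity check: after backward, only new-token rows carry gradient.
loss = emb(torch.tensor([1, 101, 103])).sum()
loss.backward()
assert emb.weight.grad[:OLD_VOCAB].abs().sum().item() == 0
assert emb.weight.grad[OLD_VOCAB:].abs().sum().item() > 0
```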

Training curve

  Step    Loss     Accuracy
  50      1.649    65.6%
  500     1.186    70.9%
  1000    1.140    71.8%

Results

New token adoption rate: 92.9% โ€” when the model generates text containing a string that matches a new token, it uses the single new token ID 92.9% of the time (vs. falling back to the old fragmented sub-tokens).
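The actual evaluation script isn't published; the metric can be approximated as below, where the ids, strings, and sample output are illustrative:

```python
def adoption_rate(output_ids, new_ids, text, id_to_string):
    """Count how often a new-token string appearing in the decoded text was
    emitted as its single new token id rather than as old sub-tokens."""
    expected = sum(text.count(s) for s in id_to_string.values())
    used = sum(1 for t in output_ids if t in new_ids)
    return used, expected

# Hypothetical generation: two new-token strings appear in the output,
# both emitted as single new ids -> 2/2 adoption.
id_to_string = {155305: "ํ–ˆ์ž–์•„", 155310: "๋ดค๋Š”๋ฐ"}
output_ids = [10, 155305, 20, 30, 155310]
text = "๊ทธ๋ž˜์„œ ํ–ˆ์ž–์•„, ๊ทธ๋ฆฌ๊ณ  ๋ดค๋Š”๋ฐ ์ข‹์•˜์–ด"
used, expected = adoption_rate(output_ids, set(id_to_string), text, id_to_string)
print(f"{used}/{expected}")  # 2/2
```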

  Prompt                                     Adoption     New tokens used
  ์–ด์ œ ์นœ๊ตฌ๋ฅผ ๋งŒ๋‚ฌ๋Š”๋ฐ ๊ฑ”๊ฐ€ ๊ฐ‘์ž๊ธฐ...     4/4 = 100%   ๋‚ฌ์ง€๋งŒ, ๋‹ฌ๋ผ๊ณ , ํ–ˆ์ง€๋งŒ, ๊ฐ”์Šต๋‹ˆ๋‹ค
  ์†”์งํžˆ ๊ทธ๊ฑด ์ข€ ์•„๋‹Œ ๊ฒƒ ๊ฐ™๊ฑฐ๋“ ?         1/1 = 100%   ์ข‹์•„ํ•˜๋Š”
  ํ•œ๊ตญ์˜ ๊ฒฝ์ œ ์„ฑ์žฅ์— ๋Œ€ํ•ด ์„ค๋ช…ํ•ด์ฃผ์„ธ์š”   1/1 = 100%   ๋•Œ๋ฌธ
  ์ด๊ฑฐ ์ง„์งœ ๋ง›์žˆ๊ฑฐ๋“ ? ๋„ˆ๋„ ํ•œ๋ฒˆ ๋จน์–ด๋ด   3/3 = 100%   ๋ณด์„ธ์š”, ์žˆ์œผ๋ฉฐ, ํ•˜๋‹ค๊ณ 
  Write a Python function...                 0/0 = N/A    (no Korean tokens expected)

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "2264K/Qwen3-32B-KoTokenizer"

# NF4 quantization (fits in 24GB VRAM)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Verify new tokens work
print(tokenizer.encode("ํ–ˆ์ž–์•„", add_special_tokens=False))
# [155305]  โ† single token (was 3 tokens before)

# Generate
messages = [{"role": "user", "content": "์–ด์ œ ์นœ๊ตฌ๋ฅผ ๋งŒ๋‚ฌ๋Š”๋ฐ ๊ฑ”๊ฐ€ ๊ฐ‘์ž๊ธฐ ์ด์ƒํ•œ ์–˜๊ธฐ๋ฅผ ํ•˜๋”๋ผ๊ณ ."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=False)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7, top_p=0.9)

print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))

Important notes

  • This is a merged model (not an adapter). Load it directly like any HuggingFace model.
  • The tokenizer is included. No need to load the base Qwen3-32B tokenizer separately.
  • The model's generation style is unchanged from Qwen3-32B โ€” this modification only affects tokenization efficiency, not the model's personality or capabilities.
  • English and code generation are unaffected (0 new tokens in English outputs, as expected).

Files

  • model-*.safetensors โ€” merged model weights (bf16)
  • tokenizer.json, tokenizer_config.json โ€” expanded tokenizer
  • token_expansion_metadata.json โ€” metadata for all 3,682 added tokens (token strings, IDs, source sub-token IDs used for mean pooling init)

License

Apache 2.0 (same as Qwen3-32B)
