anuj0456 committed
Commit feb05fd · 1 Parent(s): 43b7508

Add application file

Files changed (3)
  1. README.md +37 -6
  2. app.py +720 -0
  3. requirements.txt +10 -0
README.md CHANGED
@@ -1,14 +1,45 @@
 ---
 title: TokenizerBench
-emoji: 🌖
-colorFrom: blue
-colorTo: red
+emoji: 🤗
+colorFrom: yellow
+colorTo: orange
 sdk: gradio
-sdk_version: 6.11.0
+sdk_version: "4.0.0"
 app_file: app.py
 pinned: false
 license: mit
-short_description: A comprehensive multilingual tokenizer evaluator
+short_description: Evaluate & compare tokenizers on multilingual text, code, and math
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+# TokenizerBench
+
+Evaluate any Hugging Face or tiktoken tokenizer against the **TokenizerBench** dataset — covering multilingual text, programming languages, scientific formulas, and edge cases.
+
+## Features
+
+- **🧪 Playground** — type any text and see live tokenization (token IDs, fertility, compression, fidelity check)
+- **📊 Evaluate** — run a full evaluation on a single tokenizer with heatmap, language bar chart, and scatter plot
+- **⚖️ Compare** — compare two tokenizers side-by-side with grouped bar charts and a leaderboard
+
+## Dataset categories
+
+| Category | Subcategories |
+|----------|---------------|
+| 🌍 Human languages | English, Hindi, Chinese, Arabic, Japanese, German, Russian, Korean |
+| 💻 Programming languages | Python, JavaScript, SQL, Rust |
+| 🧮 Scientific formulas | Algebra, Calculus, Physics, Statistics |
+| ⚠️ Edge cases | Whitespace, Long tokens, Mixed scripts |
+
+## Metrics
+
+| Metric | Better | Notes |
+|--------|--------|-------|
+| `avg_fertility` | Lower | Tokens per word. Near 1.0 = ideal. ≥4 = poor. |
+| `avg_compression_ratio` | Lower | Tokens per character. |
+| `avg_byte_compression` | Lower | Tokens per UTF-8 byte. Language-agnostic. |
+| `fidelity_pass_rate` | 1.0 | Must be 1.0 — any failure indicates a problem. |
+
+## Supported tokenizer types
+
+- **HuggingFace AutoTokenizer** — any model from the Hub, e.g. `bert-base-multilingual-cased`, `xlm-roberta-base`, `google/mt5-base`
+- **tiktoken** — OpenAI encodings: `cl100k_base`, `o200k_base`, `p50k_base`
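The metrics in the table above can be illustrated with a small, self-contained sketch. The `ToyTokenizer` below is hypothetical (a trivial whitespace tokenizer standing in for a real Hugging Face or tiktoken one), but the three metric functions mirror the definitions the README gives: fertility = tokens per word, compression ratio = tokens per character, fidelity = encode→decode roundtrip reproduces the text.

```python
# Toy illustration of TokenizerBench-style metrics. ToyTokenizer is a
# hypothetical stand-in: it splits on whitespace and assigns one ID per word.

class ToyTokenizer:
    def __init__(self):
        self.vocab: dict[str, int] = {}
        self.inverse: dict[int, str] = {}

    def encode(self, text: str) -> list[int]:
        ids = []
        for word in text.split():
            if word not in self.vocab:
                idx = len(self.vocab)
                self.vocab[word] = idx
                self.inverse[idx] = word
            ids.append(self.vocab[word])
        return ids

    def decode(self, ids: list[int]) -> str:
        return " ".join(self.inverse[i] for i in ids)


def fertility(tok, text: str) -> float:
    # Tokens per word: near 1.0 is ideal, >=4 signals poor coverage.
    words = text.split()
    return len(tok.encode(text)) / len(words) if words else 0.0

def compression_ratio(tok, text: str) -> float:
    # Tokens per character: lower is better.
    return len(tok.encode(text)) / len(text) if text else 0.0

def roundtrip_ok(tok, text: str) -> bool:
    # Fidelity check: encode -> decode must reproduce the input text.
    return tok.decode(tok.encode(text)).strip() == text.strip()


if __name__ == "__main__":
    tok = ToyTokenizer()
    text = "The quick brown fox jumps over the lazy dog."
    print(fertility(tok, text))        # 1.0 — one token per word
    print(compression_ratio(tok, text))
    print(roundtrip_ok(tok, text))     # True for this tokenizer
```

A real subword tokenizer would typically score above 1.0 on fertility, and noticeably higher on non-Latin scripts, which is exactly the gap the benchmark visualises.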
app.py ADDED
@@ -0,0 +1,720 @@
+"""
+TokenizerBench — Hugging Face Space
+A Gradio app that lets users try any HF/tiktoken tokenizer against
+the TokenizerBench dataset and visualise the results.
+"""
+
+import io
+import traceback
+from typing import Any
+
+import gradio as gr
+import matplotlib
+
+matplotlib.use("Agg")  # headless backend; select before importing pyplot
+
+import matplotlib.patches as mpatches
+import matplotlib.pyplot as plt
+import numpy as np
+import pandas as pd
+import seaborn as sns
+from PIL import Image
+
+# ─────────────────────────────────────────────────────────────────
+# Inline dataset (subset of the full TokenizerBench data)
+# ─────────────────────────────────────────────────────────────────
+
+DATASET: dict[str, dict[str, list[str]]] = {
+    "human_languages": {
+        "english": [
+            "The quick brown fox jumps over the lazy dog.",
+            "Artificial intelligence is transforming every industry.",
+            "Natural language processing enables machines to understand text.",
+            "Tokenization is the first step in most NLP pipelines.",
+            "The model achieved state-of-the-art results on all benchmarks.",
+        ],
+        "hindi": [
+            "कृत्रिम बुद्धिमत्ता दुनिया को तेजी से बदल रही है।",
+            "मुझे नई तकनीकें सीखना पसंद है।",
+            "यह एक परीक्षण वाक्य है।",
+            "संख्याएँ 12345 और चिह्नों को सही ढंग से संसाधित किया जाना चाहिए।",
+            "प्राकृतिक भाषा प्रसंस्करण कृत्रिम बुद्धिमत्ता का एक महत्वपूर्ण क्षेत्र है।",
+        ],
+        "chinese": [
+            "人工智能正在迅速改变世界。",
+            "我喜欢学习新技术。",
+            "这是一个测试句子。",
+            "数字12345和符号需要正确处理。",
+            "自然语言处理是人工智能的重要领域。",
+        ],
+        "arabic": [
+            "الذكاء الاصطناعي يغير العالم بسرعة.",
+            "أحب تعلم التقنيات الجديدة.",
+            "هذه جملة اختبارية.",
+            "معالجة اللغة الطبيعية مجال مهم في الذكاء الاصطناعي.",
+            "يجب معالجة الأرقام 12345 والرموز بشكل صحيح.",
+        ],
+        "japanese": [
+            "人工知能は世界を急速に変えています。",
+            "私は新しい技術を学ぶのが好きです。",
+            "これはテスト用の文です。",
+            "数字12345と記号を正しく処理する必要があります。",
+            "自然言語処理は人工知能の重要な分野です。",
+        ],
+        "german": [
+            "Künstliche Intelligenz verändert die Welt schnell.",
+            "Ich lerne gerne neue Technologien.",
+            "Dies ist ein Testsatz.",
+            "Donaudampfschifffahrtsgesellschaft ist ein langes deutsches Wort.",
+            "Natürliche Sprachverarbeitung ist ein wichtiges Forschungsgebiet.",
+        ],
+        "russian": [
+            "Искусственный интеллект быстро меняет мир.",
+            "Мне нравится изучать новые технологии.",
+            "Это тестовое предложение.",
+            "Обработка естественного языка — важная область ИИ.",
+            "Числа 12345 и символы должны обрабатываться корректно.",
+        ],
+        "korean": [
+            "인공지능은 세상을 빠르게 변화시키고 있습니다.",
+            "나는 새로운 기술을 배우는 것을 좋아합니다.",
+            "이것은 테스트 문장입니다.",
+            "자연어 처리는 인공지능의 중요한 분야입니다.",
+            "숫자 12345와 기호를 올바르게 처리해야 합니다.",
+        ],
+    },
+    "programming_languages": {
+        "python": [
+            "def greet(name): return f'Hello, {name}!'",
+            "numbers = [1,2,3]; squared = [x**2 for x in numbers]",
+            "import torch\nmodel = torch.nn.Linear(128, 64)",
+            "async def fetch(url):\n    async with aiohttp.ClientSession() as s:\n        return await s.get(url)",
+            "class Tokenizer:\n    def __init__(self, vocab):\n        self.vocab = vocab",
+        ],
+        "javascript": [
+            "const greet = name => `Hello, ${name}!`;",
+            "const nums = [1,2,3]; const sq = nums.map(x => x**2);",
+            "async function fetchData(url) { const res = await fetch(url); return res.json(); }",
+            "const obj = { key: 'value', nested: { a: 1 } };",
+            "document.querySelector('#app').innerHTML = '<h1>Hello</h1>';",
+        ],
+        "sql": [
+            "SELECT u.name, COUNT(o.id) FROM users u JOIN orders o ON u.id = o.user_id GROUP BY u.name;",
+            "CREATE INDEX idx_users_email ON users(email);",
+            "WITH ranked AS (SELECT *, ROW_NUMBER() OVER (PARTITION BY dept ORDER BY salary DESC) rn FROM emp) SELECT * FROM ranked WHERE rn=1;",
+            "INSERT INTO logs (event, ts) VALUES ('login', NOW());",
+        ],
+        "rust": [
+            "fn main() { println!(\"Hello, world!\"); }",
+            "let v: Vec<i32> = (1..=10).collect();",
+            "impl fmt::Display for Point { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, \"({}, {})\", self.x, self.y) } }",
+        ],
+    },
+    "scientific_formulas": {
+        "algebra": [
+            "x² + y² = z²",
+            "x = (-b ± √(b² - 4ac)) / 2a",
+            "e^(iπ) + 1 = 0",
+            "∑ᵢ₌₁ⁿ i = n(n+1)/2",
+        ],
+        "calculus": [
+            "∫₀¹ x² dx = 1/3",
+            "d/dx (x²) = 2x",
+            "lim(x→0) sin(x)/x = 1",
+            "∂²u/∂x² + ∂²u/∂y² = 0",
+        ],
+        "physics": [
+            "E = mc²",
+            "∇·E = ρ/ε₀",
+            "ψ(x,t) = Ae^{i(kx - ωt)}",
+            "|ψ⟩ = α|0⟩ + β|1⟩",
+        ],
+        "statistics": [
+            "P(A|B) = P(A∩B)/P(B)",
+            "H(X) = -∑ p(x) log p(x)",
+            "KL(P||Q) = ∑ P(x) log(P(x)/Q(x))",
+            "E[X] = ∑ xP(x), Var(X) = E[X²] - (E[X])²",
+        ],
+    },
+    "edge_cases": {
+        "whitespace_control": [
+            "word1\t\tword2\t\tword3",
+            "line1\nline2\nline3",
+            "    leading spaces",
+            "trailing spaces    ",
+        ],
+        "long_tokens": [
+            "https://www.example.com/very/long/path/to/some/resource?param1=value1&param2=value2",
+            "thisIsAReallyLongCamelCaseIdentifierThatMightAppearInCode",
+            "SGVsbG8gV29ybGQhIFRoaXMgaXMgYSBiYXNlNjQgZW5jb2RlZCBzdHJpbmc=",
+            "550e8400-e29b-41d4-a716-446655440000",
+        ],
+        "mixed_scripts": [
+            "Hello 世界 مرحبا Привет こんにちは",
+            "AI模型 and NLP技术 are transforming الذكاء الاصطناعي",
+            "math: α + β = γ, code: x += 1, emoji: 🚀",
+        ],
+    },
+}
+
+CATEGORY_LABELS = {
+    "human_languages": "🌍 Human languages",
+    "programming_languages": "💻 Programming languages",
+    "scientific_formulas": "🧮 Scientific formulas",
+    "edge_cases": "⚠️ Edge cases",
+}
+
+# ─────────────────────────────────────────────────────────────────
+# Metrics (mirrors metrics.py from the repo)
+# ─────────────────────────────────────────────────────────────────
+
+def fertility_score(tokenizer, text: str) -> float:
+    words = text.split()
+    if not words:
+        return 0.0
+    tokens = tokenizer.encode(text)
+    return len(tokens) / len(words)
+
+def compression_ratio(tokenizer, text: str) -> float:
+    if not text:
+        return 0.0
+    return len(tokenizer.encode(text)) / len(text)
+
+def byte_compression_ratio(tokenizer, text: str) -> float:
+    n_bytes = len(text.encode("utf-8"))
+    if n_bytes == 0:
+        return 0.0
+    return len(tokenizer.encode(text)) / n_bytes
+
+def roundtrip_fidelity(tokenizer, text: str) -> bool:
+    try:
+        ids = tokenizer.encode(text)
+        decoded = tokenizer.decode(ids)
+        return text.strip() == decoded.strip()
+    except Exception:
+        return False
+
+def evaluate_tokenizer(tokenizer, dataset: dict) -> dict:
+    results: dict[str, Any] = {}
+    all_f, all_c = [], []
+    failures = 0
+
+    for category, subcategories in dataset.items():
+        results[category] = {}
+        for subcategory, samples in subcategories.items():
+            ferts, comps, byte_comps, token_counts = [], [], [], []
+            sub_fails = 0
+            for text in samples:
+                if not text or not text.strip():
+                    continue
+                try:
+                    toks = tokenizer.encode(text)
+                    token_counts.append(len(toks))
+                    f = fertility_score(tokenizer, text)
+                    ferts.append(f); all_f.append(f)
+                    c = compression_ratio(tokenizer, text)
+                    comps.append(c); all_c.append(c)
+                    byte_comps.append(byte_compression_ratio(tokenizer, text))
+                    if not roundtrip_fidelity(tokenizer, text):
+                        sub_fails += 1; failures += 1
+                except Exception:
+                    pass
+
+            def avg(lst): return round(sum(lst)/len(lst), 4) if lst else 0.0
+            results[category][subcategory] = {
+                "n_samples": len(token_counts),
+                "avg_tokens": avg(token_counts),
+                "avg_fertility": avg(ferts),
+                "avg_compression_ratio": avg(comps),
+                "avg_byte_compression": avg(byte_comps),
+                "fidelity_failures": sub_fails,
+            }
+
+    results["__summary__"] = {
+        "overall_avg_fertility": round(sum(all_f)/len(all_f), 4) if all_f else 0,
+        "overall_avg_compression": round(sum(all_c)/len(all_c), 4) if all_c else 0,
+        "total_samples": sum(len(s) for cat in dataset.values() for s in cat.values()),
+        "fidelity_failure_count": failures,
+    }
+    return results
+
+# ─────────────────────────────────────────────────────────────────
+# Tokenizer loaders
+# ─────────────────────────────────────────────────────────────────
+
+def load_hf_tokenizer(model_id: str):
+    from transformers import AutoTokenizer  # lazy import: heavy dependency
+    tok = AutoTokenizer.from_pretrained(model_id)
+    class W:
+        def encode(self, text):
+            return tok.encode(text, add_special_tokens=False)
+        def decode(self, ids):
+            return tok.decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
+    return W()
+
+def load_tiktoken(model: str):
+    import tiktoken  # lazy import
+    enc = tiktoken.get_encoding(model)
+    class W:
+        def encode(self, text): return enc.encode(text)
+        def decode(self, ids): return enc.decode(ids)
+    return W()
+
+# ─────────────────────────────────────────────────────────────────
+# Plots
+# ─────────────────────────────────────────────────────────────────
+
+PALETTE = ["#3b82f6", "#8b5cf6", "#ec4899", "#f59e0b", "#10b981",
+           "#ef4444", "#06b6d4", "#84cc16"]
+
+def fig_to_pil(fig):
+    buf = io.BytesIO()
+    fig.savefig(buf, format="png", dpi=130, bbox_inches="tight",
+                facecolor=fig.get_facecolor())
+    buf.seek(0)
+    return Image.open(buf).copy()
+
+def plot_fertility_heatmap(result: dict, title: str):
+    cats = [c for c in result if not c.startswith("__") and isinstance(result[c], dict)]
+    if not cats:
+        return None
+    data = {}
+    for cat in cats:
+        data[cat] = {sub: v.get("avg_fertility", 0)
+                     for sub, v in result[cat].items() if isinstance(v, dict)}
+    df = pd.DataFrame(data).T.fillna(0)
+    fig, ax = plt.subplots(figsize=(max(10, len(df.columns)*0.8), max(4, len(df)*0.6)),
+                           facecolor="#0f1117")
+    ax.set_facecolor("#0f1117")
+    sns.heatmap(df, ax=ax, cmap="YlOrRd", annot=True, fmt=".2f",
+                linewidths=0.5, linecolor="#1e2130",
+                cbar_kws={"label": "Avg fertility (tokens/word)"})
+    ax.set_title(f"Fertility heatmap — {title}", fontsize=12, color="white", pad=10)
+    ax.tick_params(colors="white", labelsize=8)
+    plt.xticks(rotation=40, ha="right", color="white")
+    plt.yticks(color="white")
+    ax.figure.axes[-1].tick_params(colors="white", labelsize=8)
+    ax.figure.axes[-1].yaxis.label.set_color("white")
+    plt.tight_layout()
+    img = fig_to_pil(fig)
+    plt.close(fig)
+    return img
+
+def plot_language_fertility_bar(result: dict, title: str):
+    lang_data = result.get("human_languages", {})
+    if not lang_data:
+        return None
+    langs = {lang: v["avg_fertility"] for lang, v in lang_data.items()
+             if isinstance(v, dict) and "avg_fertility" in v}
+    langs = dict(sorted(langs.items(), key=lambda x: x[1]))
+    colors = ["#d73027" if v > 3 else "#fdae61" if v > 2 else "#1a9850"
+              for v in langs.values()]
+    fig, ax = plt.subplots(figsize=(9, max(4, len(langs)*0.35)), facecolor="#0f1117")
+    ax.set_facecolor("#0f1117")
+    bars = ax.barh(list(langs.keys()), list(langs.values()), color=colors, height=0.7)
+    for bar, val in zip(bars, langs.values()):
+        ax.text(val + 0.02, bar.get_y() + bar.get_height()/2,
+                f"{val:.2f}", va="center", fontsize=8, color="white")
+    ax.axvline(1.0, color="#aaa", linestyle="--", linewidth=0.8, label="Ideal (1.0)")
+    ax.axvline(2.0, color="#fdae61", linestyle="--", linewidth=0.8, label="Acceptable (2.0)")
+    ax.axvline(4.0, color="#d73027", linestyle="--", linewidth=0.8, label="Poor (≥4.0)")
+    ax.set_xlabel("Avg fertility (tokens/word)", color="white")
+    ax.set_title(f"Per-language fertility — {title}", color="white", fontsize=11)
+    ax.tick_params(colors="white", labelsize=9)
+    ax.spines[["top", "right", "bottom", "left"]].set_color("#333")
+    ax.legend(fontsize=8, facecolor="#1e2130", labelcolor="white")
+    plt.tight_layout()
+    img = fig_to_pil(fig)
+    plt.close(fig)
+    return img
+
+def plot_compression_scatter(result: dict, title: str):
+    xs, ys, labels, cat_list = [], [], [], []
+    cat_colors = {}
+    cats = [c for c in result if not c.startswith("__") and isinstance(result[c], dict)]
+    for i, cat in enumerate(cats):
+        cat_colors[cat] = PALETTE[i % len(PALETTE)]
+        for sub, vals in result[cat].items():
+            if not isinstance(vals, dict):
+                continue
+            f = vals.get("avg_fertility"); c = vals.get("avg_byte_compression")
+            if f is not None and c is not None:
+                xs.append(c); ys.append(f)
+                labels.append(sub); cat_list.append(cat)
+    if not xs:
+        return None
+    fig, ax = plt.subplots(figsize=(9, 6), facecolor="#0f1117")
+    ax.set_facecolor("#0f1117")
+    for cat in set(cat_list):
+        idxs = [i for i, c in enumerate(cat_list) if c == cat]
+        ax.scatter([xs[i] for i in idxs], [ys[i] for i in idxs],
+                   color=cat_colors[cat], label=CATEGORY_LABELS.get(cat, cat),
+                   alpha=0.85, s=70, edgecolors="white", linewidths=0.3)
+    for i, lbl in enumerate(labels):
+        ax.annotate(lbl, (xs[i], ys[i]), fontsize=6.5, color="#ccc",
+                    xytext=(4, 3), textcoords="offset points")
+    ax.axhline(1.0, color="#aaa", linestyle="--", linewidth=0.8, label="Fertility=1.0")
+    ax.axhline(2.0, color="#fdae61", linestyle="--", linewidth=0.8, label="Fertility=2.0")
+    ax.set_xlabel("Byte compression (tokens/byte) — lower is better", color="white")
+    ax.set_ylabel("Fertility (tokens/word) — lower is better", color="white")
+    ax.set_title(f"Fertility vs byte compression — {title}", color="white", fontsize=11)
+    ax.tick_params(colors="white")
+    ax.spines[["top", "right", "bottom", "left"]].set_color("#333")
+    ax.legend(fontsize=8, facecolor="#1e2130", labelcolor="white")
+    plt.tight_layout()
+    img = fig_to_pil(fig)
+    plt.close(fig)
+    return img
+
+def plot_comparison_bar(results_dict: dict, metric: str = "avg_fertility"):
+    if not results_dict:
+        return None
+    cats = set()
+    data: dict[str, dict[str, float]] = {}
+    for tok_name, result in results_dict.items():
+        data[tok_name] = {}
+        for cat, subcats in result.items():
+            if cat.startswith("__") or not isinstance(subcats, dict):
+                continue
+            vals = [v.get(metric, 0) for v in subcats.values()
+                    if isinstance(v, dict) and metric in v]
+            if vals:
+                data[tok_name][cat] = round(sum(vals)/len(vals), 4)
+                cats.add(cat)
+    cats = sorted(cats)
+    tok_names = list(data.keys())
+    x = np.arange(len(cats))
+    width = 0.75 / max(len(tok_names), 1)
+    fig, ax = plt.subplots(figsize=(max(9, len(cats)*1.8), 5.5), facecolor="#0f1117")
+    ax.set_facecolor("#0f1117")
+    for i, name in enumerate(tok_names):
+        vals = [data[name].get(cat, 0) for cat in cats]
+        offset = x + i*width - (len(tok_names)-1)*width/2
+        bars = ax.bar(offset, vals, width*0.9, label=name,
+                      color=PALETTE[i % len(PALETTE)], alpha=0.88)
+        for bar, val in zip(bars, vals):
+            ax.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.01,
+                    f"{val:.2f}", ha="center", va="bottom", fontsize=7.5, color="white")
+    cat_labels = [CATEGORY_LABELS.get(c, c) for c in cats]
+    ax.set_xticks(x)
+    ax.set_xticklabels(cat_labels, rotation=20, ha="right", color="white", fontsize=9)
+    ax.set_ylabel(metric.replace("_", " ").title(), color="white")
+    ax.set_title(f"Tokenizer comparison — {metric.replace('_', ' ').title()}", color="white", fontsize=11)
+    ax.tick_params(colors="white")
+    ax.spines[["top", "right", "bottom", "left"]].set_color("#333")
+    ax.legend(fontsize=9, facecolor="#1e2130", labelcolor="white")
+    plt.tight_layout()
+    img = fig_to_pil(fig)
+    plt.close(fig)
+    return img
+
+def plot_fidelity_summary(results_dict: dict):
+    names = list(results_dict.keys())
+    failures = [r.get("__summary__", {}).get("fidelity_failure_count", 0)
+                for r in results_dict.values()]
+    fig, ax = plt.subplots(figsize=(max(5, len(names)*1.4), 4.5), facecolor="#0f1117")
+    ax.set_facecolor("#0f1117")
+    colors = ["#d73027" if f > 0 else "#1a9850" for f in failures]
+    bars = ax.bar(names, failures, color=colors, width=0.5)
+    for bar, val in zip(bars, failures):
+        ax.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.05,
+                str(val), ha="center", va="bottom", fontsize=10,
+                color="#d73027" if val > 0 else "#1a9850")
+    ax.set_ylabel("Fidelity failure count", color="white")
+    ax.set_title("Roundtrip fidelity failures", color="white", fontsize=11)
+    ax.tick_params(colors="white")
+    ax.spines[["top", "right", "bottom", "left"]].set_color("#333")
+    ax.set_ylim(bottom=0)
+    green_patch = mpatches.Patch(color="#1a9850", label="0 failures (pass)")
+    red_patch = mpatches.Patch(color="#d73027", label="Has failures")
+    ax.legend(handles=[green_patch, red_patch], fontsize=8,
+              facecolor="#1e2130", labelcolor="white")
+    plt.tight_layout()
+    img = fig_to_pil(fig)
+    plt.close(fig)
+    return img
+
+# ─────────────────────────────────────────────────────────────────
+# Core Gradio logic
+# ─────────────────────────────────────────────────────────────────
+
+def run_single_eval(model_id: str, tok_type: str, categories: list[str]):
+    if not model_id.strip():
+        return "⚠️ Please enter a model name.", None, None, None, None
+
+    try:
+        if tok_type == "HuggingFace (AutoTokenizer)":
+            tok = load_hf_tokenizer(model_id.strip())
+        else:
+            tok = load_tiktoken(model_id.strip())
+    except Exception:
+        return f"❌ Failed to load tokenizer:\n{traceback.format_exc()}", None, None, None, None
+
+    dataset_subset = {k: v for k, v in DATASET.items() if k in categories} if categories else DATASET
+    if not dataset_subset:
+        return "⚠️ Please select at least one dataset category.", None, None, None, None
+
+    try:
+        result = evaluate_tokenizer(tok, dataset_subset)
+    except Exception:
+        return f"❌ Evaluation error:\n{traceback.format_exc()}", None, None, None, None
+
+    s = result["__summary__"]
+    status = (
+        f"✅ **{model_id.strip()}** evaluated on {s['total_samples']} samples\n\n"
+        f"| Metric | Value |\n|--------|-------|\n"
+        f"| Overall avg fertility | `{s['overall_avg_fertility']}` |\n"
+        f"| Overall avg compression | `{s['overall_avg_compression']}` |\n"
+        f"| Fidelity failures | `{s['fidelity_failure_count']}` |"
+    )
+
+    heatmap = plot_fertility_heatmap(result, model_id.strip())
+    lang_bar = plot_language_fertility_bar(result, model_id.strip()) if "human_languages" in dataset_subset else None
+    scatter = plot_compression_scatter(result, model_id.strip())
+
+    rows = []
+    for cat, subcats in result.items():
+        if cat.startswith("__") or not isinstance(subcats, dict):
+            continue
+        for sub, vals in subcats.items():
+            if isinstance(vals, dict):
+                rows.append({
+                    "Category": CATEGORY_LABELS.get(cat, cat),
+                    "Subcategory": sub,
+                    "Avg tokens": vals.get("avg_tokens", 0),
+                    "Avg fertility": vals.get("avg_fertility", 0),
+                    "Avg compression": vals.get("avg_compression_ratio", 0),
+                    "Fidelity fails": vals.get("fidelity_failures", 0),
+                })
+    df = pd.DataFrame(rows)
+
+    return status, heatmap, lang_bar, scatter, df
+
+
+def run_compare_eval(
+    model_a: str, type_a: str,
+    model_b: str, type_b: str,
+    metric: str, categories: list[str],
+):
+    models = [(model_a.strip(), type_a), (model_b.strip(), type_b)]
+    models = [(m, t) for m, t in models if m]
+    if len(models) < 2:
+        return "⚠️ Please enter at least 2 model names.", None, None, None
+
+    tokenizers = {}
+    for model_id, tok_type in models:
+        try:
+            if tok_type == "HuggingFace (AutoTokenizer)":
+                tokenizers[model_id] = load_hf_tokenizer(model_id)
+            else:
+                tokenizers[model_id] = load_tiktoken(model_id)
+        except Exception:
+            return f"❌ Failed to load `{model_id}`:\n{traceback.format_exc()}", None, None, None
+
+    dataset_subset = {k: v for k, v in DATASET.items() if k in categories} if categories else DATASET
+
+    results_dict = {}
+    for name, tok in tokenizers.items():
+        try:
+            results_dict[name] = evaluate_tokenizer(tok, dataset_subset)
+        except Exception:
+            return f"❌ Evaluation failed for `{name}`:\n{traceback.format_exc()}", None, None, None
+
+    metric_key = {
+        "Fertility (lower = better)": "avg_fertility",
+        "Compression ratio": "avg_compression_ratio",
+        "Byte compression": "avg_byte_compression",
+    }.get(metric, "avg_fertility")
+
+    cmp_bar = plot_comparison_bar(results_dict, metric_key)
+    fid_bar = plot_fidelity_summary(results_dict)
+
+    rows = []
+    for name, result in results_dict.items():
+        s = result.get("__summary__", {})
+        rows.append({
+            "Tokenizer": name,
+            "Avg fertility": s.get("overall_avg_fertility"),
+            "Avg compression": s.get("overall_avg_compression"),
+            "Samples evaluated": s.get("total_samples"),
+            "Fidelity failures": s.get("fidelity_failure_count"),
+        })
+    df = pd.DataFrame(rows).sort_values("Avg fertility")
+
+    status = "✅ Comparison complete.\n\n**Leaderboard (lower fertility = better)**\n\n"
+    for _, row in df.iterrows():
+        status += f"- **{row['Tokenizer']}** — fertility `{row['Avg fertility']}`, failures `{row['Fidelity failures']}`\n"
+
+    return status, cmp_bar, fid_bar, df
+
+
+def tokenize_live(model_id: str, tok_type: str, text: str):
+    if not model_id.strip() or not text.strip():
+        return "Enter a model name and some text above.", ""
+    try:
+        if tok_type == "HuggingFace (AutoTokenizer)":
+            tok = load_hf_tokenizer(model_id.strip())
+        else:
+            tok = load_tiktoken(model_id.strip())
+        ids = tok.encode(text)
+        decoded = tok.decode(ids)
+        fid = "✅ Roundtrip OK" if text.strip() == decoded.strip() else "⚠️ Roundtrip mismatch"
+        info = (
+            f"**Token count:** {len(ids)} | "
+            f"**Fertility:** {len(ids)/max(1, len(text.split())):.2f} | "
+            f"**Compression:** {len(ids)/max(1, len(text)):.3f} | "
+            f"**Fidelity:** {fid}"
+        )
+        ids_str = " ".join(str(i) for i in ids[:100])
+        if len(ids) > 100:
+            ids_str += f" … (+{len(ids)-100} more)"
+        return info, ids_str
+    except Exception:
+        return f"❌ Error:\n{traceback.format_exc()}", ""
+
+# ─────────────────────────────────────────────────────────────────
+# Gradio UI
+# ─────────────────────────────────────────────────────────────────
+
+CATEGORY_CHOICES = list(DATASET.keys())
+CATEGORY_DEFAULT = CATEGORY_CHOICES
+
+TYPE_CHOICES = ["HuggingFace (AutoTokenizer)", "tiktoken"]
+
+EXAMPLE_HF = ["bert-base-multilingual-cased", "xlm-roberta-base",
+              "google/mt5-base", "facebook/mbart-large-50"]
+EXAMPLE_TIKTOKEN = ["cl100k_base", "o200k_base", "p50k_base"]
+
+with gr.Blocks(title="TokenizerBench", theme=gr.themes.Soft()) as demo:
+    gr.Markdown(
+        """# 🤗 TokenizerBench
+Evaluate and compare tokenizers on multilingual text, code, scientific formulas, and edge cases.
+Built on the [TokenizerBench dataset](https://huggingface.co/datasets).
+"""
+    )
+
+    with gr.Tabs():
+
+        # ── Tab 1: Playground ────────────────────────────────────
+        with gr.Tab("🧪 Playground"):
+            gr.Markdown("### Live tokenization — try any text")
+            with gr.Row():
+                with gr.Column(scale=1):
+                    live_model = gr.Textbox(label="Model name / encoding",
+                                            placeholder="bert-base-multilingual-cased",
+                                            value="bert-base-multilingual-cased")
+                    live_type = gr.Dropdown(TYPE_CHOICES, value=TYPE_CHOICES[0],
+                                            label="Tokenizer type")
+                with gr.Column(scale=2):
+                    live_text = gr.Textbox(
+                        label="Input text",
+                        placeholder="Type or paste anything…",
+                        lines=4,
+                        value="The quick brown fox jumps over the lazy dog. 快速的棕色狐狸跳过了懒狗。",
+                    )
+            live_btn = gr.Button("Tokenize", variant="primary")
+            live_info = gr.Markdown("Metrics will appear here.")
+            live_ids = gr.Textbox(label="Token IDs", lines=2, interactive=False)
+            live_btn.click(tokenize_live, [live_model, live_type, live_text],
+                           [live_info, live_ids])
+
+            gr.Markdown("---\n### Dataset samples — click to load into the text box")
+            for cat_key, cat_label in CATEGORY_LABELS.items():
+                with gr.Accordion(cat_label, open=False):
+                    for sub, samples in DATASET[cat_key].items():
+                        with gr.Row():
+                            for s in samples[:3]:
+                                btn = gr.Button(s[:60] + ("…" if len(s) > 60 else ""),
+                                                size="sm")
+                                btn.click(lambda t=s: t, outputs=live_text)
+
+        # ── Tab 2: Evaluate ──────────────────────────────────────
+        with gr.Tab("📊 Evaluate"):
+            gr.Markdown("### Evaluate a single tokenizer against the full dataset")
+            with gr.Row():
+                with gr.Column(scale=1):
+                    eval_model = gr.Textbox(label="Model name / encoding",
+                                            placeholder="xlm-roberta-base",
+                                            value="bert-base-multilingual-cased")
+                    eval_type = gr.Dropdown(TYPE_CHOICES, value=TYPE_CHOICES[0],
+                                            label="Tokenizer type")
+                    eval_cats = gr.CheckboxGroup(
+                        CATEGORY_CHOICES, value=CATEGORY_DEFAULT,
+                        label="Dataset categories to evaluate",
+                    )
+                    eval_btn = gr.Button("Run evaluation", variant="primary")
+                with gr.Column(scale=2):
+                    eval_status = gr.Markdown("Results will appear here.")
+
+            eval_table = gr.Dataframe(label="Per-subcategory results", wrap=True)
+
+            with gr.Tabs():
+                with gr.Tab("Fertility heatmap"):
+                    eval_heatmap = gr.Image(label="Heatmap", type="pil")
+                with gr.Tab("Language fertility bar"):
+                    eval_langbar = gr.Image(label="Language fertility", type="pil")
+                with gr.Tab("Fertility vs compression"):
+                    eval_scatter = gr.Image(label="Scatter", type="pil")
+
+            eval_btn.click(
+                run_single_eval,
+                [eval_model, eval_type, eval_cats],
+                [eval_status, eval_heatmap, eval_langbar, eval_scatter, eval_table],
+            )
+
+        # ── Tab 3: Compare ───────────────────────────────────────
+        with gr.Tab("⚖️ Compare"):
+            gr.Markdown("### Compare two tokenizers side-by-side")
+            with gr.Row():
+                with gr.Column():
+                    gr.Markdown("**Tokenizer A**")
+                    cmp_model_a = gr.Textbox(label="Model A", value="bert-base-multilingual-cased")
+                    cmp_type_a = gr.Dropdown(TYPE_CHOICES, value=TYPE_CHOICES[0], label="Type A")
+                with gr.Column():
+                    gr.Markdown("**Tokenizer B**")
+                    cmp_model_b = gr.Textbox(label="Model B", value="xlm-roberta-base")
+                    cmp_type_b = gr.Dropdown(TYPE_CHOICES, value=TYPE_CHOICES[0], label="Type B")
+
+            with gr.Row():
+                cmp_metric = gr.Dropdown(
+                    ["Fertility (lower = better)", "Compression ratio", "Byte compression"],
+                    value="Fertility (lower = better)",
+                    label="Comparison metric",
+                )
+                cmp_cats = gr.CheckboxGroup(
+                    CATEGORY_CHOICES, value=CATEGORY_DEFAULT,
+                    label="Dataset categories",
+                )
+
+            cmp_btn = gr.Button("Compare", variant="primary")
+            cmp_status = gr.Markdown("Results will appear here.")
+            cmp_table = gr.Dataframe(label="Summary leaderboard", wrap=True)
+
+            with gr.Tabs():
+                with gr.Tab("Category comparison bar"):
+                    cmp_bar_img = gr.Image(label="Grouped bar", type="pil")
+                with gr.Tab("Fidelity failures"):
+                    cmp_fid_img = gr.Image(label="Fidelity", type="pil")
+
+            cmp_btn.click(
+                run_compare_eval,
+                [cmp_model_a, cmp_type_a, cmp_model_b, cmp_type_b, cmp_metric, cmp_cats],
+                [cmp_status, cmp_bar_img, cmp_fid_img, cmp_table],
+            )
+
+    gr.Markdown(
+        """---
+**Dataset categories:** Human languages (8 languages) · Programming languages (Python, JS, SQL, Rust) · Scientific formulas (algebra, calculus, physics, stats) · Edge cases (whitespace, long tokens, mixed scripts)
+
+**Metrics explained:**
+- **Fertility** — tokens per word (lower = more efficient; ≥4 = poor coverage)
+- **Compression ratio** — tokens per character
+- **Fidelity** — roundtrip encode→decode produces identical text (must be 1.0)
+"""
+    )
+
+if __name__ == "__main__":
+    demo.launch()
requirements.txt ADDED
@@ -0,0 +1,10 @@
+gradio>=4.0.0
+transformers>=4.38.0
+tiktoken>=0.6.0
+torch>=2.0.0
+sentencepiece>=0.1.99
+matplotlib>=3.8.0
+seaborn>=0.13.0
+pandas>=2.0.0
+numpy>=1.26.0
+Pillow>=10.0.0