---
language: bcl
language_name: BCL
language_family: austronesian_philippine_central
tags:
  - wikilangs
  - nlp
  - tokenizer
  - embeddings
  - n-gram
  - markov
  - wikipedia
  - monolingual
  - family-austronesian_philippine_central
license: mit
library_name: wikilangs
pipeline_tag: feature-extraction
datasets:
  - omarkamali/wikipedia-monthly
dataset_info:
  name: wikipedia-monthly
  description: Monthly snapshots of Wikipedia articles across 300+ languages
metrics:
  - name: best_compression_ratio
    type: compression
    value: 4.640
  - name: best_isotropy
    type: isotropy
    value: 0.8200
  - name: vocabulary_size
    type: vocab
    value: 139464
generated: 2025-12-28
---

# BCL - Wikilangs Models
## Comprehensive Research Report & Full Ablation Study

This repository contains NLP models trained and evaluated by Wikilangs on **BCL** (Central Bikol) Wikipedia data.
We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.

## 📋 Repository Contents

### Models & Assets

- Tokenizers (8k, 16k, 32k, 64k)
- N-gram models (2, 3, 4-gram)
- Markov chains (context sizes 1, 2, 3, and 4)
- Subword N-gram and Markov chains
- Embeddings in various sizes and dimensions
- Language Vocabulary
- Language Statistics

![Performance Dashboard](visualizations/performance_dashboard.png)

### Analysis and Evaluation

- [1. Tokenizer Evaluation](#1-tokenizer-evaluation)
- [2. N-gram Model Evaluation](#2-n-gram-model-evaluation)
- [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
- [4. Vocabulary Analysis](#4-vocabulary-analysis)
- [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
- [6. Summary & Recommendations](#6-summary--recommendations)
- [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
- [Visualizations Index](#visualizations-index)

---
## 1. Tokenizer Evaluation

![Tokenizer Compression](visualizations/tokenizer_compression.png)

### Results

| Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
|------------|-------------|---------------|----------|--------------|
| **8k** | 3.849x | 3.74 | 0.0148% | 391,873 |
| **16k** | 4.154x | 4.04 | 0.0160% | 363,086 |
| **32k** | 4.421x | 4.30 | 0.0170% | 341,132 |
| **64k** | 4.640x 🏆 | 4.51 | 0.0178% | 325,066 |

### Tokenization Examples

Below are sample sentences tokenized with each vocabulary size:

**Sample 1:** `REDIRECT An Sanduguan`

| Vocab | Tokens | Count |
|-------|--------|-------|
| 8k | `▁re dire ct ▁an ▁sand ug uan` | 7 |
| 16k | `▁re dire ct ▁an ▁sand ug uan` | 7 |
| 32k | `▁re direct ▁an ▁sand uguan` | 5 |
| 64k | `▁re direct ▁an ▁sand uguan` | 5 |

**Sample 2:** `An  sarong komyun asin banwaan sa Provincia nin Cosenza sa rehiyon Calabria kan ...`

| Vocab | Tokens | Count |
|-------|--------|-------|
| 8k | `▁an ▁sarong ▁komyun ▁asin ▁banwaan ▁sa ▁provincia ▁nin ▁cos enza ... (+6 more)` | 16 |
| 16k | `▁an ▁sarong ▁komyun ▁asin ▁banwaan ▁sa ▁provincia ▁nin ▁cosenza ▁sa ... (+5 more)` | 15 |
| 32k | `▁an ▁sarong ▁komyun ▁asin ▁banwaan ▁sa ▁provincia ▁nin ▁cosenza ▁sa ... (+5 more)` | 15 |
| 64k | `▁an ▁sarong ▁komyun ▁asin ▁banwaan ▁sa ▁provincia ▁nin ▁cosenza ▁sa ... (+5 more)` | 15 |

**Sample 3:** `An  sarong taon sa Gregoryanong kalendaryo. Enero Pebrero Marso Abril Mayo...`

| Vocab | Tokens | Count |
|-------|--------|-------|
| 8k | `▁an ▁sarong ▁taon ▁sa ▁gregoryanong ▁kalendaryo . ▁enero ▁pebrero ▁marso ... (+9 more)` | 19 |
| 16k | `▁an ▁sarong ▁taon ▁sa ▁gregoryanong ▁kalendaryo . ▁enero ▁pebrero ▁marso ... (+9 more)` | 19 |
| 32k | `▁an ▁sarong ▁taon ▁sa ▁gregoryanong ▁kalendaryo . ▁enero ▁pebrero ▁marso ... (+9 more)` | 19 |
| 64k | `▁an ▁sarong ▁taon ▁sa ▁gregoryanong ▁kalendaryo . ▁enero ▁pebrero ▁marso ... (+9 more)` | 19 |


### Key Findings

- **Best Compression:** 64k achieves 4.640x compression
- **Lowest UNK Rate:** 8k with 0.0148% unknown tokens
- **Trade-off:** Larger vocabularies improve compression but increase model size
- **Recommendation:** 32k vocabulary provides optimal balance for production use
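
To check these numbers against your own text, the compression ratio is simply characters divided by tokens. Below is a minimal sketch, assuming the tokenizers load with the `sentencepiece` library (the `▁` markers in the samples above suggest a SentencePiece-style model); the filename is hypothetical.

```python
# Minimal sketch (not the official loader): compression ratio in chars/token
# for a SentencePiece-style tokenizer. The model filename is hypothetical.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="bcl_tokenizer_32k.model")  # hypothetical path

text = "An sarong taon sa Gregoryanong kalendaryo."
tokens = sp.encode(text, out_type=str)

print(tokens)
print(f"compression: {len(text) / len(tokens):.2f} chars/token")
```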

---
## 2. N-gram Model Evaluation

![N-gram Perplexity](visualizations/ngram_perplexity.png)

![N-gram Coverage](visualizations/ngram_coverage.png)

### Results

Each n-gram order appears twice below, corresponding to the word-level and subword-level model variants listed under Repository Contents.
| N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
|--------|------------|---------|----------------|------------------|-------------------|
| **2-gram** | 31,343 🏆 | 14.94 | 180,870 | 14.8% | 31.9% |
| **2-gram** | 262 🏆 | 8.03 | 8,566 | 68.4% | 98.8% |
| **3-gram** | 108,578 | 16.73 | 332,655 | 6.5% | 18.2% |
| **3-gram** | 2,285 | 11.16 | 64,437 | 30.5% | 69.8% |
| **4-gram** | 210,030 | 17.68 | 511,491 | 6.6% | 14.4% |
| **4-gram** | 13,379 | 13.71 | 345,622 | 17.2% | 41.0% |

### Top 5 N-grams by Size

**2-grams:**

| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `. an` | 41,934 |
| 2 | `sa mga` | 30,441 |
| 3 | `an mga` | 27,397 |
| 4 | `, asin` | 26,685 |
| 5 | `, an` | 24,473 |

**3-grams:**

| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `kategorya : mga` | 16,293 |
| 2 | `. an mga` | 6,827 |
| 3 | `panluwas na takod` | 5,537 |
| 4 | `mga panluwas na` | 4,931 |
| 5 | `toltolan kategorya :` | 4,124 |

**4-grams:**

| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `mga panluwas na takod` | 4,635 |
| 2 | `toltolan kategorya : mga` | 2,861 |
| 3 | `toltolan mga panluwas na` | 2,801 |
| 4 | `— — — —` | 2,785 |
| 5 | `. igwa ining sukol` | 2,225 |


### Key Findings

- **Best Perplexity:** 2-gram with 262
- **Entropy Trend:** Increases with larger n-grams here, reflecting data sparsity at higher orders
- **Coverage:** Top-1000 4-gram patterns cover ~41% of the corpus
- **Recommendation:** 2-gram for the lowest measured perplexity; 3- or 4-gram when longer context outweighs sparsity (see the sketch below)
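
For reference, perplexity and entropy are linked by perplexity = 2^entropy throughout this report. Below is a minimal sketch of a bigram model with add-one smoothing on a toy corpus; the shipped models may use a different smoothing scheme.

```python
# Minimal sketch: bigram perplexity with add-one (Laplace) smoothing.
# The toy corpus and self-evaluation below are for illustration only.
import math
from collections import Counter

corpus = "an sarong taon sa gregoryanong kalendaryo . an mga banwaan sa rehiyon".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams)

def logprob(w1, w2):
    # log2 P(w2 | w1) with add-one smoothing
    return math.log2((bigrams[(w1, w2)] + 1) / (unigrams[w1] + V))

pairs = list(zip(corpus, corpus[1:]))
cross_entropy = -sum(logprob(w1, w2) for w1, w2 in pairs) / len(pairs)
print(f"entropy: {cross_entropy:.2f} bits, perplexity: {2 ** cross_entropy:.1f}")
```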

---
## 3. Markov Chain Evaluation

![Markov Entropy](visualizations/markov_entropy.png)

![Markov Branching](visualizations/markov_branching.png)

### Results

Each context size appears twice below, corresponding to the word-level and subword-level chain variants listed under Repository Contents.
| Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
|---------|-------------|------------|------------------|-----------------|----------------|
| **1** | 0.6497 | 1.569 | 5.59 | 379,065 | 35.0% |
| **1** | 1.0949 | 2.136 | 6.69 | 6,611 | 0.0% |
| **2** | 0.3654 | 1.288 | 2.19 | 2,116,590 | 63.5% |
| **2** | 0.6035 | 1.519 | 3.87 | 44,194 | 39.6% |
| **3** | 0.1662 | 1.122 | 1.36 | 4,629,958 | 83.4% |
| **3** | 0.7134 | 1.640 | 3.84 | 171,168 | 28.7% |
| **4** | 0.0685 🏆 | 1.049 | 1.12 | 6,293,312 | 93.1% |
| **4** | 0.6518 🏆 | 1.571 | 2.96 | 656,881 | 34.8% |

### Generated Text Samples

Below are text samples generated from each Markov chain model:

**Context Size 1:**

1. `, sarong law jack white house of eastern europe award hale sa ' affaire jean nabiribid`
2. `sa mga padalian na english rosalía nagpirma sa banwaan kan ikasampulong kabilogan nin edukasyon si a...`
3. `na mga pagpreparar nin mayor na pigtuturing kan huring pararawitdawit , o tungkod " ) .`

**Context Size 2:**

1. `. an designadong zip code kaini iyo . susog ki milagros perfecto sanchez sa halipot na usipon`
2. `sa mga osipon sa pilipino na may titulong paghinanyog man , siya nagpoon na mag - audition`
3. `an mga botelya , pakete nin kakanon asin an responsibilidad . sa ibaba sa kabtang kaini .`

**Context Size 3:**

1. `kategorya : mga 2016 na kagadanan kategorya : mga tataramon na mansakan , iyo an pinagrekonstruhir n...`
2. `. an mga bitis nin manok sarong seryosong peligro nin pagkahilo sa susunod na taon huli sa iyo`
3. `panluwas na takod philatlas . com philippine standard geographic code local governance performance m...`

**Context Size 4:**

1. `mga panluwas na takod inactive volcanoes page ( arkibo ) kategorya : mga unibersidad asin kolehiyo s...`
2. `toltolan kategorya : mga armadong sanga kan mga partido pulitika kategorya : mga organisasyon natugd...`
3. `toltolan mga panluwas na takod philatlas . com philippine standard geographic code local governance ...`


### Key Findings

- **Best Predictability:** Context-4 with 93.1% predictability
- **Branching Factor:** Decreases with context size (more deterministic)
- **Memory Trade-off:** Larger contexts require more storage (up to 6,293,312 unique contexts at context-4)
- **Recommendation:** Context-3 or Context-4 for text generation (see the sketch below)
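
Generation from these chains is weighted sampling over the continuations observed for the current context. A minimal sketch for a context-2 chain built from a toy corpus; the chain files shipped in this repo have their own storage format.

```python
# Minimal sketch: build a context-2 Markov chain from a toy corpus and
# sample from it. Not the loader for this repo's chain files.
import random
from collections import Counter, defaultdict

corpus = "an mga banwaan sa rehiyon . an mga tataramon sa rehiyon .".split()

chain = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    chain[(w1, w2)][w3] += 1

def generate(seed, n_tokens=10):
    out = list(seed)
    for _ in range(n_tokens):
        nexts = chain.get(tuple(out[-2:]))
        if not nexts:          # unseen context: stop
            break
        words, weights = zip(*nexts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate(("an", "mga")))
```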

---
## 4. Vocabulary Analysis

![Zipf's Law](visualizations/zipf_law.png)

![Top Words](visualizations/top20_words.png)

![Coverage Curve](visualizations/vocab_coverage.png)

### Statistics

| Metric | Value |
|--------|-------|
| Vocabulary Size | 139,464 |
| Total Tokens | 6,306,562 |
| Mean Frequency | 45.22 |
| Median Frequency | 4 |
| Frequency Std Dev | 1750.04 |

### Most Common Words

| Rank | Word | Frequency |
|------|------|-----------|
| 1 | sa | 340,332 |
| 2 | na | 337,956 |
| 3 | an | 230,638 |
| 4 | kan | 226,231 |
| 5 | mga | 183,688 |
| 6 | nin | 132,320 |
| 7 | asin | 125,887 |
| 8 | sarong | 62,639 |
| 9 | si | 54,499 |
| 10 | the | 44,508 |

### Least Common Words (from vocabulary)

| Rank | Word | Frequency |
|------|------|-----------|
| 1 | zhaparova | 2 |
| 2 | altynbekov | 2 |
| 3 | wanatabe | 2 |
| 4 | megapaniki | 2 |
| 5 | kordon | 2 |
| 6 | sobringaran | 2 |
| 7 | khanid | 2 |
| 8 | ganish | 2 |
| 9 | archdioceseofcaceres | 2 |
| 10 | niceno | 2 |

### Zipf's Law Analysis

| Metric | Value |
|--------|-------|
| Zipf Coefficient | 1.0291 |
| R² (Goodness of Fit) | 0.993065 |
| Adherence Quality | **excellent** |

### Coverage Analysis

| Top N Words | Coverage |
|-------------|----------|
| Top 100 | 41.8% |
| Top 1,000 | 62.8% |
| Top 5,000 | 79.1% |
| Top 10,000 | 85.2% |

### Key Findings

- **Zipf Compliance:** R²=0.9931 indicates excellent adherence to Zipf's law
- **High Frequency Dominance:** Top 100 words cover 41.8% of corpus
- **Long Tail:** 129,464 words needed for remaining 14.8% coverage
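
The Zipf coefficient and R² above come from a linear fit in log-log space. A minimal sketch using just the top-10 frequencies from the table (the reported fit uses the full vocabulary, so the numbers will differ).

```python
# Minimal sketch: Zipf coefficient as the slope of a log-log
# frequency-rank fit, with R-squared for goodness of fit.
import numpy as np

freqs = np.array([340332, 337956, 230638, 226231, 183688,
                  132320, 125887, 62639, 54499, 44508], dtype=float)
ranks = np.arange(1, len(freqs) + 1)

slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
pred = slope * np.log(ranks) + intercept
residuals = np.log(freqs) - pred
r2 = 1 - np.sum(residuals**2) / np.sum((np.log(freqs) - np.log(freqs).mean())**2)

print(f"zipf coefficient: {abs(slope):.4f}, R^2: {r2:.4f}")
```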

---
## 5. Word Embeddings Evaluation

![Embedding Isotropy](visualizations/embedding_isotropy.png)

![Similarity Matrix](visualizations/embedding_similarity.png)

![t-SNE Words](visualizations/tsne_words.png)

![t-SNE Sentences](visualizations/tsne_sentences.png)

### Model Comparison

| Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
|-------|------------|-----------|----------|----------|----------|
| **mono_32d** | 78,307 | 32 | 3.325 | 0.855 | 0.8200 🏆 |
| **mono_64d** | 78,307 | 64 | 3.871 | 0.899 | 0.8194 |
| **mono_128d** | 78,307 | 128 | 4.639 | 0.920 | 0.8065 |
| **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |

*The `embeddings_enhanced` row reports empty statistics; its evaluation output was not available when this report was generated.*

### Key Findings

- **Best Isotropy:** mono_32d with 0.8200 (more uniform distribution)
- **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
- **Vocabulary Coverage:** All mono_* models cover 78,307 words
- **Recommendation:** 64d for balanced semantic capture and efficiency
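
Isotropy in this report is the ratio of the smallest to the largest singular value of the embedding matrix (see the glossary below). A minimal sketch on random stand-in vectors; loading the shipped `mono_*` embeddings is not shown here.

```python
# Minimal sketch: isotropy as min/max singular value of an embedding
# matrix. Random vectors stand in for the shipped mono_32d embeddings.
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 32))          # stand-in: 1000 words x 32 dims

s = np.linalg.svd(emb, compute_uv=False)   # singular values, largest first
print(f"isotropy: {s[-1] / s[0]:.4f}")
```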

---
## 6. Summary & Recommendations

![Performance Dashboard](visualizations/performance_dashboard.png)

### Production Recommendations

| Component | Recommended | Rationale |
|-----------|-------------|-----------|
| Tokenizer | **32k BPE** | Strong compression (4.42x) with low UNK rate |
| N-gram | **2-gram** | Lowest perplexity (262) |
| Markov | **Context-4** | Highest predictability (93.1%) |
| Embeddings | **64d** | Balanced semantic capture and isotropy |

---
## Appendix: Metrics Glossary & Interpretation Guide

This section provides definitions, intuitions, and guidance for interpreting the metrics used throughout this report.

### Tokenizer Metrics

**Compression Ratio**
> *Definition:* The ratio of characters to tokens (chars/token). Measures how efficiently the tokenizer represents text.
>
> *Intuition:* Higher compression means fewer tokens needed to represent the same text, reducing sequence lengths for downstream models. A 3x compression means ~3 characters per token on average.
>
> *What to seek:* Higher is generally better for efficiency, but extremely high compression may indicate overly aggressive merging that loses morphological information.

**Average Token Length (Fertility)**
> *Definition:* Mean number of characters per token produced by the tokenizer.
>
> *Intuition:* Reflects the granularity of tokenization. Longer tokens capture more context but may struggle with rare words; shorter tokens are more flexible but increase sequence length.
>
> *What to seek:* Balance between 2-5 characters for most languages. Arabic/morphologically-rich languages may benefit from slightly longer tokens.

**Unknown Token Rate (OOV Rate)**
> *Definition:* Percentage of tokens that map to the unknown/UNK token, indicating words the tokenizer cannot represent.
>
> *Intuition:* Lower OOV means better vocabulary coverage. High OOV indicates the tokenizer encounters many unseen character sequences.
>
> *What to seek:* Below 1% is excellent; below 5% is acceptable. BPE tokenizers typically achieve very low OOV due to subword fallback.

### N-gram Model Metrics

**Perplexity**
> *Definition:* Measures how "surprised" the model is by test data. Mathematically: 2^(cross-entropy). Lower values indicate better prediction.
>
> *Intuition:* If perplexity is 100, the model is as uncertain as if choosing uniformly among 100 options at each step. A perplexity of 10 means effectively choosing among 10 equally likely options.
>
> *What to seek:* Lower is better. With ample data, perplexity typically decreases as n-gram order grows; on smaller corpora sparsity can reverse the trend, as in this report. Values vary widely by language and corpus size.

**Entropy**
> *Definition:* Average information content (in bits) needed to encode the next token given the context. Related to perplexity: perplexity = 2^entropy.
>
> *Intuition:* High entropy means high uncertainty/randomness; low entropy means predictable patterns. Natural language typically has entropy between 1-4 bits per character.
>
> *What to seek:* Lower entropy indicates more predictable text patterns. With ample data, entropy tends to fall as n-gram size increases; sparse higher-order counts can push it up instead, as seen here.

**Coverage (Top-K)**
> *Definition:* Percentage of corpus occurrences explained by the top K most frequent n-grams.
>
> *Intuition:* High coverage with few patterns indicates repetitive/formulaic text; low coverage suggests diverse vocabulary usage.
>
> *What to seek:* Depends on use case. For language modeling, moderate coverage (40-60% with top-1000) is typical for natural text.

### Markov Chain Metrics

**Average Entropy**
> *Definition:* Mean entropy across all contexts, measuring average uncertainty in next-word prediction.
>
> *Intuition:* Lower entropy means the model is more confident about what comes next. Context-1 has high entropy (many possible next words); Context-4 has low entropy (few likely continuations).
>
> *What to seek:* Decreasing entropy with larger context sizes. Very low entropy (<0.1) indicates highly deterministic transitions.

**Branching Factor**
> *Definition:* Average number of unique next tokens observed for each context.
>
> *Intuition:* High branching = many possible continuations (flexible but uncertain); low branching = few options (predictable but potentially repetitive).
>
> *What to seek:* Branching factor should decrease with context size. Values near 1.0 indicate nearly deterministic chains.

**Predictability**
> *Definition:* Derived metric: (1 - normalized_entropy) × 100%. Indicates how deterministic the model's predictions are.
>
> *Intuition:* 100% predictability means the next word is always certain; 0% means completely random. Real text falls between these extremes.
>
> *What to seek:* Higher predictability for text generation quality, but too high (>98%) may produce repetitive output.
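
The table values are consistent with the normalized entropy simply being the average entropy clamped to [0, 1]; a minimal sketch under that assumption:

```python
# Sketch of the predictability formula, assuming normalized entropy is the
# average entropy clamped to [0, 1] (this reproduces the Markov table values).
def predictability(avg_entropy: float) -> float:
    return max(0.0, 1.0 - avg_entropy) * 100

print(f"{predictability(0.6497):.1f}%")  # context-1 row: 35.0%
print(f"{predictability(0.0685):.1f}%")  # context-4 row: ~93.1%
```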

### Vocabulary & Zipf's Law Metrics

**Zipf's Coefficient**
> *Definition:* The slope of the log-log plot of word frequency vs. rank (reported here as a magnitude). Zipf's law predicts a slope of approximately -1.
>
> *Intuition:* A coefficient near 1 in magnitude indicates the corpus follows natural language patterns where a few words are very common and most words are rare.
>
> *What to seek:* Magnitudes between 0.8 and 1.2 indicate a healthy natural language distribution. Deviations may suggest domain-specific or artificial text.

**R² (Coefficient of Determination)**
> *Definition:* Measures how well the linear fit explains the frequency-rank relationship. Ranges from 0 to 1.
>
> *Intuition:* R² near 1.0 means the data closely follows Zipf's law; lower values indicate deviation from expected word frequency patterns.
>
> *What to seek:* R² > 0.95 is excellent; > 0.99 indicates near-perfect Zipf adherence typical of large natural corpora.

**Vocabulary Coverage**
> *Definition:* Cumulative percentage of corpus tokens accounted for by the top N words.
>
> *Intuition:* Shows how concentrated word usage is. If top-100 words cover 50% of text, the corpus relies heavily on common words.
>
> *What to seek:* Top-100 covering 30-50% is typical. Higher coverage indicates more repetitive text; lower suggests richer vocabulary.

### Word Embedding Metrics

**Isotropy**
> *Definition:* Measures how uniformly distributed vectors are in the embedding space. Computed as the ratio of minimum to maximum singular values.
>
> *Intuition:* High isotropy (near 1.0) means vectors spread evenly in all directions; low isotropy means vectors cluster in certain directions, reducing expressiveness.
>
> *What to seek:* Higher isotropy generally indicates better-quality embeddings. Values > 0.1 are reasonable; > 0.3 is good. Lower-dimensional embeddings tend to have higher isotropy.

**Average Norm**
> *Definition:* Mean magnitude (L2 norm) of word vectors in the embedding space.
>
> *Intuition:* Indicates the typical "length" of vectors. Consistent norms suggest stable training; high variance may indicate some words are undertrained.
>
> *What to seek:* Relatively consistent norms across models. The absolute value matters less than consistency (low std deviation).

**Cosine Similarity**
> *Definition:* Measures angular similarity between vectors, ranging from -1 (opposite) to 1 (identical direction).
>
> *Intuition:* Words with similar meanings should have high cosine similarity. This is the standard metric for semantic relatedness in embeddings.
>
> *What to seek:* Semantically related words should score > 0.5; unrelated words should be near 0. Synonyms often score > 0.7.
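
For reference, a minimal cosine similarity helper (nothing repo-specific):

```python
# Cosine similarity between two vectors.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(np.array([1.0, 2.0]), np.array([2.0, 4.0])))  # 1.0: same direction
```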

**t-SNE Visualization**
> *Definition:* t-Distributed Stochastic Neighbor Embedding - a dimensionality reduction technique that preserves local structure for visualization.
>
> *Intuition:* Clusters in t-SNE plots indicate groups of semantically related words. Spread indicates vocabulary diversity; tight clusters suggest semantic coherence.
>
> *What to seek:* Meaningful clusters (e.g., numbers together, verbs together). Avoid over-interpreting distances - t-SNE preserves local, not global, structure.

### General Interpretation Guidelines

1. **Compare within model families:** Metrics are most meaningful when comparing models of the same type (e.g., 8k vs 64k tokenizer).
2. **Consider trade-offs:** Better performance on one metric often comes at the cost of another (e.g., compression vs. OOV rate).
3. **Context matters:** Optimal values depend on downstream tasks. Text generation may prioritize different metrics than classification.
4. **Corpus influence:** All metrics are influenced by corpus characteristics. Wikipedia text differs from social media or literature.
5. **Language-specific patterns:** Morphologically rich languages (like Arabic) may show different optimal ranges than analytic languages.


### Visualizations Index

| Visualization | Description |
|---------------|-------------|
| Tokenizer Compression | Compression ratios by vocabulary size |
| Tokenizer Fertility | Average token length by vocabulary |
| Tokenizer OOV | Unknown token rates |
| Tokenizer Total Tokens | Total tokens by vocabulary |
| N-gram Perplexity | Perplexity by n-gram size |
| N-gram Entropy | Entropy by n-gram size |
| N-gram Coverage | Top pattern coverage |
| N-gram Unique | Unique n-gram counts |
| Markov Entropy | Entropy by context size |
| Markov Branching | Branching factor by context |
| Markov Contexts | Unique context counts |
| Zipf's Law | Frequency-rank distribution with fit |
| Vocab Frequency | Word frequency distribution |
| Top 20 Words | Most frequent words |
| Vocab Coverage | Cumulative coverage curve |
| Embedding Isotropy | Vector space uniformity |
| Embedding Norms | Vector magnitude distribution |
| Embedding Similarity | Word similarity heatmap |
| Nearest Neighbors | Similar words for key terms |
| t-SNE Words | 2D word embedding visualization |
| t-SNE Sentences | 2D sentence embedding visualization |
| Position Encoding | Encoding method comparison |
| Model Sizes | Storage requirements |
| Performance Dashboard | Comprehensive performance overview |

---
## About This Project

### Data Source

Models trained on [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly) - a monthly snapshot of Wikipedia articles across 300+ languages.

### Project

A project by **[Wikilangs](https://wikilangs.org)** - Open-source NLP models for every Wikipedia language.

### Maintainer

[Omar Kamali](https://omarkamali.com) - [Omneity Labs](https://omneitylabs.com)

### Citation

If you use these models in your research, please cite:

```bibtex
@misc{wikilangs2025,
  author = {Kamali, Omar},
  title = {Wikilangs: Open NLP Models for Wikipedia Languages},
  year = {2025},
  publisher = {HuggingFace},
  url = {https://huggingface.co/wikilangs},
  institution = {Omneity Labs}
}
```

### License

MIT License - Free for academic and commercial use.

### Links

- 🌐 Website: [wikilangs.org](https://wikilangs.org)
- 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
- 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
- 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)

---
*Generated by Wikilangs Models Pipeline*

*Report Date: 2025-12-28 00:25:48*