Karachay-Balkar - Wikilangs Models
Comprehensive Research Report & Full Ablation Study
This repository contains NLP models trained and evaluated by Wikilangs, specifically on Karachay-Balkar Wikipedia data. We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.
Repository Contents
Models & Assets
- Tokenizers (8k, 16k, 32k, 64k)
- N-gram models (2, 3, 4, 5-gram)
- Markov chains (context of 1, 2, 3, 4 and 5)
- Subword N-gram and Markov chains
- Embeddings in various sizes and dimensions (aligned and unaligned)
- Language Vocabulary
- Language Statistics
Analysis and Evaluation
- 1. Tokenizer Evaluation
- 2. N-gram Model Evaluation
- 3. Markov Chain Evaluation
- 4. Vocabulary Analysis
- 5. Word Embeddings Evaluation
- 6. Morphological Analysis (Experimental)
- 7. Summary & Recommendations
- Metrics Glossary
- Visualizations Index
1. Tokenizer Evaluation
Results
| Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
|---|---|---|---|---|
| 8k | 3.832x | 3.84 | 0.1001% | 359,596 |
| 16k | 4.195x | 4.20 | 0.1096% | 328,464 |
| 32k | 4.446x | 4.45 | 0.1162% | 309,925 |
| 64k | 4.721x | 4.72 | 0.1233% | 291,915 |
Tokenization Examples
Below are sample sentences tokenized with each vocabulary size:
Sample 1: .va — Ватиканны огъары дараджаны интернет домениди. доменле sv:Toppdomän#V
| Vocab | Tokens | Count |
|---|---|---|
| 8k | ▁. va ▁— ▁ват ик анны ▁огъары ▁дараджаны ▁интернет ▁домениди ... (+7 more) | 17 |
| 16k | ▁. va ▁— ▁ват иканны ▁огъары ▁дараджаны ▁интернет ▁домениди . ... (+6 more) | 16 |
| 32k | ▁. va ▁— ▁ватиканны ▁огъары ▁дараджаны ▁интернет ▁домениди . ▁доменле ... (+5 more) | 15 |
| 64k | ▁. va ▁— ▁ватиканны ▁огъары ▁дараджаны ▁интернет ▁домениди . ▁доменле ... (+5 more) | 15 |
Sample 2: .cu — Кубаны огъары дараджаны интернет домени. доменле sv:Toppdomän#C
| Vocab | Tokens | Count |
|---|---|---|
| 8k | ▁. c u ▁— ▁куб аны ▁огъары ▁дараджаны ▁интернет ▁домени ... (+7 more) | 17 |
| 16k | ▁. cu ▁— ▁кубаны ▁огъары ▁дараджаны ▁интернет ▁домени . ▁доменле ... (+5 more) | 15 |
| 32k | ▁. cu ▁— ▁кубаны ▁огъары ▁дараджаны ▁интернет ▁домени . ▁доменле ... (+5 more) | 15 |
| 64k | ▁. cu ▁— ▁кубаны ▁огъары ▁дараджаны ▁интернет ▁домени . ▁доменле ... (+5 more) | 15 |
Sample 3: .it — Италияны огъары дараджаны интернет домени. доменле he:סיומת אינטרנט#...
| Vocab | Tokens | Count |
|---|---|---|
| 8k | ▁. it ▁— ▁италияны ▁огъары ▁дараджаны ▁интернет ▁домени . ▁доменле ... (+13 more) | 23 |
| 16k | ▁. it ▁— ▁италияны ▁огъары ▁дараджаны ▁интернет ▁домени . ▁доменле ... (+13 more) | 23 |
| 32k | ▁. it ▁— ▁италияны ▁огъары ▁дараджаны ▁интернет ▁домени . ▁доменле ... (+13 more) | 23 |
| 64k | ▁. it ▁— ▁италияны ▁огъары ▁дараджаны ▁интернет ▁домени . ▁доменле ... (+13 more) | 23 |
Key Findings
- Best Compression: 64k achieves 4.721x compression
- Lowest UNK Rate: 8k with 0.1001% unknown tokens
- Trade-off: Larger vocabularies improve compression but increase model size
- Recommendation: 32k vocabulary provides optimal balance for production use
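These metrics are straightforward to reproduce. The sketch below (not the pipeline's actual code) computes compression, fertility, and UNK rate for any tokenizer exposed as a Python callable; the `<unk>` token name is an assumption:

```python
from typing import List

def tokenizer_metrics(texts: List[str], tokenize, unk_token: str = "<unk>"):
    """Compression ratio (chars/token), average token length, and UNK
    rate over a corpus. `tokenize` maps a string to token strings."""
    total_chars = sum(len(t) for t in texts)
    tokens = [tok for t in texts for tok in tokenize(t)]
    total = len(tokens)
    unk = sum(tok == unk_token for tok in tokens)
    return {
        "compression": total_chars / total,
        "avg_token_len": sum(len(tok) for tok in tokens) / total,
        "unk_rate_pct": 100.0 * unk / total,
        "total_tokens": total,
    }

# Toy usage with whitespace "tokenization"; swap in a real BPE tokenizer.
print(tokenizer_metrics(["огъары дараджаны интернет домени"], str.split))
```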
2. N-gram Model Evaluation
Results
| N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
|---|---|---|---|---|---|---|
| 2-gram | Word | 4,346 | 12.09 | 7,787 | 17.8% | 47.9% |
| 2-gram | Subword | 391 | 8.61 | 3,511 | 58.8% | 97.5% |
| 3-gram | Word | 3,291 | 11.68 | 5,584 | 20.4% | 49.5% |
| 3-gram | Subword | 2,989 | 11.55 | 26,299 | 24.2% | 65.9% |
| 4-gram | Word | 5,701 | 12.48 | 8,855 | 16.2% | 35.7% |
| 4-gram | Subword | 13,131 | 13.68 | 110,221 | 13.2% | 39.9% |
| 5-gram | Word | 3,634 | 11.83 | 5,566 | 18.4% | 42.8% |
| 5-gram | Subword | 33,332 | 15.02 | 206,967 | 8.3% | 27.6% |
Top 5 N-grams by Size
2-grams (Word):
| Rank | N-gram | Count |
|---|---|---|
| 1 | алай а | 1,099 |
| 2 | эм уллу | 508 |
| 3 | абш ны | 438 |
| 4 | бла бирге | 404 |
| 5 | халкъла арасы | 386 |
3-grams (Word):
| Rank | N-gram | Count |
|---|---|---|
| 1 | огъары дараджаны интернет | 255 |
| 2 | болгъан ишле туугъанла | 240 |
| 3 | григориан орузламада джылны | 236 |
| 4 | байрамла болгъан ишле | 236 |
| 5 | джылны ахырына дери | 235 |
4-grams (Word):
| Rank | N-gram | Count |
|---|---|---|
| 1 | кюнюдю джылны ахырына дери | 235 |
| 2 | кюн къалады байрамла болгъан | 234 |
| 3 | къалады байрамла болгъан ишле | 234 |
| 4 | байрамла болгъан ишле туугъанла | 229 |
| 5 | болгъан ишле туугъанла ёлгенле | 228 |
5-grams (Word):
| Rank | N-gram | Count |
|---|---|---|
| 1 | кюн къалады байрамла болгъан ишле | 234 |
| 2 | къалады байрамла болгъан ишле туугъанла | 227 |
| 3 | байрамла болгъан ишле туугъанла ёлгенле | 224 |
| 4 | чи кюнюдю джылны ахырына дери | 117 |
| 5 | огъары дараджаны интернет домениди доменле | 91 |
2-grams (Subword):
| Rank | N-gram | Count |
|---|---|---|
| 1 | а _ | 83,938 |
| 2 | а н | 76,834 |
| 3 | л а | 72,803 |
| 4 | _ б | 61,892 |
| 5 | _ к | 60,105 |
3-grams (Subword):
| Rank | N-gram | Count |
|---|---|---|
| 1 | г ъ а | 32,934 |
| 2 | н ы _ | 32,399 |
| 3 | д а _ | 31,775 |
| 4 | _ д ж | 26,820 |
| 5 | _ к ъ | 25,061 |
4-grams (Subword):
| Rank | N-gram | Count |
|---|---|---|
| 1 | г ъ а н | 18,270 |
| 2 | а н ы _ | 14,240 |
| 3 | л г ъ а | 12,066 |
| 4 | _ б о л | 11,397 |
| 5 | _ б л а | 11,168 |
5-grams (Subword):
| Rank | N-gram | Count |
|---|---|---|
| 1 | л г ъ а н | 10,519 |
| 2 | _ б л а _ | 10,384 |
| 3 | г ъ а н д | 8,413 |
| 4 | _ д ж ы л | 8,226 |
| 5 | ъ а н д ы | 8,219 |
Key Findings
- Best Perplexity: 2-gram (subword) with 391
- Entropy Trend: word-level entropy stays roughly flat (11.7-12.5 bits) across n, while subword entropy rises with larger n as longer patterns become sparse
- Coverage: top-1000 patterns cover between 27.6% (5-gram subword) and 97.5% (2-gram subword) of the corpus
- Recommendation: the 2-gram subword model gives the lowest perplexity; on a corpus this size, larger n-grams add storage cost without improving prediction
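To make the perplexity and entropy columns concrete, here is a minimal n-gram scorer with add-alpha smoothing. The report does not state its exact smoothing or evaluation split, so this is an illustrative sketch rather than the pipeline's implementation:

```python
import math
from collections import Counter

def ngram_perplexity(tokens, n, alpha=1.0):
    """Perplexity and entropy of an add-alpha-smoothed n-gram model,
    scored on its own training tokens (a simplification)."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(ngrams)
    contexts = Counter(ng[:-1] for ng in ngrams)
    vocab = len(set(tokens))
    log_prob = 0.0
    for ng in ngrams:
        p = (counts[ng] + alpha) / (contexts[ng[:-1]] + alpha * vocab)
        log_prob += math.log2(p)
    entropy = -log_prob / len(ngrams)
    return 2 ** entropy, entropy

tokens = "кюн къалады байрамла болгъан ишле туугъанла".split()
print(ngram_perplexity(tokens, 2))
```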
3. Markov Chain Evaluation
Results
| Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
|---|---|---|---|---|---|---|
| 1 | Word | 0.7669 | 1.702 | 4.45 | 81,464 | 23.3% |
| 1 | Subword | 0.8973 | 1.863 | 7.38 | 1,256 | 10.3% |
| 2 | Word | 0.1558 | 1.114 | 1.29 | 361,983 | 84.4% |
| 2 | Subword | 0.9642 | 1.951 | 5.73 | 9,247 | 3.6% |
| 3 | Word | 0.0339 | 1.024 | 1.05 | 465,485 | 96.6% |
| 3 | Subword | 0.8243 | 1.771 | 3.79 | 52,874 | 17.6% |
| 4 | Word | 0.0094 | 1.007 | 1.01 | 486,649 | 99.1% |
| 4 | Subword | 0.5763 | 1.491 | 2.38 | 200,334 | 42.4% |
Generated Text Samples (Word-based)
Below are text samples generated from each word-based Markov chain model:
Context Size 1:
бла джакъланнганды джыл сюйлю окъу письмо diwan press isbn гл ред в 3 de sɪˈʃɪl сейрюмда сумода иги тюбейдиле эмда джерли эмда тамалладан халкъла арасы илишкиле джылда 0 0 3 2да ётюрюкюбюз израилге мисирни сегиз компания ингилизлиле къыбыла кюнбатыш орус алим публицист байра...
Context Size 2:
алай а ол хакъла бек адаргы болгъандыла къулну къайнагъы джангы къазауат людовикни хорламы бла биред...эм уллу эмда ата хунчагъа 150 белгили адамладан къуралгъан тамал депутатциясын джуртгъа буйрукъ берг...абш ны къуралгъанындан джюз джылдан артыкъны тургъанды джыл къыбыла каролина къыбылада флорида ачыкъ...
Context Size 3:
огъары дараджаны интернет домени доменле sv toppdomän nболгъан ишле туугъанла ёлгенле а09григориан орузламада джылны 58 чи кюнюдю джылны ахырына дери 216 кюн къалады байрамла болгъан ишле т...
Context Size 4:
кюнюдю джылны ахырына дери 364 кюн високос джыллада 365 кюн къалады байрамла болгъан ишле туугъанла ...къалады байрамла болгъан ишле туугъанла ёлгенле б09кюн къалады байрамла болгъан ишле туугъанла ёлгенле а09
Generated Text Samples (Subword-based)
Below are text samples generated from each subword-based Markov chain model:
Context Size 1:
_тган_1_ghat._геа_агъачекатайтарныгелюню_олене_т
Context Size 2:
а_ны_блай_пырндылан_соломод_бламалларын_джомони_изд
Context Size 3:
гъарда,_архителювиню_джылгъа_кенге_бда_политиканы_сима
Context Size 4:
гъаны_мийик_тилде_даны_биринчиси_боладлгъан_джылда_джерле
Key Findings
- Best Predictability: Context-4 (word) with 99.1% predictability
- Branching Factor: Decreases with context size (more deterministic)
- Memory Trade-off: larger contexts require far more storage (486,649 unique word contexts at context-4)
- Recommendation: Context-3 or Context-4 for text generation
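A minimal sketch of building, measuring, and sampling such a chain (the pipeline's actual implementation is not shown in this report):

```python
import random
from collections import Counter, defaultdict

def build_chain(tokens, context=2):
    """Map each length-`context` window to a Counter of next tokens."""
    chain = defaultdict(Counter)
    for i in range(len(tokens) - context):
        chain[tuple(tokens[i:i + context])][tokens[i + context]] += 1
    return chain

def branching_factor(chain):
    """Average number of distinct continuations per context."""
    return sum(len(nxt) for nxt in chain.values()) / len(chain)

def generate(chain, seed, length=20, rng=random.Random(0)):
    """Sample a continuation, weighting next tokens by their counts."""
    out = list(seed)
    for _ in range(length):
        nxt = chain.get(tuple(out[-len(seed):]))
        if not nxt:
            break
        toks, weights = zip(*nxt.items())
        out.append(rng.choices(toks, weights)[0])
    return " ".join(out)

corpus = "кюн къалады байрамла болгъан ишле туугъанла ёлгенле".split()
chain = build_chain(corpus, context=2)
print(branching_factor(chain), generate(chain, corpus[:2]))
```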
4. Vocabulary Analysis
Statistics
| Metric | Value |
|---|---|
| Vocabulary Size | 31,984 |
| Total Tokens | 462,833 |
| Mean Frequency | 14.47 |
| Median Frequency | 3 |
| Frequency Std Dev | 100.73 |
Most Common Words
| Rank | Word | Frequency |
|---|---|---|
| 1 | бла | 11,098 |
| 2 | эмда | 6,281 |
| 3 | да | 3,753 |
| 4 | эм | 2,789 |
| 5 | джылны | 2,622 |
| 6 | бир | 2,539 |
| 7 | болгъанды | 2,365 |
| 8 | ол | 2,214 |
| 9 | уллу | 2,174 |
| 10 | аны | 2,033 |
Least Common Words (from vocabulary)
| Rank | Word | Frequency |
|---|---|---|
| 1 | форес | 2 |
| 2 | килбрайд | 2 |
| 3 | камбернолд | 2 |
| 4 | сайланнганды | 2 |
| 5 | стив | 2 |
| 6 | зохран | 2 |
| 7 | мамдани | 2 |
| 8 | mamdani | 2 |
| 9 | плейнс | 2 |
| 10 | джеральд | 2 |
Zipf's Law Analysis
| Metric | Value |
|---|---|
| Zipf Coefficient | 0.9853 |
| RΒ² (Goodness of Fit) | 0.993593 |
| Adherence Quality | excellent |
Coverage Analysis
| Top N Words | Coverage |
|---|---|
| Top 100 | 25.2% |
| Top 1,000 | 54.9% |
| Top 5,000 | 77.2% |
| Top 10,000 | 86.3% |
Key Findings
- Zipf Compliance: RΒ²=0.9936 indicates excellent adherence to Zipf's law
- High Frequency Dominance: Top 100 words cover 25.2% of corpus
- Long Tail: 21,984 words needed for remaining 13.7% coverage
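The Zipf coefficient and R² can be reproduced with a least-squares fit in log-log space. The sketch below uses only the top-10 frequencies from the table above; the report's fit presumably covers the full vocabulary:

```python
import numpy as np

def zipf_fit(frequencies):
    """OLS fit of log(freq) on log(rank); returns (slope magnitude, R^2)."""
    freqs = np.asarray(sorted(frequencies, reverse=True), dtype=float)
    x = np.log(np.arange(1, len(freqs) + 1))
    y = np.log(freqs)
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return -slope, r2

# Top-10 frequencies from the table above.
print(zipf_fit([11098, 6281, 3753, 2789, 2622, 2539, 2365, 2214, 2174, 2033]))
```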
5. Word Embeddings Evaluation
5.1 Cross-Lingual Alignment
5.2 Model Comparison
| Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
|---|---|---|---|---|---|
| mono_32d | 32 | 0.8818 | 0.2934 | N/A | N/A |
| mono_64d | 64 | 0.6138 | 0.2510 | N/A | N/A |
| mono_128d | 128 | 0.1461 | 0.2598 | N/A | N/A |
| aligned_32d | 32 | 0.8818 | 0.2916 | 0.0080 | 0.1040 |
| aligned_64d | 64 | 0.6138 | 0.2543 | 0.0200 | 0.1400 |
| aligned_128d | 128 | 0.1461 | 0.2580 | 0.0360 | 0.1920 |
Key Findings
- Best Isotropy: aligned_32d with 0.8818 (more uniform distribution)
- Semantic Density: Average pairwise similarity of 0.2680. Lower values indicate better semantic separation.
- Alignment Quality: Aligned models achieve up to 3.6% R@1 in cross-lingual retrieval.
- Recommendation: 128d aligned for best cross-lingual performance
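A sketch of the two headline metrics, isotropy (the min/max singular-value ratio defined in the glossary) and retrieval recall@k; the random matrices below stand in for real model weights:

```python
import numpy as np

def isotropy(vectors):
    """Min/max singular value ratio of the centered embedding matrix."""
    centered = vectors - vectors.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    return s[-1] / s[0]

def recall_at_k(src, tgt, k=1):
    """Row i of `src` should retrieve row i of `tgt` by cosine similarity."""
    a = src / np.linalg.norm(src, axis=1, keepdims=True)
    b = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    top = np.argsort(-(a @ b.T), axis=1)[:, :k]
    return float(np.mean([i in top[i] for i in range(len(src))]))

rng = np.random.default_rng(0)
emb = rng.normal(size=(500, 64))                # stand-in for trained vectors
noisy = emb + 0.1 * rng.normal(size=emb.shape)  # stand-in for aligned pairs
print(isotropy(emb), recall_at_k(emb, noisy, k=1))
```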
6. Morphological Analysis (Experimental)
This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
6.1 Productivity & Complexity
| Metric | Value | Interpretation | Recommendation |
|---|---|---|---|
| Productivity Index | 5.000 | High morphological productivity | Reliable analysis |
| Idiomaticity Gap | 0.553 | High formulaic/idiomatic content | - |
6.2 Affix Inventory (Productive Units)
These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts.
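A minimal sketch of this substitutability test for suffixes; the thresholds and stem-length guard are illustrative assumptions, not the report's tuned parameters:

```python
from collections import Counter

def productive_suffixes(vocab, min_stem_count=3, max_len=3):
    """A suffix is kept if stripping it leaves a stem that also occurs
    under other endings, echoing the test described above."""
    stems = Counter(w[:-k] for w in vocab
                    for k in range(1, max_len + 1) if len(w) > k + 2)
    suffixes = Counter()
    for w in vocab:
        for k in range(1, max_len + 1):
            if len(w) > k + 2 and stems[w[:-k]] >= min_stem_count:
                suffixes[w[-k:]] += 1
    return suffixes.most_common(10)

vocab = ["байрамла", "байрамны", "байрамда", "доменле", "доменни", "доменде"]
print(productive_suffixes(vocab))
```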
Productive Prefixes
| Prefix | Examples |
|---|---|
| к- | кери, кёбчюлюкню, къалмайды |
| а- | агъымладан, арий, анжуру |
| с- | стандартланы, сомовну, синхрон |
| т- | том, тюрленишде, тизгинде |
| б- | бкуб, билдиришгенди, берлингтон |
| м- | мисирни, меридианыны, мадам |
| д- | джерлешгендиле, джашладан, документальный |
| дж- | джерлешгендиле, джашладан, джукъугъа |
Productive Suffixes
| Suffix | Examples |
|---|---|
| -а | уа, ачхада, наполеоннга |
| -ы | идеологияланы, апианы, прибалтиканы |
| -ны | идеологияланы, апианы, прибалтиканы |
| -н | агъымладан, джашладан, берлингтон |
| -и | кери, мисирни, сюпри |
| -ла | гарнизонла, алынмагъандыла, турадыла |
| -е | джерлешгендиле, пёрле, тюрленишде |
| -да | ачхада, манитобада, галилеяда |
6.3 Bound Stems (Lexical Roots)
Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid.
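The report does not spell out the cohesion formula. One plausible reading, sketched below, is the ratio of a unit's observed frequency to the frequency an order-0 character model would predict, so "1.95x" reads as roughly twice as frequent as chance:

```python
from collections import Counter

def cohesion(corpus_text, unit):
    """Observed frequency of `unit` divided by the frequency predicted
    by an order-0 character model. One plausible reading of the
    Cohesion column, not the report's confirmed formula."""
    n = len(corpus_text)
    char_p = {c: k / n for c, k in Counter(corpus_text).items()}
    windows = n - len(unit) + 1
    observed = sum(corpus_text[i:i + len(unit)] == unit for i in range(windows))
    expected = float(windows)
    for c in unit:
        expected *= char_p.get(c, 0.0)
    return observed / expected if expected else float("inf")

print(cohesion("болгъан болгъанды къуралгъанды туугъанла", "гъан"))
```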
| Stem | Cohesion | Substitutability | Examples |
|---|---|---|---|
| генд | 1.95x | 60 contexts | юзгенди, легенды, дегенди |
| лени | 1.69x | 65 contexts | ленин, члени, ишлени |
| урал | 2.34x | 17 contexts | курал, куралы, куралды |
| лгъа | 1.59x | 67 contexts | алгъа, залгъа, нолгъа |
| гъан | 1.42x | 107 contexts | дагъан, ойгъан, озгъан |
| ргъа | 1.80x | 38 contexts | ургъан, баргъа, оюугъа |
| къур | 1.99x | 26 contexts | къурд, къуру, къурт |
| ланы | 1.64x | 53 contexts | планы, уланы, аланы |
| кура | 2.29x | 13 contexts | курал, куралы, куралды |
| лыкъ | 1.67x | 36 contexts | балыкъ, палыкъ, ачлыкъ |
| алгъ | 1.56x | 34 contexts | алгъы, алгъа, залгъа |
| енди | 1.81x | 19 contexts | сюенди, афенди, доменди |
6.4 Affix Compatibility (Co-occurrence)
This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
| Prefix | Suffix | Frequency | Examples |
|---|---|---|---|
| к- | -а | 215 words | къонакъгъа, къабатла |
| к- | -ы | 195 words | къуралгъаны, къойгъанды |
| а- | -а | 173 words | арба, аздыла |
| а- | -ы | 142 words | ансы, айтымланы |
| б- | -а | 136 words | булутлада, браганса |
| к- | -н | 128 words | кетерилген, кючледен |
| д- | -ы | 121 words | джууукълашады, дараджасыны |
| к- | -и | 116 words | киргизиледи, келди |
| к- | -е | 110 words | корее, кавказские |
| д- | -а | 108 words | дахауда, джаратыугъа |
6.5 Recursive Morpheme Segmentation
Using Recursive Hierarchical Substitutability, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., prefix-prefix-root-suffix).
| Word | Suggested Split | Confidence | Stem |
|---|---|---|---|
| къошулмас | къошулм-а-с | 7.5 | а |
| дараджада | дарадж-а-да | 7.5 | а |
| спектральная | спектральн-а-я | 7.5 | а |
| кириллицада | кириллиц-а-да | 7.5 | а |
| ашырылгъанды | ашырылгъ-ан-ды | 7.5 | ан |
| кафедраны | кафедр-а-ны | 7.5 | а |
| температураны | температур-а-ны | 7.5 | а |
| аякъланнганла | аякъланнг-ан-ла | 7.5 | ан |
| тохтатады | тохтат-а-ды | 7.5 | а |
| чыкъгъанда | чыкъгъ-ан-да | 7.5 | ан |
| къуршаланады | къуршалан-а-ды | 7.5 | а |
| тоналгъанды | тоналгъ-ан-ды | 7.5 | ан |
| тизгиннге | тизгин-н-ге | 7.5 | н |
| сёлешелле | сёлеше-л-ле | 7.5 | л |
| механиканы | механик-а-ны | 7.5 | а |
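A minimal sketch of the recursive peeling described above, seeded with suffixes from the section 6.2 inventory; the depth limit and minimum stem length are illustrative assumptions:

```python
def segment(word, suffixes, min_stem=4, depth=2):
    """Recursively peel suffixes off the end of a word, so nested
    affixes come apart one layer at a time."""
    if depth == 0:
        return [word]
    for s in sorted(suffixes, key=len, reverse=True):
        if word.endswith(s) and len(word) - len(s) >= min_stem:
            return segment(word[:-len(s)], suffixes, min_stem, depth - 1) + [s]
    return [word]

print(segment("дараджада", {"да", "ны", "ла", "а"}))  # ['дарадж', 'а', 'да']
```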
6.6 Linguistic Interpretation
Automated Insight: Karachay-Balkar shows high morphological productivity. The subword models are markedly more efficient than the word models, suggesting a rich system of affixation or compounding.
Note on Idiomaticity: The high Idiomaticity Gap suggests a large number of frequent multi-word expressions or formulaic sequences that are statistically distinct from their component parts.
7. Summary & Recommendations
Production Recommendations
| Component | Recommended | Rationale |
|---|---|---|
| Tokenizer | 64k BPE | Best compression (4.72x); 32k remains a good size/coverage balance (see section 1) |
| N-gram | 2-gram (subword) | Lowest perplexity (391) |
| Markov | Context-4 (word) | Highest predictability (99.1%) |
| Embeddings | 128d aligned | Best cross-lingual retrieval (R@1 = 3.6%) |
Appendix: Metrics Glossary & Interpretation Guide
This section provides definitions, intuitions, and guidance for interpreting the metrics used throughout this report.
Tokenizer Metrics
Compression Ratio
Definition: The ratio of characters to tokens (chars/token). Measures how efficiently the tokenizer represents text.
Intuition: Higher compression means fewer tokens needed to represent the same text, reducing sequence lengths for downstream models. A 3x compression means ~3 characters per token on average.
What to seek: Higher is generally better for efficiency, but extremely high compression may indicate overly aggressive merging that loses morphological information.
Average Token Length (Fertility)
Definition: Mean number of characters per token produced by the tokenizer.
Intuition: Reflects the granularity of tokenization. Longer tokens capture more context but may struggle with rare words; shorter tokens are more flexible but increase sequence length.
What to seek: A balance between 2 and 5 characters works for most languages. Morphologically rich languages such as Karachay-Balkar may benefit from slightly longer tokens.
Unknown Token Rate (OOV Rate)
Definition: Percentage of tokens that map to the unknown/UNK token, indicating words the tokenizer cannot represent.
Intuition: Lower OOV means better vocabulary coverage. High OOV indicates the tokenizer encounters many unseen character sequences.
What to seek: Below 1% is excellent; below 5% is acceptable. BPE tokenizers typically achieve very low OOV due to subword fallback.
N-gram Model Metrics
Perplexity
Definition: Measures how "surprised" the model is by test data. Mathematically: 2^(cross-entropy). Lower values indicate better prediction.
Intuition: If perplexity is 100, the model is as uncertain as if choosing uniformly among 100 options at each step. A perplexity of 10 means effectively choosing among 10 equally likely options.
What to seek: Lower is better. Perplexity decreases with larger n-grams (more context). Values vary widely by language and corpus size.
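The published tables can be checked against this identity directly; small mismatches come from rounding of the printed entropies:

```python
# Cross-check of section 2: perplexity = 2**entropy.
for name, entropy, reported in [("2-gram subword", 8.61, 391),
                                ("3-gram word", 11.68, 3291),
                                ("5-gram subword", 15.02, 33332)]:
    print(name, round(2 ** entropy), "reported:", reported)
```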
Entropy
Definition: Average information content (in bits) needed to encode the next token given the context. Related to perplexity: perplexity = 2^entropy.
Intuition: High entropy means high uncertainty/randomness; low entropy means predictable patterns. Natural language typically has entropy between 1-4 bits per character.
What to seek: Lower entropy indicates more predictable text patterns. Conditional entropy should fall as n-gram order increases when counts are dense; on small corpora, sparse higher-order counts can push measured entropy up instead, as in the subword rows of section 2.
Coverage (Top-K)
Definition: Percentage of corpus occurrences explained by the top K most frequent n-grams.
Intuition: High coverage with few patterns indicates repetitive/formulaic text; low coverage suggests diverse vocabulary usage.
What to seek: Depends on use case. For language modeling, moderate coverage (40-60% with top-1000) is typical for natural text.
Markov Chain Metrics
Average Entropy
Definition: Mean entropy across all contexts, measuring average uncertainty in next-word prediction.
Intuition: Lower entropy means the model is more confident about what comes next. Context-1 has high entropy (many possible next words); Context-4 has low entropy (few likely continuations).
What to seek: Decreasing entropy with larger context sizes. Very low entropy (<0.1) indicates highly deterministic transitions.
Branching Factor
Definition: Average number of unique next tokens observed for each context.
Intuition: High branching = many possible continuations (flexible but uncertain); low branching = few options (predictable but potentially repetitive).
What to seek: Branching factor should decrease with context size. Values near 1.0 indicate nearly deterministic chains.
Predictability
Definition: Derived metric: (1 - normalized_entropy) Γ 100%. Indicates how deterministic the model's predictions are.
Intuition: 100% predictability means the next word is always certain; 0% means completely random. Real text falls between these extremes.
What to seek: Higher predictability for text generation quality, but too high (>98%) may produce repetitive output.
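Read alongside section 3, the Avg Entropy column appears to be already normalized to [0, 1]: both the perplexity and predictability columns follow from it under this definition, as the check below shows:

```python
# Rows (entropy, perplexity, predictability %) from the section 3 table.
for entropy, ppl, pred in [(0.7669, 1.702, 23.3),
                           (0.1558, 1.114, 84.4),
                           (0.5763, 1.491, 42.4)]:
    print(round(2 ** entropy, 3), "vs", ppl, "|",
          round((1 - entropy) * 100, 1), "vs", pred)
```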
Vocabulary & Zipf's Law Metrics
Zipf's Coefficient
Definition: The slope of the log-log plot of word frequency vs. rank. Zipf's law predicts this should be approximately -1.
Intuition: A coefficient near -1 indicates the corpus follows natural language patterns where a few words are very common and most words are rare.
What to seek: Slope magnitudes between 0.8 and 1.2 indicate a healthy natural-language distribution (the analysis above reports the magnitude, 0.9853). Deviations may suggest domain-specific or artificial text.
RΒ² (Coefficient of Determination)
Definition: Measures how well the linear fit explains the frequency-rank relationship. Ranges from 0 to 1.
Intuition: RΒ² near 1.0 means the data closely follows Zipf's law; lower values indicate deviation from expected word frequency patterns.
What to seek: RΒ² > 0.95 is excellent; > 0.99 indicates near-perfect Zipf adherence typical of large natural corpora.
Vocabulary Coverage
Definition: Cumulative percentage of corpus tokens accounted for by the top N words.
Intuition: Shows how concentrated word usage is. If top-100 words cover 50% of text, the corpus relies heavily on common words.
What to seek: Top-100 covering 30-50% is typical. Higher coverage indicates more repetitive text; lower suggests richer vocabulary.
Word Embedding Metrics
Isotropy
Definition: Measures how uniformly distributed vectors are in the embedding space. Computed as the ratio of minimum to maximum singular values.
Intuition: High isotropy (near 1.0) means vectors spread evenly in all directions; low isotropy means vectors cluster in certain directions, reducing expressiveness.
What to seek: Higher isotropy generally indicates better-quality embeddings. Values > 0.1 are reasonable; > 0.3 is good. Lower-dimensional embeddings tend to have higher isotropy.
Average Norm
Definition: Mean magnitude (L2 norm) of word vectors in the embedding space.
Intuition: Indicates the typical "length" of vectors. Consistent norms suggest stable training; high variance may indicate some words are undertrained.
What to seek: Relatively consistent norms across models. The absolute value matters less than consistency (low std deviation).
Cosine Similarity
Definition: Measures angular similarity between vectors, ranging from -1 (opposite) to 1 (identical direction).
Intuition: Words with similar meanings should have high cosine similarity. This is the standard metric for semantic relatedness in embeddings.
What to seek: Semantically related words should score > 0.5; unrelated words should be near 0. Synonyms often score > 0.7.
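A minimal implementation for two vectors:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

u, v = np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.5])
print(round(cosine(u, v), 3))  # close to 1.0: nearly the same direction
```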
t-SNE Visualization
Definition: t-Distributed Stochastic Neighbor Embedding - a dimensionality reduction technique that preserves local structure for visualization.
Intuition: Clusters in t-SNE plots indicate groups of semantically related words. Spread indicates vocabulary diversity; tight clusters suggest semantic coherence.
What to seek: Meaningful clusters (e.g., numbers together, verbs together). Avoid over-interpreting distances - t-SNE preserves local, not global, structure.
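A sketch of computing t-SNE coordinates with scikit-learn (an assumption; the report does not name its implementation):

```python
import numpy as np
from sklearn.manifold import TSNE  # assumes scikit-learn is available

rng = np.random.default_rng(0)
vectors = rng.normal(size=(200, 128))  # stand-in for word embeddings
coords = TSNE(n_components=2, perplexity=30.0,
              init="pca", random_state=0).fit_transform(vectors)
print(coords.shape)  # (200, 2): coordinates ready for a scatter plot
```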
General Interpretation Guidelines
- Compare within model families: Metrics are most meaningful when comparing models of the same type (e.g., 8k vs 64k tokenizer).
- Consider trade-offs: Better performance on one metric often comes at the cost of another (e.g., compression vs. OOV rate).
- Context matters: Optimal values depend on downstream tasks. Text generation may prioritize different metrics than classification.
- Corpus influence: All metrics are influenced by corpus characteristics. Wikipedia text differs from social media or literature.
- Language-specific patterns: Morphologically rich languages (like Karachay-Balkar) may show different optimal ranges than analytic languages.
Visualizations Index
| Visualization | Description |
|---|---|
| Tokenizer Compression | Compression ratios by vocabulary size |
| Tokenizer Fertility | Average token length by vocabulary |
| Tokenizer OOV | Unknown token rates |
| Tokenizer Total Tokens | Total tokens by vocabulary |
| N-gram Perplexity | Perplexity by n-gram size |
| N-gram Entropy | Entropy by n-gram size |
| N-gram Coverage | Top pattern coverage |
| N-gram Unique | Unique n-gram counts |
| Markov Entropy | Entropy by context size |
| Markov Branching | Branching factor by context |
| Markov Contexts | Unique context counts |
| Zipf's Law | Frequency-rank distribution with fit |
| Vocab Frequency | Word frequency distribution |
| Top 20 Words | Most frequent words |
| Vocab Coverage | Cumulative coverage curve |
| Embedding Isotropy | Vector space uniformity |
| Embedding Norms | Vector magnitude distribution |
| Embedding Similarity | Word similarity heatmap |
| Nearest Neighbors | Similar words for key terms |
| t-SNE Words | 2D word embedding visualization |
| t-SNE Sentences | 2D sentence embedding visualization |
| Position Encoding | Encoding method comparison |
| Model Sizes | Storage requirements |
| Performance Dashboard | Comprehensive performance overview |
About This Project
Data Source
Models trained on wikipedia-monthly - a monthly snapshot of Wikipedia articles across 300+ languages.
Project
A project by Wikilangs - Open-source NLP models for every Wikipedia language.
Maintainer
Omar Kamali (Omneity Labs)
Citation
If you use these models in your research, please cite:
@misc{wikilangs2025,
  author      = {Kamali, Omar},
  title       = {Wikilangs: Open NLP Models for Wikipedia Languages},
  year        = {2025},
  doi         = {10.5281/zenodo.18073153},
  publisher   = {Zenodo},
  url         = {https://huggingface.co/wikilangs},
  institution = {Omneity Labs}
}
License
MIT License - Free for academic and commercial use.
Links
- Website: wikilangs.org
- Models: huggingface.co/wikilangs
- Data: wikipedia-monthly
- Author: Omar Kamali
- Sponsor: Featherless AI
Generated by Wikilangs Models Pipeline
Report Date: 2026-01-10 08:32:24