---
language: mk
language_name: Macedonian
language_family: slavic_south
tags:
- wikilangs
- nlp
- tokenizer
- embeddings
- n-gram
- markov
- wikipedia
- feature-extraction
- sentence-similarity
- tokenization
- n-grams
- markov-chain
- text-mining
- fasttext
- babelvec
- vocabulous
- vocabulary
- monolingual
- family-slavic_south
license: mit
library_name: wikilangs
pipeline_tag: text-generation
datasets:
- omarkamali/wikipedia-monthly
dataset_info:
name: wikipedia-monthly
description: Monthly snapshots of Wikipedia articles across 300+ languages
metrics:
- name: best_compression_ratio
type: compression
value: 4.780
- name: best_isotropy
type: isotropy
value: 0.7374
- name: vocabulary_size
type: vocab
value: 629840
generated: 2026-01-10
---
# Macedonian - Wikilangs Models
## Comprehensive Research Report & Full Ablation Study
This repository contains NLP models trained and evaluated by Wikilangs, specifically on **Macedonian** Wikipedia data.
We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.
## 📋 Repository Contents
### Models & Assets
- Tokenizers (8k, 16k, 32k, 64k)
- N-gram models (2, 3, 4, 5-gram)
- Markov chains (context sizes 1–5)
- Subword N-gram and Markov chains
- Embeddings in various sizes and dimensions (aligned and unaligned)
- Language Vocabulary
- Language Statistics
![Performance Dashboard](visualizations/performance_dashboard.png)
### Analysis and Evaluation
- [1. Tokenizer Evaluation](#1-tokenizer-evaluation)
- [2. N-gram Model Evaluation](#2-n-gram-model-evaluation)
- [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
- [4. Vocabulary Analysis](#4-vocabulary-analysis)
- [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
- [6. Morphological Analysis (Experimental)](#6-morphological-analysis-experimental)
- [7. Summary & Recommendations](#7-summary--recommendations)
- [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
- [Visualizations Index](#visualizations-index)
---
## 1. Tokenizer Evaluation
![Tokenizer Compression](visualizations/tokenizer_compression.png)
![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
![Tokenizer OOV](visualizations/tokenizer_oov.png)
![Total Tokens](visualizations/tokenizer_total_tokens.png)
### Results
| Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
|------------|-------------|---------------|----------|--------------|
| **8k** | 3.702x | 3.70 | 0.0702% | 2,405,262 |
| **16k** | 4.123x | 4.12 | 0.0782% | 2,159,772 |
| **32k** | 4.494x | 4.49 | 0.0852% | 1,981,404 |
| **64k** | 4.780x 🏆 | 4.78 | 0.0906% | 1,862,766 |
### Tokenization Examples
Below are sample sentences tokenized with each vocabulary size:
**Sample 1:** `година во архитектурата содржи некои значајни настани. Настани во архитектурата ...`
| Vocab | Tokens | Count |
|-------|--------|-------|
| 8k | `โ–ะณะพะดะธะฝะฐ โ–ะฒะพ โ–ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ะฐั‚ะฐ โ–ัะพะดั€ะถะธ โ–ะฝะตะบะพะธ โ–ะทะฝะฐั‡ะฐั˜ะฝะธ โ–ะฝะฐัั‚ะฐะฝะธ . โ–ะฝะฐัั‚ะฐะฝะธ โ–ะฒะพ ... (+3 more)` | 13 |
| 16k | `โ–ะณะพะดะธะฝะฐ โ–ะฒะพ โ–ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ะฐั‚ะฐ โ–ัะพะดั€ะถะธ โ–ะฝะตะบะพะธ โ–ะทะฝะฐั‡ะฐั˜ะฝะธ โ–ะฝะฐัั‚ะฐะฝะธ . โ–ะฝะฐัั‚ะฐะฝะธ โ–ะฒะพ ... (+3 more)` | 13 |
| 32k | `โ–ะณะพะดะธะฝะฐ โ–ะฒะพ โ–ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ะฐั‚ะฐ โ–ัะพะดั€ะถะธ โ–ะฝะตะบะพะธ โ–ะทะฝะฐั‡ะฐั˜ะฝะธ โ–ะฝะฐัั‚ะฐะฝะธ . โ–ะฝะฐัั‚ะฐะฝะธ โ–ะฒะพ ... (+3 more)` | 13 |
| 64k | `โ–ะณะพะดะธะฝะฐ โ–ะฒะพ โ–ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ะฐั‚ะฐ โ–ัะพะดั€ะถะธ โ–ะฝะตะบะพะธ โ–ะทะฝะฐั‡ะฐั˜ะฝะธ โ–ะฝะฐัั‚ะฐะฝะธ . โ–ะฝะฐัั‚ะฐะฝะธ โ–ะฒะพ ... (+3 more)` | 13 |
**Sample 2:** `година во архитектурата содржи некои значајни настани. Настани во архитектурата`
| Vocab | Tokens | Count |
|-------|--------|-------|
| 8k | `โ–ะณะพะดะธะฝะฐ โ–ะฒะพ โ–ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ะฐั‚ะฐ โ–ัะพะดั€ะถะธ โ–ะฝะตะบะพะธ โ–ะทะฝะฐั‡ะฐั˜ะฝะธ โ–ะฝะฐัั‚ะฐะฝะธ . โ–ะฝะฐัั‚ะฐะฝะธ โ–ะฒะพ ... (+1 more)` | 11 |
| 16k | `โ–ะณะพะดะธะฝะฐ โ–ะฒะพ โ–ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ะฐั‚ะฐ โ–ัะพะดั€ะถะธ โ–ะฝะตะบะพะธ โ–ะทะฝะฐั‡ะฐั˜ะฝะธ โ–ะฝะฐัั‚ะฐะฝะธ . โ–ะฝะฐัั‚ะฐะฝะธ โ–ะฒะพ ... (+1 more)` | 11 |
| 32k | `โ–ะณะพะดะธะฝะฐ โ–ะฒะพ โ–ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ะฐั‚ะฐ โ–ัะพะดั€ะถะธ โ–ะฝะตะบะพะธ โ–ะทะฝะฐั‡ะฐั˜ะฝะธ โ–ะฝะฐัั‚ะฐะฝะธ . โ–ะฝะฐัั‚ะฐะฝะธ โ–ะฒะพ ... (+1 more)` | 11 |
| 64k | `โ–ะณะพะดะธะฝะฐ โ–ะฒะพ โ–ะฐั€ั…ะธั‚ะตะบั‚ัƒั€ะฐั‚ะฐ โ–ัะพะดั€ะถะธ โ–ะฝะตะบะพะธ โ–ะทะฝะฐั‡ะฐั˜ะฝะธ โ–ะฝะฐัั‚ะฐะฝะธ . โ–ะฝะฐัั‚ะฐะฝะธ โ–ะฒะพ ... (+1 more)` | 11 |
**Sample 3:** `31 мај — 151-иот ден во годината според грегоријанскиот календар (152-и во прест...`
| Vocab | Tokens | Count |
|-------|--------|-------|
| 8k | `โ– 3 1 โ–ะผะฐั˜ โ–โ€” โ– 1 5 1 - ... (+33 more)` | 43 |
| 16k | `โ– 3 1 โ–ะผะฐั˜ โ–โ€” โ– 1 5 1 - ... (+32 more)` | 42 |
| 32k | `โ– 3 1 โ–ะผะฐั˜ โ–โ€” โ– 1 5 1 - ... (+32 more)` | 42 |
| 64k | `โ– 3 1 โ–ะผะฐั˜ โ–โ€” โ– 1 5 1 - ... (+32 more)` | 42 |
### Key Findings
- **Best Compression:** 64k achieves 4.780x compression
- **Lowest UNK Rate:** 8k with 0.0702% unknown tokens
- **Trade-off:** Larger vocabularies improve compression but increase model size
- **Recommendation:** 32k vocabulary provides optimal balance for production use
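The metrics above can be reproduced directly from the released tokenizers. A minimal sketch with the `sentencepiece` library, assuming a SentencePiece model file (the filename below is a placeholder, not the actual asset name):

```python
# Sketch: compute compression and UNK rate for one tokenizer. Note that
# compression (chars/token) and average token length are the same
# quantity, as the table above reflects.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="tokenizer_32k.model")  # hypothetical filename

def tokenizer_stats(texts):
    total_chars = total_tokens = unk = 0
    for text in texts:
        ids = sp.encode(text)                 # token ids for one sentence
        total_chars += len(text)
        total_tokens += len(ids)
        unk += sum(1 for i in ids if i == sp.unk_id())
    return {
        "compression": total_chars / total_tokens,   # chars per token
        "unk_rate_pct": 100.0 * unk / total_tokens,
    }

print(tokenizer_stats(["година во архитектурата содржи некои значајни настани."]))
```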
---
## 2. N-gram Model Evaluation
![N-gram Perplexity](visualizations/ngram_perplexity.png)
![N-gram Unique](visualizations/ngram_unique.png)
![N-gram Coverage](visualizations/ngram_coverage.png)
### Results
| N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
|--------|---------|------------|---------|----------------|------------------|-------------------|
| **2-gram** | Word | 148,118 | 17.18 | 1,246,589 | 7.0% | 19.9% |
| **2-gram** | Subword | 310 🏆 | 8.28 | 17,556 | 66.9% | 98.2% |
| **3-gram** | Word | 382,752 | 18.55 | 2,398,097 | 4.7% | 17.5% |
| **3-gram** | Subword | 2,638 | 11.37 | 153,828 | 27.1% | 68.9% |
| **4-gram** | Word | 605,602 | 19.21 | 3,842,232 | 4.8% | 19.7% |
| **4-gram** | Subword | 15,114 | 13.88 | 929,390 | 13.0% | 37.7% |
| **5-gram** | Word | 281,875 | 18.10 | 2,561,910 | 6.9% | 27.5% |
| **5-gram** | Subword | 61,546 | 15.91 | 3,120,609 | 6.8% | 22.4% |
### Top 5 N-grams by Size
**2-grams (Word):**
| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `во година` | 270,904 |
| 2 | `да се` | 185,526 |
| 3 | `може да` | 82,758 |
| 4 | `исто така` | 74,629 |
| 5 | `година во` | 71,130 |
**3-grams (Word):**
| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `од страна на` | 47,837 |
| 2 | `п н е` | 45,911 |
| 3 | `за време на` | 45,528 |
| 4 | `во текот на` | 44,568 |
| 5 | `може да се` | 38,713 |
**4-grams (Word):**
| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `г п н е` | 26,767 |
| 2 | `во текот на и` | 13,167 |
| 3 | `година од страна на` | 13,039 |
| 4 | `база на податоци на` | 10,253 |
| 5 | `е вклучен и во` | 10,177 |
**5-grams (Word):**
| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `новиот општ каталог на длабоконебесни` | 10,166 |
| 2 | `општ каталог на длабоконебесни тела` | 10,166 |
| 3 | `тоа е вклучен и во` | 10,165 |
| 4 | `е вклучен и во други` | 10,165 |
| 5 | `вршено од повеќе истражувачи па` | 10,165 |
**2-grams (Subword):**
| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `а _` | 16,220,828 |
| 2 | `н а` | 9,755,201 |
| 3 | `о _` | 8,545,001 |
| 4 | `и _` | 8,299,189 |
| 5 | `_ н` | 7,088,266 |
**3-grams (Subword):**
| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `н а _` | 5,782,550 |
| 2 | `_ н а` | 5,471,722 |
| 3 | `_ в о` | 2,895,397 |
| 4 | `в о _` | 2,774,290 |
| 5 | `а т а` | 2,545,500 |
**4-grams (Subword):**
| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `_ н а _` | 3,968,376 |
| 2 | `_ в о _` | 2,496,500 |
| 3 | `а т а _` | 2,159,054 |
| 4 | `и т е _` | 1,510,803 |
| 5 | `_ о д _` | 1,503,838 |
**5-grams (Subword):**
| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `а _ н а _` | 1,123,156 |
| 2 | `_ г о д и` | 801,639 |
| 3 | `г о д и н` | 793,128 |
| 4 | `о д и н а` | 717,809 |
| 5 | `а _ в о _` | 641,767 |
### Key Findings
- **Best Perplexity:** 2-gram (subword) with 310
- **Entropy Trend:** Joint n-gram entropy rises with n (17.18 → 19.21 bits for word models), since longer patterns carry more total information; subword models are consistently more predictable than word models
- **Coverage:** Top-1000 coverage varies widely by variant, from 98.2% for subword 2-grams down to 17.5% for word 3-grams
- **Recommendation:** Subword 2-grams for the lowest perplexity; word-level 4- or 5-grams where longer phrase statistics are needed
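For reference, every column in the table above can be computed from raw counts alone. A self-contained sketch over a toy corpus (the real numbers come from the full Wikipedia dump):

```python
# Sketch: joint n-gram entropy H = -sum p*log2(p), perplexity = 2**H,
# and top-K coverage of the n-gram frequency distribution.
import math
from collections import Counter

def ngram_stats(tokens, n, top_k=1000):
    grams = Counter(zip(*(tokens[i:] for i in range(n))))
    total = sum(grams.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in grams.values())
    top = sum(c for _, c in grams.most_common(top_k))
    return {
        "unique": len(grams),
        "entropy_bits": round(entropy, 2),
        "perplexity": round(2 ** entropy, 1),
        "coverage_pct": round(100.0 * top / total, 1),
    }

tokens = "во текот на годината и во текот на летото".split()
print(ngram_stats(tokens, 2, top_k=5))
```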
---
## 3. Markov Chain Evaluation
![Markov Entropy](visualizations/markov_entropy.png)
![Markov Contexts](visualizations/markov_contexts.png)
![Markov Branching](visualizations/markov_branching.png)
### Results
| Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
|---------|---------|-------------|------------|------------------|-----------------|----------------|
| **1** | Word | 0.9313 | 1.907 | 11.18 | 1,397,869 | 6.9% |
| **1** | Subword | 0.9537 | 1.937 | 6.98 | 8,643 | 4.6% |
| **2** | Word | 0.3725 | 1.295 | 2.41 | 15,610,954 | 62.7% |
| **2** | Subword | 0.7745 | 1.711 | 5.49 | 60,305 | 22.6% |
| **3** | Word | 0.1516 | 1.111 | 1.34 | 37,555,740 | 84.8% |
| **3** | Subword | 0.8197 | 1.765 | 4.77 | 330,722 | 18.0% |
| **4** | Word | 0.0598 🏆 | 1.042 | 1.11 | 50,433,239 | 94.0% |
| **4** | Subword | 0.7470 | 1.678 | 3.67 | 1,576,045 | 25.3% |
### Generated Text Samples (Word-based)
Below are text samples generated from each word-based Markov chain model:
**Context Size 1:**
1. `на минотаурот најстарото болничко лекување на електронот може да го ставаат во ноември се од овие`
2. `во г п н е независна држава за аварите да се случиле неколку минути по неколку`
3. `и надгледувајќи радикални реакции како и главен увозник skycom сад и во романија историјата како ugc`
**Context Size 2:**
1. `во година во полска и украина реката е 117 км2 дитмаршен 132 965 1 861 година пред`
2. `да се натпреварува водачи на земјата развојот на препарати за атрофичната кожа многу поширок опфат т...`
3. `може да има изразени оддавања на стронциум и алуминиум изопрооксиди соодветно првиот е анонимното ск...`
**Context Size 3:**
1. `од страна на данците кои се подолги од аксијалната пиридилна ga n врска со должини на страните а`
2. `за време на вечерата иван илич е веќе многу пијан кога линдорф влегува со пејачката стела и го`
3. `во текот на 367 и 368 исламска година настани 1 јануари ссср започнува со својата хуманитарна активн...`
**Context Size 4:**
1. `г п н е според продолжениот јулијански календар истата трае во текот на и година според асирскиот ка...`
2. `во текот на и година според асирскиот календар во којшто мерењето на времето започнува со 622 година...`
3. `година од страна на бугарските истражувачи генеричките лекови го формираат столбот на локалната екон...`
### Generated Text Samples (Subword-based)
Below are text samples generated from each subword-based Markov chain model:
**Context Size 1:**
1. `_о_зна_скан_идо_`
2. `а._во_и_нина_н_с`
3. `ова_дичеме_на_ка`
**Context Size 2:**
1. `а_изенизвине_на_м`
2. `на_доне_перетски_`
3. `о_улигна_арларист`
**Context Size 3:**
1. `на_јанско-лисковек`
2. `_на_ост_попрата_же`
3. `_во_френ_каквиот_п`
**Context Size 4:**
1. `_на_чашките_заливор`
2. `_во_слично_кривале_д`
3. `ата_долна_свињарски`
### Key Findings
- **Best Predictability:** Context-4 (word) with 94.0% predictability
- **Branching Factor:** Decreases with context size (more deterministic)
- **Memory Trade-off:** Larger contexts require more storage (up to 50,433,239 unique contexts for the context-4 word model)
- **Recommendation:** Context-3 or Context-4 for text generation
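To make the generation procedure concrete, here is a minimal sketch of a word-level chain of the kind evaluated above; transition counts would normally be loaded from the released model files, and the toy corpus below stands in for real data:

```python
# Sketch: build context -> next-word counts, then sample proportionally.
import random
from collections import Counter, defaultdict

def build_chain(tokens, context=2):
    chain = defaultdict(Counter)
    for i in range(len(tokens) - context):
        chain[tuple(tokens[i:i + context])][tokens[i + context]] += 1
    return chain

def generate(chain, seed, length=20):
    ctx, out = tuple(seed), list(seed)
    for _ in range(length):
        nxt = chain.get(ctx)
        if not nxt:                       # unseen context: stop
            break
        words, counts = zip(*nxt.items())
        word = random.choices(words, weights=counts)[0]
        out.append(word)
        ctx = (*ctx[1:], word)            # slide the context window
    return " ".join(out)

corpus = "во текот на годината во текот на летото".split()
print(generate(build_chain(corpus, context=2), seed=corpus[:2]))
```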
---
## 4. Vocabulary Analysis
![Zipf's Law](visualizations/zipf_law.png)
![Top Words](visualizations/top20_words.png)
![Coverage Curve](visualizations/vocab_coverage.png)
### Statistics
| Metric | Value |
|--------|-------|
| Vocabulary Size | 629,840 |
| Total Tokens | 66,539,192 |
| Mean Frequency | 105.64 |
| Median Frequency | 4 |
| Frequency Std Dev | 7439.52 |
### Most Common Words
| Rank | Word | Frequency |
|------|------|-----------|
| 1 | на | 3,984,194 |
| 2 | во | 2,517,366 |
| 3 | и | 2,001,305 |
| 4 | од | 1,514,717 |
| 5 | се | 1,235,287 |
| 6 | за | 987,031 |
| 7 | со | 823,175 |
| 8 | е | 782,070 |
| 9 | година | 672,383 |
| 10 | да | 610,844 |
### Least Common Words (from vocabulary)
| Rank | Word | Frequency |
|------|------|-----------|
| 1 | калеуче | 2 |
| 2 | chiloé | 2 |
| 3 | преживениот | 2 |
| 4 | делевиш | 2 |
| 5 | platessoides | 2 |
| 6 | pleco | 2 |
| 7 | метарма | 2 |
| 8 | алалаона | 2 |
| 9 | octodecimguttata | 2 |
| 10 | домбасл | 2 |
### Zipf's Law Analysis
| Metric | Value |
|--------|-------|
| Zipf Coefficient | 0.9604 |
| Rยฒ (Goodness of Fit) | 0.996757 |
| Adherence Quality | **excellent** |
### Coverage Analysis
| Top N Words | Coverage |
|-------------|----------|
| Top 100 | 37.1% |
| Top 1,000 | 56.3% |
| Top 5,000 | 72.8% |
| Top 10,000 | 79.8% |
### Key Findings
- **Zipf Compliance:** Rยฒ=0.9968 indicates excellent adherence to Zipf's law
- **High Frequency Dominance:** Top 100 words cover 37.1% of corpus
- **Long Tail:** 619,840 words needed for remaining 20.2% coverage
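The Zipf fit above is an ordinary least-squares regression in log-log space. A sketch with numpy, using the top-5 frequencies from the table as toy input (the real fit runs over the full 629,840-word vocabulary):

```python
# Sketch: regress log(frequency) on log(rank); report |slope| and R^2.
import numpy as np

def zipf_fit(freqs):
    freqs = np.sort(np.asarray(freqs, dtype=float))[::-1]
    ranks = np.arange(1, len(freqs) + 1)
    x, y = np.log(ranks), np.log(freqs)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    r2 = 1 - residuals.var() / y.var()
    return abs(slope), r2          # coefficient reported as a magnitude

coef, r2 = zipf_fit([3984194, 2517366, 2001305, 1514717, 1235287])
print(f"zipf coefficient ~ {coef:.4f}, R2 ~ {r2:.4f}")
```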
---
## 5. Word Embeddings Evaluation
![Embedding Isotropy](visualizations/embedding_isotropy.png)
![Similarity Matrix](visualizations/embedding_similarity.png)
![t-SNE Words](visualizations/tsne_words.png)
![t-SNE Sentences](visualizations/tsne_sentences.png)
### 5.1 Cross-Lingual Alignment
![Alignment Quality](visualizations/embedding_alignment_quality.png)
![Multilingual t-SNE](visualizations/embedding_tsne_multilingual.png)
### 5.2 Model Comparison
| Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
|-------|-----------|----------|------------------|---------------|----------------|
| **mono_32d** | 32 | 0.7374 | 0.3633 | N/A | N/A |
| **mono_64d** | 64 | 0.7024 | 0.2990 | N/A | N/A |
| **mono_128d** | 128 | 0.6203 | 0.2691 | N/A | N/A |
| **aligned_32d** | 32 | 0.7374 🏆 | 0.3635 | 0.1520 | 0.5340 |
| **aligned_64d** | 64 | 0.7024 | 0.2953 | 0.2380 | 0.6560 |
| **aligned_128d** | 128 | 0.6203 | 0.2655 | 0.3760 | 0.7180 |
### Key Findings
- **Best Isotropy:** aligned_32d with 0.7374 (more uniform distribution)
- **Semantic Density:** Average pairwise similarity of 0.3093. Lower values indicate better semantic separation.
- **Alignment Quality:** Aligned models achieve up to 37.6% R@1 in cross-lingual retrieval.
- **Recommendation:** 128d aligned for best cross-lingual performance
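The isotropy score follows the definition in the glossary below: the ratio of the smallest to largest singular value of the embedding matrix. A sketch, with mean-centering as an assumption about the pipeline and random vectors standing in for the released embeddings:

```python
# Sketch: isotropy = s_min / s_max over the singular values of the
# (mean-centered) embedding matrix; values near 1.0 mean vectors spread
# evenly in all directions.
import numpy as np

def isotropy(vectors):
    centered = vectors - vectors.mean(axis=0)   # centering is an assumption
    s = np.linalg.svd(centered, compute_uv=False)
    return s.min() / s.max()

emb = np.random.default_rng(0).normal(size=(10000, 32))
print(round(isotropy(emb), 4))   # close to 1.0 for random Gaussian vectors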
---
## 6. Morphological Analysis (Experimental)
This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
### 6.1 Productivity & Complexity
| Metric | Value | Interpretation | Recommendation |
|--------|-------|----------------|----------------|
| Productivity Index | **5.000** | High morphological productivity | Reliable analysis |
| Idiomaticity Gap | **0.225** | High formulaic/idiomatic content | - |
### 6.2 Affix Inventory (Productive Units)
These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts.
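A toy version of that substitutability test might look like the sketch below; the length bounds and the minimum-stem threshold are illustrative assumptions, not the pipeline's actual settings:

```python
# Sketch: a candidate suffix is kept when stripping it leaves a residue
# that is itself attested in the vocabulary, across enough distinct stems.
from collections import Counter

def productive_suffixes(vocab, max_len=3, min_stems=2):
    words = set(vocab)
    hits = Counter()
    for w in words:
        for k in range(1, max_len + 1):
            if len(w) > k + 2 and w[:-k] in words:  # stem stays >= 3 chars
                hits[w[-k:]] += 1
    return [s for s, n in hits.most_common() if n >= min_stems]

vocab = ["пијана", "пијаната", "проверка", "проверката",
         "рудници", "рудниците", "служби", "службите"]
print(productive_suffixes(vocab))   # e.g. ['та', 'те']
```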
#### Productive Prefixes
| Prefix | Examples |
|--------|----------|
| `-ั` | ัั‚ะตะฝะปะธะฒะธะป, ัะตะฒะตั€ะพะธัั‚ะพั‡ะตะฝ, ัะฐะปะผะธะฝะตะฝ |
| `-ะบะฐ` | ะบะฐะฝั‚ั€ะตะป, ะบะฐั‡, ะบะฐั‚ะพะปัะบะธ |
| `-ะฐ` | ะฐั€ะณะฐั, ะฐะทะธะปะฐะฝั‚ะธั‚ะต, ะฐะบะฝะธัั‚ะต |
| `-ะผะฐ` | ะผะฐะผัƒั†ะธ, ะผะฐะถะตะฝะธั‚ะต, ะผะฐั€ะธะฝะฐะดะฐั‚ะฐ |
| `-ะฟะพ` | ะฟะพะผะพั€ะธัะบะฐ, ะฟะพะดั˜ะฐะทะธั‡ะฝะฐั‚ะฐ, ะฟะพะปะพะถะฐั‚ |
| `-ะบ` | ะบะฐะฝั‚ั€ะตะป, ะบะปะฐะดะพัˆะฝะธั†ะฐ, ะบะพะธะฝะพะฝ |
| `-ะบะพ` | ะบะพะธะฝะพะฝ, ะบะพัˆะปะฐะฝะด, ะบะพะฟั€ะพะดัƒะบั‚ |
| `-s` | sbordone, superluminal, stralsunder |
#### Productive Suffixes
| Suffix | Examples |
|--------|----------|
| `-а` | душевина, електроника, пијаната |
| `-и` | издатоци, рудници, гликобелковини |
| `-е` | најневообичаените, субкултурните, царице |
| `-те` | најневообичаените, субкултурните, регистраторите |
| `-та` | пијаната, проверката, подјазичната |
| `-т` | еукариот, реверсот, меѓупарламентарниот |
| `-от` | еукариот, реверсот, меѓупарламентарниот |
| `-о` | витешкото, интимно, пароло |
### 6.3 Bound Stems (Lexical Roots)
Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid.
| Stem | Cohesion | Substitutability | Examples |
|------|----------|------------------|----------|
| `уваа` | 2.43x | 85 contexts | уваат, чуваа, жуваат |
| `увањ` | 2.04x | 160 contexts | лување, рување, чување |
| `увал` | 2.00x | 172 contexts | увала, јувал, дувал |
| `ијат` | 1.76x | 300 contexts | лијат, хијат, ријат |
| `ички` | 1.82x | 235 contexts | кички, нички, лички |
| `кедо` | 2.77x | 33 contexts | македо, алкедо, македон |
| `ањет` | 2.27x | 71 contexts | рањето, вањето, кањете |
| `нски` | 1.58x | 402 contexts | ронски, менски, ренски |
| `анск` | 1.34x | 935 contexts | канск, анска, данск |
| `иски` | 1.56x | 353 contexts | киски, тиски, писки |
| `инск` | 1.39x | 722 contexts | пинск, инско, минск |
| `онск` | 1.41x | 510 contexts | ронски, јонско, шонски |
### 6.4 Affix Compatibility (Co-occurrence)
This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
| Prefix | Suffix | Frequency | Examples |
|--------|--------|-----------|----------|
| `п-` | `-а` | 118 words | пресбикуза, пентесилеја |
| `с-` | `-а` | 108 words | самбра, скаса |
| `п-` | `-и` | 79 words | повелбени, пољани |
| `п-` | `-е` | 76 words | питите, поиде |
| `к-` | `-а` | 74 words | клитика, куиксама |
| `с-` | `-и` | 70 words | сукотаи, сапрофитии |
| `с-` | `-е` | 67 words | служите, софите |
| `по-` | `-а` | 66 words | поситна, почесна |
| `а-` | `-а` | 64 words | адарсана, аеторема |
| `б-` | `-а` | 62 words | безлисна, бозонската |
### 6.5 Recursive Morpheme Segmentation
Using **Recursive Hierarchical Substitutability**, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., `prefix-prefix-root-suffix`).
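The scoring behind Recursive Hierarchical Substitutability is not spelled out here, but the recursion itself is easy to picture. A greedy toy sketch with hand-picked affix inventories (illustrative assumptions, not the pipeline's learned units):

```python
# Sketch: peel known suffixes/prefixes recursively, keeping a minimum
# stem length, so nested affixes come apart layer by layer.
PREFIXES = ["по", "нај"]                      # illustrative inventories
SUFFIXES = ["те", "та", "ка", "от", "и", "а"]

def segment(word, min_stem=3):
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= min_stem:
            return segment(word[:-len(s)], min_stem) + [s]
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= min_stem:
            return [p] + segment(word[len(p):], min_stem)
    return [word]

print("-".join(segment("тркачката")))   # тркач-ка-та, cf. the table below
```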
| Word | Suggested Split | Confidence | Stem |
|------|-----------------|------------|------|
| клинетите | **`клинет-и-те`** | 7.5 | `и` |
| карантанците | **`карантанц-и-те`** | 7.5 | `и` |
| ҫемҫелӗхпалли | **`ҫемҫелӗхпал-л-и`** | 7.5 | `л` |
| кедровата | **`кедров-а-та`** | 7.5 | `а` |
| тркачката | **`тркач-ка-та`** | 7.5 | `ка` |
| пантотенат | **`пантотен-а-т`** | 7.5 | `а` |
| наранџито | **`наранџ-и-то`** | 7.5 | `и` |
| епросартан | **`епросар-та-н`** | 7.5 | `та` |
| стивенсовиот | **`стивенсов-и-от`** | 7.5 | `и` |
| организирано | **`организир-а-но`** | 7.5 | `а` |
| евроазијците | **`евроазијц-и-те`** | 7.5 | `и` |
| епистазата | **`епистаз-а-та`** | 7.5 | `а` |
| страдачите | **`страдач-и-те`** | 7.5 | `и` |
| поштарината | **`поштарин-а-та`** | 7.5 | `а` |
| дебатирано | **`дебатир-а-но`** | 7.5 | `а` |
### 6.6 Linguistic Interpretation
> **Automated Insight:**
> Macedonian shows high morphological productivity. The subword models are significantly more efficient than the word models, suggesting a rich system of affixation or compounding.
---
## 7. Summary & Recommendations
![Performance Dashboard](visualizations/performance_dashboard.png)
### Production Recommendations
| Component | Recommended | Rationale |
|-----------|-------------|-----------|
| Tokenizer | **64k BPE** | Best compression (4.78x) |
| N-gram | **2-gram (subword)** | Lowest perplexity (310) |
| Markov | **Context-4** | Highest predictability (94.0%) |
| Embeddings | **128d aligned** | Best cross-lingual performance (R@1 37.6%) |
---
## Appendix: Metrics Glossary & Interpretation Guide
This section provides definitions, intuitions, and guidance for interpreting the metrics used throughout this report.
### Tokenizer Metrics
**Compression Ratio**
> *Definition:* The ratio of characters to tokens (chars/token). Measures how efficiently the tokenizer represents text.
>
> *Intuition:* Higher compression means fewer tokens needed to represent the same text, reducing sequence lengths for downstream models. A 3x compression means ~3 characters per token on average.
>
> *What to seek:* Higher is generally better for efficiency, but extremely high compression may indicate overly aggressive merging that loses morphological information.
**Average Token Length (Fertility)**
> *Definition:* Mean number of characters per token produced by the tokenizer.
>
> *Intuition:* Reflects the granularity of tokenization. Longer tokens capture more context but may struggle with rare words; shorter tokens are more flexible but increase sequence length.
>
> *What to seek:* Balance between 2-5 characters for most languages. Arabic/morphologically-rich languages may benefit from slightly longer tokens.
**Unknown Token Rate (OOV Rate)**
> *Definition:* Percentage of tokens that map to the unknown/UNK token, indicating words the tokenizer cannot represent.
>
> *Intuition:* Lower OOV means better vocabulary coverage. High OOV indicates the tokenizer encounters many unseen character sequences.
>
> *What to seek:* Below 1% is excellent; below 5% is acceptable. BPE tokenizers typically achieve very low OOV due to subword fallback.
### N-gram Model Metrics
**Perplexity**
> *Definition:* Measures how "surprised" the model is by test data. Mathematically: 2^(cross-entropy). Lower values indicate better prediction.
>
> *Intuition:* If perplexity is 100, the model is as uncertain as if choosing uniformly among 100 options at each step. A perplexity of 10 means effectively choosing among 10 equally likely options.
>
> *What to seek:* Lower is better. Perplexity decreases with larger n-grams (more context). Values vary widely by language and corpus size.
**Entropy**
> *Definition:* Average information content (in bits) needed to encode the next token given the context. Related to perplexity: perplexity = 2^entropy.
>
> *Intuition:* High entropy means high uncertainty/randomness; low entropy means predictable patterns. Natural language typically has entropy between 1-4 bits per character.
>
> *What to seek:* Lower entropy indicates more predictable text patterns. Conditional (per-token) entropy falls as context grows; the joint n-gram entropies reported above rise with n because longer patterns carry more total information.
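A short worked example tying the two definitions together:

```python
# Sketch: entropy of a toy next-token distribution, and perplexity = 2**H.
import math

probs = [0.5, 0.25, 0.125, 0.125]                # sums to 1
entropy = -sum(p * math.log2(p) for p in probs)  # 1.75 bits
print(entropy, 2 ** entropy)   # ~3.36: like choosing among ~3.4 equal options
```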
**Coverage (Top-K)**
> *Definition:* Percentage of corpus occurrences explained by the top K most frequent n-grams.
>
> *Intuition:* High coverage with few patterns indicates repetitive/formulaic text; low coverage suggests diverse vocabulary usage.
>
> *What to seek:* Depends on use case. For language modeling, moderate coverage (40-60% with top-1000) is typical for natural text.
### Markov Chain Metrics
**Average Entropy**
> *Definition:* Mean entropy across all contexts, measuring average uncertainty in next-word prediction.
>
> *Intuition:* Lower entropy means the model is more confident about what comes next. Context-1 has high entropy (many possible next words); Context-4 has low entropy (few likely continuations).
>
> *What to seek:* Decreasing entropy with larger context sizes. Very low entropy (<0.1) indicates highly deterministic transitions.
**Branching Factor**
> *Definition:* Average number of unique next tokens observed for each context.
>
> *Intuition:* High branching = many possible continuations (flexible but uncertain); low branching = few options (predictable but potentially repetitive).
>
> *What to seek:* Branching factor should decrease with context size. Values near 1.0 indicate nearly deterministic chains.
**Predictability**
> *Definition:* Derived metric: (1 - normalized_entropy) ร— 100%. Indicates how deterministic the model's predictions are.
>
> *Intuition:* 100% predictability means the next word is always certain; 0% means completely random. Real text falls between these extremes.
>
> *What to seek:* Higher predictability for text generation quality, but too high (>98%) may produce repetitive output.
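Plugging the Markov table's numbers into the formula reproduces the reported values (the table's average entropies are already normalized to [0, 1]):

```python
# Sketch: predictability = (1 - normalized_entropy) * 100.
def predictability(normalized_entropy):
    return (1 - normalized_entropy) * 100

print(predictability(0.0598))   # context-4 word model -> 94.02, matching the table
```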
### Vocabulary & Zipf's Law Metrics
**Zipf's Coefficient**
> *Definition:* The slope of the log-log plot of word frequency vs. rank. Zipf's law predicts this should be approximately -1.
>
> *Intuition:* A coefficient near -1 indicates the corpus follows natural language patterns where a few words are very common and most words are rare.
>
> *What to seek:* Magnitudes between 0.8 and 1.2 (the coefficient is reported above as a magnitude) indicate a healthy natural-language distribution. Deviations may suggest domain-specific or artificial text.
**Rยฒ (Coefficient of Determination)**
> *Definition:* Measures how well the linear fit explains the frequency-rank relationship. Ranges from 0 to 1.
>
> *Intuition:* Rยฒ near 1.0 means the data closely follows Zipf's law; lower values indicate deviation from expected word frequency patterns.
>
> *What to seek:* Rยฒ > 0.95 is excellent; > 0.99 indicates near-perfect Zipf adherence typical of large natural corpora.
**Vocabulary Coverage**
> *Definition:* Cumulative percentage of corpus tokens accounted for by the top N words.
>
> *Intuition:* Shows how concentrated word usage is. If top-100 words cover 50% of text, the corpus relies heavily on common words.
>
> *What to seek:* Top-100 covering 30-50% is typical. Higher coverage indicates more repetitive text; lower suggests richer vocabulary.
### Word Embedding Metrics
**Isotropy**
> *Definition:* Measures how uniformly distributed vectors are in the embedding space. Computed as the ratio of minimum to maximum singular values.
>
> *Intuition:* High isotropy (near 1.0) means vectors spread evenly in all directions; low isotropy means vectors cluster in certain directions, reducing expressiveness.
>
> *What to seek:* Higher isotropy generally indicates better-quality embeddings. Values > 0.1 are reasonable; > 0.3 is good. Lower-dimensional embeddings tend to have higher isotropy.
**Average Norm**
> *Definition:* Mean magnitude (L2 norm) of word vectors in the embedding space.
>
> *Intuition:* Indicates the typical "length" of vectors. Consistent norms suggest stable training; high variance may indicate some words are undertrained.
>
> *What to seek:* Relatively consistent norms across models. The absolute value matters less than consistency (low std deviation).
**Cosine Similarity**
> *Definition:* Measures angular similarity between vectors, ranging from -1 (opposite) to 1 (identical direction).
>
> *Intuition:* Words with similar meanings should have high cosine similarity. This is the standard metric for semantic relatedness in embeddings.
>
> *What to seek:* Semantically related words should score > 0.5; unrelated words should be near 0. Synonyms often score > 0.7.
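For completeness, the metric itself in a few lines:

```python
# Sketch: cosine similarity = dot(u, v) / (|u| * |v|).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0])))  # 1.0: same direction
```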
**t-SNE Visualization**
> *Definition:* t-Distributed Stochastic Neighbor Embedding - a dimensionality reduction technique that preserves local structure for visualization.
>
> *Intuition:* Clusters in t-SNE plots indicate groups of semantically related words. Spread indicates vocabulary diversity; tight clusters suggest semantic coherence.
>
> *What to seek:* Meaningful clusters (e.g., numbers together, verbs together). Avoid over-interpreting distances - t-SNE preserves local, not global, structure.
### General Interpretation Guidelines
1. **Compare within model families:** Metrics are most meaningful when comparing models of the same type (e.g., 8k vs 64k tokenizer).
2. **Consider trade-offs:** Better performance on one metric often comes at the cost of another (e.g., compression vs. OOV rate).
3. **Context matters:** Optimal values depend on downstream tasks. Text generation may prioritize different metrics than classification.
4. **Corpus influence:** All metrics are influenced by corpus characteristics. Wikipedia text differs from social media or literature.
5. **Language-specific patterns:** Morphologically rich languages (like Arabic) may show different optimal ranges than analytic languages.
### Visualizations Index
| Visualization | Description |
|---------------|-------------|
| Tokenizer Compression | Compression ratios by vocabulary size |
| Tokenizer Fertility | Average token length by vocabulary |
| Tokenizer OOV | Unknown token rates |
| Tokenizer Total Tokens | Total tokens by vocabulary |
| N-gram Perplexity | Perplexity by n-gram size |
| N-gram Entropy | Entropy by n-gram size |
| N-gram Coverage | Top pattern coverage |
| N-gram Unique | Unique n-gram counts |
| Markov Entropy | Entropy by context size |
| Markov Branching | Branching factor by context |
| Markov Contexts | Unique context counts |
| Zipf's Law | Frequency-rank distribution with fit |
| Vocab Frequency | Word frequency distribution |
| Top 20 Words | Most frequent words |
| Vocab Coverage | Cumulative coverage curve |
| Embedding Isotropy | Vector space uniformity |
| Embedding Norms | Vector magnitude distribution |
| Embedding Similarity | Word similarity heatmap |
| Nearest Neighbors | Similar words for key terms |
| t-SNE Words | 2D word embedding visualization |
| t-SNE Sentences | 2D sentence embedding visualization |
| Position Encoding | Encoding method comparison |
| Model Sizes | Storage requirements |
| Performance Dashboard | Comprehensive performance overview |
---
## About This Project
### Data Source
Models trained on [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly) - a monthly snapshot of Wikipedia articles across 300+ languages.
### Project
A project by **[Wikilangs](https://wikilangs.org)** - Open-source NLP models for every Wikipedia language.
### Maintainer
[Omar Kamali](https://omarkamali.com) - [Omneity Labs](https://omneitylabs.com)
### Citation
If you use these models in your research, please cite:
```bibtex
@misc{wikilangs2025,
  author      = {Kamali, Omar},
  title       = {Wikilangs: Open NLP Models for Wikipedia Languages},
  year        = {2025},
  doi         = {10.5281/zenodo.18073153},
  publisher   = {Zenodo},
  url         = {https://huggingface.co/wikilangs},
  institution = {Omneity Labs}
}
```
### License
MIT License - Free for academic and commercial use.
### Links
- ๐ŸŒ Website: [wikilangs.org](https://wikilangs.org)
- ๐Ÿค— Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
- ๐Ÿ“Š Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
- ๐Ÿ‘ค Author: [Omar Kamali](https://huggingface.co/omarkamali)
- ๐Ÿค Sponsor: [Featherless AI](https://featherless.ai)
---
*Generated by Wikilangs Models Pipeline*
*Report Date: 2026-01-10 18:37:02*