omarkamali committed
Commit 16bbcc3 · verified · 1 Parent(s): b130b56

Upload all models and assets for ab (20251001)

This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
Files changed (50):
  1. README.md +309 -140
  2. models/embeddings/monolingual/ab_128d.bin +2 -2
  3. models/embeddings/monolingual/ab_128d_metadata.json +5 -3
  4. models/embeddings/monolingual/ab_32d.bin +2 -2
  5. models/embeddings/monolingual/ab_32d_metadata.json +5 -3
  6. models/embeddings/monolingual/ab_64d.bin +2 -2
  7. models/embeddings/monolingual/ab_64d_metadata.json +5 -3
  8. models/subword_markov/ab_markov_ctx1_subword.parquet +2 -2
  9. models/subword_markov/ab_markov_ctx1_subword_metadata.json +2 -2
  10. models/subword_markov/ab_markov_ctx2_subword.parquet +2 -2
  11. models/subword_markov/ab_markov_ctx2_subword_metadata.json +2 -2
  12. models/subword_markov/ab_markov_ctx3_subword.parquet +2 -2
  13. models/subword_markov/ab_markov_ctx3_subword_metadata.json +2 -2
  14. models/subword_markov/ab_markov_ctx4_subword.parquet +2 -2
  15. models/subword_markov/ab_markov_ctx4_subword_metadata.json +2 -2
  16. models/subword_ngram/ab_2gram_subword.parquet +2 -2
  17. models/subword_ngram/ab_2gram_subword_metadata.json +2 -2
  18. models/subword_ngram/ab_3gram_subword.parquet +2 -2
  19. models/subword_ngram/ab_3gram_subword_metadata.json +2 -2
  20. models/subword_ngram/ab_4gram_subword.parquet +2 -2
  21. models/subword_ngram/ab_4gram_subword_metadata.json +2 -2
  22. models/tokenizer/ab_tokenizer_16k.model +2 -2
  23. models/tokenizer/ab_tokenizer_16k.vocab +0 -0
  24. models/tokenizer/ab_tokenizer_32k.model +2 -2
  25. models/tokenizer/ab_tokenizer_32k.vocab +0 -0
  26. models/tokenizer/ab_tokenizer_64k.model +2 -2
  27. models/tokenizer/ab_tokenizer_64k.vocab +0 -0
  28. models/tokenizer/ab_tokenizer_8k.model +2 -2
  29. models/tokenizer/ab_tokenizer_8k.vocab +0 -0
  30. models/vocabulary/ab_vocabulary.parquet +2 -2
  31. models/vocabulary/ab_vocabulary_metadata.json +10 -9
  32. models/word_markov/ab_markov_ctx1_word.parquet +2 -2
  33. models/word_markov/ab_markov_ctx1_word_metadata.json +2 -2
  34. models/word_markov/ab_markov_ctx2_word.parquet +2 -2
  35. models/word_markov/ab_markov_ctx2_word_metadata.json +2 -2
  36. models/word_markov/ab_markov_ctx3_word.parquet +2 -2
  37. models/word_markov/ab_markov_ctx3_word_metadata.json +2 -2
  38. models/word_markov/ab_markov_ctx4_word.parquet +2 -2
  39. models/word_markov/ab_markov_ctx4_word_metadata.json +2 -2
  40. models/word_ngram/ab_2gram_word.parquet +2 -2
  41. models/word_ngram/ab_2gram_word_metadata.json +2 -2
  42. models/word_ngram/ab_3gram_word.parquet +2 -2
  43. models/word_ngram/ab_3gram_word_metadata.json +2 -2
  44. models/word_ngram/ab_4gram_word.parquet +2 -2
  45. models/word_ngram/ab_4gram_word_metadata.json +2 -2
  46. visualizations/embedding_isotropy.png +0 -0
  47. visualizations/embedding_norms.png +0 -0
  48. visualizations/embedding_similarity.png +2 -2
  49. visualizations/markov_branching.png +0 -0
  50. visualizations/markov_contexts.png +0 -0
README.md CHANGED
@@ -23,14 +23,14 @@ dataset_info:
 metrics:
 - name: best_compression_ratio
   type: compression
-  value: 4.203
+  value: 4.193
 - name: best_isotropy
   type: isotropy
-  value: 0.8443
+  value: 0.8185
 - name: vocabulary_size
   type: vocab
-  value: 34914
-generated: 2025-12-27
+  value: 0
+generated: 2026-01-03
 ---
 
 # AB - Wikilangs Models
@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ### Models & Assets
 
 - Tokenizers (8k, 16k, 32k, 64k)
-- N-gram models (2, 3, 4-gram)
-- Markov chains (context of 1, 2, 3 and 4)
+- N-gram models (2, 3, 4, 5-gram)
+- Markov chains (context of 1, 2, 3, 4 and 5)
 - Subword N-gram and Markov chains
-- Embeddings in various sizes and dimensions
+- Embeddings in various sizes and dimensions (aligned and unaligned)
 - Language Vocabulary
 - Language Statistics
+
 ![Performance Dashboard](visualizations/performance_dashboard.png)
 
 ### Analysis and Evaluation
@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
 - [4. Vocabulary Analysis](#4-vocabulary-analysis)
 - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
-- [6. Summary & Recommendations](#6-summary--recommendations)
+- [6. Morphological Analysis (Experimental)](#6-morphological-analysis)
+- [7. Summary & Recommendations](#7-summary--recommendations)
 - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
 - [Visualizations Index](#visualizations-index)
 
@@ -68,59 +70,57 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 
 ![Tokenizer Compression](visualizations/tokenizer_compression.png)
 
+![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
+
+![Tokenizer OOV](visualizations/tokenizer_oov.png)
+
+![Total Tokens](visualizations/tokenizer_total_tokens.png)
+
 ### Results
 
 | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
 |------------|-------------|---------------|----------|--------------|
-| **8k** | 3.211x | 3.15 | 0.1756% | 257,918 |
-| **16k** | 3.553x | 3.49 | 0.1943% | 233,133 |
-| **32k** | 3.880x | 3.81 | 0.2122% | 213,462 |
-| **64k** | 4.203x 🏆 | 4.13 | 0.2299% | 197,072 |
+| **8k** | 3.304x | 3.31 | 0.1494% | 223,505 |
+| **16k** | 3.652x | 3.66 | 0.1652% | 202,175 |
+| **32k** | 3.908x | 3.91 | 0.1768% | 188,952 |
+| **64k** | 4.193x 🏆 | 4.20 | 0.1897% | 176,103 |
 
 ### Tokenization Examples
 
 Below are sample sentences tokenized with each vocabulary size:
 
-**Sample 1:** `Ѫ, ѫ — кириллтәи аҩыратә архаикатә иажәхьоу нбан.
-
-Азхьарԥшқәа
-Graphemica (Ѫ)
-...`
+**Sample 1:** `Ψ, ψ — бырзентәи аҩыратә нбан. Азхьарԥшқәа Graphemica (Ψ) Graphemica (ψ) аҩыратә...`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁ ѫ , ▁ ѫ ▁— ▁кириллтәи ▁аҩыратә ▁архаикатә ▁иажәхьоу ... (+11 more)` | 21 |
-| 16k | `▁ ѫ , ѫ ▁— ▁кириллтәи ▁аҩыратә ▁архаикатә ▁иажәхьоу ... (+11 more)` | 21 |
-| 32k | `▁ ѫ , ѫ ▁— ▁кириллтәи ▁аҩыратә ▁архаикатә ▁иажәхьоу ... (+11 more)` | 21 |
-| 64k | `▁ ѫ , ѫ ▁— ▁кириллтәи ▁аҩыратә ▁архаикатә ▁иажәхьоу ... (+11 more)` | 21 |
+| 8k | `▁ ψ , ▁ ψ ▁— ▁бырзентәи ▁аҩыратә ▁нбан . ... (+11 more)` | 21 |
+| 16k | `▁ψ , ▁ψ ▁— ▁бырзентәи ▁аҩыратә ▁нбан . ▁азхьарԥшқәа ▁graphemica ... (+9 more)` | 19 |
+| 32k | `▁ψ , ▁ψ ▁— ▁бырзентәи ▁аҩыратә ▁нбан . ▁азхьарԥшқәа ▁graphemica ... (+9 more)` | 19 |
+| 64k | `▁ψ , ▁ψ ▁— ▁бырзентәи ▁аҩыратә ▁нбан . ▁азхьарԥшқәа ▁graphemica ... (+9 more)` | 19 |
 
-**Sample 2:** `Аби́а () — ҵиаа. Ашәыр. Ашәырҵла.
-
-Ахьарԥшқәа
-
-б`
+**Sample 2:** `Амолдав бызшәа, амолдаван бызшәа (limba moldovenească, лимба молдовеняскэ) Азгәа...`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁аби ́ а ▁() ▁— ▁ҵиаа . ▁ашәыр . ▁аш ... (+5 more)` | 15 |
-| 16k | `▁аби ́ а ▁() ▁— ▁ҵиаа . ▁ашәыр . ▁ашәырҵ ... (+4 more)` | 14 |
-| 32k | `▁аби ́а ▁() ▁— ▁ҵиаа . ▁ашәыр . ▁ашәырҵла . ... (+2 more)` | 12 |
-| 64k | `▁аби ́а ▁() ▁— ▁ҵиаа . ▁ашәыр . ▁ашәырҵла . ... (+2 more)` | 12 |
+| 8k | `▁ам ол да в ▁бызшәа , ▁ам ол да ван ... (+24 more)` | 34 |
+| 16k | `▁амол дав ▁бызшәа , ▁амол да ван ▁бызшәа ▁( l ... (+20 more)` | 30 |
+| 32k | `▁амолдав ▁бызшәа , ▁амол да ван ▁бызшәа ▁( l imba ... (+13 more)` | 23 |
+| 64k | `▁амолдав ▁бызшәа , ▁амолдаван ▁бызшәа ▁( limba ▁moldoveneasc ă , ... (+5 more)` | 15 |
 
-**Sample 3:** `Ҝ, ҝ — кириллтәи аҩыратә нбан.`
+**Sample 3:** `Ардешен () – Ҭырқтәыла ақалақь. Иаланхо . Ахьарԥшқәа ақалақьқәа`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁ ҝ ,ҝ ▁— ▁кириллтәи ▁аҩыратә ▁нбан .` | 10 |
-| 16k | `▁ ҝ ,ҝ ▁— ▁кириллтәи ▁аҩыратә ▁нбан .` | 10 |
-| 32k | `▁ ҝ , ҝ ▁— ▁кириллтәи ▁аҩыратә ▁нбан .` | 10 |
-| 64k | `▁ ҝ , ҝ ▁— ▁кириллтәи ▁аҩыратә ▁нбан .` | 10 |
+| 8k | `▁ар деш ен() ▁– ▁ҭырқтәыла ▁ақалақь . ▁иаланхо ▁. ... (+2 more)` | 12 |
+| 16k | `▁ар деш ен() ▁– ▁ҭырқтәыла ▁ақалақь . ▁иаланхо ▁. ... (+2 more)` | 12 |
+| 32k | `▁ардешен ▁() ▁– ▁ҭырқтәыла ▁ақалақь . ▁иаланхо ▁. ▁ахьарԥшқәа ▁ақалақьқәа` | 10 |
+| 64k | `▁ардешен ▁() ▁– ▁ҭырқтәыла ▁ақалақь . ▁иаланхо ▁. ▁ахьарԥшқәа ▁ақалақьқәа` | 10 |
 
 ### Key Findings
 
-- **Best Compression:** 64k achieves 4.203x compression
-- **Lowest UNK Rate:** 8k with 0.1756% unknown tokens
+- **Best Compression:** 64k achieves 4.193x compression
+- **Lowest UNK Rate:** 8k with 0.1494% unknown tokens
 - **Trade-off:** Larger vocabularies improve compression but increase model size
 - **Recommendation:** 32k vocabulary provides optimal balance for production use
 
@@ -129,57 +129,89 @@ Below are sample sentences tokenized with each vocabulary size:
 
 ![N-gram Perplexity](visualizations/ngram_perplexity.png)
 
+![N-gram Unique](visualizations/ngram_unique.png)
+
 ![N-gram Coverage](visualizations/ngram_coverage.png)
 
 ### Results
 
-| N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
-|--------|------------|---------|----------------|------------------|-------------------|
-| **2-gram** | 2,750 🏆 | 11.43 | 13,494 | 35.3% | 57.9% |
-| **2-gram** | 464 🏆 | 8.86 | 5,850 | 56.1% | 94.4% |
-| **3-gram** | 2,460 | 11.26 | 16,782 | 38.6% | 56.9% |
-| **3-gram** | 3,385 | 11.72 | 40,776 | 25.5% | 64.3% |
-| **4-gram** | 3,267 | 11.67 | 27,732 | 37.4% | 51.5% |
-| **4-gram** | 13,192 | 13.69 | 145,474 | 16.1% | 43.3% |
+| N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
+|--------|---------|------------|---------|----------------|------------------|-------------------|
+| **2-gram** | Word | 724 | 9.50 | 5,814 | 51.4% | 72.0% |
+| **2-gram** | Subword | 363 | 8.50 | 4,104 | 60.3% | 96.8% |
+| **3-gram** | Word | 252 🏆 | 7.98 | 5,216 | 66.5% | 80.6% |
+| **3-gram** | Subword | 2,675 | 11.39 | 28,199 | 28.1% | 67.5% |
+| **4-gram** | Word | 342 | 8.42 | 9,800 | 64.0% | 74.0% |
+| **4-gram** | Subword | 11,090 | 13.44 | 112,541 | 16.8% | 44.7% |
 
 ### Top 5 N-grams by Size
 
-**2-grams:**
+**2-grams (Word):**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `акатегориа :` | 5,231 |
-| 2 | `рыԥсҭазаара иалҵит` | 3,971 |
-| 3 | `иит рыԥсҭазаара` | 3,938 |
-| 4 | `нанҳәамза цәыббрамза` | 3,601 |
-| 5 | `жәабранмза хәажәкырамза` | 3,601 |
+| 1 | `рыԥсҭазаара иалҵит` | 3,971 |
+| 2 | `иит рыԥсҭазаара` | 3,938 |
+| 3 | `рашәарамза ԥхынгәымза` | 3,603 |
+| 4 | `жәабранмза хәажәкырамза` | 3,603 |
+| 5 | `ажьырныҳәамза жәабранмза` | 3,602 |
 
-**3-grams:**
+**3-grams (Word):**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
 | 1 | `иит рыԥсҭазаара иалҵит` | 3,938 |
-| 2 | `ажьырныҳәамза жәабранмза хәажәкырамза` | 3,601 |
-| 3 | `жьҭаарамза абҵарамза ԥхынҷкәынмза` | 3,601 |
-| 4 | `мшаԥымза лаҵарамза рашәарамза` | 3,601 |
-| 5 | `ԥхынгәымза нанҳәамза цәыббрамза` | 3,601 |
+| 2 | `цәыббрамза жьҭаарамза абҵарамза` | 3,602 |
+| 3 | `нанҳәамза цәыббрамза жьҭаарамза` | 3,601 |
+| 4 | `рашәарамза ԥхынгәымза нанҳәамза` | 3,601 |
+| 5 | `мшаԥымза лаҵарамза рашәарамза` | 3,601 |
 
-**4-grams:**
+**4-grams (Word):**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `лаҵарамза рашәарамза ԥхынгәымза нанҳәамза` | 3,601 |
-| 2 | `рашәарамза ԥхынгәымза нанҳәамза цәыббрамза` | 3,601 |
-| 3 | `ахҭысқəа ажьырныҳәамза жәабранмза хәажәкырамза` | 3,601 |
-| 4 | `мшаԥымза лаҵарамза рашәарамза ԥхынгәымза` | 3,601 |
-| 5 | `ԥхынгәымза нанҳәамза цәыббрамза жьҭаарамза` | 3,601 |
+| 1 | `мшаԥымза лаҵарамза рашәарамза ԥхынгәымза` | 3,601 |
+| 2 | `лаҵарамза рашәарамза ԥхынгәымза нанҳәамза` | 3,601 |
+| 3 | `рашәарамза ԥхынгәымза нанҳәамза цәыббрамза` | 3,601 |
+| 4 | `ԥхынгәымза нанҳәамза цәыббрамза жьҭаарамза` | 3,601 |
+| 5 | `нанҳәамза цәыббрамза жьҭаарамза абҵарамза` | 3,601 |
+
+**2-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | _` | 154,707 |
+| 2 | `_ а` | 149,768 |
+| 3 | а` | 100,539 |
+| 4 | р` | 84,594 |
+| 5 | а` | 75,990 |
+
+**3-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `а р а` | 50,267 |
+| 2 | `м з а` | 45,869 |
+| 3 | `з а _` | 44,868 |
+| 4 | `а _ а` | 35,436 |
+| 5 | `а м з` | 31,355 |
+
+**4-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `м з а _` | 44,436 |
+| 2 | `а м з а` | 30,784 |
+| 3 | `р а м з` | 22,744 |
+| 4 | `а р а _` | 19,480 |
+| 5 | `қ ә а _` | 17,506 |
 
 ### Key Findings
 
-- **Best Perplexity:** 2-gram with 464
+- **Best Perplexity:** 3-gram (word) with 252
 - **Entropy Trend:** Decreases with larger n-grams (more predictable)
-- **Coverage:** Top-1000 patterns cover ~43% of corpus
+- **Coverage:** Top-1000 patterns cover ~45% of corpus
 - **Recommendation:** 4-gram or 5-gram for best predictive performance
 
 ---
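The n-gram assets changed later in this commit are Parquet count tables, and the perplexity, entropy, and coverage columns above can all be derived from raw counts. A sketch assuming `ngram`/`count` columns and the plain definitions H = −Σ p·log₂p and PPL = 2^H; neither the schema nor the pipeline's exact formulas are given in this diff:

```python
# Sketch: entropy/perplexity/coverage over n-gram counts in Parquet.
# Assumes columns "ngram" and "count"; the real schema may differ.
import numpy as np
import pandas as pd

df = pd.read_parquet("models/word_ngram/ab_3gram_word.parquet")

p = df["count"] / df["count"].sum()          # empirical n-gram distribution
entropy = -(p * np.log2(p)).sum()            # bits per n-gram
perplexity = 2 ** entropy

top = df.sort_values("count", ascending=False)
top100_coverage = top["count"].head(100).sum() / df["count"].sum()

print(f"unique n-grams: {len(df):,}")
print(f"entropy: {entropy:.2f} bits, perplexity: {perplexity:,.0f}")
print(f"top-100 coverage: {top100_coverage:.1%}")
```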
@@ -187,55 +219,86 @@ Below are sample sentences tokenized with each vocabulary size:
 
 ![Markov Entropy](visualizations/markov_entropy.png)
 
+![Markov Contexts](visualizations/markov_contexts.png)
+
 ![Markov Branching](visualizations/markov_branching.png)
 
 ### Results
 
-| Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
-|---------|-------------|------------|------------------|-----------------|----------------|
-| **1** | 0.5772 | 1.492 | 3.62 | 99,604 | 42.3% |
-| **1** | 1.5567 | 2.942 | 13.88 | 876 | 0.0% |
-| **2** | 0.1878 | 1.139 | 1.43 | 360,470 | 81.2% |
-| **2** | 1.2241 | 2.336 | 6.90 | 12,157 | 0.0% |
-| **3** | 0.0635 | 1.045 | 1.11 | 515,280 | 93.6% |
-| **3** | 0.7258 | 1.654 | 3.34 | 83,923 | 27.4% |
-| **4** | 0.0257 🏆 | 1.018 | 1.04 | 573,219 | 97.4% |
-| **4** | 0.4863 🏆 | 1.401 | 2.16 | 280,678 | 51.4% |
+| Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
+|---------|---------|-------------|------------|------------------|-----------------|----------------|
+| **1** | Word | 0.6658 | 1.586 | 3.61 | 90,583 | 33.4% |
+| **1** | Subword | 1.3373 | 2.527 | 10.82 | 872 | 0.0% |
+| **2** | Word | 0.1207 | 1.087 | 1.22 | 326,705 | 87.9% |
+| **2** | Subword | 1.0105 | 2.015 | 5.95 | 9,435 | 0.0% |
+| **3** | Word | 0.0295 | 1.021 | 1.04 | 396,742 | 97.1% |
+| **3** | Subword | 0.7768 | 1.713 | 3.69 | 56,063 | 22.3% |
+| **4** | Word | 0.0100 🏆 | 1.007 | 1.01 | 412,289 | 99.0% |
+| **4** | Subword | 0.5289 | 1.443 | 2.33 | 206,852 | 47.1% |
 
-### Generated Text Samples
+### Generated Text Samples (Word-based)
 
-Below are text samples generated from each Markov chain model:
+Below are text samples generated from each word-based Markov chain model:
 
 **Context Size 1:**
 
-1. `, аил - маклаи ихьӡ зху аҟәатәи аҳәынҭқарратә педагогтә институт . кёльн - рико ) ,`
-2. `. алитература ахырхарҭала . уи азҵаара азыҳәан қьалышь - 1528 ашықәсқәа рзы агазет « titus andronicu...`
-3. `- зшәышықәса агьама змоу акоуп азеипш гәабзиарахьчара аусхк аҿы ԥаҵаду ҳәа иашьҭан . акатегориа : в`
+1. `уи иалнаршеит абанкбжьаратә ҳасабкрақәа рҭыԥ инаркны шықәсанӡа наринџьованиархангельский а лазарев м...`
+2. `рыԥсҭазаара иалҵит шықәса нанҳәа 5 ԥхынҷкәы́н 22 27 азы иҭыҵит раԥхьа ақырҭуа ҭыԥҳа викторина ж ж`
+3. `иит рыԥсҭазаара иалҵит дидим халкентер абырзен бызшәала адраматургиа иақәшәаны ииашаҵәҟьаны еилкааӡа...`
 
 **Context Size 2:**
 
-1. `акатегориа : аԥсны аиҭагаҩцәа акатегориа : аҩада атерриториа атерриториа . ақалақьқәа ақалақь га...`
-2. `иит рыԥсҭазаара иалҵит : друз иулии цезарь германики агриппинәи рԥа ( дыԥсит ? ? ) азхьарԥшқәа`
-3. `мшаԥымза лаҵарамза рашәарамза ԥхынгәымза нанҳәамза цәыббрамза жьҭаарамза абҵарамза ԥхынҷкәынмза иит ...`
+1. `иит рыԥсҭазаара иалҵит клеопатра селена ii марк антонии клеопатреи antony and cleopatra кориолан cor...`
+2. `рашәарамза ԥхынгәымза нанҳәамза цәыббрамза жьҭаарамза абҵарамза ԥхынҷкәынмза иит рыԥсҭазаара иалҵит ...`
+3. `жәабранмза хәажәкырамза мшаԥымза лаҵарамза рашәарамза ԥхынгәымза нанҳәамза цәыббрамза жьҭаарамза абҵ...`
 
 **Context Size 3:**
 
-1. `мшаԥымза лаҵарамза рашәарамза ԥхынгәымза нанҳәамза цәыббрамза жьҭаарамза абҵарамза ԥхынҷкәынмза иит ...`
-2. `рашәарамза ԥхынгәымза нанҳәамза цәыббрамза жьҭаарамза абҵарамза ԥхынҷкәынмза иит рыԥсҭазаара иалҵит ...`
-3. `жьҭаарамза абҵарамза ԥхынҷкәынмза иит рыԥсҭазаара иалҵит арминии германиатә херуски аимшьҭра рхада...`
+1. `цәыббрамза жьҭаарамза абҵарамза ԥхынҷкәынмза иит калигула римтәи аимператор дыԥсит 69 ш рыԥсҭазаара ...`
+2. `жьҭаарамза абҵарамза ԥхынҷкәынмза иит рыԥсҭазаара иалҵит аныҳәақәа араԥтә ар амш аҳәаахьчаҩцәа рамш ...`
+3. `ажьырныҳәамза жәабранмза хәажәкырамза мшаԥымза лаҵарамза рашәарамза ԥхынгәымза нанҳәамза цәыббрамза ...`
 
 **Context Size 4:**
 
-1. `мшаԥымза лаҵарамза рашәарамза ԥхынгәымза нанҳәамза цәыббрамза жьҭаарамза абҵарамза ԥхынҷкәынмза иит ...`
-2. `нанҳәамза цәыббрамза жьҭаарамза абҵарамза ԥхынҷкәынмза иит рыԥсҭазаара иалҵит арминии германиатә х...`
-3. `цәыббрамза жьҭаарамза абҵарамза ԥхынҷкәынмза иит рыԥсҭазаара иалҵит арминии – германиатә херуски аим...`
+1. `нанҳәамза цәыббрамза жьҭаарамза абҵарамза ԥхынҷкәынмза иит рыԥсҭазаара иалҵит арминии германиатә хер...`
+2. `цәыббрамза жьҭаарамза абҵарамза ԥхынҷкәынмза иит рыԥсҭазаара иалҵит арминии германиатә херуски аимшь...`
+3. `рашәарамза ԥхынгәымза нанҳәамза цәыббрамза жьҭаарамза абҵарамза ԥхынҷкәынмза иит рыԥсҭазаара иалҵит ...`
+
+### Generated Text Samples (Subword-based)
+
+Below are text samples generated from each subword-based Markov chain model:
+
+**Context Size 1:**
+
+1. `аил_даит.._аарԥа`
+2. `_дигьҭақәанбрббс`
+3. `иунмақә_ser_иала`
+
+**Context Size 2:**
+
+1. `а_иазара,_иа_мамҭ`
+2. `_ашықәаԥсҭақәа_из`
+3. `рамза_иционва,_30`
+
+**Context Size 3:**
+
+1. `ара_амшын._уи_аҩны`
+2. `мза_ԥхынгәымза_абе`
+3. `за_мшаԥысуа_кәу._а`
+
+**Context Size 4:**
+
+1. `мза_нанҳәамҭа._аҩаӡ`
+2. `амза_рашәара_мап_рц`
+3. `рамза_ԥхынгәы_9,_ш.`
 
 ### Key Findings
 
-- **Best Predictability:** Context-4 with 97.4% predictability
+- **Best Predictability:** Context-4 (word) with 99.0% predictability
 - **Branching Factor:** Decreases with context size (more deterministic)
-- **Memory Trade-off:** Larger contexts require more storage (280,678 contexts)
+- **Memory Trade-off:** Larger contexts require more storage (206,852 contexts)
 - **Recommendation:** Context-3 or Context-4 for text generation
 
 ---
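Samples like the ones above come from weighted sampling over each context's next-token distribution. A sketch against the transition tables shipped in this commit (`models/word_markov/*.parquet`), assuming `context`/`next`/`count` columns and a space-joined context key; this diff does not show the actual schema:

```python
# Sketch: sample from a context-2 word Markov chain stored in Parquet.
# Column names ("context", "next", "count") and the space-joined context
# encoding are assumptions.
import random
import pandas as pd

df = pd.read_parquet("models/word_markov/ab_markov_ctx2_word.parquet")
table = {ctx: grp for ctx, grp in df.groupby("context")}

out = random.choice(list(table)).split()      # seed with a known context
for _ in range(30):
    grp = table.get(" ".join(out[-2:]))       # last 2 words form the key
    if grp is None:                           # dead end: unseen context
        break
    nxt = random.choices(grp["next"].tolist(), weights=grp["count"].tolist())[0]
    out.append(nxt)

print(" ".join(out))
```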
@@ -251,64 +314,64 @@ Below are text samples generated from each Markov chain model:
 
 | Metric | Value |
 |--------|-------|
-| Vocabulary Size | 34,914 |
-| Total Tokens | 483,415 |
-| Mean Frequency | 13.85 |
+| Vocabulary Size | 32,686 |
+| Total Tokens | 440,475 |
+| Mean Frequency | 13.48 |
 | Median Frequency | 3 |
-| Frequency Std Dev | 106.12 |
+| Frequency Std Dev | 100.81 |
 
 ### Most Common Words
 
 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | акатегориа | 5,263 |
-| 2 | уи | 4,164 |
-| 3 | рыԥсҭазаара | 4,025 |
-| 4 | иит | 3,987 |
-| 5 | иалҵит | 3,980 |
-| 6 | лаҵарамза | 3,888 |
-| 7 | жәабранмза | 3,837 |
-| 8 | хәажәкырамза | 3,833 |
-| 9 | ԥхынҷкәынмза | 3,805 |
-| 10 | абҵарамза | 3,804 |
+| 1 | уи | 4,159 |
+| 2 | рыԥсҭазаара | 4,025 |
+| 3 | иит | 3,987 |
+| 4 | иалҵит | 3,980 |
+| 5 | лаҵарамза | 3,752 |
+| 6 | жәабранмза | 3,722 |
+| 7 | хәажәкырамза | 3,701 |
+| 8 | абҵарамза | 3,701 |
+| 9 | нанҳәамза | 3,696 |
+| 10 | ԥхынҷкәынмза | 3,696 |
 
 ### Least Common Words (from vocabulary)
 
 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | адрес | 2 |
-| 2 | extended | 2 |
-| 3 | stream | 2 |
-| 4 | block | 2 |
-| 5 | stru | 2 |
-| 6 | compressed | 2 |
-| 7 | draft | 2 |
-| 8 | preston | 2 |
-| 9 | видеохәмарроуп | 2 |
-| 10 | авидеохәмаррақәа | 2 |
+| 1 | slang | 2 |
+| 2 | пуэрто | 2 |
+| 3 | испантәи | 2 |
+| 4 | reggaetón | 2 |
+| 5 | маратон | 2 |
+| 6 | урымтәыла | 2 |
+| 7 | византии | 2 |
+| 8 | белобров | 2 |
+| 9 | длины | 2 |
+| 10 | акармара | 2 |
 
 ### Zipf's Law Analysis
 
 | Metric | Value |
 |--------|-------|
-| Zipf Coefficient | 0.9724 |
-| R² (Goodness of Fit) | 0.994461 |
+| Zipf Coefficient | 0.9638 |
+| R² (Goodness of Fit) | 0.995375 |
 | Adherence Quality | **excellent** |
 
 ### Coverage Analysis
 
 | Top N Words | Coverage |
 |-------------|----------|
-| Top 100 | 30.1% |
-| Top 1,000 | 55.4% |
-| Top 5,000 | 76.6% |
-| Top 10,000 | 85.3% |
+| Top 100 | 30.3% |
+| Top 1,000 | 55.8% |
+| Top 5,000 | 76.9% |
+| Top 10,000 | 85.7% |
 
 ### Key Findings
 
-- **Zipf Compliance:** R²=0.9945 indicates excellent adherence to Zipf's law
-- **High Frequency Dominance:** Top 100 words cover 30.1% of corpus
-- **Long Tail:** 24,914 words needed for remaining 14.7% coverage
+- **Zipf Compliance:** R²=0.9954 indicates excellent adherence to Zipf's law
+- **High Frequency Dominance:** Top 100 words cover 30.3% of corpus
+- **Long Tail:** 22,686 words needed for remaining 14.3% coverage
 
 ---
 ## 5. Word Embeddings Evaluation
321
 
322
  ![t-SNE Sentences](visualizations/tsne_sentences.png)
323
 
324
- ### Model Comparison
325
 
326
- | Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
327
- |-------|------------|-----------|----------|----------|----------|
328
- | **mono_32d** | 12,418 | 32 | 3.919 | 0.892 | 0.8443 🏆 |
329
- | **mono_64d** | 12,418 | 64 | 4.225 | 0.826 | 0.5913 |
330
- | **mono_128d** | 12,418 | 128 | 4.285 | 0.827 | 0.1726 |
331
- | **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |
 
 
 
 
 
 
332
 
333
  ### Key Findings
334
 
335
- - **Best Isotropy:** mono_32d with 0.8443 (more uniform distribution)
336
- - **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
337
- - **Vocabulary Coverage:** All models cover 12,418 words
338
- - **Recommendation:** 100d for balanced semantic capture and efficiency
339
 
340
  ---
341
- ## 6. Summary & Recommendations
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
342
 
343
  ![Performance Dashboard](visualizations/performance_dashboard.png)
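Two of the quantities in the hunk above are easy to sanity-check yourself. First, isotropy: it measures how uniformly the embedding vectors spread across directions, and one common proxy is the eigenvalue spread of the covariance of the centered embedding matrix; the pipeline's exact definition is not stated in this diff, so this sketch is illustrative only:

```python
# Sketch: a common isotropy proxy for an embedding matrix E (n × d) —
# the ratio of the smallest to largest covariance eigenvalue.
# The pipeline's exact metric may differ.
import numpy as np

def isotropy(E: np.ndarray) -> float:
    E = E - E.mean(axis=0)                 # center the embeddings
    cov = (E.T @ E) / len(E)               # d × d covariance
    eig = np.linalg.eigvalsh(cov)          # eigenvalues, ascending
    return float(eig[0] / eig[-1])         # 1.0 = perfectly isotropic

E = np.random.randn(11_829, 32)            # stand-in for the mono_32d vectors
print(f"isotropy ≈ {isotropy(E):.4f}")     # near 1 for random Gaussian vectors
```

Second, the affix inventory of §6.2 is described as a substitutability test: a unit counts as an affix if stripping it leaves a stem that also occurs with other endings. A naive sketch of that test over a plain vocabulary set; the suffix-length range and threshold are invented for illustration:

```python
# Sketch of the §6.2 substitutability test for suffixes. A candidate
# suffix is kept if it attaches to many distinct stems that ALSO occur
# with some other ending. Lengths 1-4 and min_stems=5 are guesses.
from collections import defaultdict

def productive_suffixes(vocab: set[str], max_len: int = 4, min_stems: int = 5) -> dict[str, int]:
    stems_by_suffix: dict[str, set[str]] = defaultdict(set)
    endings_by_stem: dict[str, set[str]] = defaultdict(set)
    for w in vocab:
        for k in range(1, max_len + 1):
            if len(w) - k >= 3:                       # keep a plausible stem
                stems_by_suffix[w[-k:]].add(w[:-k])
                endings_by_stem[w[:-k]].add(w[-k:])
    out: dict[str, int] = {}
    for suf, stems in stems_by_suffix.items():
        substitutable = [s for s in stems if len(endings_by_stem[s]) > 1]
        if len(substitutable) >= min_stems:
            out[suf] = len(substitutable)             # suffix → stem count
    return out
```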
@@ -346,11 +512,12 @@ Below are text samples generated from each Markov chain model:
 
 | Component | Recommended | Rationale |
 |-----------|-------------|-----------|
-| Tokenizer | **32k BPE** | Best compression (4.20x) with low UNK rate |
-| N-gram | **5-gram** | Lowest perplexity (464) |
-| Markov | **Context-4** | Highest predictability (97.4%) |
+| Tokenizer | **64k BPE** | Best compression (4.19x) |
+| N-gram | **3-gram** | Lowest perplexity (252) |
+| Markov | **Context-4** | Highest predictability (99.0%) |
 | Embeddings | **100d** | Balanced semantic capture and isotropy |
 
+
 ---
 ## Appendix: Metrics Glossary & Interpretation Guide
 
@@ -540,7 +707,8 @@ If you use these models in your research, please cite:
   author = {Kamali, Omar},
   title = {Wikilangs: Open NLP Models for Wikipedia Languages},
   year = {2025},
-  publisher = {HuggingFace},
+  doi = {10.5281/zenodo.18073153},
+  publisher = {Zenodo},
   url = {https://huggingface.co/wikilangs}
   institution = {Omneity Labs}
 }
@@ -556,7 +724,8 @@ MIT License - Free for academic and commercial use.
 - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
 - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
 - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
+- 🤝 Sponsor: [Featherless AI](https://featherless.ai)
 ---
 *Generated by Wikilangs Models Pipeline*
 
-*Report Date: 2025-12-27 04:31:24*
+*Report Date: 2026-01-03 05:05:55*
 
models/embeddings/monolingual/ab_128d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:70a5bb9264c0f53018d482c2616f76c0bf9699ff0b3215c919337610e18f88a9
-size 1037017106
+oid sha256:974accbf5e248437084213db34906d889d1387be4ba287f5bf3decc278094a76
+size 1036399852
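Every binary diff from here on only touches a Git LFS pointer: a small text file with `version`, `oid sha256:…`, and `size` lines, as shown above. A sketch for parsing a pointer and verifying a downloaded blob against it:

```python
# Sketch: parse a Git LFS pointer file and verify a downloaded blob.
import hashlib
from pathlib import Path

def read_pointer(path: str) -> dict[str, str]:
    # Pointer lines look like "oid sha256:<hex>" and "size <bytes>".
    return dict(line.split(" ", 1) for line in Path(path).read_text().splitlines())

def verify(blob_path: str, pointer_path: str) -> bool:
    ptr = read_pointer(pointer_path)
    digest = hashlib.sha256(Path(blob_path).read_bytes()).hexdigest()
    return (ptr["oid"] == f"sha256:{digest}"
            and int(ptr["size"]) == Path(blob_path).stat().st_size)
```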
models/embeddings/monolingual/ab_128d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 128,
   "version": "monolingual",
   "training_params": {
-    "dim": 128,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 128
   },
-  "vocab_size": 12418
+  "vocab_size": 11829
 }
models/embeddings/monolingual/ab_32d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f4040b5f28ef7e1b3cf96524fad4c6f4249e5107c0f78bdd6a3133c867ead2aa
-size 259480082
+oid sha256:20cc29bc8688764f66dd79168f5b38b9bc2c37d4cff74832ca2676787afff53e
+size 259315180
models/embeddings/monolingual/ab_32d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 32,
   "version": "monolingual",
   "training_params": {
-    "dim": 32,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 32
   },
-  "vocab_size": 12418
+  "vocab_size": 11829
 }
models/embeddings/monolingual/ab_64d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:59bba3121b8000d938989dc2a115057a7fd481c440ecd2c4af0c5f7711a53e1d
-size 518659090
+oid sha256:f6e336f851f9509b219587c555a83bc96b5e5f8af5ceca5f7c676a4c11e0eec7
+size 518343404
models/embeddings/monolingual/ab_64d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 64,
   "version": "monolingual",
   "training_params": {
-    "dim": 64,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 64
  },
-  "vocab_size": 12418
+  "vocab_size": 11829
 }
models/subword_markov/ab_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d5b0a600b840e3db1ee3806d9be588a8213f99ef3cce2c1cd544318a9585f986
-size 93410
+oid sha256:08d81996caede4f16305dff37bfaf00cc4aa080b329a05b6e813e43b32d4b5b1
+size 74300
models/subword_markov/ab_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "subword",
   "language": "ab",
-  "unique_contexts": 876,
-  "total_transitions": 4575952
+  "unique_contexts": 872,
+  "total_transitions": 4086915
 }
models/subword_markov/ab_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3ea56c5cf197c6e7366129c2324fad6e8e710aca5379b47abfec22e6edd82fa6
-size 586863
+oid sha256:391adb8ed94471fe378fd39fa4bafc27bc22517c04a0688782555f658bfa5660
+size 430405
models/subword_markov/ab_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "subword",
   "language": "ab",
-  "unique_contexts": 12157,
-  "total_transitions": 4569421
+  "unique_contexts": 9435,
+  "total_transitions": 4080892
 }
models/subword_markov/ab_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3d3626fdfb84a1b24e4b45c30124c54337aabdb6a8585a36031b40d4b2aea4c4
-size 2014070
+oid sha256:2554570f44dad84400a3a91e792ec596f90a065c3a673af002a820c70dca44e9
+size 1518240
models/subword_markov/ab_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "subword",
   "language": "ab",
-  "unique_contexts": 83923,
-  "total_transitions": 4562890
+  "unique_contexts": 56063,
+  "total_transitions": 4074869
 }
models/subword_markov/ab_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:142c381e03e008f540daba013f253cbcc8bcf7cfe2a7bac360d368f678c4c000
-size 5510132
+oid sha256:a36f4483bb607e4182b1e8a367258cf1570a36ccf6debdfa0c5ed6d83eb6baa9
+size 4368269
models/subword_markov/ab_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "subword",
   "language": "ab",
-  "unique_contexts": 280678,
-  "total_transitions": 4556359
+  "unique_contexts": 206852,
+  "total_transitions": 4068846
 }
models/subword_ngram/ab_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:25a3ad73144d6ab005e2dfb7deb33bbc6f3db63a36d07ff42172d7b2be73b608
-size 72236
+oid sha256:6546e93ab975088eb711841de0aac0eef95cedd64a0648eef8735695ed55ff88
+size 54917
models/subword_ngram/ab_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "subword",
   "language": "ab",
-  "unique_ngrams": 5850,
-  "total_ngrams": 4575952
+  "unique_ngrams": 4104,
+  "total_ngrams": 4086915
 }
models/subword_ngram/ab_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e7da8578639b16b8512cca6705fe65f303df54fe9d9e019e89159e0e2b7f2ff5
-size 516357
+oid sha256:7b1b02523b1b148c8a70500352648cf3a5210acb9cff8bd3b8c613fbe55e72ef
+size 368926
models/subword_ngram/ab_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "subword",
   "language": "ab",
-  "unique_ngrams": 40776,
-  "total_ngrams": 4569421
+  "unique_ngrams": 28199,
+  "total_ngrams": 4080892
 }
models/subword_ngram/ab_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3e898894c24a5ef426ee5c2cd23250aabe7e24d18ae03c6bbee902c09cf9ec33
-size 1774272
+oid sha256:d7eebb21889eb39845863d08c2bb3db59acb306cc610d9cff31719d1fccb8a9f
+size 1420721
models/subword_ngram/ab_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "subword",
   "language": "ab",
-  "unique_ngrams": 145474,
-  "total_ngrams": 4562890
+  "unique_ngrams": 112541,
+  "total_ngrams": 4074869
 }
models/tokenizer/ab_tokenizer_16k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b92a09ffb9d9e16d3cfcd9d14625e37f4a5f64b96aa51c52d35d12a333ad633b
-size 582079
+oid sha256:5a2085566fc5290d88287b8216db2b990d022c905f7619c834e7a919406ca669
+size 569009
models/tokenizer/ab_tokenizer_16k.vocab CHANGED
The diff for this file is too large to render. See raw diff
models/tokenizer/ab_tokenizer_32k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5030df6b09c2e503b39eb01d7fa0fccf197e83ae7fdb9fd7b16866de09bababe
-size 937553
+oid sha256:1e4a3fa167aa62fd56d1a517f1285f051a7587310895cdc82a80895e1bd8badc
+size 936940
models/tokenizer/ab_tokenizer_32k.vocab CHANGED
The diff for this file is too large to render. See raw diff
models/tokenizer/ab_tokenizer_64k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:049440a566c70344456ff4c82202e7e102f4915ace284706ad7cc4c5fc73dba9
-size 1697167
+oid sha256:09581380da9729bf63f3885bf0e54ee06e9b0659cd517ad67eab83cf01150542
+size 1690301
models/tokenizer/ab_tokenizer_64k.vocab CHANGED
The diff for this file is too large to render. See raw diff
models/tokenizer/ab_tokenizer_8k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:47045e15113d3f3907abf69824acfc10797a5b6e9942db1f6c5d4990df2dd043
-size 405395
+oid sha256:fb1da49fc5cbf61e37585d974f3bcb0262f7b8e403c201974781b65a1079fa28
+size 402830
models/tokenizer/ab_tokenizer_8k.vocab CHANGED
The diff for this file is too large to render. See raw diff
models/vocabulary/ab_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1b85dbc4e6a7540ba1ff9dfa0ac21d279d0fce68b4180e8d6df6b29deb668b84
-size 683448
+oid sha256:1b9954423f516ffad7b6b395410e0a3ed33a13656770c69fcd1e37b9a2500496
+size 613460
models/vocabulary/ab_vocabulary_metadata.json CHANGED
@@ -1,16 +1,17 @@
 {
   "language": "ab",
-  "vocabulary_size": 34914,
+  "vocabulary_size": 32686,
+  "variant": "full",
   "statistics": {
-    "type_token_ratio": 0.18177207522936784,
+    "type_token_ratio": 0.18190225895758938,
     "coverage": {
-      "top_100": 0.2651855284354094,
-      "top_1000": 0.48867710079779325,
-      "top_5000": 0.6758529345765748,
-      "top_10000": 0.7518667778310897
+      "top_100": 0.26752598001845684,
+      "top_1000": 0.4927917987401196,
+      "top_5000": 0.6798940737471412,
+      "top_10000": 0.7575291899049071
     },
-    "hapax_count": 64722,
-    "hapax_ratio": 0.649584487534626,
-    "total_documents": 6531
+    "hapax_count": 57985,
+    "hapax_ratio": 0.639509876366203,
+    "total_documents": 6023
  }
 }
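The statistics in `ab_vocabulary_metadata.json` are all derivable from a word-frequency table. One caveat: the least-common words in the report have frequency 2, so the shipped vocabulary appears to be frequency-pruned and the hapax and type-token figures were presumably computed before pruning. A sketch assuming a `frequency` column, as before:

```python
# Sketch: recompute the vocabulary_metadata statistics from frequencies.
# Assumes a "frequency" column; note the shipped file looks pruned
# (min frequency 2), so hapax/TTR values will differ from the metadata.
import numpy as np
import pandas as pd

freq = pd.read_parquet("models/vocabulary/ab_vocabulary.parquet")["frequency"]
freq = np.sort(freq.to_numpy())[::-1]          # descending frequencies
total = freq.sum()

type_token_ratio = len(freq) / total           # distinct words / all tokens
coverage = {n: float(freq[:n].sum() / total) for n in (100, 1_000, 5_000, 10_000)}
hapax = int((freq == 1).sum())                 # words seen exactly once

print(f"TTR: {type_token_ratio:.4f}, hapax (post-pruning): {hapax:,}")
print(coverage)
```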
models/word_markov/ab_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b1bf752e561b655e36306643fcd5f9ee780835f3c8ab87825116a8028fcef88a
-size 4988023
+oid sha256:477c26b2510447d28ecaa37e0e7ab40a517773cfffb1210fbde670a44914bc00
+size 4536676
models/word_markov/ab_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "word",
   "language": "ab",
-  "unique_contexts": 99604,
-  "total_transitions": 715219
+  "unique_contexts": 90583,
+  "total_transitions": 492437
 }
models/word_markov/ab_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f00af198c4f86326bb4f959b22443089a350d189a2ac8d5415a26f3dcfb7f220
-size 10940272
+oid sha256:b4bc4862d6258993f1f724d0d5118dc5b0d53b173d1761b40efd9fab5f483159
+size 10110358
models/word_markov/ab_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "word",
   "language": "ab",
-  "unique_contexts": 360470,
-  "total_transitions": 708688
+  "unique_contexts": 326705,
+  "total_transitions": 486414
 }
models/word_markov/ab_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7bac1826a063cba6bbd8f61df21c4be2dba68dac0abfc311b80cd598e258a68e
-size 14657425
+oid sha256:fb37ec60704d2bebf6f001873a2a83cab37cde917c2788f8f44f143be73b3f5f
+size 12602543
models/word_markov/ab_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "word",
   "language": "ab",
-  "unique_contexts": 515280,
-  "total_transitions": 702160
+  "unique_contexts": 396742,
+  "total_transitions": 480391
 }
models/word_markov/ab_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:17958e76510075d2200f79a9911c3d1e9a3f4d0a87cd2a4cc0331afca431b300
-size 17354156
+oid sha256:811fb71775eaf00c7218d8a816f74bc26333e418e9d329068207739c393fd7bf
+size 14827550
models/word_markov/ab_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "word",
   "language": "ab",
-  "unique_contexts": 573219,
-  "total_transitions": 695636
+  "unique_contexts": 412289,
+  "total_transitions": 474369
 }
models/word_ngram/ab_2gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:acdfd16e121b263288a429d93a717f252969d4848648d80287275db5e16edc95
-size 282764
+oid sha256:d4a4d7d52cfdf902f9fe8cfe09802ecf37bacbd4cf5bf99b4545e9c58e1b6fe1
+size 148052
models/word_ngram/ab_2gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "word",
   "language": "ab",
-  "unique_ngrams": 13494,
-  "total_ngrams": 715219
+  "unique_ngrams": 5814,
+  "total_ngrams": 492437
 }
models/word_ngram/ab_3gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:dab1f98b0cb7bd2d28f99f706169e76bd8a06a19b00b43438a7ba2c241b8d463
-size 389587
+oid sha256:3f22297b51acd3186bc5a9c808c852c94d515c15a5ae6556948ea6097484cb83
+size 162697
models/word_ngram/ab_3gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "word",
   "language": "ab",
-  "unique_ngrams": 16782,
-  "total_ngrams": 708688
+  "unique_ngrams": 5216,
+  "total_ngrams": 486414
 }
models/word_ngram/ab_4gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:592dd7484ba41a21c3e7a4f4fc8fc748b19346f94898e746c0776742ba664e4d
-size 699987
+oid sha256:ed909f3efc922de94c60f3fd686d519c1b57fb60acd95efc62e4dcd9179206f9
+size 334679
models/word_ngram/ab_4gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "word",
   "language": "ab",
-  "unique_ngrams": 27732,
-  "total_ngrams": 702160
+  "unique_ngrams": 9800,
+  "total_ngrams": 480391
 }
visualizations/embedding_isotropy.png CHANGED
visualizations/embedding_norms.png CHANGED
visualizations/embedding_similarity.png CHANGED

Git LFS Details (before)

  • SHA256: 55aadd7d1aca1d2d2ca81213e38b757a06336b5b7cc81335689cca3e4dec76f7
  • Pointer size: 131 Bytes
  • Size of remote file: 178 kB

Git LFS Details (after)

  • SHA256: cd26e5090a0d45a52ee96b930d764449e5a8e277ac4139ca425c7fd1b7a05b02
  • Pointer size: 131 Bytes
  • Size of remote file: 179 kB

visualizations/markov_branching.png CHANGED
visualizations/markov_contexts.png CHANGED