omarkamali committed
Commit df8efce · verified · 1 Parent(s): c16474f

Upload all models and assets for ce (20251001)

Files changed (50)
  1. README.md +312 -141
  2. models/embeddings/monolingual/ce_128d.bin +2 -2
  3. models/embeddings/monolingual/ce_128d_metadata.json +5 -3
  4. models/embeddings/monolingual/ce_32d.bin +2 -2
  5. models/embeddings/monolingual/ce_32d_metadata.json +5 -3
  6. models/embeddings/monolingual/ce_64d.bin +2 -2
  7. models/embeddings/monolingual/ce_64d_metadata.json +5 -3
  8. models/subword_markov/ce_markov_ctx1_subword.parquet +2 -2
  9. models/subword_markov/ce_markov_ctx1_subword_metadata.json +2 -2
  10. models/subword_markov/ce_markov_ctx2_subword.parquet +2 -2
  11. models/subword_markov/ce_markov_ctx2_subword_metadata.json +2 -2
  12. models/subword_markov/ce_markov_ctx3_subword.parquet +2 -2
  13. models/subword_markov/ce_markov_ctx3_subword_metadata.json +2 -2
  14. models/subword_markov/ce_markov_ctx4_subword.parquet +2 -2
  15. models/subword_markov/ce_markov_ctx4_subword_metadata.json +2 -2
  16. models/subword_ngram/ce_2gram_subword.parquet +2 -2
  17. models/subword_ngram/ce_2gram_subword_metadata.json +2 -2
  18. models/subword_ngram/ce_3gram_subword.parquet +2 -2
  19. models/subword_ngram/ce_3gram_subword_metadata.json +2 -2
  20. models/subword_ngram/ce_4gram_subword.parquet +2 -2
  21. models/subword_ngram/ce_4gram_subword_metadata.json +2 -2
  22. models/tokenizer/ce_tokenizer_16k.model +2 -2
  23. models/tokenizer/ce_tokenizer_16k.vocab +0 -0
  24. models/tokenizer/ce_tokenizer_32k.model +2 -2
  25. models/tokenizer/ce_tokenizer_32k.vocab +0 -0
  26. models/tokenizer/ce_tokenizer_64k.model +2 -2
  27. models/tokenizer/ce_tokenizer_64k.vocab +0 -0
  28. models/tokenizer/ce_tokenizer_8k.model +2 -2
  29. models/tokenizer/ce_tokenizer_8k.vocab +0 -0
  30. models/vocabulary/ce_vocabulary.parquet +2 -2
  31. models/vocabulary/ce_vocabulary_metadata.json +10 -9
  32. models/word_markov/ce_markov_ctx1_word.parquet +2 -2
  33. models/word_markov/ce_markov_ctx1_word_metadata.json +2 -2
  34. models/word_markov/ce_markov_ctx2_word.parquet +2 -2
  35. models/word_markov/ce_markov_ctx2_word_metadata.json +2 -2
  36. models/word_markov/ce_markov_ctx3_word.parquet +2 -2
  37. models/word_markov/ce_markov_ctx3_word_metadata.json +2 -2
  38. models/word_markov/ce_markov_ctx4_word.parquet +2 -2
  39. models/word_markov/ce_markov_ctx4_word_metadata.json +2 -2
  40. models/word_ngram/ce_2gram_word.parquet +2 -2
  41. models/word_ngram/ce_2gram_word_metadata.json +2 -2
  42. models/word_ngram/ce_3gram_word.parquet +2 -2
  43. models/word_ngram/ce_3gram_word_metadata.json +2 -2
  44. models/word_ngram/ce_4gram_word.parquet +2 -2
  45. models/word_ngram/ce_4gram_word_metadata.json +2 -2
  46. visualizations/embedding_isotropy.png +0 -0
  47. visualizations/embedding_norms.png +0 -0
  48. visualizations/embedding_similarity.png +2 -2
  49. visualizations/markov_branching.png +0 -0
  50. visualizations/markov_contexts.png +0 -0
README.md CHANGED
@@ -23,14 +23,14 @@ dataset_info:
 metrics:
 - name: best_compression_ratio
   type: compression
-  value: 3.716
 - name: best_isotropy
   type: isotropy
-  value: 0.8750
 - name: vocabulary_size
   type: vocab
-  value: 267119
-generated: 2025-12-28
 ---
 
 # CE - Wikilangs Models

@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ### Models & Assets
 
 - Tokenizers (8k, 16k, 32k, 64k)
-- N-gram models (2, 3, 4-gram)
-- Markov chains (context of 1, 2, 3 and 4)
 - Subword N-gram and Markov chains
-- Embeddings in various sizes and dimensions
 - Language Vocabulary
 - Language Statistics
 ![Performance Dashboard](visualizations/performance_dashboard.png)
 
 ### Analysis and Evaluation

@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
 - [4. Vocabulary Analysis](#4-vocabulary-analysis)
 - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
-- [6. Summary & Recommendations](#6-summary--recommendations)
 - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
 - [Visualizations Index](#visualizations-index)

@@ -68,59 +70,57 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ![Tokenizer Compression](visualizations/tokenizer_compression.png)
 
 ### Results
 
 | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
 |------------|-------------|---------------|----------|--------------|
-| **8k** | 2.743x | 2.70 | 1.0676% | 595,703 |
-| **16k** | 3.096x | 3.04 | 1.2050% | 527,806 |
-| **32k** | 3.417x | 3.36 | 1.3298% | 478,250 |
-| **64k** | 3.716x 🏆 | 3.65 | 1.4461% | 439,790 |
 
 ### Tokenization Examples
 
 Below are sample sentences tokenized with each vocabulary size:
 
-**Sample 1:** `ДагӀйурд () Азербайджанан Ходжалин кӀоштара эвла. Бахархой Билгалдахарш ...`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁да гӏ й ур д ▁() ▁— ▁азербайджанан ▁х од ... (+17 more)` | 27 |
-| 16k | `▁дагӏ йур д ▁() ▁— ▁азербайджанан ▁ход ж алин ▁кӏоштара ... (+13 more)` | 23 |
-| 32k | `▁дагӏ йур д ▁() ▁— ▁азербайджанан ▁ходжалин ▁кӏоштара ▁эвла . ... (+9 more)` | 19 |
-| 64k | `▁дагӏ йур д ▁() ▁— ▁азербайджанан ▁ходжалин ▁кӏоштара ▁эвла . ... (+9 more)` | 19 |
 
-**Sample 2:** `Перселл (Миссури) Перселл (Оклахома)`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁пер сел л ▁( миссури ) ▁пер сел л ▁( ... (+2 more)` | 12 |
-| 16k | `▁пер сел л ▁( миссури ) ▁пер сел л ▁( ... (+2 more)` | 12 |
-| 32k | `▁пер сел л ▁( миссури ) ▁пер сел л ▁( ... (+2 more)` | 12 |
-| 64k | `▁пер селл ▁( миссури ) ▁пер селл ▁( оклахома )` | 10 |
 
-**Sample 3:** `Эль Баро (Мочитлан) Эль Баро (Сан Мигел Тотолапан) Эль Баро (Хенерал Элиодоро ...`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁эль ▁баро ▁( м оч ит лан ) ▁эль ▁баро ... (+21 more)` | 31 |
-| 16k | `▁эль ▁баро ▁( м оч итлан ) ▁эль ▁баро ▁( ... (+18 more)` | 28 |
-| 32k | `▁эль ▁баро ▁( м оч итлан ) ▁эль ▁баро ▁( ... (+11 more)` | 21 |
-| 64k | `▁эль ▁баро ▁( моч итлан ) ▁эль ▁баро ▁( сан ... (+10 more)` | 20 |
 
 ### Key Findings
 
-- **Best Compression:** 64k achieves 3.716x compression
-- **Lowest UNK Rate:** 8k with 1.0676% unknown tokens
 - **Trade-off:** Larger vocabularies improve compression but increase model size
 - **Recommendation:** 32k vocabulary provides optimal balance for production use

@@ -129,57 +129,89 @@ Below are sample sentences tokenized with each vocabulary size:
 ![N-gram Perplexity](visualizations/ngram_perplexity.png)
 
 ![N-gram Coverage](visualizations/ngram_coverage.png)
 
 ### Results
 
-| N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
-|--------|------------|---------|----------------|------------------|-------------------|
-| **2-gram** | 3,434 🏆 | 11.75 | 180,710 | 25.5% | 63.0% |
-| **2-gram** | 484 🏆 | 8.92 | 7,755 | 52.4% | 97.1% |
-| **3-gram** | 5,932 | 12.53 | 322,719 | 16.0% | 53.7% |
-| **3-gram** | 2,779 | 11.44 | 72,318 | 22.8% | 66.2% |
-| **4-gram** | 7,779 | 12.93 | 709,202 | 12.7% | 49.4% |
-| **4-gram** | 7,269 | 12.83 | 422,662 | 15.4% | 47.3% |
 
 ### Top 5 N-grams by Size
 
-**2-grams:**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `. —` | 1,635,703 |
-| 2 | `категори :` | 1,346,163 |
-| 3 | `нах беха` | 1,039,301 |
-| 4 | `беха меттигаш` | 953,016 |
-| 5 | `.` | 797,532 |
 
-**3-grams:**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `нах беха меттигаш` | 952,979 |
-| 2 | `( ) —` | 477,700 |
-| 3 | `меттигаш категори :` | 455,946 |
-| 4 | `беха меттигаш категори` | 448,323 |
-| 5 | `. а .` | 416,844 |
 
-**4-grams:**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `беха меттигаш категори :` | 448,323 |
-| 2 | `нах беха меттигаш категори` | 448,323 |
-| 3 | `. м .` | 345,745 |
-| 4 | `— м . :` | 345,423 |
-| 5 | `кӏоштан нах беха меттигаш` | 256,924 |
 
 ### Key Findings
 
-- **Best Perplexity:** 2-gram with 484
 - **Entropy Trend:** Decreases with larger n-grams (more predictable)
-- **Coverage:** Top-1000 patterns cover ~47% of corpus
 - **Recommendation:** 4-gram or 5-gram for best predictive performance
 
 ---

@@ -187,55 +219,86 @@ Below are sample sentences tokenized with each vocabulary size:
 ![Markov Entropy](visualizations/markov_entropy.png)
 
 ![Markov Branching](visualizations/markov_branching.png)
 
 ### Results
 
-| Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
-|---------|-------------|------------|------------------|-----------------|----------------|
-| **1** | 0.4898 | 1.404 | 3.61 | 596,401 | 51.0% |
-| **1** | 1.0551 | 2.078 | 11.42 | 1,510 | 0.0% |
-| **2** | 0.2471 | 1.187 | 1.75 | 2,141,469 | 75.3% |
-| **2** | 1.0286 | 2.040 | 7.78 | 17,227 | 0.0% |
-| **3** | 0.1096 | 1.079 | 1.30 | 3,726,034 | 89.0% |
-| **3** | 0.8548 | 1.809 | 4.95 | 133,970 | 14.5% |
-| **4** | 0.0635 🏆 | 1.045 | 1.17 | 4,825,259 | 93.6% |
-| **4** | 0.7262 🏆 | 1.654 | 3.28 | 662,768 | 27.4% |
 
-### Generated Text Samples
 
-Below are text samples generated from each Markov chain model:
 
 **Context Size 1:**
 
-1. `. геогр . — м . а беттанашкахь , цуьнан гуш болу седин атмосфера конвекцина дикка`
-2. `, йуккъера барам 2000 . изд . catherine b . surface properties of chicle extraction in`
-3. `— июль ( по зарубежным странам ) кӏеззиг къилбаседехьа хокана мотт буьйцуш долу ӏаьнцаклимат калужск...`
 
 **Context Size 2:**
 
-1. `. — екатеринбург : у - фактория , 2006 . альперович м . родригеса , м .`
-2. `категори : сербин нах беха меттигаш категори : мексикин нах беха меттигаш категори : витебскан облас...`
-3. `нах беха меттигаш категори : мексикин нах беха меттиг . географи . бахархойн дукхалла бахархойн дукх...`
 
 **Context Size 3:**
 
-1. `( ) — российн федерацин вологдин областан междуреченскан кӏоштара дӏатесна эвла . бахархойн дукхалла...`
-2. `нах беха меттигаш категори : молдавин нах беха меттигаш категори : ацш гуонаш ru : калмен ( округ`
-3. `меттигаш категори : идальго штатан нах беха меттигаш категори : белхан категори категори : новосибир...`
 
 **Context Size 4:**
 
-1. `беха меттигаш категори : подлясьен воеводаллин нах беха меттигаш категори : абатца нисйина нах беха ...`
-2. `нах беха меттигаш категори : лаха калифорни штатан нах беха меттигаш категори : вилча жудецан коммун...`
-3. `. м . : высшая школа , 2005 . — 463 с . — isbn 5060045196 . новая`
 
 ### Key Findings
 
-- **Best Predictability:** Context-4 with 93.6% predictability
 - **Branching Factor:** Decreases with context size (more deterministic)
-- **Memory Trade-off:** Larger contexts require more storage (662,768 contexts)
 - **Recommendation:** Context-3 or Context-4 for text generation
 
 ---

@@ -251,64 +314,64 @@ Below are text samples generated from each Markov chain model:
 | Metric | Value |
 |--------|-------|
-| Vocabulary Size | 267,119 |
-| Total Tokens | 73,448,738 |
-| Mean Frequency | 274.97 |
 | Median Frequency | 3 |
-| Frequency Std Dev | 8220.95 |
 
 ### Most Common Words
 
 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | а | 1,816,439 |
-| 2 | категори | 1,354,932 |
-| 3 | нах | 1,049,211 |
-| 4 | беха | 1,039,698 |
-| 5 | меттигаш | 968,759 |
-| 6 | йу | 814,168 |
-| 7 | м | 798,682 |
-| 8 | климат | 741,279 |
-| 9 | в | 737,093 |
-| 10 | билгалдахарш | 631,115 |
 
 ### Least Common Words (from vocabulary)
 
 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | эмпачадо | 2 |
-| 2 | энано | 2 |
-| 3 | эскопетал | 2 |
-| 4 | эскриторио | 2 |
-| 5 | макариос | 2 |
-| 6 | эроика | 2 |
-| 7 | скирринг | 2 |
-| 8 | зигуинчор | 2 |
-| 9 | зигуиншор | 2 |
-| 10 | люксембургхо | 2 |
 
 ### Zipf's Law Analysis
 
 | Metric | Value |
 |--------|-------|
-| Zipf Coefficient | 1.8071 |
-| R² (Goodness of Fit) | 0.946340 |
 | Adherence Quality | **excellent** |
 
 ### Coverage Analysis
 
 | Top N Words | Coverage |
 |-------------|----------|
-| Top 100 | 40.3% |
-| Top 1,000 | 81.6% |
-| Top 5,000 | 96.4% |
-| Top 10,000 | 97.6% |
 
 ### Key Findings
 
-- **Zipf Compliance:** R²=0.9463 indicates excellent adherence to Zipf's law
-- **High Frequency Dominance:** Top 100 words cover 40.3% of corpus
-- **Long Tail:** 257,119 words needed for remaining 2.4% coverage
 
 ---
 ## 5. Word Embeddings Evaluation

@@ -321,24 +384,129 @@ Below are text samples generated from each Markov chain model:
 ![t-SNE Sentences](visualizations/tsne_sentences.png)
 
-### Model Comparison
 
-| Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
-|-------|------------|-----------|----------|----------|----------|
-| **mono_32d** | 105,624 | 32 | 6.269 | 1.405 | 0.8750 🏆 |
-| **mono_64d** | 105,624 | 64 | 6.426 | 0.986 | 0.8540 |
-| **mono_128d** | 105,624 | 128 | 6.612 | 0.771 | 0.7972 |
-| **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |
 
 ### Key Findings
 
-- **Best Isotropy:** mono_32d with 0.8750 (more uniform distribution)
-- **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
-- **Vocabulary Coverage:** All models cover 105,624 words
-- **Recommendation:** 100d for balanced semantic capture and efficiency
 
 ---
-## 6. Summary & Recommendations
 
 ![Performance Dashboard](visualizations/performance_dashboard.png)

@@ -346,11 +514,12 @@ Below are text samples generated from each Markov chain model:
 | Component | Recommended | Rationale |
 |-----------|-------------|-----------|
-| Tokenizer | **32k BPE** | Best compression (3.72x) with low UNK rate |
-| N-gram | **5-gram** | Lowest perplexity (484) |
-| Markov | **Context-4** | Highest predictability (93.6%) |
 | Embeddings | **100d** | Balanced semantic capture and isotropy |
 
 ---
 ## Appendix: Metrics Glossary & Interpretation Guide

@@ -540,7 +709,8 @@ If you use these models in your research, please cite:
   author = {Kamali, Omar},
   title = {Wikilangs: Open NLP Models for Wikipedia Languages},
   year = {2025},
-  publisher = {HuggingFace},
   url = {https://huggingface.co/wikilangs}
   institution = {Omneity Labs}
 }

@@ -556,7 +726,8 @@ MIT License - Free for academic and commercial use.
 - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
 - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
 - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
 ---
 *Generated by Wikilangs Models Pipeline*
 
-*Report Date: 2025-12-28 17:05:28*

@@ -23,14 +23,14 @@ dataset_info:
 metrics:
 - name: best_compression_ratio
   type: compression
+  value: 3.783
 - name: best_isotropy
   type: isotropy
+  value: 0.8761
 - name: vocabulary_size
   type: vocab
+  value: 0
+generated: 2026-01-03
 ---
 
 # CE - Wikilangs Models
@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ### Models & Assets
 
 - Tokenizers (8k, 16k, 32k, 64k)
+- N-gram models (2, 3, 4, 5-gram)
+- Markov chains (context of 1, 2, 3, 4 and 5)
 - Subword N-gram and Markov chains
+- Embeddings in various sizes and dimensions (aligned and unaligned)
 - Language Vocabulary
 - Language Statistics
+
 ![Performance Dashboard](visualizations/performance_dashboard.png)
 
 ### Analysis and Evaluation
@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
 - [4. Vocabulary Analysis](#4-vocabulary-analysis)
 - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
+- [6. Morphological Analysis (Experimental)](#6-morphological-analysis)
+- [7. Summary & Recommendations](#7-summary--recommendations)
 - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
 - [Visualizations Index](#visualizations-index)
@@ -68,59 +70,57 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ![Tokenizer Compression](visualizations/tokenizer_compression.png)
 
+![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
+
+![Tokenizer OOV](visualizations/tokenizer_oov.png)
+
+![Total Tokens](visualizations/tokenizer_total_tokens.png)
+
 ### Results
 
 | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
 |------------|-------------|---------------|----------|--------------|
+| **8k** | 2.792x | 2.80 | 0.9604% | 543,837 |
+| **16k** | 3.140x | 3.15 | 1.0803% | 483,478 |
+| **32k** | 3.480x | 3.49 | 1.1970% | 436,328 |
+| **64k** | 3.783x 🏆 | 3.79 | 1.3016% | 401,281 |
 
 ### Tokenization Examples
 
 Below are sample sentences tokenized with each vocabulary size:
 
+**Sample 1:** `Жаныспай (Акмолан область) Жаныспай (Костанайн область)`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
+| 8k | `▁жан ыс п ай ▁( ак молан ▁область ) ▁жан ... (+8 more)` | 18 |
+| 16k | `▁жан ыс пай ▁( акмолан ▁область ) ▁жан ыс пай ... (+5 more)` | 15 |
+| 32k | `▁жан ыс пай ▁( акмолан ▁область ) ▁жан ыс пай ... (+4 more)` | 14 |
+| 64k | `▁жан ыс пай ▁( акмолан ▁область ) ▁жан ыс пай ... (+4 more)` | 14 |
 
+**Sample 2:** `Антиго (Висконсин) Антиго (Маса-Карара) Антиго (гӀала, Висконсин)`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
+| 8k | `▁анти го ▁( ви сконсин ) ▁анти го ▁( ма ... (+12 more)` | 22 |
+| 16k | `▁анти го ▁( висконсин ) ▁анти го ▁( ма са ... (+11 more)` | 21 |
+| 32k | `▁анти го ▁( висконсин ) ▁анти го ▁( маса - ... (+9 more)` | 19 |
+| 64k | `▁анти го ▁( висконсин ) ▁анти го ▁( маса - ... (+9 more)` | 19 |
 
+**Sample 3:** `Барда (Иркутскан область) Барда (Пермийн мохк) Барда (гӀала)`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
+| 8k | `▁бар да ▁( иркутскан ▁область ) ▁бар да ▁( пермийн ... (+7 more)` | 17 |
+| 16k | `▁бар да ▁( иркутскан ▁область ) ▁бар да ▁( пермийн ... (+7 more)` | 17 |
+| 32k | `▁барда ▁( иркутскан ▁область ) ▁барда ▁( пермийн ▁мохк ) ... (+4 more)` | 14 |
+| 64k | `▁барда ▁( иркутскан ▁область ) ▁барда ▁( пермийн ▁мохк ) ... (+4 more)` | 14 |
 
 ### Key Findings
 
+- **Best Compression:** 64k achieves 3.783x compression
+- **Lowest UNK Rate:** 8k with 0.9604% unknown tokens
 - **Trade-off:** Larger vocabularies improve compression but increase model size
 - **Recommendation:** 32k vocabulary provides optimal balance for production use
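The `▁` word-boundary marker in the samples above follows the SentencePiece convention, and the `.model`/`.vocab` pairs under `models/tokenizer/` match SentencePiece's file layout. A minimal usage sketch under that assumption (paths are illustrative; the report's tokens appear lowercased, which suggests the pipeline lowercases text during normalization):

```python
# Minimal sketch, assuming the .model files are standard SentencePiece models.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="models/tokenizer/ce_tokenizer_32k.model")

text = "Барда (Иркутскан область)"
pieces = sp.encode(text, out_type=str)   # subword pieces, e.g. ['▁барда', '▁(', ...]
ids = sp.encode(text, out_type=int)      # the corresponding vocabulary ids

print(pieces, len(pieces))
print(sp.decode(ids))                    # decoding round-trips the input text
```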
@@ -129,57 +129,89 @@ Below are sample sentences tokenized with each vocabulary size:
 ![N-gram Perplexity](visualizations/ngram_perplexity.png)
 
+![N-gram Unique](visualizations/ngram_unique.png)
+
 ![N-gram Coverage](visualizations/ngram_coverage.png)
 
 ### Results
 
+| N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
+|--------|---------|------------|---------|----------------|------------------|-------------------|
+| **2-gram** | Word | 2,545 | 11.31 | 100,140 | 25.5% | 70.0% |
+| **2-gram** | Subword | 423 🏆 | 8.72 | 6,176 | 55.1% | 98.2% |
+| **3-gram** | Word | 3,286 | 11.68 | 157,541 | 21.2% | 65.9% |
+| **3-gram** | Subword | 2,337 | 11.19 | 58,954 | 23.8% | 69.8% |
+| **4-gram** | Word | 4,089 | 12.00 | 330,019 | 18.2% | 63.2% |
+| **4-gram** | Subword | 5,832 | 12.51 | 337,533 | 16.4% | 50.9% |
 
 ### Top 5 N-grams by Size
 
+**2-grams (Word):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `нах беха` | 927,008 |
+| 2 | `беха меттигаш` | 876,464 |
+| 3 | `билгалдахарш хьажоргаш` | 387,483 |
+| 4 | `климат кхузахь` | 294,017 |
+| 5 | `сахьтан аса` | 272,866 |
+
+**3-grams (Word):**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
+| 1 | `нах беха меттигаш` | 876,426 |
+| 2 | `кӏоштан нах беха` | 256,950 |
+| 3 | `климат кхузахь климат` | 254,686 |
+| 4 | `бахархой билгалдахарш хьажоргаш` | 156,558 |
+| 5 | `сахьтан аса йу` | 135,690 |
 
+**4-grams (Word):**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
+| 1 | `кӏоштан нах беха меттигаш` | 256,946 |
+| 2 | `лелаш ду сахьтан аса` | 134,397 |
+| 3 | `нийса лелаш ду сахьтан` | 134,397 |
+| 4 | `сахьтан аса йу utc` | 133,768 |
+| 5 | `ду сахьтан аса йу` | 133,768 |
 
+**2-grams (Subword):**
 
 | Rank | N-gram | Count |
 |------|--------|-------|
+| 1 | `_` | 8,696,976 |
+| 2 | `. _` | 8,337,924 |
+| 3 | `_` | 7,066,559 |
+| 4 | `н` | 6,445,422 |
+| 5 | `а` | 5,305,199 |
+
+**3-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `а н _` | 4,127,441 |
+| 2 | `_ — _` | 2,719,160 |
+| 3 | `а ш _` | 1,910,774 |
+| 4 | `и н _` | 1,668,837 |
+| 5 | `а р а` | 1,610,648 |
+
+**4-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `т а н _` | 1,416,987 |
+| 2 | `а х а р` | 1,374,119 |
+| 3 | `. _ — _` | 1,045,081 |
+| 4 | `а _ м е` | 1,006,220 |
+| 5 | `_ м е т` | 999,858 |
 
 ### Key Findings
 
+- **Best Perplexity:** 2-gram (subword) with 423
 - **Entropy Trend:** Decreases with larger n-grams (more predictable)
+- **Coverage:** Top-1000 patterns cover ~51% of corpus
 - **Recommendation:** 4-gram or 5-gram for best predictive performance
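The n-gram tables above are distributed as Parquet files. A sketch for recomputing the top-k coverage figures, assuming the word 2-gram table has `ngram` and `count` columns (hypothetical names; inspect `df.columns` against the real schema first):

```python
# Sketch: recompute top-k coverage from the word 2-gram table.
# The `ngram`/`count` column names are assumptions, not a documented schema.
import pandas as pd

df = pd.read_parquet("models/word_ngram/ce_2gram_word.parquet")
df = df.sort_values("count", ascending=False)

total = df["count"].sum()
for k in (100, 1000):
    coverage = df["count"].head(k).sum() / total
    print(f"top-{k} coverage: {coverage:.1%}")
```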
  ---
 
@@ -187,55 +219,86 @@ Below are sample sentences tokenized with each vocabulary size:
 ![Markov Entropy](visualizations/markov_entropy.png)
 
+![Markov Contexts](visualizations/markov_contexts.png)
+
 ![Markov Branching](visualizations/markov_branching.png)
 
 ### Results
 
+| Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
+|---------|---------|-------------|------------|------------------|-----------------|----------------|
+| **1** | Word | 0.6226 | 1.540 | 3.90 | 520,111 | 37.7% |
+| **1** | Subword | 0.9426 | 1.922 | 9.07 | 1,553 | 5.7% |
+| **2** | Word | 0.1849 | 1.137 | 1.44 | 2,019,671 | 81.5% |
+| **2** | Subword | 0.9737 | 1.964 | 7.37 | 14,069 | 2.6% |
+| **3** | Word | 0.0632 | 1.045 | 1.13 | 2,889,994 | 93.7% |
+| **3** | Subword | 0.8560 | 1.810 | 4.97 | 103,627 | 14.4% |
+| **4** | Word | 0.0320 🏆 | 1.022 | 1.08 | 3,246,178 | 96.8% |
+| **4** | Subword | 0.7168 | 1.643 | 3.27 | 515,118 | 28.3% |
+
+### Generated Text Samples (Word-based)
+
+Below are text samples generated from each word-based Markov chain model:
+
+**Context Size 1:**
+
+1. `а ду йалташ хастоьмаш малхбален кӏошташкара пачхьалкхан европин дехьайолуш алсама гӏийлачу мехца бек...`
+2. `нах беха меттигаш нах беха меттигаш провинцин нах беха меттигаш кӏоштан нах беха меттигаш воеводалли...`
+3. `беха меттигаш штатан йукъахь квинс университет им м в пономарёва м прохоров т 82 т и`
+
+**Context Size 2:**
+
+1. `нах беха меттигаш нах беха меттигаш нисйина нах беха меттигаш кӏоштан нах беха меттигаш кӏоштан нах ...`
+2. `беха меттигаш кӏоштан нах беха меттигаш кӏоштан нах беха меттигаш воеводаллин нах беха меттигаш нах ...`
+3. `билгалдахарш хьажоргаш черкассин областан индексаш кӏоштан нах беха меттигаш микрокӏошташ нах беха м...`
+
+**Context Size 3:**
+
+1. `нах беха меттигаш микрокӏошташ нах беха меттигаш нисйина нах беха меттигаш нах беха меттигаш микрокӏ...`
+2. `кӏоштан нах беха меттигаш нисйина нах беха меттигаш нах беха меттигаш кӏоштан нах беха меттигаш нах ...`
+3. `климат кхузахь климат барамехь континенталан йу аьхка йовха хуьлу ткъа ӏа барамехь шийла хуьлу шаран...`
+
+**Context Size 4:**
+
+1. `нийса лелаш ду сахьтан аса йу utc 3 билгалдахарш хьажоргаш неклиновскан кӏоштан индексаш кӏоштан нах...`
+2. `лелаш ду сахьтан аса йу utc 3 билгалдахарш хьажоргаш селижарован кӏоштан индексаш кӏоштан нах беха м...`
+3. `ду сахьтан аса йу utc 3 билгалдахарш хьажоргаш максатихан кӏоштан индексаш кӏоштан нах беха меттигаш...`
+
+### Generated Text Samples (Subword-based)
+
+Below are text samples generated from each subword-based Markov chain model:
 
 **Context Size 1:**
 
+1. `_/_циллан_olevia`
+2. `а_перерхашес._ба`
+3. `нилию._7959-со_к`
 
 **Context Size 2:**
 
+1. `а_койн_сахар_тӏуь`
+2. `._ре_нашкая_:_спу`
+3. `н_схойн_стр_штаме`
 
 **Context Size 3:**
 
+1. `ан_аркатерия_исти_`
+2. `_—_итан_новгорокӏо`
+3. `аш_беха_местник_гу`
 
 **Context Size 4:**
 
+1. `тан_кӏоштан_воеводс`
+2. `ахарш_хьажоргаш_нах`
+3. `.__b.,_heidelberg,`
 
 ### Key Findings
 
+- **Best Predictability:** Context-4 (word) with 96.8% predictability
 - **Branching Factor:** Decreases with context size (more deterministic)
+- **Memory Trade-off:** Larger contexts require more storage (515,118 contexts)
 - **Recommendation:** Context-3 or Context-4 for text generation
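The Markov tables map a context to next-token counts; sampling proportionally to those counts reproduces text like the samples above. A generation sketch, assuming `context`, `next_token`, and `count` columns (hypothetical names; adapt to the actual parquet schema):

```python
# Sketch: sample from the context-2 word Markov chain.
# Column names are assumptions; adapt them to the real schema.
import random
import pandas as pd

df = pd.read_parquet("models/word_markov/ce_markov_ctx2_word.parquet")

def step(context: str):
    rows = df[df["context"] == context]
    if rows.empty:
        return None
    # Weighted draw proportional to the observed transition counts.
    return random.choices(rows["next_token"].tolist(),
                          weights=rows["count"].tolist(), k=1)[0]

words = ["нах", "беха"]                  # seed context of size 2
for _ in range(20):
    nxt = step(" ".join(words[-2:]))
    if nxt is None:
        break
    words.append(nxt)
print(" ".join(words))
```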
  ---
 
@@ -251,64 +314,64 @@ Below are text samples generated from each Markov chain model:
 | Metric | Value |
 |--------|-------|
+| Vocabulary Size | 230,774 |
+| Total Tokens | 54,539,322 |
+| Mean Frequency | 236.33 |
 | Median Frequency | 3 |
+| Frequency Std Dev | 7087.98 |
 
 ### Most Common Words
 
 | Rank | Word | Frequency |
 |------|------|-----------|
+| 1 | а | 1,429,788 |
+| 2 | нах | 929,389 |
+| 3 | беха | 927,412 |
+| 4 | меттигаш | 892,206 |
+| 5 | в | 665,820 |
+| 6 | климат | 663,481 |
+| 7 | м | 649,926 |
+| 8 | йу | 631,461 |
+| 9 | билгалдахарш | 595,304 |
+| 10 | с | 497,975 |
 
 ### Least Common Words (from vocabulary)
 
 | Rank | Word | Frequency |
 |------|------|-----------|
+| 1 | горушкинскан | 2 |
+| 2 | тулинскан | 2 |
+| 3 | долгопольскан | 2 |
+| 4 | погостищенскан | 2 |
+| 5 | кохановскан | 2 |
+| 6 | морховскан | 2 |
+| 7 | нежадовскан | 2 |
+| 8 | липиницкан | 2 |
+| 9 | зачепичи | 2 |
+| 10 | меетиг | 2 |
 
 ### Zipf's Law Analysis
 
 | Metric | Value |
 |--------|-------|
+| Zipf Coefficient | 1.8318 |
+| R² (Goodness of Fit) | 0.964473 |
 | Adherence Quality | **excellent** |
 
 ### Coverage Analysis
 
 | Top N Words | Coverage |
 |-------------|----------|
+| Top 100 | 44.4% |
+| Top 1,000 | 86.7% |
+| Top 5,000 | 96.7% |
+| Top 10,000 | 97.7% |
 
 ### Key Findings
 
+- **Zipf Compliance:** R²=0.9645 indicates excellent adherence to Zipf's law
+- **High Frequency Dominance:** Top 100 words cover 44.4% of corpus
+- **Long Tail:** 220,774 words needed for remaining 2.3% coverage
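The Zipf coefficient and R² above come from the standard log-log regression of frequency on rank, freq ≈ C / rank^s. A worked sketch, assuming the vocabulary parquet exposes a `frequency` column (hypothetical name; check the schema first):

```python
# Sketch: fit Zipf's law on the vocabulary frequencies in log-log space.
import numpy as np
import pandas as pd

df = pd.read_parquet("models/vocabulary/ce_vocabulary.parquet")
freq = np.sort(df["frequency"].to_numpy())[::-1].astype(float)
rank = np.arange(1, len(freq) + 1)

# log f = log C - s * log r  ->  the slope of the fit is -s
slope, intercept = np.polyfit(np.log(rank), np.log(freq), 1)
pred = slope * np.log(rank) + intercept
resid = np.log(freq) - pred
r2 = 1 - (resid ** 2).sum() / ((np.log(freq) - np.log(freq).mean()) ** 2).sum()

print(f"Zipf coefficient s ≈ {-slope:.4f}, R² ≈ {r2:.4f}")
```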
  ---
@@ -321,24 +384,129 @@ Below are text samples generated from each Markov chain model:
 ## 5. Word Embeddings Evaluation
 
 ![t-SNE Sentences](visualizations/tsne_sentences.png)
 
+### 5.1 Cross-Lingual Alignment
+
+> *Note: Multilingual alignment visualization not available for this language.*
+
+### 5.2 Model Comparison
+
+| Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
+|-------|-----------|----------|------------------|---------------|----------------|
+| **mono_32d** | 32 | 0.8761 🏆 | 0.3710 | N/A | N/A |
+| **mono_64d** | 64 | 0.8520 | 0.3045 | N/A | N/A |
+| **mono_128d** | 128 | 0.7849 | 0.2825 | N/A | N/A |
 
 ### Key Findings
 
+- **Best Isotropy:** mono_32d with 0.8761 (more uniform distribution)
+- **Semantic Density:** Average pairwise similarity of 0.3193. Lower values indicate better semantic separation.
+- **Alignment Quality:** No aligned models evaluated in this run.
+- **Recommendation:** 128d aligned for best cross-lingual performance
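The report does not state which isotropy estimator it uses; one common choice is the partition-function ratio of Mu & Viswanath (2017), where values near 1.0 mean the embedding mass is spread evenly across principal axes. A sketch under that assumption (a random matrix stands in for the actual `ce_32d` vectors):

```python
# Sketch: partition-function isotropy estimate (Mu & Viswanath, 2017).
# Whether the pipeline uses this exact definition is an assumption.
import numpy as np

def isotropy(E: np.ndarray) -> float:
    """E: (vocab_size, dim) embedding matrix."""
    _, _, Vt = np.linalg.svd(E, full_matrices=False)
    Z = np.exp(E @ Vt.T).sum(axis=0)   # partition function along each axis
    return float(Z.min() / Z.max())    # 1.0 would be perfectly isotropic

E = np.random.randn(5000, 32)          # stand-in for the loaded ce_32d vectors
print(f"isotropy ≈ {isotropy(E):.4f}")
```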
  ---
+## 6. Morphological Analysis (Experimental)
+
+> ⚠️ **Warning:** This language shows low morphological productivity. The statistical signals used for this analysis may be noisy or less reliable than for morphologically rich languages.
+
+This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
+
+### 6.1 Productivity & Complexity
+
+| Metric | Value | Interpretation | Recommendation |
+|--------|-------|----------------|----------------|
+| Productivity Index | **0.000** | Low morphological productivity | ⚠️ Likely unreliable |
+| Idiomaticity Gap | **-1.000** | Low formulaic content | - |
+
+### 6.2 Affix Inventory (Productive Units)
+
+These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts. (A sketch of this test follows the suffix table below.)
+
+#### Productive Prefixes
+
+| Prefix | Examples |
+|--------|----------|
+| `ка-` | картографии, карайора, карпат |
+| `ко-` | количество, кочаны, кошехаблан |
+| `ма-` | майкен, маршаллвилл, машано |
+
+#### Productive Suffixes
+
+| Suffix | Examples |
+|--------|----------|
+| `-а` | ривица, валенсуэла, карайора |
+| `-о` | монтеморо, количество, мятнево |
+| `-н` | расистийн, майкен, тефран |
+| `-ан` | тефран, дмитрован, кертан |
+| `-во` | количество, мятнево, крайково |
+| `-ки` | исаковски, юридически, перлавки |
+| `-ово` | крайково, перегудово, дубново |
+| `-ка` | узника, кукушка, тлаика |
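A toy version of the substitutability test described in 6.2: strip a candidate ending and keep it only when the remaining stem is attested with other endings. The tiny vocabulary here is illustrative; the real analysis samples `ce_vocabulary.parquet`.

```python
# Sketch of the suffix-substitutability test; illustrative vocabulary only.
from collections import defaultdict

vocab = {"крайково", "крайкован", "косогорово", "косяково", "дубново", "дубнован"}

def productive_suffixes(vocab, min_stem_len=4):
    endings_by_stem = defaultdict(set)          # stem -> observed endings
    for word in vocab:
        for cut in range(min_stem_len, len(word)):
            endings_by_stem[word[:cut]].add(word[cut:])
    support = defaultdict(set)                  # suffix -> stems it attaches to
    for stem, endings in endings_by_stem.items():
        if len(endings) > 1:                    # stem recurs with other endings
            for e in endings:
                support[e].add(stem)
    return sorted(support.items(), key=lambda kv: -len(kv[1]))

for suffix, stems in productive_suffixes(vocab)[:5]:
    print(f"-{suffix}: attaches to {len(stems)} stems")
```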
+### 6.3 Bound Stems (Lexical Roots)
+
+Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid. (A cohesion sketch follows the table below.)
+
+| Stem | Cohesion | Substitutability | Examples |
+|------|----------|------------------|----------|
+| `архо` | 2.04x | 122 contexts | архон, тархо, лархо |
+| `галд` | 2.73x | 16 contexts | галдо, галда, угалде |
+| `ргаш` | 2.16x | 34 contexts | ургаш, бергаш, цергаш |
+| `лгал` | 2.58x | 17 contexts | билгал, билгало, билгала |
+| `етти` | 1.89x | 42 contexts | бетти, нетти, меттин |
+| `харх` | 1.88x | 41 contexts | ахархо, вахарх, мухарх |
+| `халл` | 1.51x | 92 contexts | халла, халле, халль |
+| `ийла` | 1.86x | 35 contexts | кийла, шийла, мийла |
+| `игаш` | 2.25x | 18 contexts | бигаш, цигаш, книгаш |
+| `рхой` | 2.21x | 19 contexts | лархой, сурхой, сурхойн |
+| `ласт` | 1.59x | 60 contexts | пласт, ласта, селаст |
+| `ттиг` | 1.99x | 25 contexts | меттиг, гаттиг, ме́ттиг |
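The cohesion multipliers above read like an observed-versus-expected frequency ratio; the exact formula is not given, so this sketch uses one plausible reading: how much more often a character 4-gram occurs than its two halves would predict under independence.

```python
# Sketch: 4-gram cohesion as observed probability over an independence
# baseline built from its 2-gram halves. The report's exact formula is unknown.
from collections import Counter

def cohesion(words):
    c4, c2 = Counter(), Counter()
    for w in words:
        for i in range(len(w) - 3):
            c4[w[i:i + 4]] += 1
        for i in range(len(w) - 1):
            c2[w[i:i + 2]] += 1
    t4, t2 = sum(c4.values()), sum(c2.values())
    scores = {}
    for g, n in c4.items():
        p_obs = n / t4
        p_exp = (c2[g[:2]] / t2) * (c2[g[2:]] / t2)
        if p_exp > 0:
            scores[g] = p_obs / p_exp          # >1 means cohesive
    return scores

print(sorted(cohesion(["меттиг", "меттигаш", "метта"]).items(),
             key=lambda kv: -kv[1])[:3])
```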
+### 6.4 Affix Compatibility (Co-occurrence)
+
+This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
+
+| Prefix | Suffix | Frequency | Examples |
+|--------|--------|-----------|----------|
+| `ко-` | `-а` | 51 words | королиха, кокориха |
+| `ка-` | `-а` | 43 words | карпеевка, камила |
+| `ка-` | `-о` | 38 words | картелево, катюшино |
+| `ма-` | `-а` | 35 words | машакепара, малакода |
+| `ко-` | `-о` | 33 words | косогорово, косяково |
+| `ка-` | `-н` | 31 words | калустовгӏеран, камблен |
+| `ма-` | `-н` | 24 words | малоярославецан, марьинкан |
+| `ко-` | `-н` | 23 words | коритен, койдин |
+| `ма-` | `-о` | 22 words | маторо, манкузо |
+| `ко-` | `-во` | 18 words | косогорово, косяково |
+### 6.5 Recursive Morpheme Segmentation
+
+Using **Recursive Hierarchical Substitutability**, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., `prefix-prefix-root-suffix`). A toy recursion sketch follows the table below.
+
+| Word | Suggested Split | Confidence | Stem |
+|------|-----------------|------------|------|
+| полканово | **`полк-ан-ово`** | 6.0 | `полк` |
+| андрюшкино | **`андрюш-ки-но`** | 6.0 | `андрюш` |
+| зимовники | **`зимовни-ки`** | 4.5 | `зимовни` |
+| гринвичан | **`гринвич-ан`** | 4.5 | `гринвич` |
+| гуннбьёрнан | **`гуннбьёрн-ан`** | 4.5 | `гуннбьёрн` |
+| хромосоман | **`хромосом-ан`** | 4.5 | `хромосом` |
+| боьлкъазаран | **`боьлкъазар-ан`** | 4.5 | `боьлкъазар` |
+| хӏуманашна | **`хӏуманаш-на`** | 4.5 | `хӏуманаш` |
+| ынтымакан | **`ынтымак-ан`** | 4.5 | `ынтымак` |
+| бартолина | **`бартоли-на`** | 4.5 | `бартоли` |
+| судженскан | **`судженск-ан`** | 4.5 | `судженск` |
+| бузиновка | **`бузинов-ка`** | 4.5 | `бузинов` |
+| тракторашна | **`трактораш-на`** | 4.5 | `трактораш` |
+| пайхӏамаран | **`пайхӏамар-ан`** | 4.5 | `пайхӏамар` |
+| нуьрнберган | **`нуьрнберг-ан`** | 4.5 | `нуьрнберг` |
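A toy recursion in the spirit of the description above, peeling attested affixes until only a stem remains. The affix sets are drawn from the 6.2 inventory for illustration; the real pipeline scores candidate splits, which is where the confidence values in the table come from.

```python
# Sketch: recursive affix peeling; affix sets are illustrative, taken from 6.2.
SUFFIXES = {"ан", "ово", "ки", "но", "ка", "на"}
PREFIXES = {"ка", "ко", "ма"}

def segment(word: str, min_stem: int = 4) -> list:
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) - len(suf) >= min_stem:
            return segment(word[:-len(suf)], min_stem) + [suf]
    for pre in sorted(PREFIXES, key=len, reverse=True):
        if word.startswith(pre) and len(word) - len(pre) >= min_stem:
            return [pre] + segment(word[len(pre):], min_stem)
    return [word]                                # bare stem: recursion stops

print("-".join(segment("полканово")))            # -> полк-ан-ово
```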
+### 6.6 Linguistic Interpretation
+
+> **Automated Insight:**
+> The language CE appears to be more isolating or has a highly fixed vocabulary. Word-level models perform nearly as well as subword models, indicating fewer productive morphological processes.
+
+---
+## 7. Summary & Recommendations
 
 ![Performance Dashboard](visualizations/performance_dashboard.png)
 
 | Component | Recommended | Rationale |
 |-----------|-------------|-----------|
+| Tokenizer | **64k BPE** | Best compression (3.78x) |
+| N-gram | **2-gram** | Lowest perplexity (423) |
+| Markov | **Context-4** | Highest predictability (96.8%) |
 | Embeddings | **100d** | Balanced semantic capture and isotropy |
 
 ---
 ## Appendix: Metrics Glossary & Interpretation Guide
@@ -540,7 +709,8 @@ If you use these models in your research, please cite:
   author = {Kamali, Omar},
   title = {Wikilangs: Open NLP Models for Wikipedia Languages},
   year = {2025},
+  doi = {10.5281/zenodo.18073153},
+  publisher = {Zenodo},
   url = {https://huggingface.co/wikilangs}
   institution = {Omneity Labs}
 }

@@ -556,7 +726,8 @@ MIT License - Free for academic and commercial use.
 - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
 - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
 - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
+- 🤝 Sponsor: [Featherless AI](https://featherless.ai)
 ---
 *Generated by Wikilangs Models Pipeline*
 
+*Report Date: 2026-01-03 10:17:57*
models/embeddings/monolingual/ce_128d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:18a8bbd0641b8970aab2c0ccf604502eba74b97f35c3606ad01acb009b142f02
-size 1134808955
+oid sha256:552e31d70a010dcf9ef87e857ff88199b6929bbb6fb3bcdccca9386585c7aa73
+size 1106869199

models/embeddings/monolingual/ce_128d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 128,
   "version": "monolingual",
   "training_params": {
-    "dim": 128,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 128
   },
-  "vocab_size": 105624
+  "vocab_size": 79041
 }

models/embeddings/monolingual/ce_32d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:44c015065e910726d15abdf60c4dd0e4462950b3f9c99e40c82a6bec99b9cec2
-size 285689723
+oid sha256:5102b3e58f419aba998033902e9953b5363500654754d4e63832e110651c49fa
+size 278165711

models/embeddings/monolingual/ce_32d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 32,
   "version": "monolingual",
   "training_params": {
-    "dim": 32,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 32
   },
-  "vocab_size": 105624
+  "vocab_size": 79041
 }

models/embeddings/monolingual/ce_64d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:160402c76b1bde05f4bbde11003b39939ec6f0e7a3c4e8d2d20b4d20e5810eed
-size 568729467
+oid sha256:78021fa809a68d2e2ee1a5da53c5c92c25afcb13d930f8165b5ca92116725dfd
+size 554400207

models/embeddings/monolingual/ce_64d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 64,
   "version": "monolingual",
   "training_params": {
-    "dim": 64,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 64
   },
-  "vocab_size": 105624
+  "vocab_size": 79041
 }
models/subword_markov/ce_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9cf69dd5043773e4a6ec0dd7fca8fb532d1def26240aa6214a05619b9f7a0320
-size 134168
+oid sha256:9b662cf799690d78b190708472b117520cd7cbcad0ac633b1286d43e5c79ae3a
+size 117929

models/subword_markov/ce_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "subword",
   "language": "ce",
-  "unique_contexts": 1510,
-  "total_transitions": 552073024
+  "unique_contexts": 1553,
+  "total_transitions": 402142071
 }

models/subword_markov/ce_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9a6187b0fabb36d6b719c6ec916b358a15d63a341364e7852e2f3a23c5586073
-size 1098619
+oid sha256:c4dfc6fd0f57c29113d3b3fec461c0764a0522c46a13186880ed52b9d6958571
+size 872325

models/subword_markov/ce_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "subword",
   "language": "ce",
-  "unique_contexts": 17227,
-  "total_transitions": 551384673
+  "unique_contexts": 14069,
+  "total_transitions": 401528698
 }

models/subword_markov/ce_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d252c42256fc6c0b6fce0d261303ba6de8609a4c175ccf6885164947f96bdded
-size 5724342
+oid sha256:c3414e9127a5344c044f38d2218042344b271b5e8396bc6d4d72233e2b2d1118
+size 4229687

models/subword_markov/ce_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "subword",
   "language": "ce",
-  "unique_contexts": 133970,
-  "total_transitions": 550696322
+  "unique_contexts": 103627,
+  "total_transitions": 400915325
 }

models/subword_markov/ce_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e1e81c1dd821aba41445dadb821e7e3bcb207d9eb6e163013e5c9212255e4ccc
-size 19564690
+oid sha256:f130a9685a8aed9c6b457e355c462d4bb6bc97cd44b8ef634c042ea0133165d1
+size 15358547

models/subword_markov/ce_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "subword",
   "language": "ce",
-  "unique_contexts": 662768,
-  "total_transitions": 550007971
+  "unique_contexts": 515118,
+  "total_transitions": 400301952
 }
models/subword_ngram/ce_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4a1697b51648dc86c9e845aafa0afbfc74dc00aa7028ba2b5d395142bf6f3640
-size 118388
+oid sha256:461d6692fb3cde3c1a7fd56b30b64912a0c140583d19fdeca1497422c4effcb1
+size 97261

models/subword_ngram/ce_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "subword",
   "language": "ce",
-  "unique_ngrams": 7755,
-  "total_ngrams": 552073024
+  "unique_ngrams": 6176,
+  "total_ngrams": 402142071
 }

models/subword_ngram/ce_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4acca9406308437ca11a05cfcb287cb1fec43d90ddfdd2e54583391e67b0dcfe
-size 1012357
+oid sha256:58a82a7f67b113302603309f686885d9e1c8883bfbb49e51f83b18800eabd3e3
+size 816823

models/subword_ngram/ce_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "subword",
   "language": "ce",
-  "unique_ngrams": 72318,
-  "total_ngrams": 551384673
+  "unique_ngrams": 58954,
+  "total_ngrams": 401528698
 }

models/subword_ngram/ce_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cde1675327291babdadf64b31e49b9f88ccfec8517d7a1de3c798508559e9140
-size 5464388
+oid sha256:47cefb2349ec2159ead2601f3247b831fb17076c78f2ad1df98bfeca85c04804
+size 4353133

models/subword_ngram/ce_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "subword",
   "language": "ce",
-  "unique_ngrams": 422662,
-  "total_ngrams": 550696322
+  "unique_ngrams": 337533,
+  "total_ngrams": 400915325
 }
models/tokenizer/ce_tokenizer_16k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:693fef07fd91c952bd6dc391258ac00b72412b245cccf2273b8da2e74f98a918
-size 580690
+oid sha256:47044bfafe2471fb7dd149ae56b1ee71a3dc0dae2187dde2da97d70536d94302
+size 583986

models/tokenizer/ce_tokenizer_16k.vocab CHANGED
The diff for this file is too large to render.

models/tokenizer/ce_tokenizer_32k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7641da1e8ba9bb6ac932a554231606e62d54ac9ed4acdfcc75c032e584b726cf
-size 947341
+oid sha256:4dc3aa08231d203aaef76058c47ffd13abd5516006e009bda38a089e8f521043
+size 941717

models/tokenizer/ce_tokenizer_32k.vocab CHANGED
The diff for this file is too large to render.

models/tokenizer/ce_tokenizer_64k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:dd5f0f5cb1ac9ddae9543ca026331e7095bfb49e92713ef6d2d1f2b1af81c700
-size 1695288
+oid sha256:b0ca3910cc123379fbdb7db536fdc659e073f98693c0cdc2028daa98f77221fe
+size 1671632

models/tokenizer/ce_tokenizer_64k.vocab CHANGED
The diff for this file is too large to render.

models/tokenizer/ce_tokenizer_8k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:61b5f6dfdef5fedd1877009ce07f66b91e1f97f11807dd468dcdb16a014824e2
-size 406333
+oid sha256:6f2a3488af9a58357efbe0e4a03cc1777791f582cd49bc1b578b466f3c6fe09e
+size 409035

models/tokenizer/ce_tokenizer_8k.vocab CHANGED
The diff for this file is too large to render.
models/vocabulary/ce_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:12b88a5312e7a67ba957dffeff9943c20553b754f8d99ae3bab9bc7925021712
-size 4165372
+oid sha256:53491c1be3ddba1539a45ab8004c1a5fe96048b27ea345ef740d3827b2c5eb4a
+size 3729004

models/vocabulary/ce_vocabulary_metadata.json CHANGED
@@ -1,16 +1,17 @@
 {
   "language": "ce",
-  "vocabulary_size": 267119,
+  "vocabulary_size": 230774,
+  "variant": "full",
   "statistics": {
-    "type_token_ratio": 0.00808419251496939,
+    "type_token_ratio": 0.009492890208497928,
     "coverage": {
-      "top_100": 0.4013568533353177,
-      "top_1000": 0.8125705130068827,
-      "top_5000": 0.9597777008352958,
-      "top_10000": 0.9715211792991832
+      "top_100": 0.4413509820362693,
+      "top_1000": 0.8620556765599773,
+      "top_5000": 0.9621095823063379,
+      "top_10000": 0.9714254859934246
     },
-    "hapax_count": 329317,
-    "hapax_ratio": 0.5521413865024914,
-    "total_documents": 688351
+    "hapax_count": 289712,
+    "hapax_ratio": 0.5566182375702708,
+    "total_documents": 613373
   }
 }
models/word_markov/ce_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:599dfc01d9395014b5691fd2f74145fcc4405443ba6655b4078f8756b54fe14c
-size 26132254
+oid sha256:c3bb6f9d89f6d5115c2926090a43cce4d79d1f587ec3a35363d32b6e7597bef4
+size 26676852

models/word_markov/ce_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "word",
   "language": "ce",
-  "unique_contexts": 596401,
-  "total_transitions": 106354951
+  "unique_contexts": 520111,
+  "total_transitions": 54215661
 }

models/word_markov/ce_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fb42589d1cb2a6e1ec32d0fb09f361272a03b0d53a6ddf4b788ff111a967fbd8
-size 63812094
+oid sha256:e50bac15d55d8209d7a94c5b47b626d9ec9efcfbc87ea034378579c8baae771a
+size 61192411

models/word_markov/ce_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "word",
   "language": "ce",
-  "unique_contexts": 2141469,
-  "total_transitions": 105666601
+  "unique_contexts": 2019671,
+  "total_transitions": 53602288
 }

models/word_markov/ce_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f8e2020efe22d041025528c0a6f3588a337e4ff44e09ea096401c37fdf145a05
-size 98196773
+oid sha256:d7731dcdedfda586ce23049d63d1cf35f363b73a5694fc5959c9acc5d96708db
+size 83617046

models/word_markov/ce_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "word",
   "language": "ce",
-  "unique_contexts": 3726034,
-  "total_transitions": 104978251
+  "unique_contexts": 2889994,
+  "total_transitions": 52988915
 }

models/word_markov/ce_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9d606c8d8d7711381abb87ee93407843ba0f735bcc2358cc37ee6bb64dc89a91
-size 127811549
+oid sha256:b7a571da9e93c548265c09ac2fa741dd64799c4eb59edfd177a5c2d1d6d9b405
+size 103083705

models/word_markov/ce_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "word",
   "language": "ce",
-  "unique_contexts": 4825259,
-  "total_transitions": 104289907
+  "unique_contexts": 3246178,
+  "total_transitions": 52375542
 }
models/word_ngram/ce_2gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5f013fabd46773d7120af2601c77cf0ab74f745ad856acbd7e9eee74a785ae73
-size 3471073
+oid sha256:7eec9750c3913795005af6c38ea29300fed66c9a6bf75815c3f9610aa60530e9
+size 2175399

models/word_ngram/ce_2gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "word",
   "language": "ce",
-  "unique_ngrams": 180710,
-  "total_ngrams": 106354951
+  "unique_ngrams": 100140,
+  "total_ngrams": 54215661
 }

models/word_ngram/ce_3gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:38e66219baeec769e5c5ad382c8dd648581653177e8a32ad53b8b8ba808b9c3c
-size 6715161
+oid sha256:237f7c39561c5eecec8a4b4fe0f5d823821231402041fe6b873a1bdb29e2ab83
+size 3958635

models/word_ngram/ce_3gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "word",
   "language": "ce",
-  "unique_ngrams": 322719,
-  "total_ngrams": 105666601
+  "unique_ngrams": 157541,
+  "total_ngrams": 53602288
 }

models/word_ngram/ce_4gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:918e1068a19861c1df2e2c44e0f5e49bde38b0e6eb1e0cdf4f44d0350bbee9a0
-size 15779066
+oid sha256:66b57f5089c75aef2f67e25c1358b26b0ace9b73aac63422694c1db35df26046
+size 9113237

models/word_ngram/ce_4gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "word",
   "language": "ce",
-  "unique_ngrams": 709202,
-  "total_ngrams": 104978251
+  "unique_ngrams": 330019,
+  "total_ngrams": 52988915
 }
visualizations/embedding_isotropy.png CHANGED
visualizations/embedding_norms.png CHANGED
visualizations/embedding_similarity.png CHANGED

Git LFS Details (before)

  • SHA256: b2a7062ae9e7c63efe6d7950543b73061b02c00d4215422311e6185395d96092
  • Pointer size: 131 Bytes
  • Size of remote file: 163 kB

Git LFS Details (after)

  • SHA256: 7fc2cb784d158f5ae967d5bdd875e9d27166e9fa4a09bc94456d7d0b2f6f4294
  • Pointer size: 131 Bytes
  • Size of remote file: 163 kB

visualizations/markov_branching.png CHANGED
visualizations/markov_contexts.png CHANGED