omarkamali committed (verified)
Commit ff6240b · 1 Parent(s): 5cb773a

Upload all models and assets for bxr (20251001)

This view is limited to 50 files because it contains too many changes. See the raw diff for the rest.

Files changed (50)
  1. README.md +307 -138
  2. models/embeddings/monolingual/bxr_128d.bin +2 -2
  3. models/embeddings/monolingual/bxr_128d_metadata.json +5 -3
  4. models/embeddings/monolingual/bxr_32d.bin +2 -2
  5. models/embeddings/monolingual/bxr_32d_metadata.json +5 -3
  6. models/embeddings/monolingual/bxr_64d.bin +2 -2
  7. models/embeddings/monolingual/bxr_64d_metadata.json +5 -3
  8. models/subword_markov/bxr_markov_ctx1_subword.parquet +2 -2
  9. models/subword_markov/bxr_markov_ctx1_subword_metadata.json +2 -2
  10. models/subword_markov/bxr_markov_ctx2_subword.parquet +2 -2
  11. models/subword_markov/bxr_markov_ctx2_subword_metadata.json +2 -2
  12. models/subword_markov/bxr_markov_ctx3_subword.parquet +2 -2
  13. models/subword_markov/bxr_markov_ctx3_subword_metadata.json +2 -2
  14. models/subword_markov/bxr_markov_ctx4_subword.parquet +2 -2
  15. models/subword_markov/bxr_markov_ctx4_subword_metadata.json +2 -2
  16. models/subword_ngram/bxr_2gram_subword.parquet +2 -2
  17. models/subword_ngram/bxr_2gram_subword_metadata.json +2 -2
  18. models/subword_ngram/bxr_3gram_subword.parquet +2 -2
  19. models/subword_ngram/bxr_3gram_subword_metadata.json +2 -2
  20. models/subword_ngram/bxr_4gram_subword.parquet +2 -2
  21. models/subword_ngram/bxr_4gram_subword_metadata.json +2 -2
  22. models/tokenizer/bxr_tokenizer_16k.model +2 -2
  23. models/tokenizer/bxr_tokenizer_16k.vocab +0 -0
  24. models/tokenizer/bxr_tokenizer_32k.model +2 -2
  25. models/tokenizer/bxr_tokenizer_32k.vocab +0 -0
  26. models/tokenizer/bxr_tokenizer_64k.model +2 -2
  27. models/tokenizer/bxr_tokenizer_64k.vocab +0 -0
  28. models/tokenizer/bxr_tokenizer_8k.model +2 -2
  29. models/tokenizer/bxr_tokenizer_8k.vocab +0 -0
  30. models/vocabulary/bxr_vocabulary.parquet +2 -2
  31. models/vocabulary/bxr_vocabulary_metadata.json +10 -9
  32. models/word_markov/bxr_markov_ctx1_word.parquet +2 -2
  33. models/word_markov/bxr_markov_ctx1_word_metadata.json +2 -2
  34. models/word_markov/bxr_markov_ctx2_word.parquet +2 -2
  35. models/word_markov/bxr_markov_ctx2_word_metadata.json +2 -2
  36. models/word_markov/bxr_markov_ctx3_word.parquet +2 -2
  37. models/word_markov/bxr_markov_ctx3_word_metadata.json +2 -2
  38. models/word_markov/bxr_markov_ctx4_word.parquet +2 -2
  39. models/word_markov/bxr_markov_ctx4_word_metadata.json +2 -2
  40. models/word_ngram/bxr_2gram_word.parquet +2 -2
  41. models/word_ngram/bxr_2gram_word_metadata.json +2 -2
  42. models/word_ngram/bxr_3gram_word.parquet +2 -2
  43. models/word_ngram/bxr_3gram_word_metadata.json +2 -2
  44. models/word_ngram/bxr_4gram_word.parquet +2 -2
  45. models/word_ngram/bxr_4gram_word_metadata.json +2 -2
  46. visualizations/embedding_isotropy.png +0 -0
  47. visualizations/embedding_norms.png +0 -0
  48. visualizations/embedding_similarity.png +2 -2
  49. visualizations/markov_branching.png +0 -0
  50. visualizations/markov_contexts.png +0 -0
README.md CHANGED
@@ -23,14 +23,14 @@ dataset_info:
23
  metrics:
24
  - name: best_compression_ratio
25
  type: compression
26
- value: 4.299
27
  - name: best_isotropy
28
  type: isotropy
29
- value: 0.8992
30
  - name: vocabulary_size
31
  type: vocab
32
- value: 37973
33
- generated: 2025-12-28
34
  ---
35
 
36
  # BXR - Wikilangs Models
@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
44
  ### Models & Assets
45
 
46
  - Tokenizers (8k, 16k, 32k, 64k)
47
- - N-gram models (2, 3, 4-gram)
48
- - Markov chains (context of 1, 2, 3 and 4)
49
  - Subword N-gram and Markov chains
50
- - Embeddings in various sizes and dimensions
51
  - Language Vocabulary
52
  - Language Statistics
 
53
  ![Performance Dashboard](visualizations/performance_dashboard.png)
54
 
55
  ### Analysis and Evaluation
@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
59
  - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
60
  - [4. Vocabulary Analysis](#4-vocabulary-analysis)
61
  - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
62
- - [6. Summary & Recommendations](#6-summary--recommendations)
 
63
  - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
64
  - [Visualizations Index](#visualizations-index)
65
 
@@ -68,56 +70,57 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
68
 
69
  ![Tokenizer Compression](visualizations/tokenizer_compression.png)
70
 
71
  ### Results
72
 
73
  | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
74
  |------------|-------------|---------------|----------|--------------|
75
- | **8k** | 3.277x | 3.23 | 0.1090% | 644,799 |
76
- | **16k** | 3.643x | 3.59 | 0.1212% | 580,038 |
77
- | **32k** | 3.983x | 3.93 | 0.1325% | 530,456 |
78
- | **64k** | 4.299x 🏆 | 4.24 | 0.1430% | 491,550 |
79
 
80
  ### Tokenization Examples
81
 
82
  Below are sample sentences tokenized with each vocabulary size:
83
 
84
- **Sample 1:** `Маршалл аралууд - Австралиин болон Океаниин улас.
85
-
86
- Категори:Австрали ба Океани`
87
 
88
  | Vocab | Tokens | Count |
89
  |-------|--------|-------|
90
- | 8k | `▁мар шал л ▁арал ууд ▁- ▁австралиин ▁болон ▁оке аниин ... (+9 more)` | 19 |
91
- | 16k | `▁маршал л ▁аралууд ▁- ▁австралиин ▁болон ▁оке аниин ▁улас . ... (+5 more)` | 15 |
92
- | 32k | `▁маршал л ▁аралууд ▁- ▁австралиин ▁болон ▁океаниин ▁улас . ▁категори ... (+4 more)` | 14 |
93
- | 64k | `▁маршалл ▁аралууд ▁- ▁австралиин ▁болон ▁океаниин ▁улас . ▁категори : ... (+3 more)` | 13 |
94
-
95
- **Sample 2:** `Зимбабве - Африкийн улас.
96
 
97
- President : Emmerson Mnangagwa
98
- Категори:Африка`
99
 
100
  | Vocab | Tokens | Count |
101
  |-------|--------|-------|
102
- | 8k | `▁з им ба б ве ▁- ▁африкийн ▁улас . ▁pr ... (+16 more)` | 26 |
103
- | 16k | `▁з имба б ве ▁- ▁африкийн ▁улас . ▁pres ident ... (+12 more)` | 22 |
104
- | 32k | `▁зимбабве ▁- ▁африкийн ▁улас . ▁president ▁: ▁em m erson ... (+8 more)` | 18 |
105
- | 64k | `▁зимбабве ▁- ▁африкийн ▁улас . ▁president ▁: ▁emm erson ▁mn ... (+6 more)` | 16 |
106
 
107
- **Sample 3:** `Ваканси ( «хооһон, сүлөө») эмхи зургаанда, һургуулиин газарта эзэлэгдээгүй ту...`
108
 
109
  | Vocab | Tokens | Count |
110
  |-------|--------|-------|
111
- | 8k | `▁вак ан си ▁( ▁— ▁« х ооһон , ▁сүлөө ... (+27 more)` | 37 |
112
- | 16k | `▁вак ан си ▁( ▁— ▁« х ооһон , ▁сүлөө ... (+26 more)` | 36 |
113
- | 32k | `▁вак ан си ▁( ▁— ▁« хооһон , ▁сүлөө ») ... (+22 more)` | 32 |
114
- | 64k | `▁вак ан си ▁( ▁— ▁« хооһон , ▁сүлөө ») ... (+19 more)` | 29 |
115
 
116
 
117
  ### Key Findings
118
 
119
- - **Best Compression:** 64k achieves 4.299x compression
120
- - **Lowest UNK Rate:** 8k with 0.1090% unknown tokens
121
  - **Trade-off:** Larger vocabularies improve compression but increase model size
122
  - **Recommendation:** 32k vocabulary provides optimal balance for production use
123
 
@@ -126,57 +129,89 @@ President : Emmerson Mnangagwa
126
 
127
  ![N-gram Perplexity](visualizations/ngram_perplexity.png)
128
 
 
 
129
  ![N-gram Coverage](visualizations/ngram_coverage.png)
130
 
131
  ### Results
132
 
133
- | N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
134
- |--------|------------|---------|----------------|------------------|-------------------|
135
- | **2-gram** | 5,799 🏆 | 12.50 | 15,674 | 20.1% | 46.7% |
136
- | **2-gram** | 532 🏆 | 9.05 | 4,895 | 53.6% | 95.0% |
137
- | **3-gram** | 8,896 | 13.12 | 19,215 | 16.4% | 37.7% |
138
- | **3-gram** | 4,459 | 12.12 | 35,775 | 19.2% | 59.0% |
139
- | **4-gram** | 16,750 | 14.03 | 31,775 | 13.4% | 28.0% |
140
- | **4-gram** | 21,165 | 14.37 | 149,695 | 9.1% | 33.0% |
141
 
142
  ### Top 5 N-grams by Size
143
 
144
- **2-grams:**
 
145
 
146
  | Rank | N-gram | Count |
147
  |------|--------|-------|
148
- | 1 | `категори :` | 4,149 |
149
- | 2 | `юм .` | 3,028 |
150
- | 3 | `) ,` | 2,485 |
151
- | 4 | `байна .` | 2,414 |
152
- | 5 | `) .` | 1,647 |
153
 
154
- **3-grams:**
155
 
156
  | Rank | N-gram | Count |
157
  |------|--------|-------|
158
- | 1 | `зүүлтэ категори :` | 1,025 |
159
- | 2 | `гү , али` | 1,017 |
160
- | 3 | `. зүүлтэ категори` | 687 |
161
- | 4 | `. категори :` | 682 |
162
- | 5 | `( ородоор :` | 647 |
163
 
164
- **4-grams:**
165
 
166
  | Rank | N-gram | Count |
167
  |------|--------|-------|
168
- | 1 | `. зүүлтэ категори :` | 687 |
169
- | 2 | `тохёоһон үйлэ ябадалай жагсаалта` | 366 |
170
- | 3 | `үдэр тохёоһон үйлэ ябадалай` | 366 |
171
- | 4 | `энэ үдэр тохёоһон үйлэ` | 366 |
172
- | 5 | `энэ үдэр наһа бараһаниинь` | 366 |
 
173
 
174
 
175
  ### Key Findings
176
 
177
- - **Best Perplexity:** 2-gram with 532
178
  - **Entropy Trend:** Decreases with larger n-grams (more predictable)
179
- - **Coverage:** Top-1000 patterns cover ~33% of corpus
180
  - **Recommendation:** 4-gram or 5-gram for best predictive performance
181
 
182
  ---
@@ -184,55 +219,86 @@ President : Emmerson Mnangagwa
184
 
185
  ![Markov Entropy](visualizations/markov_entropy.png)
186
 
 
 
187
  ![Markov Branching](visualizations/markov_branching.png)
188
 
189
  ### Results
190
 
191
- | Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
192
- |---------|-------------|------------|------------------|-----------------|----------------|
193
- | **1** | 0.6555 | 1.575 | 4.12 | 100,619 | 34.4% |
194
- | **1** | 1.0618 | 2.088 | 7.29 | 2,105 | 0.0% |
195
- | **2** | 0.1980 | 1.147 | 1.45 | 414,597 | 80.2% |
196
- | **2** | 0.8900 | 1.853 | 5.27 | 15,331 | 11.0% |
197
- | **3** | 0.0669 | 1.047 | 1.12 | 601,940 | 93.3% |
198
- | **3** | 0.7767 | 1.713 | 3.63 | 80,818 | 22.3% |
199
- | **4** | 0.0247 🏆 | 1.017 | 1.04 | 671,341 | 97.5% |
200
- | **4** | 0.5432 🏆 | 1.457 | 2.28 | 293,211 | 45.7% |
 
 
201
 
202
- ### Generated Text Samples
203
 
204
- Below are text samples generated from each Markov chain model:
 
 
206
  **Context Size 1:**
207
 
208
- 1. `, 080 526 | | almeirim | 9 хуби . түмэр гарган абаха , тэршэлэн ,`
209
- 2. `. металл болон туризм , ( 0 , амеэрикын нэгэдэһэн улас һуулшын 10 жилийн хугацаатай томилогджээ`
210
- 3. `( 14 , ехэ , гурбан шажанууд болон xiii зуунай франциин уран үгэ гаралтай . аж`
211
 
212
  **Context Size 2:**
213
 
214
- 1. `категори : зэмэ аймаг ( ) — оросой улас түрын , эдэй засагта шэлжэһэн . 1926 .`
215
- 2. `юм . 48 c - һээ 10 ° c ( уһанай хүлдэхын сэг , шугам , сахилгаан`
216
- 3. `) , пит бест хоёр бүрилдэхүүнһөө гарашье хожомынь ринго старр ( бүмбэршэн , дуушан ) дүрбэнһөө 1960`
217
 
218
  **Context Size 3:**
219
 
220
- 1. `зүүлтэ категори : хүдэлмэри`
221
- 2. `гү , али кесаревын зүһэлгын аргаар 39 долоо хоногой урда нарайлалга бусад элүүр мэндын шалтгаанай ул...`
222
- 3. `. зүүлтэ категори : буддын шажан категори : хамба ламанар категори : 1935 ондо түрэгэһэд категори : ...`
223
 
224
  **Context Size 4:**
225
 
226
- 1. `. зүүлтэ категори : ехэ британи категори : арадууд`
227
- 2. `энэ үдэр тохёоһон үйлэ ябадалай жагсаалта энэ үдэр түрэһэниинь энэ үдэр наһа бараһаниинь категори : ...`
228
- 3. `энэ үдэр наһа бараһаниинь категори : үдэрнүүд`
229
 
230
 
231
  ### Key Findings
232
 
233
- - **Best Predictability:** Context-4 with 97.5% predictability
234
  - **Branching Factor:** Decreases with context size (more deterministic)
235
- - **Memory Trade-off:** Larger contexts require more storage (293,211 contexts)
236
  - **Recommendation:** Context-3 or Context-4 for text generation
237
 
238
  ---
@@ -248,64 +314,64 @@ Below are text samples generated from each Markov chain model:
248
 
249
  | Metric | Value |
250
  |--------|-------|
251
- | Vocabulary Size | 37,973 |
252
- | Total Tokens | 528,039 |
253
- | Mean Frequency | 13.91 |
254
  | Median Frequency | 3 |
255
- | Frequency Std Dev | 76.38 |
256
 
257
  ### Most Common Words
258
 
259
  | Rank | Word | Frequency |
260
  |------|------|-----------|
261
- | 1 | категори | 4,150 |
262
- | 2 | ба | 3,836 |
263
- | 3 | юм | 3,234 |
264
- | 4 | энэ | 3,067 |
265
- | 5 | ондо | 2,851 |
266
- | 6 | болон | 2,646 |
267
- | 7 | байна | 2,600 |
268
- | 8 | улас | 2,578 |
269
- | 9 | оной | 2,541 |
270
- | 10 | the | 2,196 |
271
 
272
  ### Least Common Words (from vocabulary)
273
 
274
  | Rank | Word | Frequency |
275
  |------|------|-----------|
276
- | 1 | ᠲᠠᠢ | 2 |
277
- | 2 | ᠮᠣᠩᠭᠤᠯ | 2 |
278
- | 3 | ᠤᠷᠤᠨ | 2 |
279
- | 4 | ᠮᠢᠨᠢ | 2 |
280
- | 5 | ᠦᠷ | 2 |
281
- | 6 | ᠵᠢᠷᠭᠠᠯ | 2 |
282
- | 7 | дүхэригтэй | 2 |
283
- | 8 | e5 | 2 |
284
- | 9 | исибагай | 2 |
285
- | 10 | ылын | 2 |
286
 
287
  ### Zipf's Law Analysis
288
 
289
  | Metric | Value |
290
  |--------|-------|
291
- | Zipf Coefficient | 0.9692 |
292
- | R² (Goodness of Fit) | 0.992557 |
293
  | Adherence Quality | **excellent** |
294
 
295
  ### Coverage Analysis
296
 
297
  | Top N Words | Coverage |
298
  |-------------|----------|
299
- | Top 100 | 21.7% |
300
- | Top 1,000 | 51.3% |
301
- | Top 5,000 | 74.3% |
302
- | Top 10,000 | 83.7% |
303
 
304
  ### Key Findings
305
 
306
- - **Zipf Compliance:** R²=0.9926 indicates excellent adherence to Zipf's law
307
- - **High Frequency Dominance:** Top 100 words cover 21.7% of corpus
308
- - **Long Tail:** 27,973 words needed for remaining 16.3% coverage
309
 
310
  ---
311
  ## 5. Word Embeddings Evaluation
@@ -318,24 +384,124 @@ Below are text samples generated from each Markov chain model:
318
 
319
  ![t-SNE Sentences](visualizations/tsne_sentences.png)
320
 
321
- ### Model Comparison
322
 
323
- | Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
324
- |-------|------------|-----------|----------|----------|----------|
325
- | **mono_32d** | 14,816 | 32 | 3.693 | 0.816 | 0.8992 🏆 |
326
- | **mono_64d** | 14,816 | 64 | 4.178 | 0.727 | 0.8181 |
327
- | **mono_128d** | 14,816 | 128 | 4.428 | 0.707 | 0.4039 |
328
- | **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |
 
329
 
330
  ### Key Findings
331
 
332
- - **Best Isotropy:** mono_32d with 0.8992 (more uniform distribution)
333
- - **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
334
- - **Vocabulary Coverage:** All models cover 14,816 words
335
- - **Recommendation:** 100d for balanced semantic capture and efficiency
336
 
337
  ---
338
- ## 6. Summary & Recommendations
 
339
 
340
  ![Performance Dashboard](visualizations/performance_dashboard.png)
341
 
@@ -343,11 +509,12 @@ Below are text samples generated from each Markov chain model:
343
 
344
  | Component | Recommended | Rationale |
345
  |-----------|-------------|-----------|
346
- | Tokenizer | **32k BPE** | Best compression (4.30x) with low UNK rate |
347
- | N-gram | **5-gram** | Lowest perplexity (532) |
348
- | Markov | **Context-4** | Highest predictability (97.5%) |
349
  | Embeddings | **100d** | Balanced semantic capture and isotropy |
350
 
 
351
  ---
352
  ## Appendix: Metrics Glossary & Interpretation Guide
353
 
@@ -537,7 +704,8 @@ If you use these models in your research, please cite:
537
  author = {Kamali, Omar},
538
  title = {Wikilangs: Open NLP Models for Wikipedia Languages},
539
  year = {2025},
540
- publisher = {HuggingFace},
 
541
  url = {https://huggingface.co/wikilangs}
542
  institution = {Omneity Labs}
543
  }
@@ -553,7 +721,8 @@ MIT License - Free for academic and commercial use.
553
  - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
554
  - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
555
  - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
 
556
  ---
557
  *Generated by Wikilangs Models Pipeline*
558
 
559
- *Report Date: 2025-12-28 09:22:58*
 
23
  metrics:
24
  - name: best_compression_ratio
25
  type: compression
26
+ value: 4.390
27
  - name: best_isotropy
28
  type: isotropy
29
+ value: 0.8916
30
  - name: vocabulary_size
31
  type: vocab
32
+ value: 0
33
+ generated: 2026-01-03
34
  ---
35
 
36
  # BXR - Wikilangs Models
 
44
  ### Models & Assets
45
 
46
  - Tokenizers (8k, 16k, 32k, 64k)
47
+ - N-gram models (2, 3, 4, 5-gram)
48
+ - Markov chains (context of 1, 2, 3, 4 and 5)
49
  - Subword N-gram and Markov chains
50
+ - Embeddings in various sizes and dimensions (aligned and unaligned)
51
  - Language Vocabulary
52
  - Language Statistics
53
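
Each of the assets above ships as a file in this repository and can be fetched individually. A minimal sketch using `huggingface_hub`; the `repo_id` and `repo_type` values are assumptions based on the Wikilangs naming, not confirmed by this page:

```python
# Sketch: download one committed asset from the Hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="wikilangs/bxr",                              # hypothetical repo id
    filename="models/vocabulary/bxr_vocabulary.parquet",  # path from this commit
    repo_type="dataset",                                  # assumption; may be "model"
)
print(path)  # local cache path of the downloaded file
```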
+
54
  ![Performance Dashboard](visualizations/performance_dashboard.png)
55
 
56
  ### Analysis and Evaluation
 
60
  - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
61
  - [4. Vocabulary Analysis](#4-vocabulary-analysis)
62
  - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
63
+ - [6. Morphological Analysis (Experimental)](#6-morphological-analysis-experimental)
64
+ - [7. Summary & Recommendations](#7-summary--recommendations)
65
  - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
66
  - [Visualizations Index](#visualizations-index)
67
 
 
70
 
71
  ![Tokenizer Compression](visualizations/tokenizer_compression.png)
72
 
73
+ ![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
74
+
75
+ ![Tokenizer OOV](visualizations/tokenizer_oov.png)
76
+
77
+ ![Total Tokens](visualizations/tokenizer_total_tokens.png)
78
+
79
  ### Results
80
 
81
  | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
82
  |------------|-------------|---------------|----------|--------------|
83
+ | **8k** | 3.450x | 3.45 | 0.1418% | 628,340 |
84
+ | **16k** | 3.842x | 3.84 | 0.1579% | 564,308 |
85
+ | **32k** | 4.148x | 4.15 | 0.1705% | 522,647 |
86
+ | **64k** | 4.390x 🏆 | 4.39 | 0.1804% | 493,909 |
87
 
88
  ### Tokenization Examples
89
 
90
  Below are sample sentences tokenized with each vocabulary size:
91
 
92
+ **Sample 1:** `Цзяинь - Ород Википеэдийн Үбэр Монголой долоо хоногой үгүүлэл. Мүн үзэхэ Үбэр Мо...`
 
 
93
 
94
  | Vocab | Tokens | Count |
95
  |-------|--------|-------|
96
+ | 8k | `▁цз я инь ▁- ▁ород ▁википеэдийн ▁үбэр ▁монголой ▁долоо ▁хоногой ... (+8 more)` | 18 |
97
+ | 16k | `▁цз я инь ▁- ▁ород ▁википеэдийн ▁үбэр ▁монголой ▁долоо ▁хоногой ... (+8 more)` | 18 |
98
+ | 32k | `▁цзя инь ▁- ▁ород ▁википеэдийн ▁үбэр ▁монголой ▁долоо ▁хоногой ▁үгүүлэл ... (+7 more)` | 17 |
99
+ | 64k | `▁цзяинь ▁- ▁ород ▁википеэдийн ▁үбэр ▁монголой ▁долоо ▁хоногой ▁үгүүлэл . ... (+6 more)` | 16 |
 
 
100
 
101
+ **Sample 2:** `Мобилизаци гээшэ зэбсэгтэ хүсэниие энхэ тайбангай байдалһаань дайнай байдалда ор...`
 
102
 
103
  | Vocab | Tokens | Count |
104
  |-------|--------|-------|
105
+ | 8k | `▁м об ил изаци ▁гээшэ ▁зэбсэгтэ ▁хүсэн иие ▁энхэ ▁тайбан ... (+11 more)` | 21 |
106
+ | 16k | `▁м об ил изаци ▁гээшэ ▁зэбсэгтэ ▁хүсэниие ▁энхэ ▁тайбан гай ... (+9 more)` | 19 |
107
+ | 32k | `▁м обилизаци ▁гээшэ ▁зэбсэгтэ ▁хүсэниие ▁энхэ ▁тайбангай ▁байдалһаань ▁дайнай ▁байдалда ... (+5 more)` | 15 |
108
+ | 64k | `▁мобилизаци ▁гээшэ ▁зэбсэгтэ ▁хүсэниие ▁энхэ ▁тайбангай ▁байдалһаань ▁дайнай ▁байдалда ▁оруулха ... (+4 more)` | 14 |
109
 
110
+ **Sample 3:** `Гильзэ — буугай һомоной түмэр патрон. Зүүлтэ зэбсэг`
111
 
112
  | Vocab | Tokens | Count |
113
  |-------|--------|-------|
114
+ | 8k | `▁г иль зэ ▁— ▁буу гай ▁һом оной ▁түмэр ▁патр ... (+4 more)` | 14 |
115
+ | 16k | `▁г иль зэ ▁— ▁буу гай ▁һомоной ▁түмэр ▁патр он ... (+3 more)` | 13 |
116
+ | 32k | `▁г иль зэ ▁— ▁буу гай ▁һомоной ▁түмэр ▁патр он ... (+3 more)` | 13 |
117
+ | 64k | `▁г иль зэ ▁— ▁буугай ▁һомоной ▁түмэр ▁патрон . ▁зүүлтэ ... (+1 more)` | 11 |
118
 
119
 
120
  ### Key Findings
121
 
122
+ - **Best Compression:** 64k achieves 4.390x compression
123
+ - **Lowest UNK Rate:** 8k with 0.1418% unknown tokens
124
  - **Trade-off:** Larger vocabularies improve compression but increase model size
125
  - **Recommendation:** 32k vocabulary provides optimal balance for production use
126
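
The compression figures above can be spot-checked against the released SentencePiece models. A minimal sketch, assuming compression is measured as raw characters per token (consistent with Compression ≈ Avg Token Len in the results table):

```python
# Sketch: estimate characters-per-token compression for one tokenizer size.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="models/tokenizer/bxr_tokenizer_64k.model")

def compression_ratio(texts):
    chars = sum(len(t) for t in texts)
    tokens = sum(len(sp.encode(t)) for t in texts)
    return chars / tokens

sample = "Мобилизаци гээшэ зэбсэгтэ хүсэниие энхэ тайбангай байдалһаань"
print(f"{compression_ratio([sample]):.3f}x")  # ≈ 4.39x over the full corpus
```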
 
 
129
 
130
  ![N-gram Perplexity](visualizations/ngram_perplexity.png)
131
 
132
+ ![N-gram Unique](visualizations/ngram_unique.png)
133
+
134
  ![N-gram Coverage](visualizations/ngram_coverage.png)
135
 
136
  ### Results
137
 
138
+ | N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
139
+ |--------|---------|------------|---------|----------------|------------------|-------------------|
140
+ | **2-gram** | Word | 4,169 | 12.03 | 8,128 | 19.5% | 49.4% |
141
+ | **2-gram** | Subword | 452 🏆 | 8.82 | 3,823 | 56.9% | 96.7% |
142
+ | **3-gram** | Word | 3,724 | 11.86 | 7,805 | 24.5% | 47.7% |
143
+ | **3-gram** | Subword | 3,736 | 11.87 | 29,340 | 20.6% | 62.1% |
144
+ | **4-gram** | Word | 7,537 | 12.88 | 14,616 | 19.0% | 34.7% |
145
+ | **4-gram** | Subword | 18,031 | 14.14 | 124,835 | 9.4% | 34.5% |
146
 
147
  ### Top 5 N-grams by Size
148
 
149
+ **2-grams (Word):**
150
+
151
+ | Rank | N-gram | Count |
152
+ |------|--------|-------|
153
+ | 1 | `энэ үдэр` | 1,070 |
154
+ | 2 | `гү али` | 1,030 |
155
+ | 3 | `of the` | 465 |
156
+ | 4 | `байна энэ` | 415 |
157
+ | 5 | `бүгэдэ найрамдаха` | 396 |
158
+
159
+ **3-grams (Word):**
160
 
161
  | Rank | N-gram | Count |
162
  |------|--------|-------|
163
+ | 1 | `үдэр наһа бараһаниинь` | 353 |
164
+ | 2 | `үдэр тохёоһон үйлэ` | 353 |
165
+ | 3 | `энэ үдэр түрэһэниинь` | 353 |
166
+ | 4 | `үйлэ ябадалай жагсаалта` | 353 |
167
+ | 5 | `энэ үдэр тохёоһон` | 353 |
168
 
169
+ **4-grams (Word):**
170
 
171
  | Rank | N-gram | Count |
172
  |------|--------|-------|
173
+ | 1 | `энэ үдэр наһа бараһаниинь` | 353 |
174
+ | 2 | `тохёоһон үйлэ ябадалай жагсаалта` | 353 |
175
+ | 3 | `үдэр тохёоһон үйлэ ябадалай` | 353 |
176
+ | 4 | `энэ үдэр тохёоһон үйлэ` | 353 |
177
+ | 5 | `энэ үдэрэй тэмдэглэлтэ баяр` | 345 |
178
 
179
+ **2-grams (Subword):**
180
 
181
  | Rank | N-gram | Count |
182
  |------|--------|-------|
183
+ | 1 | `_` | 82,295 |
184
+ | 2 | `_` | 56,691 |
185
+ | 3 | `_ б` | 54,353 |
186
+ | 4 | `_ х` | 50,092 |
187
+ | 5 | `й` | 48,574 |
188
+
189
+ **3-grams (Subword):**
190
+
191
+ | Rank | N-gram | Count |
192
+ |------|--------|-------|
193
+ | 1 | `а й _` | 24,558 |
194
+ | 2 | `_ б а` | 24,246 |
195
+ | 3 | `ы н _` | 18,435 |
196
+ | 4 | `э й _` | 17,416 |
197
+ | 5 | `а н _` | 16,805 |
198
+
199
+ **4-grams (Subword):**
200
+
201
+ | Rank | N-gram | Count |
202
+ |------|--------|-------|
203
+ | 1 | `_ б а й` | 12,907 |
204
+ | 2 | `_ б о л` | 11,173 |
205
+ | 3 | `б о л о` | 9,002 |
206
+ | 4 | `и и н _` | 6,889 |
207
+ | 5 | `_ у л а` | 6,870 |
208
 
209
 
210
  ### Key Findings
211
 
212
+ - **Best Perplexity:** 2-gram (subword) with 452
213
  - **Entropy Trend:** Decreases with larger n-grams (more predictable)
214
+ - **Coverage:** Top-1000 patterns cover ~35% of corpus
215
  - **Recommendation:** 4-gram or 5-gram for best predictive performance
216
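
The perplexity and coverage columns are reproducible from the released count tables. A minimal sketch, assuming the parquet files expose a `count` column (the schema is not documented here) and that Perplexity = 2^Entropy, which every row above satisfies (e.g. 2^8.82 ≈ 452):

```python
# Sketch: recompute entropy, perplexity, and top-1000 coverage from raw counts.
import numpy as np
import pandas as pd

df = pd.read_parquet("models/subword_ngram/bxr_2gram_subword.parquet")
p = (df["count"] / df["count"].sum()).to_numpy()   # n-gram probability mass

entropy = -(p * np.log2(p)).sum()                  # bits per n-gram
perplexity = 2.0 ** entropy                        # table rows satisfy ppl = 2**H
top1000 = np.sort(p)[::-1][:1000].sum()            # mass of 1,000 most common n-grams

print(f"H = {entropy:.2f} bits, ppl = {perplexity:,.0f}, top-1000 = {top1000:.1%}")
```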
 
217
  ---
 
219
 
220
  ![Markov Entropy](visualizations/markov_entropy.png)
221
 
222
+ ![Markov Contexts](visualizations/markov_contexts.png)
223
+
224
  ![Markov Branching](visualizations/markov_branching.png)
225
 
226
  ### Results
227
 
228
+ | Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
229
+ |---------|---------|-------------|------------|------------------|-----------------|----------------|
230
+ | **1** | Word | 0.7391 | 1.669 | 4.14 | 92,909 | 26.1% |
231
+ | **1** | Subword | 0.8623 | 1.818 | 5.69 | 2,141 | 13.8% |
232
+ | **2** | Word | 0.1430 | 1.104 | 1.26 | 383,260 | 85.7% |
233
+ | **2** | Subword | 0.8174 | 1.762 | 5.04 | 12,176 | 18.3% |
234
+ | **3** | Word | 0.0340 | 1.024 | 1.05 | 482,888 | 96.6% |
235
+ | **3** | Subword | 0.7977 | 1.738 | 3.77 | 61,348 | 20.2% |
236
+ | **4** | Word | 0.0111 🏆 | 1.008 | 1.02 | 504,904 | 98.9% |
237
+ | **4** | Subword | 0.5768 | 1.491 | 2.40 | 230,966 | 42.3% |
238
+
239
+ ### Generated Text Samples (Word-based)
240
 
241
+ Below are text samples generated from each word-based Markov chain model:
242
 
243
+ **Context Size 1:**
244
+
245
+ 1. `ба үлэнтэй шэлэ нюруу шулуун харшанууд гэхэ мэтэ болобош 1 хушааһан 4 зуун орост нуран унажа`
246
+ 2. `юм зүүлтэ гадаада ба хүн гэжэ намые арадай хуралай депутатаар һунгагдаһан юрэнхылэгшэ байгаа тула тэ...`
247
+ 3. `энэ үедэ мадрид мадридынь шэнэ үгэнүүд бии гал носоохо гал задагай агаарта гү али алинда гарза`
248
+
249
+ **Context Size 2:**
250
+
251
+ 1. `энэ үдэр тохёоһон үйлэ ябадалай жагсаалта энэ үдэр түрэһэниинь оной урда үе энэ үдэр тохёоһон үйлэ я...`
252
+ 2. `гү али бэеын дархалалай харюу урбалаар янза бүриин үнгэтэй улаан ногоон шара г м түлэб хиинүүдынь хи...`
253
+ 3. `of the american library association energy data statistics for russia from the principles of water i...`
254
+
255
+ **Context Size 3:**
256
+
257
+ 1. `энэ үдэр түрэһэниинь энэ үдэр наһа бараһаниинь николай островский зүблэлтэ зохёолшо как закалялась с...`
258
+ 2. `тохёоһон үйлэ ябадалай жагсаалта энэ үдэр түрэһэниинь энэ үдэр наһа бараһаниинь энэ үдэрэй тэмдэглэл...`
259
+ 3. `энэ үдэр тохёоһон үйлэ ябадалай жагсаалта энэ үдэр түрэһэниинь энэ үдэр наһа бараһаниинь энэ үдэрэй ...`
260
+
261
+ **Context Size 4:**
262
+
263
+ 1. `үдэр тохёоһон үйлэ ябадалай жагсаалта энэ үдэр түрэһэниинь энэ үдэр наһа бараһаниинь энэ үдэрэй тэмд...`
264
+ 2. `энэ үдэр тохёоһон үйлэ ябадалай жагсаалта энэ үдэр түрэһэниинь энэ үдэр наһа бараһаниинь энэ үдэрэй ...`
265
+ 3. `тохёоһон үйлэ ябадалай жагсаалта энэ үдэр түрэһэниинь оной урда үе энэ үдэр наһа бараһаниинь энэ үдэ...`
266
+
267
+
268
+ ### Generated Text Samples (Subword-based)
269
+
270
+ Below are text samples generated from each subword-based Markov chain model:
271
 
272
  **Context Size 1:**
273
 
274
+ 1. `_raps_s_бон_аали`
275
+ 2. `аандэрыноданаай_`
276
+ 3. `эругэнь_ой_оте_м`
277
 
278
  **Context Size 2:**
279
 
280
+ 1. `н_хүн_юм.,_5_бари`
281
+ 2. `й_(ганые_/kazano!`
282
+ 3. `_бай_һар_180—512_`
283
 
284
  **Context Size 3:**
285
 
286
+ 1. `ай_ботар_(үндэ_үед`
287
+ 2. `_баярын,_камын_ург`
288
+ 3. `ын_5-дуңма_хэрэ_өө`
289
 
290
  **Context Size 4:**
291
 
292
+ 1. `_байлгаха_агналда_х`
293
+ 2. `_боложо_уласые_байр`
294
+ 3. `болон_тэнгисангуй_б`
295
 
296
 
297
  ### Key Findings
298
 
299
+ - **Best Predictability:** Context-4 (word) with 98.9% predictability
300
  - **Branching Factor:** Decreases with context size (more deterministic)
301
+ - **Memory Trade-off:** Larger contexts require more storage (230,966 contexts)
302
  - **Recommendation:** Context-3 or Context-4 for text generation
303
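
Samples like those above can be drawn directly from the released transition tables. A minimal sketch, assuming each parquet row stores a space-joined `context`, a `next` token, and a `count` (hypothetical column names; the actual schema may differ):

```python
# Sketch: weighted sampling from the context-3 word Markov chain.
import random
import pandas as pd

df = pd.read_parquet("models/word_markov/bxr_markov_ctx3_word.parquet")

def step(context):
    cand = df[df["context"] == context]            # hypothetical column names
    if cand.empty:
        return None
    return random.choices(cand["next"].tolist(),
                          weights=cand["count"].tolist(), k=1)[0]

words = "энэ үдэр тохёоһон".split()                # seed from a frequent 3-gram
for _ in range(10):
    nxt = step(" ".join(words[-3:]))
    if nxt is None:                                # unseen context: stop generating
        break
    words.append(nxt)
print(" ".join(words))
```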
 
304
  ---
 
314
 
315
  | Metric | Value |
316
  |--------|-------|
317
+ | Vocabulary Size | 36,185 |
318
+ | Total Tokens | 491,809 |
319
+ | Mean Frequency | 13.59 |
320
  | Median Frequency | 3 |
321
+ | Frequency Std Dev | 73.56 |
322
 
323
  ### Most Common Words
324
 
325
  | Rank | Word | Frequency |
326
  |------|------|-----------|
327
+ | 1 | ба | 3,838 |
328
+ | 2 | юм | 3,200 |
329
+ | 3 | энэ | 3,020 |
330
+ | 4 | ондо | 2,873 |
331
+ | 5 | болон | 2,652 |
332
+ | 6 | оной | 2,566 |
333
+ | 7 | байна | 2,566 |
334
+ | 8 | улас | 2,455 |
335
+ | 9 | the | 2,159 |
336
+ | 10 | of | 2,042 |
337
 
338
  ### Least Common Words (from vocabulary)
339
 
340
  | Rank | Word | Frequency |
341
  |------|------|-----------|
342
+ | 1 | үүсэбэринүүд | 2 |
343
+ | 2 | ᠮᠠᠨᠠᠶ | 2 |
344
+ | 3 | ᠲᠠᠢ | 2 |
345
+ | 4 | ᠮᠣᠩᠭᠤᠯ | 2 |
346
+ | 5 | ᠤᠷᠤᠨ | 2 |
347
+ | 6 | ᠮᠢᠨᠢ | 2 |
348
+ | 7 | ᠦᠷ | 2 |
349
+ | 8 | ᠵᠢᠷᠭᠠᠯ | 2 |
350
+ | 9 | дүхэригтэй | 2 |
351
+ | 10 | исибагай | 2 |
352
 
353
  ### Zipf's Law Analysis
354
 
355
  | Metric | Value |
356
  |--------|-------|
357
+ | Zipf Coefficient | 0.9662 |
358
+ | R² (Goodness of Fit) | 0.993759 |
359
  | Adherence Quality | **excellent** |
360
 
361
  ### Coverage Analysis
362
 
363
  | Top N Words | Coverage |
364
  |-------------|----------|
365
+ | Top 100 | 22.2% |
366
+ | Top 1,000 | 52.2% |
367
+ | Top 5,000 | 74.6% |
368
+ | Top 10,000 | 84.1% |
369
 
370
  ### Key Findings
371
 
372
+ - **Zipf Compliance:** R²=0.9938 indicates excellent adherence to Zipf's law
373
+ - **High Frequency Dominance:** Top 100 words cover 22.2% of corpus
374
+ - **Long Tail:** 26,185 words needed for remaining 15.9% coverage
375
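
The Zipf fit is an ordinary least-squares regression of log frequency on log rank. A minimal sketch against the released vocabulary table, assuming it exposes a `frequency` column:

```python
# Sketch: fit the Zipf coefficient and R² reported above.
import numpy as np
import pandas as pd

df = pd.read_parquet("models/vocabulary/bxr_vocabulary.parquet")
freq = np.sort(df["frequency"].to_numpy())[::-1]   # descending frequencies
logr = np.log(np.arange(1, len(freq) + 1))         # log rank
logf = np.log(freq)                                # log frequency

slope, intercept = np.polyfit(logr, logf, 1)
resid = logf - (slope * logr + intercept)
r2 = 1 - (resid ** 2).sum() / ((logf - logf.mean()) ** 2).sum()
print(f"Zipf coefficient ≈ {-slope:.4f}, R² ≈ {r2:.6f}")
```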
 
376
  ---
377
  ## 5. Word Embeddings Evaluation
 
384
 
385
  ![t-SNE Sentences](visualizations/tsne_sentences.png)
386
 
 
387
 
388
+ ### 5.1 Cross-Lingual Alignment
389
+
390
+ > *Note: Multilingual alignment visualization not available for this language.*
391
+
392
+
393
+ ### 5.2 Model Comparison
394
+
395
+ | Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
396
+ |-------|-----------|----------|------------------|---------------|----------------|
397
+ | **mono_32d** | 32 | 0.8916 🏆 | 0.3371 | N/A | N/A |
398
+ | **mono_64d** | 64 | 0.8046 | 0.2601 | N/A | N/A |
399
+ | **mono_128d** | 128 | 0.3726 | 0.2357 | N/A | N/A |
400
 
401
  ### Key Findings
402
 
403
+ - **Best Isotropy:** mono_32d with 0.8916 (more uniform distribution)
404
+ - **Semantic Density:** Average pairwise similarity of 0.2776. Lower values indicate better semantic separation.
405
+ - **Alignment Quality:** No aligned models evaluated in this run.
406
+ - **Recommendation:** 128d aligned for best cross-lingual performance (no aligned model was evaluated in this run; see Alignment Quality above)
407
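
The report does not define its isotropy metric; one common estimator (Mu et al., 2018) takes the min/max ratio of the partition function over eigenvector directions, and semantic density reads naturally as average pairwise cosine similarity. A hedged sketch of both, run on a random stand-in matrix rather than the released vectors:

```python
# Sketch: two embedding diagnostics under the stated assumptions.
import numpy as np

def isotropy(V):
    """Partition-function isotropy: min Z(c) / max Z(c) over eigenvectors of VᵀV."""
    eigvecs = np.linalg.eigh(V.T @ V)[1]           # candidate unit directions
    Z = np.exp(V @ eigvecs).sum(axis=0)            # Z(c) = sum_i exp(v_i . c)
    return Z.min() / Z.max()                       # near 1 means more isotropic

def semantic_density(V, n=500, seed=0):
    """Average pairwise cosine similarity over a random sample of vectors."""
    idx = np.random.default_rng(seed).choice(len(V), size=n, replace=False)
    U = V[idx] / np.linalg.norm(V[idx], axis=1, keepdims=True)
    return (U @ U.T)[np.triu_indices(n, k=1)].mean()

V = np.random.randn(5000, 32)                      # stand-in for the mono_32d matrix
print(isotropy(V), semantic_density(V))
```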
 
408
  ---
409
+ ## 6. Morphological Analysis (Experimental)
410
+
411
+ > ⚠️ **Warning:** This language shows low morphological productivity. The statistical signals used for this analysis may be noisy or less reliable than for morphologically rich languages.
412
+
413
+ This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
414
+
415
+ ### 6.1 Productivity & Complexity
416
+
417
+ | Metric | Value | Interpretation | Recommendation |
418
+ |--------|-------|----------------|----------------|
419
+ | Productivity Index | **0.000** | Low morphological productivity | ⚠️ Likely unreliable |
420
+ | Idiomaticity Gap | **-1.000** | Low formulaic content | - |
421
+
422
+ ### 6.2 Affix Inventory (Productive Units)
423
+
424
+ These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts; a sketch of this test follows the tables below.
425
+
426
+ #### Productive Prefixes
427
+ | Prefix | Examples |
428
+ |--------|----------|
429
+ | `ба-` | байхдаа, балнад, байһан |
430
+
431
+ #### Productive Suffixes
432
+ | Suffix | Examples |
433
+ |--------|----------|
434
+ | `-н` | хилын, биотехнологиин, зайдан |
435
+ | `-й` | феодорой, намуудай, зангай |
436
+ | `-ай` | намуудай, зангай, дарангылалай |
437
+ | `-ан` | зайдан, хааншалһан, буруудхан |
438
+ | `-эй` | сэнтэй, клэй, тэригүүдэй |
439
+ | `-ын` | хилын, доржын, эмнэлгын |
440
+ | `-ые` | хүүгэдые, различные, вермахтые |
441
+ | `-эн` | үйһэн, нэмэгдэһэн, дэбжүүлэн |
442
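
A minimal sketch of the substitutability test described above: a candidate suffix counts as productive when stripping it leaves a stem that also exists in the vocabulary. The length cap, threshold, and toy vocabulary are illustrative, not the pipeline's actual parameters:

```python
# Sketch: rank candidate suffixes by how many known stems they strip down to.
from collections import Counter

def productive_suffixes(vocab, max_len=3, min_stems=20):
    stems = set(vocab)
    hits = Counter()
    for w in vocab:
        for k in range(1, max_len + 1):
            if len(w) > k + 2 and w[:-k] in stems:  # stripping leaves a known stem
                hits[w[-k:]] += 1
    return [(s, n) for s, n in hits.most_common() if n >= min_stems]

toy = ["намуудай", "намууд", "зангай", "занга", "хилын", "хил"]
print(productive_suffixes(toy, min_stems=1))        # e.g. [('ай', 1), ('й', 1), ('ын', 1)]
```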
+
443
+ ### 6.3 Bound Stems (Lexical Roots)
444
+
445
+ Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid; a scoring sketch follows the table.
446
+
447
+ | Stem | Cohesion | Substitutability | Examples |
448
+ |------|----------|------------------|----------|
449
+ | `анай` | 1.88x | 75 contexts | ганай, манай, ханай |
450
+ | `гуул` | 1.80x | 67 contexts | уугуул, агуулга, хайгуул |
451
+ | `эгдэ` | 1.65x | 93 contexts | жэгдэ, нэгдэн, нэгдэл |
452
+ | `азар` | 2.38x | 21 contexts | газар, базар, лазарь |
453
+ | `дэһэ` | 1.85x | 44 contexts | үдэһэн, гэдэһэ, үндэһэ |
454
+ | `энэй` | 1.75x | 53 contexts | сэнэй, үгэнэй, үһэнэй |
455
+ | `эдэг` | 1.70x | 57 contexts | гэдэг, хэдэг, ерэдэг |
456
+ | `алай` | 1.78x | 47 contexts | далай, малай, һалай |
457
+ | `ниин` | 1.85x | 40 contexts | ниинь, даниин, линиин |
458
+ | `нууд` | 1.62x | 56 contexts | онууд, орнууд, ионууд |
459
+ | `үндэ` | 1.75x | 39 contexts | һүндэ, хүндэ, үндэр |
460
+ | `айда` | 1.77x | 36 contexts | сайда, дайда, зайда |
461
+
462
+ ### 6.4 Affix Compatibility (Co-occurrence)
463
+
464
+ This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
465
+
466
+ | Prefix | Suffix | Frequency | Examples |
467
+ |--------|--------|-----------|----------|
468
+ | `ба-` | `-й` | 38 words | байнхэй, баримталалай |
469
+ | `ба-` | `-ай` | 29 words | баримталалай, баталгаатай |
470
+ | `ба-` | `-н` | 27 words | балжан, байн |
471
+ | `ба-` | `-ан` | 17 words | балжан, барилгашан |
472
+ | `ба-` | `-ые` | 13 words | байрлалые, баримтые |
473
+ | `ба-` | `-ын` | 4 words | байгуулгын, багамын |
474
+ | `ба-` | `-эй` | 1 word | байнхэй |
475
+
476
+ ### 6.5 Recursive Morpheme Segmentation
477
+
478
+ Using **Recursive Hierarchical Substitutability**, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., `prefix-prefix-root-suffix`); a sketch of the recursion follows the table.
479
+
480
+ | Word | Suggested Split | Confidence | Stem |
481
+ |------|-----------------|------------|------|
482
+ | түүхэшэдые | **`түүхэшэд-ые`** | 4.5 | `түүхэшэд` |
483
+ | түшэмэлые | **`түшэмэл-ые`** | 4.5 | `түшэмэл` |
484
+ | дамжуулгануудые | **`дамжуулганууд-ые`** | 4.5 | `дамжуулганууд` |
485
+ | далайшадай | **`далайшад-ай`** | 4.5 | `далайшад` |
486
+ | хубисалай | **`хубисал-ай`** | 4.5 | `хубисал` |
487
+ | ниигэмүүдэй | **`ниигэмүүд-эй`** | 4.5 | `ниигэмүүд` |
488
+ | хэлэгшэдэй | **`хэлэгшэд-эй`** | 4.5 | `хэлэгшэд` |
489
+ | таряашадай | **`таряашад-ай`** | 4.5 | `таряашад` |
490
+ | магадлалай | **`магадлал-ай`** | 4.5 | `магадлал` |
491
+ | тогтоһоные | **`тогтоһон-ые`** | 4.5 | `тогтоһон` |
492
+ | буурсагые | **`буурсаг-ые`** | 4.5 | `буурсаг` |
493
+ | юумэнүүдые | **`юумэнүүд-ые`** | 4.5 | `юумэнүүд` |
494
+ | дашинимаевай | **`дашинимаев-ай`** | 4.5 | `дашинимаев` |
495
+ | найрамдалай | **`найрамдал-ай`** | 4.5 | `найрамдал` |
496
+ | зохёолуудые | **`зохёолууд-ые`** | 4.5 | `зохёолууд` |
497
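
A minimal sketch of the recursion: peel the longest suffix whose removal leaves a known stem, then recurse on that stem so stacked affixes come apart one layer at a time. The suffix list and vocabulary below are stand-ins (the real inventory comes from 6.2):

```python
# Sketch: recursive suffix peeling for nested affixes.
SUFFIXES = ["ые", "ай", "эй", "ууд", "нууд"]              # stand-in inventory

def segment(word, vocab, morphs=None):
    morphs = morphs or []
    for suf in sorted(SUFFIXES, key=len, reverse=True):   # longest match first
        if word.endswith(suf) and word[:-len(suf)] in vocab:
            return segment(word[:-len(suf)], vocab, [suf] + morphs)
    return [word] + morphs                                # remaining stem + affixes

vocab = {"зохёол", "зохёолууд"}
print("-".join(segment("зохёолуудые", vocab)))            # зохёол-ууд-ые
```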
+
498
+ ### 6.6 Linguistic Interpretation
499
+
500
+ > **Automated Insight:**
501
+ The language BXR appears to be more isolating or has a highly fixed vocabulary. Word-level models perform nearly as well as subword models, indicating fewer productive morphological processes.
502
+
503
+ ---
504
+ ## 7. Summary & Recommendations
505
 
506
  ![Performance Dashboard](visualizations/performance_dashboard.png)
507
 
 
509
 
510
  | Component | Recommended | Rationale |
511
  |-----------|-------------|-----------|
512
+ | Tokenizer | **64k BPE** | Best compression (4.39x) |
513
+ | N-gram | **2-gram** | Lowest perplexity (452) |
514
+ | Markov | **Context-4** | Highest predictability (98.9%) |
515
  | Embeddings | **100d** | Balanced semantic capture and isotropy |
516
 
517
+
518
  ---
519
  ## Appendix: Metrics Glossary & Interpretation Guide
520
 
 
704
  author = {Kamali, Omar},
705
  title = {Wikilangs: Open NLP Models for Wikipedia Languages},
706
  year = {2025},
707
+ doi = {10.5281/zenodo.18073153},
708
+ publisher = {Zenodo},
709
  url = {https://huggingface.co/wikilangs}
710
  institution = {Omneity Labs}
711
  }
 
721
  - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
722
  - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
723
  - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
724
+ - 🤝 Sponsor: [Featherless AI](https://featherless.ai)
725
  ---
726
  *Generated by Wikilangs Models Pipeline*
727
 
728
+ *Report Date: 2026-01-03 09:00:32*
models/embeddings/monolingual/bxr_128d.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:74b8869363141d972d72be5bdf332a7e4fdd0330e861684e70a4b494e8291893
3
- size 1039505247
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9a68f121ed60ec837dcd0cd6dfe40ccae7bda27cefb1882915867e69840e1104
3
+ size 1038925515
models/embeddings/monolingual/bxr_128d_metadata.json CHANGED
@@ -3,11 +3,13 @@
3
  "dimension": 128,
4
  "version": "monolingual",
5
  "training_params": {
6
- "dim": 128,
7
  "min_count": 5,
8
  "window": 5,
9
  "negative": 5,
10
- "epochs": 5
 
 
11
  },
12
- "vocab_size": 14816
13
  }
 
3
  "dimension": 128,
4
  "version": "monolingual",
5
  "training_params": {
6
+ "algorithm": "skipgram",
7
  "min_count": 5,
8
  "window": 5,
9
  "negative": 5,
10
+ "epochs": 5,
11
+ "encoding_method": "rope",
12
+ "dim": 128
13
  },
14
+ "vocab_size": 14262
15
  }
models/embeddings/monolingual/bxr_32d.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:003deff7c5ca0b619e47c9989e4f389629b2a0d871190e9d2902770d857ca46d
3
- size 260126559
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bf82d275c6ec9e3300c42d4bf6f1d83083710befa2f67e76dcf383dcfb7c187a
3
+ size 259972299
models/embeddings/monolingual/bxr_32d_metadata.json CHANGED
@@ -3,11 +3,13 @@
3
  "dimension": 32,
4
  "version": "monolingual",
5
  "training_params": {
6
- "dim": 32,
7
  "min_count": 5,
8
  "window": 5,
9
  "negative": 5,
10
- "epochs": 5
 
 
11
  },
12
- "vocab_size": 14816
13
  }
 
3
  "dimension": 32,
4
  "version": "monolingual",
5
  "training_params": {
6
+ "algorithm": "skipgram",
7
  "min_count": 5,
8
  "window": 5,
9
  "negative": 5,
10
+ "epochs": 5,
11
+ "encoding_method": "rope",
12
+ "dim": 32
13
  },
14
+ "vocab_size": 14262
15
  }
models/embeddings/monolingual/bxr_64d.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:ca0f7f1e62bb09a2478a59565e98e61f30486cbd4b309435d81d3530e1c2a6dc
3
- size 519919455
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:846d84f5d2248a39c8ed392e72d06b7956f7871585cd833febab2c57b28b0799
3
+ size 519623371
models/embeddings/monolingual/bxr_64d_metadata.json CHANGED
@@ -3,11 +3,13 @@
3
  "dimension": 64,
4
  "version": "monolingual",
5
  "training_params": {
6
- "dim": 64,
7
  "min_count": 5,
8
  "window": 5,
9
  "negative": 5,
10
- "epochs": 5
 
 
11
  },
12
- "vocab_size": 14816
13
  }
 
3
  "dimension": 64,
4
  "version": "monolingual",
5
  "training_params": {
6
+ "algorithm": "skipgram",
7
  "min_count": 5,
8
  "window": 5,
9
  "negative": 5,
10
+ "epochs": 5,
11
+ "encoding_method": "rope",
12
+ "dim": 64
13
  },
14
+ "vocab_size": 14262
15
  }
models/subword_markov/bxr_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:7f40c2222fbfccd237862781d0d0e955784a245a38a54dbe387b8cec2b9d2b82
3
- size 117796
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8ef28afcd8620169ed8fa2c34876e8625f7c7d635864fbd2796f37f30dba4cf1
3
+ size 102324
models/subword_markov/bxr_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "context_size": 1,
3
  "variant": "subword",
4
  "language": "bxr",
5
- "unique_contexts": 2105,
6
- "total_transitions": 4289081
7
  }
 
2
  "context_size": 1,
3
  "variant": "subword",
4
  "language": "bxr",
5
+ "unique_contexts": 2141,
6
+ "total_transitions": 3947454
7
  }
models/subword_markov/bxr_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:608dcc0d1cb6248070161cbadb7420910339b0915882fb5dd132e4901a394daf
3
- size 623776
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e8eeb9396bbe777fa33dbcb0a026a02660f3421472954259ef5fb21e30676aba
3
+ size 526873
models/subword_markov/bxr_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "context_size": 2,
3
  "variant": "subword",
4
  "language": "bxr",
5
- "unique_contexts": 15331,
6
- "total_transitions": 4286148
7
  }
 
2
  "context_size": 2,
3
  "variant": "subword",
4
  "language": "bxr",
5
+ "unique_contexts": 12176,
6
+ "total_transitions": 3944697
7
  }
models/subword_markov/bxr_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:3ba216b7648c71d9678295b66f161d77cbdceb72f9be9fb280ff239f8df4097b
3
- size 2166085
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:251c7577b1bff07e9bd8f78d3e3413f809f22133920660f8ff8fd5953d1df234
3
+ size 1738691
models/subword_markov/bxr_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "context_size": 3,
3
  "variant": "subword",
4
  "language": "bxr",
5
- "unique_contexts": 80818,
6
- "total_transitions": 4283215
7
  }
 
2
  "context_size": 3,
3
  "variant": "subword",
4
  "language": "bxr",
5
+ "unique_contexts": 61348,
6
+ "total_transitions": 3941940
7
  }
models/subword_markov/bxr_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:720c028f0afa2beef418da44146e3aa7fdd4e8a84d87aaec098f46b0c4904607
3
- size 6019932
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e6704187c11e7bf62086d0438a41d230986b7075412d814372c74e9c42a43bb8
3
+ size 4955182
models/subword_markov/bxr_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "context_size": 4,
3
  "variant": "subword",
4
  "language": "bxr",
5
- "unique_contexts": 293211,
6
- "total_transitions": 4280282
7
  }
 
2
  "context_size": 4,
3
  "variant": "subword",
4
  "language": "bxr",
5
+ "unique_contexts": 230966,
6
+ "total_transitions": 3939183
7
  }
models/subword_ngram/bxr_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:b30174f435d373a7a684d0fdf82ab6f85dd7d97e258e9d2966c7f95ac04897bd
3
- size 63850
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4dd1815af3d79add27e46b93017d57f0b62838b802a873a7a07c137fb6540c3d
3
+ size 51809
models/subword_ngram/bxr_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "n": 2,
3
  "variant": "subword",
4
  "language": "bxr",
5
- "unique_ngrams": 4895,
6
- "total_ngrams": 4289081
7
  }
 
2
  "n": 2,
3
  "variant": "subword",
4
  "language": "bxr",
5
+ "unique_ngrams": 3823,
6
+ "total_ngrams": 3947454
7
  }
models/subword_ngram/bxr_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:918475421fb0cf8063f92f0780e8b12e328c8328a0fc30ff110bb89e499d66ee
3
- size 467922
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:63257791e85ed8eaf2b882c6823a3a702fe1a0ea07f2c59e64c50dd97653a352
3
+ size 376817
models/subword_ngram/bxr_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "n": 3,
3
  "variant": "subword",
4
  "language": "bxr",
5
- "unique_ngrams": 35775,
6
- "total_ngrams": 4286148
7
  }
 
2
  "n": 3,
3
  "variant": "subword",
4
  "language": "bxr",
5
+ "unique_ngrams": 29340,
6
+ "total_ngrams": 3944697
7
  }
models/subword_ngram/bxr_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:e1042ddad008a9599c0748487f78c73efa7d9b50d6e70a4a6fafe6c8db84ab71
3
- size 1826857
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7fc6a6ba4fa3d0662727116dddfc49d4ebb04b8fe918900b45bddaff9a5e475c
3
+ size 1528848
models/subword_ngram/bxr_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "n": 4,
3
  "variant": "subword",
4
  "language": "bxr",
5
- "unique_ngrams": 149695,
6
- "total_ngrams": 4283215
7
  }
 
2
  "n": 4,
3
  "variant": "subword",
4
  "language": "bxr",
5
+ "unique_ngrams": 124835,
6
+ "total_ngrams": 3941940
7
  }
models/tokenizer/bxr_tokenizer_16k.model CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:c313a02b460324a3fd78aeeebbf024c367c66eec31762824e5d13559b06e96eb
3
- size 566509
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9ec3513a9e102b31e37ebbe9c74465e68178a75caa31640e2da4dd18964909d1
3
+ size 572848
models/tokenizer/bxr_tokenizer_16k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/tokenizer/bxr_tokenizer_32k.model CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:54bddc6897ff3219983c48eaffae095b8f605807ed27292c894a640c31089df6
3
- size 917207
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e83a11fb6d2c68714a56a8f79737f3bdf4020f6741555062a3277802b3b50563
3
+ size 936687
models/tokenizer/bxr_tokenizer_32k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/tokenizer/bxr_tokenizer_64k.model CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:29ab501244bdf544b03adcaaefea70ce2eccdb557572c1df01afcc5ba62a6ac1
3
- size 1665622
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1a9a5aa8bf4ae1cf3f260d408856d0de604cee508f58aa294462236f70101270
3
+ size 1699640
models/tokenizer/bxr_tokenizer_64k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/tokenizer/bxr_tokenizer_8k.model CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:421a45e29770748dce802c6ebcb76913d893b4e21831c4ceebf7fc41ffa3d537
3
- size 396771
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8bb41ea3e440a8f23950afc663ca4194ddb8fe1cbbfd3fc96c5e7a3187bdb9a0
3
+ size 400889
models/tokenizer/bxr_tokenizer_8k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/vocabulary/bxr_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:6f052baf5dc25aeabdf73c8b5285ec87abcc088d635e2973f70a8ffe31ed6b36
3
- size 717641
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:71921497bda1a7f95ca89688025ce27f6680e9797e7176d62d5eeb5de52180fa
3
+ size 687102
models/vocabulary/bxr_vocabulary_metadata.json CHANGED
@@ -1,16 +1,17 @@
1
  {
2
  "language": "bxr",
3
- "vocabulary_size": 37973,
 
4
  "statistics": {
5
- "type_token_ratio": 0.17001129648792698,
6
  "coverage": {
7
- "top_100": 0.19418781300332458,
8
- "top_1000": 0.4590506546712756,
9
- "top_5000": 0.664120017139499,
10
- "top_10000": 0.748515113074965
11
  },
12
- "hapax_count": 62410,
13
- "hapax_ratio": 0.6217188169311537,
14
- "total_documents": 2933
15
  }
16
  }
 
1
  {
2
  "language": "bxr",
3
+ "vocabulary_size": 36185,
4
+ "variant": "full",
5
  "statistics": {
6
+ "type_token_ratio": 0.16949986693740954,
7
  "coverage": {
8
+ "top_100": 0.19864421979752614,
9
+ "top_1000": 0.46809049714371126,
10
+ "top_5000": 0.6691097930421025,
11
+ "top_10000": 0.7539690930235101
12
  },
13
+ "hapax_count": 56805,
14
+ "hapax_ratio": 0.6108721367889021,
15
+ "total_documents": 2757
16
  }
17
  }
models/word_markov/bxr_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:650b9705441f0a50ae4936a70a92a0cb793133d94dc1d0370ed9ed6d3c7dd84c
3
- size 5152081
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8d83d9d3e8e86f09c7979a3b6839e87cbf8a1c801cae88a26b8e2568e80c0af1
3
+ size 4628667
models/word_markov/bxr_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "context_size": 1,
3
  "variant": "word",
4
  "language": "bxr",
5
- "unique_contexts": 100619,
6
- "total_transitions": 759550
7
  }
 
2
  "context_size": 1,
3
  "variant": "word",
4
  "language": "bxr",
5
+ "unique_contexts": 92909,
6
+ "total_transitions": 545857
7
  }
models/word_markov/bxr_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:4ac1c70b5bf0d07084ffb47645e4f1fad509d187d04f4eeb81e55e1abc7af15e
3
- size 11903170
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:08a52284e5b3cf3a5d895964ac029e73604bb505096a1643018fcbce3b7f0871
3
+ size 11044627
models/word_markov/bxr_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "context_size": 2,
3
  "variant": "word",
4
  "language": "bxr",
5
- "unique_contexts": 414597,
6
- "total_transitions": 756617
7
  }
 
2
  "context_size": 2,
3
  "variant": "word",
4
  "language": "bxr",
5
+ "unique_contexts": 383260,
6
+ "total_transitions": 543100
7
  }
models/word_markov/bxr_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:902dc90ffc87fd7ada473260dd7c7a9bd442da2e1209173b26a8b5bb99539fbf
3
- size 16125345
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:20b01b9dbd6b5fb671c3229725a1ed9ec3da87304bb0d69aa5d5c22d2d65e482
3
+ size 14044491
models/word_markov/bxr_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "context_size": 3,
3
  "variant": "word",
4
  "language": "bxr",
5
- "unique_contexts": 601940,
6
- "total_transitions": 753684
7
  }
 
2
  "context_size": 3,
3
  "variant": "word",
4
  "language": "bxr",
5
+ "unique_contexts": 482888,
6
+ "total_transitions": 540343
7
  }
models/word_markov/bxr_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:3f4c22cf906dbab93a3218dc56a6b362a41a9c0e192456e5f64beca972c76c3f
3
- size 19134570
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1672222d8edd090496caa2f1e1bd6c68fd909d2a58607dab3d33a10e8047ca99
3
+ size 16507536
models/word_markov/bxr_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "context_size": 4,
3
  "variant": "word",
4
  "language": "bxr",
5
- "unique_contexts": 671341,
6
- "total_transitions": 750751
7
  }
 
2
  "context_size": 4,
3
  "variant": "word",
4
  "language": "bxr",
5
+ "unique_contexts": 504904,
6
+ "total_transitions": 537586
7
  }
models/word_ngram/bxr_2gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:c2a3ebfb828cb5690e2603dc347ba8fa6c7472b2c4e334354e5ff525f64915e0
3
- size 320596
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f22572c4d354fedb3053f8b2b9665062ea46ca2de919d373b3f625078118c3c
3
+ size 192506
models/word_ngram/bxr_2gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "n": 2,
3
  "variant": "word",
4
  "language": "bxr",
5
- "unique_ngrams": 15674,
6
- "total_ngrams": 759550
7
  }
 
2
  "n": 2,
3
  "variant": "word",
4
  "language": "bxr",
5
+ "unique_ngrams": 8128,
6
+ "total_ngrams": 545857
7
  }
models/word_ngram/bxr_3gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:0cda9295ac08e43661c1ef513f79da30c697a4ffd25d8b920bc6d5a8e91b67f2
3
- size 440192
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:97ee45afd7fac64151a4862029dca6299fc0b2a8c65aa3d7868466142d152bd5
3
+ size 220646
models/word_ngram/bxr_3gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "n": 3,
3
  "variant": "word",
4
  "language": "bxr",
5
- "unique_ngrams": 19215,
6
- "total_ngrams": 756617
7
  }
 
2
  "n": 3,
3
  "variant": "word",
4
  "language": "bxr",
5
+ "unique_ngrams": 7805,
6
+ "total_ngrams": 543100
7
  }
models/word_ngram/bxr_4gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:c95d55178d9a68895f802a5b94c63c1a5584c899404e70ef1d1e0a17fa6afa20
3
- size 791045
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c0fe193bfe17ed9385235885ddb5fa79c77354a7a702f0a815c82ede303edbfc
3
+ size 453116
models/word_ngram/bxr_4gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
2
  "n": 4,
3
  "variant": "word",
4
  "language": "bxr",
5
- "unique_ngrams": 31775,
6
- "total_ngrams": 753684
7
  }
 
2
  "n": 4,
3
  "variant": "word",
4
  "language": "bxr",
5
+ "unique_ngrams": 14616,
6
+ "total_ngrams": 540343
7
  }
visualizations/embedding_isotropy.png CHANGED
visualizations/embedding_norms.png CHANGED
visualizations/embedding_similarity.png CHANGED

Git LFS Details (before)

  • SHA256: 9d32554c911846f0bb05a035e3b3e0064168a54fd22a597a668a751f885a609f
  • Pointer size: 131 Bytes
  • Size of remote file: 152 kB

Git LFS Details (after)

  • SHA256: 269f7d5b759c16cef20fa98dd40eab0a8d92b85b920412b0a05d53f81944d72d
  • Pointer size: 131 Bytes
  • Size of remote file: 153 kB
visualizations/markov_branching.png CHANGED
visualizations/markov_contexts.png CHANGED