omarkamali committed
Commit b82db98 · verified · 1 Parent(s): 689119d

Upload all models and assets for be (20251001)

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. README.md +311 -138
  2. models/embeddings/monolingual/be_128d.bin +2 -2
  3. models/embeddings/monolingual/be_128d_metadata.json +5 -3
  4. models/embeddings/monolingual/be_32d.bin +2 -2
  5. models/embeddings/monolingual/be_32d_metadata.json +5 -3
  6. models/embeddings/monolingual/be_64d.bin +2 -2
  7. models/embeddings/monolingual/be_64d_metadata.json +5 -3
  8. models/subword_markov/be_markov_ctx1_subword.parquet +2 -2
  9. models/subword_markov/be_markov_ctx1_subword_metadata.json +2 -2
  10. models/subword_markov/be_markov_ctx2_subword.parquet +2 -2
  11. models/subword_markov/be_markov_ctx2_subword_metadata.json +2 -2
  12. models/subword_markov/be_markov_ctx3_subword.parquet +2 -2
  13. models/subword_markov/be_markov_ctx3_subword_metadata.json +2 -2
  14. models/subword_markov/be_markov_ctx4_subword.parquet +2 -2
  15. models/subword_markov/be_markov_ctx4_subword_metadata.json +2 -2
  16. models/subword_ngram/be_2gram_subword.parquet +2 -2
  17. models/subword_ngram/be_2gram_subword_metadata.json +2 -2
  18. models/subword_ngram/be_3gram_subword.parquet +2 -2
  19. models/subword_ngram/be_3gram_subword_metadata.json +2 -2
  20. models/subword_ngram/be_4gram_subword.parquet +2 -2
  21. models/subword_ngram/be_4gram_subword_metadata.json +2 -2
  22. models/tokenizer/be_tokenizer_16k.model +2 -2
  23. models/tokenizer/be_tokenizer_16k.vocab +0 -0
  24. models/tokenizer/be_tokenizer_32k.model +2 -2
  25. models/tokenizer/be_tokenizer_32k.vocab +0 -0
  26. models/tokenizer/be_tokenizer_64k.model +2 -2
  27. models/tokenizer/be_tokenizer_64k.vocab +0 -0
  28. models/tokenizer/be_tokenizer_8k.model +2 -2
  29. models/tokenizer/be_tokenizer_8k.vocab +0 -0
  30. models/vocabulary/be_vocabulary.parquet +2 -2
  31. models/vocabulary/be_vocabulary_metadata.json +10 -9
  32. models/word_markov/be_markov_ctx1_word.parquet +2 -2
  33. models/word_markov/be_markov_ctx1_word_metadata.json +2 -2
  34. models/word_markov/be_markov_ctx2_word.parquet +2 -2
  35. models/word_markov/be_markov_ctx2_word_metadata.json +2 -2
  36. models/word_markov/be_markov_ctx3_word.parquet +2 -2
  37. models/word_markov/be_markov_ctx3_word_metadata.json +2 -2
  38. models/word_markov/be_markov_ctx4_word.parquet +2 -2
  39. models/word_markov/be_markov_ctx4_word_metadata.json +2 -2
  40. models/word_ngram/be_2gram_word.parquet +2 -2
  41. models/word_ngram/be_2gram_word_metadata.json +2 -2
  42. models/word_ngram/be_3gram_word.parquet +2 -2
  43. models/word_ngram/be_3gram_word_metadata.json +2 -2
  44. models/word_ngram/be_4gram_word.parquet +2 -2
  45. models/word_ngram/be_4gram_word_metadata.json +2 -2
  46. visualizations/embedding_isotropy.png +0 -0
  47. visualizations/embedding_norms.png +0 -0
  48. visualizations/embedding_similarity.png +2 -2
  49. visualizations/markov_branching.png +0 -0
  50. visualizations/markov_contexts.png +0 -0
README.md CHANGED
@@ -23,14 +23,14 @@ dataset_info:
  metrics:
  - name: best_compression_ratio
    type: compression
- value: 3.609
  - name: best_isotropy
    type: isotropy
- value: 0.6652
  - name: vocabulary_size
    type: vocab
- value: 834514
- generated: 2025-12-28
  ---

  # BE - Wikilangs Models
@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
  ### Models & Assets

  - Tokenizers (8k, 16k, 32k, 64k)
- - N-gram models (2, 3, 4-gram)
- - Markov chains (context of 1, 2, 3 and 4)
  - Subword N-gram and Markov chains
- - Embeddings in various sizes and dimensions
  - Language Vocabulary
  - Language Statistics

  ![Performance Dashboard](visualizations/performance_dashboard.png)

  ### Analysis and Evaluation
@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
  - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
  - [4. Vocabulary Analysis](#4-vocabulary-analysis)
  - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
- - [6. Summary & Recommendations](#6-summary--recommendations)
  - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
  - [Visualizations Index](#visualizations-index)
@@ -68,57 +70,57 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and

  ![Tokenizer Compression](visualizations/tokenizer_compression.png)

  ### Results

  | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
  |------------|-------------|---------------|----------|--------------|
- | **8k** | 2.945x | 2.88 | 0.0099% | 395,619 |
- | **16k** | 3.198x | 3.13 | 0.0107% | 364,322 |
- | **32k** | 3.434x | 3.36 | 0.0115% | 339,332 |
- | **64k** | 3.609x 🏆 | 3.53 | 0.0121% | 322,873 |

  ### Tokenization Examples

  Below are sample sentences tokenized with each vocabulary size:

- **Sample 1:** `Адаі () — вёска ў Акнянскім раёне Адэскай вобласці Украіны.
-
- Крыніцы
-
- Катэгоры...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 8k | `▁ада і ▁() ▁— ▁вёска ▁ў ▁ак ня нскім ▁раёне ... (+14 more)` | 24 |
- | 16k | `▁ада і ▁() ▁— ▁вёска ▁ў ▁ак ня нскім ▁раёне ... (+13 more)` | 23 |
- | 32k | `▁ада і ▁() ▁— ▁вёска ▁ў ▁ак ня нскім ▁раёне ... (+12 more)` | 22 |
- | 64k | `▁ада і ▁() ▁— ▁вёска ▁ў ▁ак нянскім ▁раёне ▁адэскай ... (+11 more)` | 21 |

- **Sample 2:** `Назву Асманіе маюць: горад і правінцыя ў Турцыі.`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 8k | `▁назву ▁ас мані е ▁маюць : ▁горад ▁і ▁правінцыя ▁ў ... (+2 more)` | 12 |
- | 16k | `▁назву ▁ас мані е ▁маюць : ▁горад ▁і ▁правінцыя ▁ў ... (+2 more)` | 12 |
- | 32k | `▁назву ▁ас мані е ▁маюць : ▁горад ▁і ▁правінцыя ▁ў ... (+2 more)` | 12 |
- | 64k | `▁назву ▁ас мані е ▁маюць : ▁горад ▁і ▁правінцыя ▁ў ... (+2 more)` | 12 |

- **Sample 3:** `M21 (каталог Месье) — рассеянае скопішча ў сузор'і Стральца.
-
- Катэгорыя:Астранам...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 8k | `▁m 2 1 ▁( ката лог ▁месье ) ▁— ▁рас ... (+39 more)` | 49 |
- | 16k | `▁m 2 1 ▁( ката лог ▁месье ) ▁— ▁рассея ... (+36 more)` | 46 |
- | 32k | `▁m 2 1 ▁( ката лог ▁месье ) ▁— ▁рассеянае ... (+35 more)` | 45 |
- | 64k | `▁m 2 1 ▁( каталог ▁месье ) ▁— ▁рассеянае ▁скопішча ... (+34 more)` | 44 |

  ### Key Findings

- - **Best Compression:** 64k achieves 3.609x compression
- - **Lowest UNK Rate:** 8k with 0.0099% unknown tokens
  - **Trade-off:** Larger vocabularies improve compression but increase model size
  - **Recommendation:** 32k vocabulary provides optimal balance for production use
@@ -127,57 +129,89 @@ Below are sample sentences tokenized with each vocabulary size:

  ![N-gram Perplexity](visualizations/ngram_perplexity.png)

  ![N-gram Coverage](visualizations/ngram_coverage.png)

  ### Results

- | N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
- |--------|------------|---------|----------------|------------------|-------------------|
- | **2-gram** | 79,086 🏆 | 16.27 | 1,374,832 | 15.1% | 29.5% |
- | **2-gram** | 534 🏆 | 9.06 | 19,108 | 52.5% | 95.6% |
- | **3-gram** | 294,087 | 18.17 | 2,802,568 | 8.0% | 20.4% |
- | **3-gram** | 5,046 | 12.30 | 199,361 | 17.9% | 56.0% |
- | **4-gram** | 713,543 | 19.44 | 5,150,781 | 6.7% | 16.8% |
- | **4-gram** | 30,738 | 14.91 | 1,228,443 | 8.4% | 28.2% |

  ### Top 5 N-grams by Size

- **2-grams:**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `0 ,` | 1,884,848 |
- | 2 | `катэгорыя :` | 589,578 |
- | 3 | `. у` | 390,424 |
- | 4 | `) .` | 239,852 |
- | 5 | `) —` | 224,324 |

- **3-grams:**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `0 , 10` | 188,226 |
- | 2 | `0 , 09` | 178,159 |
- | 3 | `0 , 11` | 136,658 |
- | 4 | `0 , 08` | 133,746 |
- | 5 | `0 , 07` | 96,108 |

- **4-grams:**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `катэгорыя : населеныя пункты` | 53,581 |
- | 2 | `) вёска ў` | 38,538 |
- | 3 | `. уваходзіць у склад` | 36,002 |
- | 4 | `. с .` | 29,359 |
- | 5 | `, 44 0 ,` | 29,322 |

  ### Key Findings

- - **Best Perplexity:** 2-gram with 534
  - **Entropy Trend:** Decreases with larger n-grams (more predictable)
- - **Coverage:** Top-1000 patterns cover ~28% of corpus
  - **Recommendation:** 4-gram or 5-gram for best predictive performance

  ---
@@ -185,55 +219,86 @@ Below are text samples generated from each Markov chain model:

  ![Markov Entropy](visualizations/markov_entropy.png)

  ![Markov Branching](visualizations/markov_branching.png)

  ### Results

- | Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
- |---------|-------------|------------|------------------|-----------------|----------------|
- | **1** | 0.5334 | 1.447 | 6.21 | 2,760,646 | 46.7% |
- | **1** | 0.5275 | 1.441 | 4.42 | 17,006 | 47.2% |
- | **2** | 0.3318 | 1.259 | 2.22 | 17,120,073 | 66.8% |
- | **2** | 0.7155 | 1.642 | 5.63 | 75,115 | 28.5% |
- | **3** | 0.1509 | 1.110 | 1.38 | 37,905,234 | 84.9% |
- | **3** | 0.8770 | 1.837 | 5.02 | 422,960 | 12.3% |
- | **4** | 0.0724 🏆 | 1.051 | 1.16 | 52,452,226 | 92.8% |
- | **4** | 0.7380 🏆 | 1.668 | 3.59 | 2,124,122 | 26.2% |

- ### Generated Text Samples

- Below are text samples generated from each Markov chain model:

  **Context Size 1:**

- 1. `, аэрапорт знаходзіцца ў свеце вядома з сакрысціямі і іншыя матэрыялы , створаных самім фактам ,`
- 2. `. 3 адкрыты чэмпіянат па 21 кастрычніка 1957 супраціўнае звычайна была прысвоена званне « за`
- 3. `0 , лячэбны факультэт кіравання версіямі , евангелле паводле ніпурскага , а ў сялянскай сям ’`

  **Context Size 2:**

- 1. `0 , 41 0 , 11 1912897 0 , 47 0 , 44 0 , 81 0`
- 2. `катэгорыя : лігатуры кірыліцы катэгорыя : паняцці індуізму катэгорыя : будынкі і збудаванні бабруйск...`
- 3. `. у 1932 1982 ) , святы рпцз . 12 . 3 . игнатенко , в`

  **Context Size 3:**

- 1. `0 , 10 47053 sd 1 , 20 0 , 09 21985 e - so 0 , 93`
- 2. `0 , 09 637479 0 , 39 0 , 11 88688 0 , 60 0 , 08 310690`
- 3. `0 , 11 348939 0 , 60 0 , 08 2587753 0 , 73 0 , 07 838578`

  **Context Size 4:**

- 1. `катэгорыя : населеныя пункты гарадэнкіўскага раёна катэгорыя : населеныя пункты на дняпры катэгорыя ...`
- 2. `) вёска ў сосныцкім раёне чарнігаўскай вобласці украіны . крыніцы катэгорыя : населеныя пункты аст...`
- 3. `. уваходзіць у склад сямлёўскага сельскага паселішча . гісторыя 25 сакавіка 1918 года згодна з трэця...`

  ### Key Findings

- - **Best Predictability:** Context-4 with 92.8% predictability
  - **Branching Factor:** Decreases with context size (more deterministic)
- - **Memory Trade-off:** Larger contexts require more storage (2,124,122 contexts)
  - **Recommendation:** Context-3 or Context-4 for text generation

  ---
@@ -249,64 +314,64 @@ Below are text samples generated from each Markov chain model:

  | Metric | Value |
  |--------|-------|
- | Vocabulary Size | 834,514 |
- | Total Tokens | 59,867,674 |
- | Mean Frequency | 71.74 |
  | Median Frequency | 4 |
- | Frequency Std Dev | 3746.92 |

  ### Most Common Words

  | Rank | Word | Frequency |
  |------|------|-----------|
- | 1 | 0 | 1,972,029 |
- | 2 | і | 1,334,509 |
- | 3 | у | 1,241,263 |
- | 4 | ў | 1,163,394 |
- | 5 | з | 869,929 |
- | 6 | на | 710,982 |
- | 7 | катэгорыя | 590,266 |
- | 8 | года | 368,272 |
- | 9 | да | 291,578 |
- | 10 | годзе | 258,701 |

  ### Least Common Words (from vocabulary)

  | Rank | Word | Frequency |
  |------|------|-----------|
- | 1 | мурашава | 2 |
- | 2 | девятке | 2 |
- | 3 | дэкунаў | 2 |
- | 4 | iovine | 2 |
- | 5 | аёвіну | 2 |
- | 6 | джэніка | 2 |
- | 7 | мэрылінам | 2 |
- | 8 | сардэшная | 2 |
- | 9 | івасю | 2 |
- | 10 | стеценко | 2 |

  ### Zipf's Law Analysis

  | Metric | Value |
  |--------|-------|
- | Zipf Coefficient | 0.9824 |
- | R² (Goodness of Fit) | 0.995868 |
  | Adherence Quality | **excellent** |

  ### Coverage Analysis

  | Top N Words | Coverage |
  |-------------|----------|
- | Top 100 | 28.3% |
- | Top 1,000 | 50.1% |
  | Top 5,000 | 67.4% |
- | Top 10,000 | 74.4% |

  ### Key Findings

- - **Zipf Compliance:** R²=0.9959 indicates excellent adherence to Zipf's law
- - **High Frequency Dominance:** Top 100 words cover 28.3% of corpus
- - **Long Tail:** 824,514 words needed for remaining 25.6% coverage

  ---
  ## 5. Word Embeddings Evaluation
@@ -319,24 +384,129 @@ Below are text samples generated from each Markov chain model:

  ![t-SNE Sentences](visualizations/tsne_sentences.png)

- ### Model Comparison

- | Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
- |-------|------------|-----------|----------|----------|----------|
- | **mono_32d** | 563,209 | 32 | 4.064 | 1.812 | 0.6194 |
- | **mono_64d** | 563,209 | 64 | 4.475 | 1.623 | 0.6528 |
- | **mono_128d** | 563,209 | 128 | 4.940 | 1.374 | 0.6652 🏆 |
- | **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |

  ### Key Findings

- - **Best Isotropy:** mono_128d with 0.6652 (more uniform distribution)
- - **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
- - **Vocabulary Coverage:** All models cover 563,209 words
- - **Recommendation:** 100d for balanced semantic capture and efficiency

  ---
- ## 6. Summary & Recommendations

  ![Performance Dashboard](visualizations/performance_dashboard.png)
@@ -344,11 +514,12 @@ Below are text samples generated from each Markov chain model:

  | Component | Recommended | Rationale |
  |-----------|-------------|-----------|
- | Tokenizer | **32k BPE** | Best compression (3.61x) with low UNK rate |
- | N-gram | **5-gram** | Lowest perplexity (534) |
- | Markov | **Context-4** | Highest predictability (92.8%) |
  | Embeddings | **100d** | Balanced semantic capture and isotropy |

  ---
  ## Appendix: Metrics Glossary & Interpretation Guide
@@ -538,7 +709,8 @@ If you use these models in your research, please cite:
  author = {Kamali, Omar},
  title = {Wikilangs: Open NLP Models for Wikipedia Languages},
  year = {2025},
- publisher = {HuggingFace},
  url = {https://huggingface.co/wikilangs}
  institution = {Omneity Labs}
  }
@@ -554,7 +726,8 @@ MIT License - Free for academic and commercial use.
  - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
  - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
  - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)

  ---
  *Generated by Wikilangs Models Pipeline*

- *Report Date: 2025-12-28 02:15:50*
  metrics:
  - name: best_compression_ratio
    type: compression
+ value: 4.769
  - name: best_isotropy
    type: isotropy
+ value: 0.6512
  - name: vocabulary_size
    type: vocab
+ value: 0
+ generated: 2026-01-03
  ---

  # BE - Wikilangs Models
 
  ### Models & Assets

  - Tokenizers (8k, 16k, 32k, 64k)
+ - N-gram models (2, 3, 4, 5-gram)
+ - Markov chains (context of 1, 2, 3, 4 and 5)
  - Subword N-gram and Markov chains
+ - Embeddings in various sizes and dimensions (aligned and unaligned)
  - Language Vocabulary
  - Language Statistics
+
  ![Performance Dashboard](visualizations/performance_dashboard.png)

  ### Analysis and Evaluation
 
  - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
  - [4. Vocabulary Analysis](#4-vocabulary-analysis)
  - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
+ - [6. Morphological Analysis (Experimental)](#6-morphological-analysis)
+ - [7. Summary & Recommendations](#7-summary--recommendations)
  - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
  - [Visualizations Index](#visualizations-index)
 
  ![Tokenizer Compression](visualizations/tokenizer_compression.png)

+ ![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
+
+ ![Tokenizer OOV](visualizations/tokenizer_oov.png)
+
+ ![Total Tokens](visualizations/tokenizer_total_tokens.png)
+
  ### Results

  | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
  |------------|-------------|---------------|----------|--------------|
+ | **8k** | 3.593x | 3.60 | 0.0487% | 287,700 |
+ | **16k** | 4.036x | 4.04 | 0.0547% | 256,163 |
+ | **32k** | 4.451x | 4.46 | 0.0603% | 232,280 |
+ | **64k** | 4.769x 🏆 | 4.77 | 0.0646% | 216,795 |

  ### Tokenization Examples

  Below are sample sentences tokenized with each vocabulary size:

+ **Sample 1:** `Грынчэнкавэ () — вёска ў Ахтырскім раёне Сумскай вобласці Украіны. Уваходзіць у ...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
+ | 8k | `▁гры н чэн ка вэ ▁() ▁— ▁вёска ▁ў ▁ах ... (+23 more)` | 33 |
+ | 16k | `▁грын чэнка вэ ▁() ▁— ▁вёска ▁ў ▁ах ты рскім ... (+21 more)` | 31 |
+ | 32k | `▁грын чэнка вэ ▁() ▁— ▁вёска ▁ў ▁ахты рскім ▁раёне ... (+19 more)` | 29 |
+ | 64k | `▁грын чэнка вэ ▁() ▁— ▁вёска ▁ў ▁ахтырскім ▁раёне ▁сумскай ... (+17 more)` | 27 |

+ **Sample 2:** `Лугавэ () — вёска ў Бродыўскім раёне Львоўскай вобласці Украіны. Крыніцы пункты ...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
+ | 8k | `▁луга вэ ▁() ▁— ▁вёска ▁ў ▁б роды ўскім ▁раёне ... (+15 more)` | 25 |
+ | 16k | `▁луга вэ ▁() ▁— ▁вёска ▁ў ▁б роды ўскім ▁раёне ... (+15 more)` | 25 |
+ | 32k | `▁луга вэ ▁() ▁— ▁вёска ▁ў ▁броды ўскім ▁раёне ▁львоўскай ... (+13 more)` | 23 |
+ | 64k | `▁луга вэ ▁() ▁— ▁вёска ▁ў ▁бродыўскім ▁раёне ▁львоўскай ▁вобласці ... (+11 more)` | 21 |

+ **Sample 3:** `Косарэвэ () — вёска ў Млыніўскім раёне Ровенскай вобласці Украіны. Уваходзіць у ...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
+ | 8k | `▁ко са рэ вэ ▁() ▁— ▁вёска ▁ў ▁млы ніў ... (+21 more)` | 31 |
+ | 16k | `▁ко са рэ вэ ▁() ▁— ▁вёска ▁ў ▁млы ніўскім ... (+19 more)` | 29 |
+ | 32k | `▁коса рэ вэ ▁() ▁— ▁вёска ▁ў ▁млы ніўскім ▁раёне ... (+17 more)` | 27 |
+ | 64k | `▁коса рэ вэ ▁() ▁— ▁вёска ▁ў ▁млыніўскім ▁раёне ▁ровенскай ... (+15 more)` | 25 |

  ### Key Findings

+ - **Best Compression:** 64k achieves 4.769x compression
+ - **Lowest UNK Rate:** 8k with 0.0487% unknown tokens
  - **Trade-off:** Larger vocabularies improve compression but increase model size
  - **Recommendation:** 32k vocabulary provides optimal balance for production use
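
The compression figures above are easy to reproduce. A minimal sketch, assuming the `sentencepiece` Python package and the `.model` files downloaded from this repo's `models/tokenizer/` directory:

```python
# Minimal sketch: load each SentencePiece tokenizer and compare token
# counts on one sample sentence across vocab sizes.
import sentencepiece as spm

sample = "Грынчэнкавэ () — вёска ў Ахтырскім раёне Сумскай вобласці Украіны."

for size in ("8k", "16k", "32k", "64k"):
    sp = spm.SentencePieceProcessor(
        model_file=f"models/tokenizer/be_tokenizer_{size}.model"
    )
    pieces = sp.encode(sample, out_type=str)
    # Characters per token, a proxy for the compression column above.
    print(size, len(pieces), round(len(sample) / len(pieces), 2))
```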

  ![N-gram Perplexity](visualizations/ngram_perplexity.png)

+ ![N-gram Unique](visualizations/ngram_unique.png)
+
  ![N-gram Coverage](visualizations/ngram_coverage.png)

  ### Results

+ | N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
+ |--------|---------|------------|---------|----------------|------------------|-------------------|
+ | **2-gram** | Word | 114,899 | 16.81 | 1,095,876 | 11.4% | 25.2% |
+ | **2-gram** | Subword | 453 🏆 | 8.82 | 15,607 | 55.9% | 96.8% |
+ | **3-gram** | Word | 176,550 | 17.43 | 1,682,544 | 11.7% | 25.2% |
+ | **3-gram** | Subword | 4,192 | 12.03 | 145,836 | 18.7% | 59.5% |
+ | **4-gram** | Word | 286,677 | 18.13 | 2,809,290 | 9.5% | 25.0% |
+ | **4-gram** | Subword | 25,337 | 14.63 | 930,596 | 8.0% | 29.4% |

  ### Top 5 N-grams by Size

+ **2-grams (Word):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `0 10` | 188,589 |
+ | 2 | `10 0` | 184,433 |
+ | 3 | `0 09` | 178,218 |
+ | 4 | `09 0` | 172,686 |
+ | 5 | `у годзе` | 140,117 |
+
+ **3-grams (Word):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `0 10 0` | 183,056 |
+ | 2 | `0 09 0` | 171,686 |
+ | 3 | `0 11 0` | 133,046 |
+ | 4 | `0 08 0` | 125,664 |
+ | 5 | `0 07 0` | 84,761 |
+
+ **4-grams (Word):**

  | Rank | N-gram | Count |
  |------|--------|-------|
+ | 1 | `0 44 0 10` | 28,229 |
+ | 2 | `44 0 10 0` | 27,892 |
+ | 3 | `0 47 0 10` | 27,125 |
+ | 4 | `47 0 10 0` | 26,709 |
+ | 5 | `0 50 0 10` | 26,628 |

+ **2-grams (Subword):**

  | Rank | N-gram | Count |
  |------|--------|-------|
+ | 1 | `_` | 7,375,676 |
+ | 2 | `а` | 5,829,339 |
+ | 3 | `а` | 5,735,773 |
+ | 4 | `а` | 4,959,811 |
+ | 5 | `_ п` | 4,750,427 |

+ **3-grams (Subword):**

  | Rank | N-gram | Count |
  |------|--------|-------|
+ | 1 | `_ п а` | 2,102,007 |
+ | 2 | `_ 0 ,` | 1,872,298 |
+ | 3 | `_ н а` | 1,670,363 |
+ | 4 | `а _` | 1,424,587 |
+ | 5 | `_ п р` | 1,341,590 |
+
+ **4-grams (Subword):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `а г а _` | 980,628 |
+ | 2 | `_ п р а` | 746,402 |
+ | 3 | `_ г о д` | 708,921 |
+ | 4 | `_ н а _` | 692,237 |
+ | 5 | `к а й _` | 545,902 |

  ### Key Findings

+ - **Best Perplexity:** 2-gram (subword) with 453
  - **Entropy Trend:** Decreases with larger n-grams (more predictable)
+ - **Coverage:** Top-1000 patterns cover ~29% of corpus
  - **Recommendation:** 4-gram or 5-gram for best predictive performance
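
These statistics can be recomputed from the published parquet tables. A minimal sketch follows; the column names `ngram` and `count` are assumptions about the schema, and entropy here is H = -Σ p·log2(p) over the n-gram distribution with perplexity 2^H, which may differ from the pipeline's exact (likely conditional) definition:

```python
# Minimal sketch: recompute entropy, perplexity, and top-1000 coverage
# from an n-gram frequency table stored in parquet.
import numpy as np
import pandas as pd

df = pd.read_parquet("models/word_ngram/be_2gram_word.parquet")
p = df["count"] / df["count"].sum()        # empirical n-gram distribution
entropy = float(-(p * np.log2(p)).sum())   # bits per n-gram
perplexity = 2 ** entropy
top1000 = df.nlargest(1000, "count")["count"].sum() / df["count"].sum()
print(f"entropy={entropy:.2f} bits, perplexity={perplexity:,.0f}, "
      f"top-1000 coverage={top1000:.1%}")
```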

  ---

  ![Markov Entropy](visualizations/markov_entropy.png)

+ ![Markov Contexts](visualizations/markov_contexts.png)
+
  ![Markov Branching](visualizations/markov_branching.png)

  ### Results

+ | Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
+ |---------|---------|-------------|------------|------------------|-----------------|----------------|
+ | **1** | Word | 0.9806 | 1.973 | 10.65 | 1,594,726 | 1.9% |
+ | **1** | Subword | 0.4731 | 1.388 | 3.96 | 16,459 | 52.7% |
+ | **2** | Word | 0.3129 | 1.242 | 1.94 | 16,955,773 | 68.7% |
+ | **2** | Subword | 0.6387 | 1.557 | 4.81 | 65,143 | 36.1% |
+ | **3** | Word | 0.1126 | 1.081 | 1.23 | 32,878,014 | 88.7% |
+ | **3** | Subword | 0.8192 | 1.764 | 4.91 | 313,186 | 18.1% |
+ | **4** | Word | 0.0455 🏆 | 1.032 | 1.08 | 40,250,681 | 95.5% |
+ | **4** | Subword | 0.7603 | 1.694 | 3.75 | 1,537,647 | 24.0% |

+ ### Generated Text Samples (Word-based)

+ Below are text samples generated from each word-based Markov chain model:

  **Context Size 1:**

+ 1. `0 57 0 09 0 67 0 07 0 58 км на 1 20 лютага жэнева`
+ 2. `стаўшы першым урадзе і гітарыст разам з поўдня сутыкненні прыпыніліся на кіргізскай сср 10 0`
+ 3. `годзе гэтыя эксперыменты па год 11 0 56 0 75 0 50 0 08 0`

  **Context Size 2:**

+ 1. `0 10 0 50 0 10 0 39 0 11 0 36 0 12 0 54 0`
+ 2. `10 0 68 0 25 0 6 1 52 1 25 джэсіка пегула эна сібахара 7 6`
+ 3. `0 09 0 46 0 10 0 35 0 12 0 37 0 12 0 д2 прамень`

  **Context Size 3:**

+ 1. `0 10 0 37 0 12 0 35 0 48 0 10 0 56 0 09 0 51`
+ 2. `0 09 0 37 0 12 0 57 0 09 0 41 0 11 0 45 0 10`
+ 3. `0 11 0 42 0 11 0 61 0 08 0 51 0 09 0 37 0 12`

  **Context Size 4:**

+ 1. `0 44 0 10 0 52 0 09 0 43 0 11 0 76 0 07 0 37 0`
+ 2. `44 0 10 0 51 0 09 0 51 0 09 0 42 0 11 0 60 0 08`
+ 3. `0 47 0 10 0 54 0 09 0 65 0 08 0 38 0 11 0 46 0`
+
+ ### Generated Text Samples (Subword-based)
+
+ Below are text samples generated from each subword-based Markov chain model:
+
+ **Context Size 1:**
+
+ 1. `_irone_саджырода`
+ 2. `аса._бетвекаенсы`
+ 3. `ных_г._тэні_09_—`
+
+ **Context Size 2:**
+
+ 1. `а_абто_чальны,_пр`
+ 2. `наяны_нькімпіныма`
+ 3. `раён_з_10),_якаге`
+
+ **Context Size 3:**
+
+ 1. `_паднакадэміі_пало`
+ 2. `_0,40_0,56_0,50_0,`
+ 3. `_на_паданні._перац`
+
+ **Context Size 4:**
+
+ 1. `ага_адсек_нацыя_4_т`
+ 2. `_пра_ў_сваюць_62-я_`
+ 3. `_годзе._жывяць_дызе`

  ### Key Findings

+ - **Best Predictability:** Context-4 (word) with 95.5% predictability
  - **Branching Factor:** Decreases with context size (more deterministic)
+ - **Memory Trade-off:** Larger contexts require more storage (1,537,647 contexts)
  - **Recommendation:** Context-3 or Context-4 for text generation
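
Samples like the ones above can be drawn directly from the transition tables. A minimal sketch; the column names `context`, `next`, and `count`, and the space-joined context format, are assumptions about the parquet schema:

```python
# Minimal sketch: stochastic generation from a context-2 word-level
# Markov table, sampling the next word in proportion to its count.
import pandas as pd

df = pd.read_parquet("models/word_markov/be_markov_ctx2_word.parquet")

def generate(seed: str, steps: int = 15) -> str:
    words = seed.split()
    for _ in range(steps):
        ctx = " ".join(words[-2:])              # context size 2
        cands = df[df["context"] == ctx]
        if cands.empty:                         # unseen context: stop early
            break
        words.append(cands.sample(n=1, weights="count").iloc[0]["next"])
    return " ".join(words)

print(generate("у горадзе"))
```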

  ---

  | Metric | Value |
  |--------|-------|
+ | Vocabulary Size | 739,605 |
+ | Total Tokens | 54,963,738 |
+ | Mean Frequency | 74.31 |
  | Median Frequency | 4 |
+ | Frequency Std Dev | 3865.57 |

  ### Most Common Words

  | Rank | Word | Frequency |
  |------|------|-----------|
+ | 1 | 0 | 1,944,698 |
+ | 2 | і | 1,322,186 |
+ | 3 | у | 1,231,156 |
+ | 4 | ў | 1,155,870 |
+ | 5 | з | 858,124 |
+ | 6 | на | 705,989 |
+ | 7 | года | 365,156 |
+ | 8 | да | 288,350 |
+ | 9 | годзе | 255,744 |
+ | 10 | 10 | 239,762 |

  ### Least Common Words (from vocabulary)

  | Rank | Word | Frequency |
  |------|------|-----------|
+ | 1 | іцуно | 2 |
+ | 2 | міурай | 2 |
+ | 3 | kodanshas | 2 |
+ | 4 | llb | 2 |
+ | 5 | давы́даўскае | 2 |
+ | 6 | эльханон | 2 |
+ | 7 | vilner | 2 |
+ | 8 | emes | 2 |
+ | 9 | folkstsaytung | 2 |
+ | 10 | dertseyln | 2 |

  ### Zipf's Law Analysis

  | Metric | Value |
  |--------|-------|
+ | Zipf Coefficient | 0.9714 |
+ | R² (Goodness of Fit) | 0.997385 |
  | Adherence Quality | **excellent** |

  ### Coverage Analysis

  | Top N Words | Coverage |
  |-------------|----------|
+ | Top 100 | 29.3% |
+ | Top 1,000 | 50.6% |
  | Top 5,000 | 67.4% |
+ | Top 10,000 | 74.5% |

  ### Key Findings

+ - **Zipf Compliance:** R²=0.9974 indicates excellent adherence to Zipf's law
+ - **High Frequency Dominance:** Top 100 words cover 29.3% of corpus
+ - **Long Tail:** 729,605 words needed for remaining 25.5% coverage
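
The Zipf coefficient and R² above come from a log-log fit of frequency against rank. A minimal sketch over the vocabulary parquet, assuming `frequency` as the count column (check the actual schema):

```python
# Minimal sketch: fit the Zipf coefficient as the negative slope of the
# log-log rank/frequency regression, with R² as goodness of fit.
import numpy as np
import pandas as pd

df = pd.read_parquet("models/vocabulary/be_vocabulary.parquet")
freq = np.sort(df["frequency"].to_numpy())[::-1]   # descending frequencies
rank = np.arange(1, len(freq) + 1)

slope, intercept = np.polyfit(np.log(rank), np.log(freq), 1)
pred = slope * np.log(rank) + intercept
resid = np.log(freq) - pred
r2 = 1 - resid.var() / np.log(freq).var()
print(f"zipf coefficient ≈ {-slope:.4f}, R² ≈ {r2:.6f}")
```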

  ---
  ## 5. Word Embeddings Evaluation

  ![t-SNE Sentences](visualizations/tsne_sentences.png)

+ ### 5.1 Cross-Lingual Alignment
+
+ > *Note: Multilingual alignment visualization not available for this language.*
+
+ ### 5.2 Model Comparison
+
+ | Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
+ |-------|-----------|----------|------------------|---------------|----------------|
+ | **mono_32d** | 32 | 0.6148 | 0.3550 | N/A | N/A |
+ | **mono_64d** | 64 | 0.6479 | 0.2915 | N/A | N/A |
+ | **mono_128d** | 128 | 0.6512 🏆 | 0.2220 | N/A | N/A |

  ### Key Findings

+ - **Best Isotropy:** mono_128d with 0.6512 (more uniform distribution)
+ - **Semantic Density:** Average pairwise similarity of 0.2895. Lower values indicate better semantic separation.
+ - **Alignment Quality:** No aligned models evaluated in this run.
+ - **Recommendation:** 128d aligned for best cross-lingual performance
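
One common isotropy proxy is the ratio of the smallest to the largest principal value of the centered embedding matrix (1.0 means perfectly uniform); the pipeline's exact metric is not documented here and may differ. A sketch, assuming the `.bin` files are fastText-compatible and loadable with `gensim` (an assumption suggested by the skip-gram training parameters, not stated in the repo):

```python
# Minimal sketch of an isotropy proxy via singular values of the
# centered embedding matrix.
import numpy as np
from gensim.models.fasttext import load_facebook_vectors

vecs = load_facebook_vectors("models/embeddings/monolingual/be_32d.bin")
X = vecs.vectors - vecs.vectors.mean(axis=0)   # center the embeddings
sing = np.linalg.svd(X, compute_uv=False)      # singular values, descending
print("isotropy proxy:", (sing[-1] / sing[0]) ** 2)
```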

  ---
+ ## 6. Morphological Analysis (Experimental)
+
+ > ⚠️ **Warning:** This language shows low morphological productivity. The statistical signals used for this analysis may be noisy or less reliable than for morphologically rich languages.
+
+ This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
+
+ ### 6.1 Productivity & Complexity
+
+ | Metric | Value | Interpretation | Recommendation |
+ |--------|-------|----------------|----------------|
+ | Productivity Index | **0.000** | Low morphological productivity | ⚠️ Likely unreliable |
+ | Idiomaticity Gap | **-1.000** | Low formulaic content | - |
+
+ ### 6.2 Affix Inventory (Productive Units)
+
+ These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts; a minimal sketch of this test follows.
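
The sketch below illustrates the substitutability idea for suffixes, assuming `vocab` is a set of words (for example loaded from `models/vocabulary/be_vocabulary.parquet`); the thresholds and the exact scoring the pipeline uses are not documented here:

```python
# Minimal sketch of the global-substitutability test described above.
# A candidate suffix is credited once for every stem that also occurs
# with at least one other ending, i.e. the stem "appears in other
# contexts". Thresholds are illustrative, not the pipeline's.
from collections import Counter, defaultdict

def productive_suffixes(vocab, max_suffix=3, min_len=5, top=8):
    stem_endings = defaultdict(set)     # stem -> set of observed endings
    for word in vocab:
        if len(word) < min_len:
            continue
        for k in range(1, max_suffix + 1):
            stem_endings[word[:-k]].add(word[-k:])
    score = Counter()
    for endings in stem_endings.values():
        if len(endings) > 1:            # stem is substitutable
            for suffix in endings:
                score[suffix] += 1
    return score.most_common(top)

# Example: productive_suffixes({"кішскага", "кішскай", "кішскія", ...})
```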
+ #### Productive Prefixes
+
+ | Prefix | Examples |
+ |--------|----------|
+ | `ка-` | каганаў, кайлі, карэлятыўных |
+ | `па-` | пасуэлу, падую, паліцыянтаў |
+ | `пр-` | протестантами, провозглашении, принципу |
+
+ #### Productive Suffixes
+
+ | Suffix | Examples |
+ |--------|----------|
+ | `-а` | кішскага, краснасельскага, апельсіна |
+ | `-кі` | ліпнякі, чарашкі, вярцінскі |
+ | `-га` | кішскага, краснасельскага, луэнга |
+ | `-ай` | абнаўленчай, пустэльніцай, факталагічнай |
+ | `-ага` | кішскага, краснасельскага, найбагацейшага |
+ | `-мі` | неадмоўнымі, контурамі, абрамі |
+ | `-ая` | наватухінская, загорская, чакаўская |
+ | `-ыя` | шматбаковыя, перанятыя, узбагачаныя |
+
+ ### 6.3 Bound Stems (Lexical Roots)
+
+ Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid.
+
+ | Stem | Cohesion | Substitutability | Examples |
+ |------|----------|------------------|----------|
+ | `насц` | 1.82x | 190 contexts | насцю, насць, насці |
+ | `елар` | 2.47x | 46 contexts | белар, гелар, келар |
+ | `анск` | 1.35x | 1021 contexts | ганск, данск, канск |
+ | `асел` | 2.07x | 87 contexts | расел, насел, асель |
+ | `нскі` | 1.43x | 414 contexts | янскі, енскі, інскі |
+ | `ання` | 1.67x | 173 contexts | рання, вання, ранняе |
+ | `аецц` | 2.21x | 48 contexts | ваецца, каецца, лаецца |
+ | `нска` | 1.35x | 500 contexts | унска, янска, минска |
+ | `ўска` | 1.52x | 236 contexts | еўска, іўска, еўская |
+ | `ленн` | 1.48x | 234 contexts | гленн, ленны, ленная |
+ | `йска` | 1.59x | 149 contexts | йская, ейска, войска |
+ | `уска` | 1.36x | 263 contexts | буска, гуска, ускат |
+
+ ### 6.4 Affix Compatibility (Co-occurrence)
+
+ This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
+
+ | Prefix | Suffix | Frequency | Examples |
+ |--------|--------|-----------|----------|
+ | `ка-` | `-а` | 66 words | каміна, камунізма |
+ | `па-` | `-а` | 55 words | паступаленка, панінскага |
+ | `пр-` | `-а` | 28 words | прыкладвацца, прынада |
+ | `па-` | `-ай` | 21 words | паплаўковай, пастаяннай |
+ | `па-` | `-мі` | 17 words | пасіўнымі, паказнікамі |
+ | `па-` | `-кі` | 16 words | палінскі, падзьячаскі |
+ | `ка-` | `-га` | 16 words | какамега, калобжагскага |
+ | `ка-` | `-ага` | 15 words | калобжагскага, каламойскага |
+ | `ка-` | `-кі` | 14 words | кадомскі, каўхаёкі |
+ | `ка-` | `-аў` | 12 words | карыбаў, катэрынычаў |
+
+ ### 6.5 Recursive Morpheme Segmentation
+
+ Using **Recursive Hierarchical Substitutability**, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., `prefix-prefix-root-suffix`).
+
+ | Word | Suggested Split | Confidence | Stem |
+ |------|-----------------|------------|------|
+ | барыёнамі | **`барыё-на-мі`** | 6.0 | `барыё` |
+ | курапаткіна | **`курапат-кі-на`** | 6.0 | `курапат` |
+ | хакеістаў | **`хакеіст-аў`** | 4.5 | `хакеіст` |
+ | навасібірская | **`навасібірск-ая`** | 4.5 | `навасібірск` |
+ | пірамідаў | **`пірамід-аў`** | 4.5 | `пірамід` |
+ | трансфарматараў | **`трансфарматар-аў`** | 4.5 | `трансфарматар` |
+ | участковыя | **`участков-ыя`** | 4.5 | `участков` |
+ | вузельчыкамі | **`вузельчыка-мі`** | 4.5 | `вузельчыка` |
+ | мікрараёнаў | **`мікрараён-аў`** | 4.5 | `мікрараён` |
+ | патраціць | **`па-траціць`** | 4.5 | `траціць` |
+ | папоўніцца | **`па-поўніцца`** | 4.5 | `поўніцца` |
+ | капашчэўскі | **`ка-па-шчэўс-кі`** | 4.5 | `шчэўс` |
+ | накрыўкамі | **`накрыўка-мі`** | 4.5 | `накрыўка` |
+ | наведвальніцкі | **`наведвальніц-кі`** | 4.5 | `наведвальніц` |
+ | беспартыйнымі | **`беспартыйны-мі`** | 4.5 | `беспартыйны` |
+
+ ### 6.6 Linguistic Interpretation
+
+ > **Automated Insight:**
+ > The language BE appears to be more isolating or to have a highly fixed vocabulary. Word-level models perform nearly as well as subword models, indicating fewer productive morphological processes.
+
+ ---
+ ## 7. Summary & Recommendations

  ![Performance Dashboard](visualizations/performance_dashboard.png)

  | Component | Recommended | Rationale |
  |-----------|-------------|-----------|
+ | Tokenizer | **64k BPE** | Best compression (4.77x) |
+ | N-gram | **2-gram** | Lowest perplexity (453) |
+ | Markov | **Context-4** | Highest predictability (95.5%) |
  | Embeddings | **100d** | Balanced semantic capture and isotropy |
+
  ---
  ## Appendix: Metrics Glossary & Interpretation Guide
  author = {Kamali, Omar},
  title = {Wikilangs: Open NLP Models for Wikipedia Languages},
  year = {2025},
+ doi = {10.5281/zenodo.18073153},
+ publisher = {Zenodo},
  url = {https://huggingface.co/wikilangs}
  institution = {Omneity Labs}
  }
 
  - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
  - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
  - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
+ - 🤝 Sponsor: [Featherless AI](https://featherless.ai)

  ---
  *Generated by Wikilangs Models Pipeline*

+ *Report Date: 2026-01-03 11:32:17*
models/embeddings/monolingual/be_128d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:348b8d47c3adfcab8cf7ed5c5c126bac20a90632d48e0a96d1ac58cf34bdd206
- size 1615310423
+ oid sha256:e80dd83fd9b000c473bacdfc520317bc08c8e6232f6acc8ddf47a4dc636212b7
+ size 1567868138
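
Every binary in this commit is stored as a Git LFS pointer like the one above: three `key value` lines naming the spec version, the sha256 of the real blob, and its size. A minimal sketch of parsing one (Python 3.9+ for `removeprefix`):

```python
# Minimal sketch: parse a Git LFS pointer file to recover the expected
# sha256 and byte size of the tracked blob.
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

ptr = """version https://git-lfs.github.com/spec/v1
oid sha256:e80dd83fd9b000c473bacdfc520317bc08c8e6232f6acc8ddf47a4dc636212b7
size 1567868138"""
print(parse_lfs_pointer(ptr))
```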
models/embeddings/monolingual/be_128d_metadata.json CHANGED
@@ -3,11 +3,13 @@
  "dimension": 128,
  "version": "monolingual",
  "training_params": {
- "dim": 128,
+ "algorithm": "skipgram",
  "min_count": 5,
  "window": 5,
  "negative": 5,
- "epochs": 5
+ "epochs": 5,
+ "encoding_method": "rope",
+ "dim": 128
  },
- "vocab_size": 563209
+ "vocab_size": 518052
  }
models/embeddings/monolingual/be_32d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:95ba41adfe852e9124f1f4e2a999e53493f8ac5a72620a903103616d33e1d3f8
- size 414765911
+ oid sha256:e15ec6617f84546d2951de84ffe80fbfa2280da80a7135e996e30747c163a575
+ size 402004202
models/embeddings/monolingual/be_32d_metadata.json CHANGED
@@ -3,11 +3,13 @@
  "dimension": 32,
  "version": "monolingual",
  "training_params": {
- "dim": 32,
+ "algorithm": "skipgram",
  "min_count": 5,
  "window": 5,
  "negative": 5,
- "epochs": 5
+ "epochs": 5,
+ "encoding_method": "rope",
+ "dim": 32
  },
- "vocab_size": 563209
+ "vocab_size": 518052
  }
models/embeddings/monolingual/be_64d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0aae5fa847ee5bb9a765d41f00f80662622108ba7c2da83c491b6ee1aaff3cbc
- size 814947415
+ oid sha256:0dca4824861fd94e6b9de472d555ae08662bf04a8795cab1ac77097e32c191f3
+ size 790625514
models/embeddings/monolingual/be_64d_metadata.json CHANGED
@@ -3,11 +3,13 @@
  "dimension": 64,
  "version": "monolingual",
  "training_params": {
- "dim": 64,
+ "algorithm": "skipgram",
  "min_count": 5,
  "window": 5,
  "negative": 5,
- "epochs": 5
+ "epochs": 5,
+ "encoding_method": "rope",
+ "dim": 64
  },
- "vocab_size": 563209
+ "vocab_size": 518052
  }
models/subword_markov/be_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:613af873a5c8ee531d4fd55c96b5d2024b23df806ba9c46820017e322c5c5213
- size 581483
+ oid sha256:16ec89ccbf7b33b419dff7091cc3396dd6b9c2f2d9e7b4aaa101c1f6dc261e98
+ size 528755
models/subword_markov/be_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 1,
  "variant": "subword",
  "language": "be",
- "unique_contexts": 17006,
- "total_transitions": 433862151
+ "unique_contexts": 16459,
+ "total_transitions": 384276543
  }
models/subword_markov/be_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e0c1db4d2e461008750e32dfa8de1dc9c3409c516afaa54674aef8e0c2ab8830
- size 3387673
+ oid sha256:75a72bc43ff9fcb1e07421d9900ef838856a31dd2e997a85da1ec51c2da7313f
+ size 2698586
models/subword_markov/be_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 2,
  "variant": "subword",
  "language": "be",
- "unique_contexts": 75115,
- "total_transitions": 433604630
+ "unique_contexts": 65143,
+ "total_transitions": 384021043
  }
models/subword_markov/be_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ab73b13dfa84faefbb17dd279eb97e6291463168a9832e9f2d357b36483bf97b
- size 16387590
+ oid sha256:3cc5bbbf80158973cace1739a5b2da93ae4aba1805dddfad45e13be87b4dd5b4
+ size 12779069
models/subword_markov/be_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 3,
  "variant": "subword",
  "language": "be",
- "unique_contexts": 422960,
- "total_transitions": 433347109
+ "unique_contexts": 313186,
+ "total_transitions": 383765543
  }
models/subword_markov/be_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:85fb9c5f2e85e087f9fd863b630a9776994de89274e1d9581cb75947969ee1f6
- size 64453185
+ oid sha256:78d8dd2621f613c8c1d06109067bcf9cbac4f41f2929f949639de78672590907
+ size 48720729
models/subword_markov/be_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 4,
  "variant": "subword",
  "language": "be",
- "unique_contexts": 2124122,
- "total_transitions": 433089588
+ "unique_contexts": 1537647,
+ "total_transitions": 383510043
  }
models/subword_ngram/be_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c0e30a48df55f96d4bf2b8dc208c62d144ebf34b1a70cfcf18baab44a7ed51fd
- size 267023
+ oid sha256:aafc4ee9f69f303f6f198618f5bee0cac66a99dacae147499dc0cae12854a772
+ size 221209
models/subword_ngram/be_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 2,
  "variant": "subword",
  "language": "be",
- "unique_ngrams": 19108,
- "total_ngrams": 433862151
+ "unique_ngrams": 15607,
+ "total_ngrams": 384276543
  }
models/subword_ngram/be_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1f8fca4862a13251eca0d62c746599da6a1c83bee3b4fd07ef8d26fd3782087b
- size 2504735
+ oid sha256:87fb73101845b2c8cdea801fcdcd4465df82baa9bba94ed1aefa8c506c088840
+ size 1907996
models/subword_ngram/be_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 3,
  "variant": "subword",
  "language": "be",
- "unique_ngrams": 199361,
- "total_ngrams": 433604630
+ "unique_ngrams": 145836,
+ "total_ngrams": 384021043
  }
models/subword_ngram/be_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:258fc772791e206747c2feee7088b7f28d6c39225074e25abce4217fc6aa0cba
- size 15989176
+ oid sha256:0a1d30f2ce0acd57d7abf371e1916b606dd8f958d5fc479f3dc5736b5bb18b10
+ size 12274905
models/subword_ngram/be_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 4,
  "variant": "subword",
  "language": "be",
- "unique_ngrams": 1228443,
- "total_ngrams": 433347109
+ "unique_ngrams": 930596,
+ "total_ngrams": 383765543
  }
models/tokenizer/be_tokenizer_16k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1402065668cb58d0465980ced26b6d3d48c3faaebd6dda93b82b5cbe1aabbe99
- size 589454
+ oid sha256:008fe4df9c07918b817613d49143c9d406e08cd7c95f2c94d7e35e4d7af0322f
+ size 592885
models/tokenizer/be_tokenizer_16k.vocab CHANGED
The diff for this file is too large to render.
models/tokenizer/be_tokenizer_32k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1e0a60bc4e1c3914f6172171b41c29fd097713bfa2666961c50cd45a0a57c495
- size 968380
+ oid sha256:f2db34459f167d40ce24759a3730279bf398faad2bcfe0de422d5a1ec7a70ffc
+ size 969782
models/tokenizer/be_tokenizer_32k.vocab CHANGED
The diff for this file is too large to render.
models/tokenizer/be_tokenizer_64k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7dd70bc8b3492061d8b4f5bfbcfdc3a51c6d45fd46e0c5ca413a9b0977d1bbb1
- size 1750083
+ oid sha256:df2ee1b2850c4e4bd93d09aa2f1f4c06b4fd62dd623170b145a36f61154961b9
+ size 1751650
models/tokenizer/be_tokenizer_64k.vocab CHANGED
The diff for this file is too large to render.
models/tokenizer/be_tokenizer_8k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:335477bcdfc1cb61bc835c34d31b30091119b1b7473fa74c622aca874b26dab0
- size 408070
+ oid sha256:e07b5ee32211d68f303eb0ca2473ef5a3e47cf3d435dbe20a3f50b5e40747119
+ size 410385
models/tokenizer/be_tokenizer_8k.vocab CHANGED
The diff for this file is too large to render.
models/vocabulary/be_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:67ae10394b92721eb663679625db202a64a880e70109d0b21040a910e8804efb
- size 13628498
+ oid sha256:eaef8e90391cc62be2430106a5b0b4c67cc2dfacdb35f517f62c46107295d042
+ size 12490294
models/vocabulary/be_vocabulary_metadata.json CHANGED
@@ -1,16 +1,17 @@
  {
  "language": "be",
- "vocabulary_size": 834514,
+ "vocabulary_size": 739605,
+ "variant": "full",
  "statistics": {
- "type_token_ratio": 0.044671476336453776,
+ "type_token_ratio": 0.028584751562556937,
  "coverage": {
- "top_100": 0.2744355925705539,
- "top_1000": 0.48551658368338324,
- "top_5000": 0.6526621297329156,
- "top_10000": 0.7207621763882552
+ "top_100": 0.28834518105660356,
+ "top_1000": 0.49802155961854777,
+ "top_5000": 0.6639092244917146,
+ "top_10000": 0.7333083111156797
  },
- "hapax_count": 1925896,
- "hapax_ratio": 0.697684764219808,
- "total_documents": 257521
+ "hapax_count": 855988,
+ "hapax_ratio": 0.5364701399417019,
+ "total_documents": 255500
  }
  }
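
The coverage figures in this metadata can be re-derived from the vocabulary parquet. A minimal sketch; the column name `frequency` is an assumption about the schema:

```python
# Minimal sketch: recompute top-N coverage from the stored vocabulary.
import pandas as pd

df = pd.read_parquet("models/vocabulary/be_vocabulary.parquet")
total = df["frequency"].sum()
top = df["frequency"].nlargest(10000).reset_index(drop=True)
for n in (100, 1000, 5000, 10000):
    print(f"top_{n}: {top[:n].sum() / total:.6f}")

# Note: hapax_count above exceeds vocabulary_size, which suggests hapaxes
# (frequency == 1) are dropped from the stored vocabulary and counted
# separately upstream; the rarest stored words have frequency 2.
print("hapax rows in stored vocab:", int((df["frequency"] == 1).sum()))
```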
models/word_markov/be_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f573302999a72f7aa3e9d142ae089b7b678ffd15700999d05203009d925512c0
- size 196984346
+ oid sha256:e8935add6e30b042b611c05c62b5e95de82abb4595dfd2e015226bf394cfb1f0
+ size 207227789
models/word_markov/be_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 1,
  "variant": "word",
  "language": "be",
- "unique_contexts": 2760646,
- "total_transitions": 81548782
+ "unique_contexts": 1594726,
+ "total_transitions": 55564226
  }
models/word_markov/be_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:63dffe71cfd86f4ff61c37367e2dbee74fa2f7c50e05a4825b7739595e1b2155
- size 696994958
+ oid sha256:ecade528e4ed410d8e502020cb4476eb3034bb468a559acee35d2d25b0b413e1
+ size 740234356
models/word_markov/be_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 2,
  "variant": "word",
  "language": "be",
- "unique_contexts": 17120073,
- "total_transitions": 81291262
+ "unique_contexts": 16955773,
+ "total_transitions": 55308726
  }
models/word_markov/be_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fd8f08d3be91c4c054ae6f0aaa136416885527a2d29113415ab337c834789d0e
- size 1226716369
+ oid sha256:79ab049dc05e060d35810bba686870b9c9a206eddbb744584e16841aca50e4f2
+ size 1158519507
models/word_markov/be_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 3,
  "variant": "word",
  "language": "be",
- "unique_contexts": 37905234,
- "total_transitions": 81033747
+ "unique_contexts": 32878014,
+ "total_transitions": 55053226
  }
models/word_markov/be_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:33587a71fb18a5c4cc838d154494581511cee2776f0934c16659de4c95e12078
- size 1617896422
+ oid sha256:5efe48d536fb8243e5c2508fc3a751114aaf85a61fbf563ecc161ba97c65c3dc
+ size 1439522702
models/word_markov/be_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 4,
  "variant": "word",
  "language": "be",
- "unique_contexts": 52452226,
- "total_transitions": 80776256
+ "unique_contexts": 40250681,
+ "total_transitions": 54797726
  }
models/word_ngram/be_2gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2e4fc961aa5fc25059f776fbf54bffac241e518ff469fdaf22b2abdea5a86e4d
- size 33153735
+ oid sha256:3a25d00ebef666273e7160d97bdef6a637b74a4a36ef712be7aa842c92bbff7d
+ size 28486547
models/word_ngram/be_2gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 2,
  "variant": "word",
  "language": "be",
- "unique_ngrams": 1374832,
- "total_ngrams": 81548782
+ "unique_ngrams": 1095876,
+ "total_ngrams": 55564226
  }
models/word_ngram/be_3gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:58da42f46883e63cb423ab5d1f9a73b92e98b59bc8da341ae4aaed966d43beef
- size 75522602
+ oid sha256:18acde570dcbb5bbf4141bebdadf37c9ea7cd46ab121e7d42466e7b12d35d67a
+ size 51844728
models/word_ngram/be_3gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 3,
  "variant": "word",
  "language": "be",
- "unique_ngrams": 2802568,
- "total_ngrams": 81291262
+ "unique_ngrams": 1682544,
+ "total_ngrams": 55308726
  }
models/word_ngram/be_4gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c5e3e3cc455ebf7ad153584a4edb334f24017f1636a5e52b898c56b9a3a24755
- size 147804215
+ oid sha256:556869d5b61ce881002e2f0826d29a633f55212072deae87c3eea091a943b1ad
+ size 95068402
models/word_ngram/be_4gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 4,
  "variant": "word",
  "language": "be",
- "unique_ngrams": 5150781,
- "total_ngrams": 81033747
+ "unique_ngrams": 2809290,
+ "total_ngrams": 55053226
  }
visualizations/embedding_isotropy.png CHANGED
visualizations/embedding_norms.png CHANGED
visualizations/embedding_similarity.png CHANGED

Git LFS Details (before)

  • SHA256: d6e2987d324435cea95e42e83255c774ac98eca75a749f726406bcc60fd7ea02
  • Pointer size: 131 Bytes
  • Size of remote file: 144 kB

Git LFS Details (after)

  • SHA256: 82db2985a76213dfef858fb1ac32136f2999c6530580724c82c9f25402cf98e8
  • Pointer size: 131 Bytes
  • Size of remote file: 148 kB
visualizations/markov_branching.png CHANGED
visualizations/markov_contexts.png CHANGED