omarkamali committed
Commit c5df758 · verified · 1 Parent(s): 521637e

Upload all models and assets for ann (20251001)

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. README.md +283 -126
  2. models/embeddings/monolingual/ann_128d.bin +2 -2
  3. models/embeddings/monolingual/ann_128d_metadata.json +5 -3
  4. models/embeddings/monolingual/ann_32d.bin +2 -2
  5. models/embeddings/monolingual/ann_32d_metadata.json +5 -3
  6. models/embeddings/monolingual/ann_64d.bin +2 -2
  7. models/embeddings/monolingual/ann_64d_metadata.json +5 -3
  8. models/subword_markov/ann_markov_ctx1_subword.parquet +2 -2
  9. models/subword_markov/ann_markov_ctx1_subword_metadata.json +2 -2
  10. models/subword_markov/ann_markov_ctx2_subword.parquet +2 -2
  11. models/subword_markov/ann_markov_ctx2_subword_metadata.json +2 -2
  12. models/subword_markov/ann_markov_ctx3_subword.parquet +2 -2
  13. models/subword_markov/ann_markov_ctx3_subword_metadata.json +2 -2
  14. models/subword_markov/ann_markov_ctx4_subword.parquet +2 -2
  15. models/subword_markov/ann_markov_ctx4_subword_metadata.json +2 -2
  16. models/subword_ngram/ann_2gram_subword.parquet +2 -2
  17. models/subword_ngram/ann_2gram_subword_metadata.json +2 -2
  18. models/subword_ngram/ann_3gram_subword.parquet +2 -2
  19. models/subword_ngram/ann_3gram_subword_metadata.json +2 -2
  20. models/subword_ngram/ann_4gram_subword.parquet +2 -2
  21. models/subword_ngram/ann_4gram_subword_metadata.json +2 -2
  22. models/tokenizer/ann_tokenizer_16k.model +2 -2
  23. models/tokenizer/ann_tokenizer_16k.vocab +0 -0
  24. models/tokenizer/ann_tokenizer_8k.model +2 -2
  25. models/tokenizer/ann_tokenizer_8k.vocab +0 -0
  26. models/vocabulary/ann_vocabulary.parquet +2 -2
  27. models/vocabulary/ann_vocabulary_metadata.json +8 -8
  28. models/word_markov/ann_markov_ctx1_word.parquet +2 -2
  29. models/word_markov/ann_markov_ctx1_word_metadata.json +2 -2
  30. models/word_markov/ann_markov_ctx2_word.parquet +2 -2
  31. models/word_markov/ann_markov_ctx2_word_metadata.json +2 -2
  32. models/word_markov/ann_markov_ctx3_word.parquet +2 -2
  33. models/word_markov/ann_markov_ctx3_word_metadata.json +2 -2
  34. models/word_markov/ann_markov_ctx4_word.parquet +2 -2
  35. models/word_markov/ann_markov_ctx4_word_metadata.json +2 -2
  36. models/word_ngram/ann_2gram_word.parquet +2 -2
  37. models/word_ngram/ann_2gram_word_metadata.json +2 -2
  38. models/word_ngram/ann_3gram_word.parquet +2 -2
  39. models/word_ngram/ann_3gram_word_metadata.json +2 -2
  40. models/word_ngram/ann_4gram_word.parquet +2 -2
  41. models/word_ngram/ann_4gram_word_metadata.json +2 -2
  42. visualizations/embedding_isotropy.png +0 -0
  43. visualizations/embedding_norms.png +0 -0
  44. visualizations/embedding_similarity.png +2 -2
  45. visualizations/markov_branching.png +0 -0
  46. visualizations/markov_contexts.png +0 -0
  47. visualizations/markov_entropy.png +0 -0
  48. visualizations/model_sizes.png +0 -0
  49. visualizations/ngram_coverage.png +0 -0
  50. visualizations/ngram_entropy.png +0 -0
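All of the binaries listed above are Git LFS objects, so the easiest way to fetch them is through `huggingface_hub`, pinned to this commit. A minimal sketch, assuming a hypothetical repo id `wikilangs/ann` (the full repository id is not shown on this page):

```python
# Sketch: fetch a few of the uploaded assets at this exact commit.
# NOTE: "wikilangs/ann" is a hypothetical repo id — substitute the real one.
from huggingface_hub import hf_hub_download

for filename in [
    "models/tokenizer/ann_tokenizer_16k.model",
    "models/vocabulary/ann_vocabulary.parquet",
    "models/word_markov/ann_markov_ctx4_word.parquet",
]:
    local_path = hf_hub_download(
        repo_id="wikilangs/ann",  # hypothetical
        filename=filename,
        revision="c5df758",       # pin to the commit shown above
    )
    print(local_path)
```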
README.md CHANGED
@@ -23,14 +23,14 @@ dataset_info:
 metrics:
 - name: best_compression_ratio
   type: compression
-  value: 4.268
+  value: 4.351
 - name: best_isotropy
   type: isotropy
-  value: 0.1631
+  value: 0.1947
 - name: vocabulary_size
   type: vocab
-  value: 4397
-generated: 2025-12-27
+  value: 0
+generated: 2026-01-03
 ---

 # ANN - Wikilangs Models
 
@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ### Models & Assets

 - Tokenizers (8k, 16k, 32k, 64k)
-- N-gram models (2, 3, 4-gram)
-- Markov chains (context of 1, 2, 3 and 4)
+- N-gram models (2, 3, 4, 5-gram)
+- Markov chains (context of 1, 2, 3, 4 and 5)
 - Subword N-gram and Markov chains
-- Embeddings in various sizes and dimensions
+- Embeddings in various sizes and dimensions (aligned and unaligned)
 - Language Vocabulary
 - Language Statistics
+
 ![Performance Dashboard](visualizations/performance_dashboard.png)

 ### Analysis and Evaluation
 
@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
 - [4. Vocabulary Analysis](#4-vocabulary-analysis)
 - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
-- [6. Summary & Recommendations](#6-summary--recommendations)
+- [6. Morphological Analysis (Experimental)](#6-morphological-analysis-experimental)
+- [7. Summary & Recommendations](#7-summary--recommendations)
 - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
 - [Visualizations Index](#visualizations-index)
 
 
@@ -68,51 +70,49 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ![Tokenizer Compression](visualizations/tokenizer_compression.png)

+![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
+
+![Tokenizer OOV](visualizations/tokenizer_oov.png)
+
+![Total Tokens](visualizations/tokenizer_total_tokens.png)
+
 ### Results

 | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
 |------------|-------------|---------------|----------|--------------|
-| **8k** | 4.015x | 4.00 | 0.1279% | 136,774 |
-| **16k** | 4.268x 🏆 | 4.25 | 0.1360% | 128,655 |
+| **8k** | 4.112x | 4.12 | 0.1464% | 133,892 |
+| **16k** | 4.351x 🏆 | 4.36 | 0.1549% | 126,546 |

 ### Tokenization Examples

 Below are sample sentences tokenized with each vocabulary size:

-**Sample 1:** `Nọwè ìre ido me Yurop.
-
-150px|thumb|Iman̄-ido Nọwè
-
-Ọgbọn̄:Ido
-Ọgbọn̄:Yurop`
+**Sample 1:** `Luwis òso 14 ìre ogwu ubọọn̄ me Furans bene me acha abayaage ire usen mkpa kan̄ ...`

 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁nọwè ▁ìre ▁idome ▁yurop .1 5 0 ... (+14 more)` | 24 |
-| 16k | `▁nọwè ▁ìre ▁idomeyurop . 1 5 0 ... (+14 more)` | 24 |
+| 8k | `▁lu wis ▁òso1 4 ▁ìre ogwu ▁ubọọn̄ ▁me ... (+23 more)` | 33 |
+| 16k | `▁luwis ▁òso ▁ 1 4 ▁ìre ▁ogwuubọọn̄mefurans ... (+21 more)` | 31 |

-**Sample 2:** `Ọrọn ìre mkpulu-ija ge òkjp me Agan̄ Mkpulu Akwa Ibom me ido Naijiria. Mkpulu-ij...`
+**Sample 2:** `Mọlita ìre ido me Yurop. thumb|Egop Ido Mọlita thumb|Iman̄-ido Mọlita thumb|Okwa...`

 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁ọrọn ▁ìre ▁mkpulu - ijage ▁òk j p me ... (+15 more)` | 25 |
-| 16k | `▁ọrọn ▁ìre ▁mkpulu - ija ge ▁òkjpme ▁agan̄mkpulu ... (+13 more)` | 23 |
+| 8k | `▁mọlita ▁ìre ▁ido ▁me ▁yurop . thumb | egopido ... (+19 more)` | 29 |
+| 16k | `▁mọlita ▁ìre ▁ido ▁meyurop .thumb | egop ido ... (+19 more)` | 29 |

-**Sample 3:** `Ikpọ̀n̄ ìre mfut uko òkitibi mè ito lek me emen ijọn̄. Îre mfut eyi acha ge.
-
-
-Ọ...`
+**Sample 3:** `Saint Marino ìre ido me Yurop. thumb|Egop Ido Saint Marino thumb|Iman̄-ido Saint...`

 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁ikpọ̀n̄ ▁ìre ▁mfutuko ▁òkitibi ▁itolek ▁me ▁emen ... (+13 more)` | 23 |
-| 16k | `▁ikpọ̀n̄ ▁ìre ▁mfutuko ▁òkitibi ▁itolek ▁me ▁emen ... (+13 more)` | 23 |
+| 8k | `▁saint ▁marino ▁ìre ▁idomeyurop .thumb | egop ... (+17 more)` | 27 |
+| 16k | `▁saint ▁marino ▁ìre ▁idomeyurop .thumb | egop ... (+17 more)` | 27 |

 ### Key Findings

-- **Best Compression:** 16k achieves 4.268x compression
-- **Lowest UNK Rate:** 8k with 0.1279% unknown tokens
+- **Best Compression:** 16k achieves 4.351x compression
+- **Lowest UNK Rate:** 8k with 0.1464% unknown tokens
 - **Trade-off:** Larger vocabularies improve compression but increase model size
 - **Recommendation:** 32k vocabulary provides optimal balance for production use
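The compression figures in the new table are easy to sanity-check once the tokenizer files in this commit are downloaded. A minimal sketch, assuming compression is input characters per output token (consistent with the Avg Token Len column; the pipeline's exact definition may differ):

```python
# Sketch: tokenize a report sample with the uploaded SentencePiece model
# and estimate chars-per-token compression plus the UNK rate.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(
    model_file="models/tokenizer/ann_tokenizer_16k.model")

text = "Mọlita ìre ido me Yurop."          # Sample 2 from the report
ids = sp.encode(text, out_type=int)
pieces = sp.encode(text, out_type=str)

compression = len(text) / len(ids)          # chars per token (assumed metric)
unk_rate = sum(i == sp.unk_id() for i in ids) / len(ids)
print(pieces)
print(f"compression={compression:.3f}x  unk_rate={unk_rate:.4%}")
```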
 
@@ -121,55 +121,87 @@ Below are sample sentences tokenized with each vocabulary size:
 ![N-gram Perplexity](visualizations/ngram_perplexity.png)

+![N-gram Unique](visualizations/ngram_unique.png)
+
 ![N-gram Coverage](visualizations/ngram_coverage.png)

 ### Results

-| N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
-|--------|------------|---------|----------------|------------------|-------------------|
-| **2-gram** | 1,256 🏆 | 10.29 | 3,472 | 37.4% | 75.8% |
-| **2-gram** | 253 🏆 | 7.98 | 1,421 | 67.5% | 99.4% |
-| **3-gram** | 2,655 | 11.37 | 5,764 | 25.2% | 58.7% |
-| **3-gram** | 1,454 | 10.51 | 7,986 | 32.8% | 79.6% |
-| **4-gram** | 4,959 | 12.28 | 9,244 | 18.4% | 44.6% |
-| **4-gram** | 4,906 | 12.26 | 25,888 | 20.8% | 56.1% |
+| N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
+|--------|---------|------------|---------|----------------|------------------|-------------------|
+| **2-gram** | Word | 1,111 | 10.12 | 2,498 | 36.3% | 78.1% |
+| **2-gram** | Subword | 241 🏆 | 7.91 | 1,230 | 67.8% | 99.7% |
+| **3-gram** | Word | 1,927 | 10.91 | 3,289 | 25.2% | 65.4% |
+| **3-gram** | Subword | 1,414 | 10.47 | 7,165 | 32.1% | 80.6% |
+| **4-gram** | Word | 3,376 | 11.72 | 4,802 | 16.9% | 48.5% |
+| **4-gram** | Subword | 4,883 | 12.25 | 24,184 | 20.0% | 55.6% |

 ### Top 5 N-grams by Size

-**2-grams:**
+**2-grams (Word):**

 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `agan ̄` | 1,895 |
-| 2 | `me lek` | 1,071 |
-| 3 | `me agan` | 841 |
-| 4 | `me emen` | 793 |
-| 5 | `ijọn ̄` | 694 |
+| 1 | `me lek` | 1,089 |
+| 2 | `me agan̄` | 844 |
+| 3 | `me emen` | 801 |
+| 4 | `ido ya` | 458 |
+| 5 | `ichit me` | 381 |

-**3-grams:**
+**3-grams (Word):**

 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `me agan ̄` | 831 |
-| 2 | `- mkpulu` | 373 |
-| 3 | `agan ̄ -` | 372 |
-| 4 | `a me` | 371 |
-| 5 | `ọgbọn ̄ :` | 339 |
+| 1 | `agan̄ ichep ura` | 217 |
+| 2 | `me ido ya` | 190 |
+| 3 | `me agan̄ osiki` | 183 |
+| 4 | `agan̄ mbum ura` | 176 |
+| 5 | `me agan̄ inyọn̄` | 172 |

-**4-grams:**
+**4-grams (Word):**

 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `agan ̄ - mkpulu` | 364 |
-| 2 | `agan ̄ inyọn ̄` | 302 |
-| 3 | `ichep - ura` | 255 |
-| 4 | `mbum - ura` | 235 |
-| 5 | `. ọgbọn ̄ :` | 233 |
+| 1 | `me agan̄ mbum ura` | 103 |
+| 2 | `me agan̄ ichep ura` | 96 |
+| 3 | `me ido ya ìre` | 62 |
+| 4 | `agan̄ inyọn̄ mbum ura` | 56 |
+| 5 | `ewabe ichit me emen` | 50 |

+**2-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `e _` | 19,443 |
+| 2 | `_ i` | 16,978 |
+| 3 | `_ m` | 15,100 |
+| 4 | `_ e` | 11,773 |
+| 5 | `a _` | 9,778 |
+
+**3-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `_ m e` | 7,822 |
+| 2 | `m e _` | 7,755 |
+| 3 | `a _` | 4,098 |
+| 4 | `r e _` | 4,084 |
+| 5 | `e _ i` | 3,290 |
+
+**4-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `_ m e _` | 7,635 |
+| 2 | `_ m è _` | 2,895 |
+| 3 | `l e k _` | 2,350 |
+| 4 | `_ a g a` | 1,914 |
+| 5 | `a g a n̄` | 1,906 |

 ### Key Findings

-- **Best Perplexity:** 2-gram with 253
+- **Best Perplexity:** 2-gram (subword) with 241
 - **Entropy Trend:** Decreases with larger n-grams (more predictable)
 - **Coverage:** Top-1000 patterns cover ~56% of corpus
 - **Recommendation:** 4-gram or 5-gram for best predictive performance
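Entropy and perplexity in the results table follow the standard definitions over the n-gram frequency distribution, H = -Σ p·log₂ p and PPL = 2^H. A minimal sketch over one of the uploaded count tables, assuming a hypothetical `count` column (inspect the parquet for the real schema):

```python
# Sketch: distribution entropy, perplexity, and top-1000 coverage
# from an n-gram count table. The column name "count" is an assumption.
import numpy as np
import pandas as pd

df = pd.read_parquet("models/word_ngram/ann_2gram_word.parquet")
counts = df["count"].to_numpy(dtype=float)
probs = counts / counts.sum()

entropy = -(probs * np.log2(probs)).sum()        # bits
perplexity = 2.0 ** entropy
coverage_1000 = np.sort(probs)[::-1][:1000].sum()

print(f"H={entropy:.2f} bits  PPL={perplexity:,.0f}  top-1000={coverage_1000:.1%}")
```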
 
@@ -179,55 +211,86 @@ Below are sample sentences tokenized with each vocabulary size:
 ![Markov Entropy](visualizations/markov_entropy.png)

+![Markov Contexts](visualizations/markov_contexts.png)
+
 ![Markov Branching](visualizations/markov_branching.png)

 ### Results

-| Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
-|---------|-------------|------------|------------------|-----------------|----------------|
-| **1** | 0.6614 | 1.582 | 4.53 | 10,250 | 33.9% |
-| **1** | 1.2658 | 2.405 | 9.65 | 293 | 0.0% |
-| **2** | 0.3064 | 1.237 | 1.78 | 46,339 | 69.4% |
-| **2** | 1.1141 | 2.165 | 5.65 | 2,827 | 0.0% |
-| **3** | 0.1431 | 1.104 | 1.26 | 82,397 | 85.7% |
-| **3** | 0.7678 | 1.703 | 3.09 | 15,977 | 23.2% |
-| **4** | 0.0688 🏆 | 1.049 | 1.11 | 104,013 | 93.1% |
-| **4** | 0.4695 🏆 | 1.385 | 1.96 | 49,379 | 53.0% |
+| Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
+|---------|---------|-------------|------------|------------------|-----------------|----------------|
+| **1** | Word | 0.7773 | 1.714 | 4.68 | 9,818 | 22.3% |
+| **1** | Subword | 1.1325 | 2.192 | 8.75 | 290 | 0.0% |
+| **2** | Word | 0.2751 | 1.210 | 1.61 | 45,714 | 72.5% |
+| **2** | Subword | 1.0635 | 2.090 | 5.50 | 2,537 | 0.0% |
+| **3** | Word | 0.1069 | 1.077 | 1.18 | 73,222 | 89.3% |
+| **3** | Subword | 0.7725 | 1.708 | 3.21 | 13,932 | 22.7% |
+| **4** | Word | 0.0450 🏆 | 1.032 | 1.07 | 86,066 | 95.5% |
+| **4** | Subword | 0.4908 | 1.405 | 2.04 | 44,651 | 50.9% |

-### Generated Text Samples
+### Generated Text Samples (Word-based)

-Below are text samples generated from each Markov chain model:
+Below are text samples generated from each word-based Markov chain model:

 **Context Size 1:**

-1. `mbum - paeonia ebot ìre anam oron - enenwaan ̄ sa me acha`
-2. `me lek keya serum ; ikaan ̄ inu ike ikpa ebi ido ya me`
-3. `. erieen ̄ igege inu kire ama ebilene ; ọmọ ìluk me < emirate ] eyi`
+1. `me akparalek ijọn̄ ikọkọp uji ijọn̄ ìjeen̄ ibe ekọp esaba èwê alt left thumb iman̄ ido`
+2. ` owuwa ebi barazilu thumb iman̄ kan̄ belgiọm burazil pọtugalu thumb egop ubọọn̄ me zambia`
+3. `agan̄ mkpulu ubọọn̄ yi ìre siera leyon togo me yurop ìniluk me agan̄ ichepura eyi india`

 **Context Size 2:**

-1. `agan ̄ osiki ichep - ura kan ̄ , ike ekpabe me ibebene emen 1990 cha`
-2. `me lek otu - ifuk ebi ìluk me lek èwê sayara me afirika agan ̄ osiki mbum`
-3. `me agan ̄ - mkpulu 36 cha ọmọ ore ama ibot irân ire bagidadi . ọgbọn ̄`
+1. `me lek ike uti ìkatibi me èwê dubai`
+2. `me agan̄ osiki ire ebi ofifi èwê ere òla ijọn̄ eba ìkup ewuuk ewuuk me mgbọ`
+3. `me emen senturi akọp gweregwen ene ewabe me emen mîwa iraka efie ita thumb egop agan̄`

 **Context Size 3:**

-1. `me agan ̄ òsiki . emen - awaji atilantik , emen - awaji eyi india , emen -`
-2. `- mkpulu yi ìnan ̄ a me ujit fo ulom , ìyaka ìbọkọ si mkpukpe òmimin`
-3. `agan ̄ - mkpulu yi obenbe ìre 9 , 251 km² , otu - ifuk ifit ya`
+1. `agan̄ ichep ura me agan̄ ichep ura ruwanda me agan̄ osiki me ido naijiria achubọk inyinyi òrom òkuku...`
+2. `me ido ya bene me senturi 16 re 19 emen awaji atilantik ore achubọk ebon ere ewe inyam`
+3. `me agan̄ osiki naijiria ama mkpulu ìtatap ikana ọmọ ìre ginì ikwetọ me agan̄ inyọn̄ mbum ura ido`

 **Context Size 4:**

-1. `agan ̄ - mkpulu yi ìre okwaan ̄ imo , òkilibi iraka me okike ijọn ̄ kan ̄ .`
-2. `agan ̄ inyọn ̄ ọfọkọ agan ̄ osiki . iman ̄ yi ìlibi inan ̄ a me ofifi`
-3. `ichep - ura , bawuchi gombe me agan ̄ osiki , kogi me agan ̄ inyọn ̄`
+1. `me agan̄ mbum ura naija me agan̄ inyọn̄ mbum ura silovakia me agan̄ osiki mbum ura me lek`
+2. `me agan̄ ichep ura eyi amerika agan̄ inyọn̄ thumb ọrọsi thumb ọrọsi môkọt ikaan̄ esese mbet unwen mè...`
+3. `me ido ya ìre eyi ebọkọbe itap me 17 akọp mè onyan̄ ge otu ifuk ene ìluk me ido`

+### Generated Text Samples (Subword-based)
+
+Below are text samples generated from each subword-based Markov chain model:
+
+**Context Size 1:**
+
+1. `_ma_mè_mè_erirup`
+2. `e_ògan̄_chikilukp`
+3. `ituwupanwebọte_m`
+
+**Context Size 2:**
+
+1. `e_lek_mè_ìkuria_m`
+2. `_ififuuke_si_ichọ`
+3. `_me_òkuk_use_agan̄`
+
+**Context Size 3:**
+
+1. `_me_lek_ebi_kibert`
+2. `me_levan_obolo_pas`
+3. `an̄_echieen̄_ya_orọr`
+
+**Context Size 4:**
+
+1. `_me_lek_ìmọnọ_ire_o`
+2. `_mè_ebi_kè_ama-ile_`
+3. `lek_<raw_mate>_igba`

 ### Key Findings

-- **Best Predictability:** Context-4 with 93.1% predictability
+- **Best Predictability:** Context-4 (word) with 95.5% predictability
 - **Branching Factor:** Decreases with context size (more deterministic)
-- **Memory Trade-off:** Larger contexts require more storage (49,379 contexts)
+- **Memory Trade-off:** Larger contexts require more storage (44,651 contexts)
 - **Recommendation:** Context-3 or Context-4 for text generation
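The samples above are produced by repeatedly sampling the stored transition counts. A minimal sketch of that loop, assuming hypothetical `context`, `next_token`, and `count` columns in the Markov parquet files:

```python
# Sketch: sample from the context-2 word-level Markov chain.
# Column names ("context", "next_token", "count") are assumptions.
import random
import pandas as pd

df = pd.read_parquet("models/word_markov/ann_markov_ctx2_word.parquet")

def sample_next(context: str):
    rows = df[df["context"] == context]
    if rows.empty:
        return None
    # Draw proportionally to observed transition counts.
    return random.choices(rows["next_token"].tolist(),
                          weights=rows["count"].tolist(), k=1)[0]

tokens = ["me", "lek"]                       # seed context of size 2
for _ in range(15):
    nxt = sample_next(" ".join(tokens[-2:]))
    if nxt is None:
        break
    tokens.append(nxt)
print(" ".join(tokens))
```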
 ---
 
@@ -243,38 +306,38 @@ Below are text samples generated from each Markov chain model:
 | Metric | Value |
 |--------|-------|
-| Vocabulary Size | 4,397 |
-| Total Tokens | 93,822 |
-| Mean Frequency | 21.34 |
+| Vocabulary Size | 4,243 |
+| Total Tokens | 93,606 |
+| Mean Frequency | 22.06 |
 | Median Frequency | 4 |
-| Frequency Std Dev | 149.48 |
+| Frequency Std Dev | 154.88 |

 ### Most Common Words

 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | me | 7,521 |
-| 2 | mè | 2,898 |
-| 3 | agan | 1,911 |
-| 4 | ebi | 1,731 |
-| 5 | ido | 1,637 |
-| 6 | ìre | 1,613 |
-| 7 | lek | 1,578 |
-| 8 | eyi | 1,242 |
-| 9 | ya | 1,175 |
-| 10 | emen | 1,068 |
+| 1 | me | 7,683 |
+| 2 | mè | 2,927 |
+| 3 | agan̄ | 1,906 |
+| 4 | ido | 1,757 |
+| 5 | ebi | 1,749 |
+| 6 | ìre | 1,621 |
+| 7 | lek | 1,606 |
+| 8 | eyi | 1,291 |
+| 9 | ya | 1,169 |
+| 10 | emen | 1,082 |

 ### Least Common Words (from vocabulary)

 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | 3500 | 2 |
-| 2 | iyaak | 2 |
-| 3 | medvedev | 2 |
-| 4 | race | 2 |
-| 5 | lenin | 2 |
+| 1 | iyaak | 2 |
+| 2 | medvedev | 2 |
+| 3 | race | 2 |
+| 4 | lenin | 2 |
+| 5 | walvis | 2 |
 | 6 | ọkọlọba | 2 |
-| 7 | ǹkọọn | 2 |
+| 7 | ǹkọọn̄ | 2 |
 | 8 | edeh | 2 |
 | 9 | ogwuile | 2 |
 | 10 | bruxelles | 2 |

@@ -283,24 +346,24 @@ Below are text samples generated from each Markov chain model:
 | Metric | Value |
 |--------|-------|
-| Zipf Coefficient | 1.1589 |
-| R² (Goodness of Fit) | 0.990794 |
+| Zipf Coefficient | 1.1690 |
+| R² (Goodness of Fit) | 0.990906 |
 | Adherence Quality | **excellent** |

 ### Coverage Analysis

 | Top N Words | Coverage |
 |-------------|----------|
-| Top 100 | 58.8% |
-| Top 1,000 | 87.2% |
+| Top 100 | 59.7% |
+| Top 1,000 | 87.8% |
 | Top 5,000 | 0.0% |
 | Top 10,000 | 0.0% |

 ### Key Findings

-- **Zipf Compliance:** R²=0.9908 indicates excellent adherence to Zipf's law
-- **High Frequency Dominance:** Top 100 words cover 58.8% of corpus
-- **Long Tail:** -5,603 words needed for remaining 100.0% coverage
+- **Zipf Compliance:** R²=0.9909 indicates excellent adherence to Zipf's law
+- **High Frequency Dominance:** Top 100 words cover 59.7% of corpus
+- **Long Tail:** -5,757 words needed for remaining 100.0% coverage
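The Zipf coefficient and R² come from a least-squares fit of log frequency against log rank. A minimal sketch, assuming a hypothetical `frequency` column in the vocabulary parquet:

```python
# Sketch: fit Zipf's law  log f = a - s·log r  and report s and R².
import numpy as np
import pandas as pd

df = pd.read_parquet("models/vocabulary/ann_vocabulary.parquet")
freqs = np.sort(df["frequency"].to_numpy(dtype=float))[::-1]  # descending
ranks = np.arange(1, len(freqs) + 1)

x, y = np.log(ranks), np.log(freqs)
slope, intercept = np.polyfit(x, y, 1)
residual = ((y - (slope * x + intercept)) ** 2).sum()
r2 = 1.0 - residual / ((y - y.mean()) ** 2).sum()

print(f"Zipf coefficient={-slope:.4f}  R²={r2:.6f}")
```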
 ---
 ## 5. Word Embeddings Evaluation
 
376
 
377
  ![t-SNE Sentences](visualizations/tsne_sentences.png)
378
 
 
379
 
380
+ ### 5.1 Cross-Lingual Alignment
381
+
382
+ > *Note: Multilingual alignment visualization not available for this language.*
383
+
384
+
385
+ ### 5.2 Model Comparison
386
+
387
+ | Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
388
+ |-------|-----------|----------|------------------|---------------|----------------|
389
+ | **mono_32d** | 32 | 0.1947 🏆 | 0.5302 | N/A | N/A |
390
+ | **mono_64d** | 64 | 0.0325 | 0.5569 | N/A | N/A |
391
+ | **mono_128d** | 128 | 0.0071 | 0.5825 | N/A | N/A |
392
 
393
  ### Key Findings
394
 
395
+ - **Best Isotropy:** mono_32d with 0.1947 (more uniform distribution)
396
+ - **Semantic Density:** Average pairwise similarity of 0.5565. Lower values indicate better semantic separation.
397
+ - **Alignment Quality:** No aligned models evaluated in this run.
398
+ - **Recommendation:** 128d aligned for best cross-lingual performance
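Isotropy measures how uniformly the embedding space uses its directions. One simple estimator — an assumption here, not necessarily the one this pipeline uses — is the ratio of the smallest to the largest eigenvalue of the embedding covariance matrix:

```python
# Sketch: eigenvalue-ratio isotropy proxy (1.0 = perfectly isotropic).
# This is *an* estimator, assumed here; the report's may differ.
import numpy as np

def isotropy(vectors: np.ndarray) -> float:
    centered = vectors - vectors.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))
    return float(eigvals.min() / eigvals.max())

rng = np.random.default_rng(0)
fake_32d = rng.normal(size=(1989, 32))   # stand-in for the ann_32d vectors
print(f"isotropy ≈ {isotropy(fake_32d):.4f}")
```

Higher-dimensional spaces tend to concentrate variance in fewer directions, which is consistent with the table's drop from 0.1947 at 32d to 0.0071 at 128d.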
 ---
-## 6. Summary & Recommendations
+## 6. Morphological Analysis (Experimental)
+
+> ⚠️ **Warning:** This language shows low morphological productivity. The statistical signals used for this analysis may be noisy or less reliable than for morphologically rich languages.
+
+This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
+
+### 6.1 Productivity & Complexity
+
+| Metric | Value | Interpretation | Recommendation |
+|--------|-------|----------------|----------------|
+| Productivity Index | **0.000** | Low morphological productivity | ⚠️ Likely unreliable |
+| Idiomaticity Gap | **-1.000** | Low formulaic content | - |
+
+### 6.2 Affix Inventory (Productive Units)
+
+These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts.
+
+#### Productive Prefixes
+
+| Prefix | Examples |
+|--------|----------|
+| `ek-` | ekefuk, ekwut, ekpọkbe |
+| `ik-` | ikike, ikwuk, ikisip |
+
+#### Productive Suffixes
+
+| Suffix | Examples |
+|--------|----------|
+| `-n̄` | mun̄, òrọriọọn̄, ijejeen̄ |
+| `-be` | îgebe, olobobe, eweekbe |
+
+### 6.3 Bound Stems (Lexical Roots)
+
+Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid.
+
+| Stem | Cohesion | Substitutability | Examples |
+|------|----------|------------------|----------|
+| `gọọk` | 1.55x | 19 contexts | agọọk, igọọk, îgọọk |
+| `tumu` | 1.48x | 21 contexts | ìtumu, òtumu, etumu |
+| `kpul` | 1.58x | 16 contexts | ikpulu, ìkpulu, îkpulu |
+| `sibi` | 1.52x | 18 contexts | ìsibi, osibi, îsibi |
+| `kana` | 1.46x | 20 contexts | ikana, okana, ìkana |
+| `kikp` | 1.44x | 19 contexts | ikikpa, ikikpọ, ìkikpa |
+| `kisa` | 1.46x | 18 contexts | okisa, îkisa, ekisa |
+| `chie` | 1.55x | 14 contexts | chief, echieek, ìchieek |
+| `kpọk` | 1.42x | 17 contexts | okpọk, akpọk, ọkpọk |
+| `gbaa` | 1.46x | 15 contexts | ogbaan̄, egbaan̄, îgbaan̄ |
+| `ikaa` | 1.60x | 11 contexts | ikaan̄, ikikaan̄, ekikaan̄ |
+| `riọọ` | 1.54x | 12 contexts | nriọọk, riọọn̄, oriọọn̄ |
+
+### 6.4 Affix Compatibility (Co-occurrence)
+
+This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
+
+| Prefix | Suffix | Frequency | Examples |
+|--------|--------|-----------|----------|
+| `ek-` | `-be` | 15 words | ekpọkbe, ekifukbe |
+| `ik-` | `-n̄` | 15 words | ikwaan̄, ikikaan̄ |
+| `ek-` | `-n̄` | 10 words | ekimọọn̄, ekekaan̄ |
+
+### 6.5 Recursive Morpheme Segmentation
+
+Using **Recursive Hierarchical Substitutability**, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., `prefix-prefix-root-suffix`).
+
+| Word | Suggested Split | Confidence | Stem |
+|------|-----------------|------------|------|
+| ekinyambe | **`ek-inyam-be`** | 6.0 | `inyam` |
+| ekitumube | **`ek-itumu-be`** | 6.0 | `itumu` |
+| ekigwenbe | **`ek-igwen-be`** | 6.0 | `igwen` |
+| echichinibe | **`echichini-be`** | 4.5 | `echichini` |
+| echieekbe | **`echieek-be`** | 4.5 | `echieek` |
+| ekikpulube | **`ek-ik-pulu-be`** | 4.5 | `pulu` |
+| ikichieek | **`ik-ichieek`** | 4.5 | `ichieek` |
+| ekichichini | **`ek-ichichini`** | 4.5 | `ichichini` |
+| ekekikpulu | **`ek-ek-ik-pulu`** | 4.5 | `pulu` |
+| ekiweweek | **`ek-iweweek`** | 4.5 | `iweweek` |
+| ikibieen̄ | **`ik-ibiee-n̄`** | 3.0 | `ibiee` |
+| ekititiin̄ | **`ek-ititii-n̄`** | 3.0 | `ititii` |
+| etitiin̄be | **`etitii-n̄-be`** | 3.0 | `etitii` |
+| îriọọn̄be | **`îriọọ-n̄-be`** | 3.0 | `îriọọ` |
+| ekikpukpo | **`ek-ik-pukpo`** | 3.0 | `pukpo` |
+
+### 6.6 Linguistic Interpretation
+
+> **Automated Insight:**
+> The language ANN appears to be relatively isolating, or to have a highly fixed vocabulary. Word-level models perform nearly as well as subword models, indicating fewer productive morphological processes.
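The affix test described in 6.2 — strip a candidate unit and check that the remaining stem recurs elsewhere — can be sketched in a few lines. A toy version over the vocabulary file, assuming a hypothetical `word` column (the production scoring is more involved):

```python
# Sketch: naive global-substitutability test for candidate suffixes.
# A suffix is "productive" if stripping it often leaves a stem that
# other vocabulary entries also start with. Quadratic scan; fine for
# a ~4k-word vocabulary.
from collections import Counter
import pandas as pd

words = set(pd.read_parquet("models/vocabulary/ann_vocabulary.parquet")["word"])

support: Counter[str] = Counter()
for w in words:
    for k in (1, 2, 3):                       # candidate suffix lengths
        stem, suffix = w[:-k], w[-k:]
        if len(stem) >= 3 and any(
            v != w and v.startswith(stem) for v in words
        ):
            support[suffix] += 1

print(support.most_common(10))   # expect units like "be" and "n̄" to rank high
```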
+---
+## 7. Summary & Recommendations

 ![Performance Dashboard](visualizations/performance_dashboard.png)
 
 
@@ -338,11 +492,12 @@ Below are text samples generated from each Markov chain model:
 | Component | Recommended | Rationale |
 |-----------|-------------|-----------|
-| Tokenizer | **32k BPE** | Best compression (4.27x) with low UNK rate |
-| N-gram | **5-gram** | Lowest perplexity (253) |
-| Markov | **Context-4** | Highest predictability (93.1%) |
+| Tokenizer | **16k BPE** | Best compression (4.35x) |
+| N-gram | **2-gram** | Lowest perplexity (241) |
+| Markov | **Context-4** | Highest predictability (95.5%) |
 | Embeddings | **100d** | Balanced semantic capture and isotropy |

 ---
 ## Appendix: Metrics Glossary & Interpretation Guide
 
 
@@ -532,7 +687,8 @@ If you use these models in your research, please cite:
   author = {Kamali, Omar},
   title = {Wikilangs: Open NLP Models for Wikipedia Languages},
   year = {2025},
-  publisher = {HuggingFace},
+  doi = {10.5281/zenodo.18073153},
+  publisher = {Zenodo},
   url = {https://huggingface.co/wikilangs}
   institution = {Omneity Labs}
 }
 
@@ -548,7 +704,8 @@ MIT License - Free for academic and commercial use.
 - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
 - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
 - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
+- 🤝 Sponsor: [Featherless AI](https://featherless.ai)
 ---
 *Generated by Wikilangs Models Pipeline*

-*Report Date: 2025-12-27 06:05:57*
+*Report Date: 2026-01-03 05:13:43*
models/embeddings/monolingual/ann_128d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6c8ac147d21bde736ae8d0674b799ef6f26441430d0e7754d09ab5da9be7f851
-size 1026025890
+oid sha256:ce6196c912d733cb4ecafe0bf16e0d4c89052442dc36f0dbeb46533cf966be8c
+size 1026069585
models/embeddings/monolingual/ann_128d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 128,
   "version": "monolingual",
   "training_params": {
-    "dim": 128,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 128
   },
-  "vocab_size": 1947
+  "vocab_size": 1989
 }
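All the binary diffs in this commit follow the same pattern: only the three-line Git LFS pointer changes, while the blob itself lives in LFS storage keyed by the `oid`. A minimal sketch parsing one of these pointers:

```python
# Sketch: split a Git LFS pointer file into its version/oid/size fields.
def parse_lfs_pointer(text: str) -> dict[str, str]:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    assert fields["version"].startswith("https://git-lfs.github.com/spec/")
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:ce6196c912d733cb4ecafe0bf16e0d4c89052442dc36f0dbeb46533cf966be8c
size 1026069585
"""
info = parse_lfs_pointer(pointer)
print(info["oid"], int(info["size"]))   # ~1.03 GB for the new 128d binary
```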
models/embeddings/monolingual/ann_32d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:89d9bc50c043837d5b812b9600be7dba0384f9a55413156e947a0bcbc30ca5c1
-size 256530594
+oid sha256:a08097b63d64495bc93a7d4418641dbfe3c922fe0270d5f7d226318ad60d9b5c
+size 256542033
models/embeddings/monolingual/ann_32d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 32,
   "version": "monolingual",
   "training_params": {
-    "dim": 32,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 32
   },
-  "vocab_size": 1947
+  "vocab_size": 1989
 }
models/embeddings/monolingual/ann_64d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0da62b6af0039fbc61e23dffa4254301632136ee334f5519f04819ab39be4a25
-size 513029026
+oid sha256:2afa2ec65fb8007208da56111bface363437e9bec87e59314877a23345b9ad5e
+size 513051217
models/embeddings/monolingual/ann_64d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 64,
   "version": "monolingual",
   "training_params": {
-    "dim": 64,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 64
   },
-  "vocab_size": 1947
+  "vocab_size": 1989
 }
models/subword_markov/ann_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ecf547e06e5712d4b04214294e5fe958b9bf03373aa50626175860065f20428e
-size 26355
+oid sha256:133284a0112df666902ff1c8b3f87c98d928ca5e21cc040ac90519fc9b835b57
+size 24522
models/subword_markov/ann_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "subword",
   "language": "ann",
-  "unique_contexts": 293,
-  "total_transitions": 548630
+  "unique_contexts": 290,
+  "total_transitions": 538876
 }
models/subword_markov/ann_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f1269622d9cb0765bc517988d1e40192230b066ef5a1629ea9cff8a1101e8a24
-size 117957
+oid sha256:e64bce1c8aaafc988a5935e2ed628caf16c7fa8930527c46f194ce12fe4dc84a
+size 114074
models/subword_markov/ann_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "subword",
   "language": "ann",
-  "unique_contexts": 2827,
-  "total_transitions": 548137
+  "unique_contexts": 2537,
+  "total_transitions": 538383
 }
models/subword_markov/ann_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:81bc0f939df0f38a1f5ec9338173240ea58f4d86fc0a431cf7504471584893c3
-size 363748
+oid sha256:72b96d3df529060de51aa1fc4dcb9ab588af0929fb1e0eb2b958bff6d7a6b633
+size 338702
models/subword_markov/ann_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "subword",
   "language": "ann",
-  "unique_contexts": 15977,
-  "total_transitions": 547644
+  "unique_contexts": 13932,
+  "total_transitions": 537890
 }
models/subword_markov/ann_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:267ef9b402c45a76709f66b10ed72c3504d3ae9ca221e297e7c4c603c2d7470c
-size 840617
+oid sha256:e09659d50aeeca9954a013b1ca95d57253b9f038aa3a3b0c4df7c0d2fc0caa1e
+size 785453
models/subword_markov/ann_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "subword",
   "language": "ann",
-  "unique_contexts": 49379,
-  "total_transitions": 547151
+  "unique_contexts": 44651,
+  "total_transitions": 537397
 }
models/subword_ngram/ann_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:69538475aea5f87aef1184268f8bfc1a0b8dbb16c1a141680da52ca2adf5b059
-size 18665
+oid sha256:7b7879bdc16bf8a3f29b779a338720f6ce381e91fa8ba5e247518dc7ca628446
+size 17029
models/subword_ngram/ann_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "subword",
   "language": "ann",
-  "unique_ngrams": 1421,
-  "total_ngrams": 548630
+  "unique_ngrams": 1230,
+  "total_ngrams": 538876
 }
models/subword_ngram/ann_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1a148b3112af41ea4f39af82bc1c4be69f72c67b657e0eeb480f61328928eff9
-size 90871
+oid sha256:a22334c4abf593637f95f14f3195c7e9b977cab70ffe6f1ebd3ac954408c1e58
+size 82404
models/subword_ngram/ann_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "subword",
   "language": "ann",
-  "unique_ngrams": 7986,
-  "total_ngrams": 548137
+  "unique_ngrams": 7165,
+  "total_ngrams": 538383
 }
models/subword_ngram/ann_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:57b1e35698e957b0f60223159efebdd60b66929722e830d1399e6b4612b37666
-size 314192
+oid sha256:edff385989d8cb6ddba4a7096924e12c5f750a23c337bf178ed91664b6d637f8
+size 292528
models/subword_ngram/ann_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "subword",
   "language": "ann",
-  "unique_ngrams": 25888,
-  "total_ngrams": 547644
+  "unique_ngrams": 24184,
+  "total_ngrams": 537890
 }
models/tokenizer/ann_tokenizer_16k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:abf621f3a63c6598066de3019333d0af711209d13793bd1949e9c2d8369cf043
-size 511896
+oid sha256:685a0e9ed7c0107b85e07b0286c03eb8b5cd6fc5ada26786ea117b602d0ca131
+size 511767
models/tokenizer/ann_tokenizer_16k.vocab CHANGED
The diff for this file is too large to render.
 
models/tokenizer/ann_tokenizer_8k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b73ce8c5900da87366023f15d65ca67944aa8743a56de8103a1ad2424ebe8ada
-size 375004
+oid sha256:13e4e094459602ef55a87f4cad2bf9b7eaaeb3ca6f3d5f2b6a36760cf76610a2
+size 374711
models/tokenizer/ann_tokenizer_8k.vocab CHANGED
The diff for this file is too large to render.
 
models/vocabulary/ann_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:58d080164da488d03c299a8e9a02ddb54cd4b2da9b5cb12db34d81e88b321982
-size 70840
+oid sha256:8145b130fe314515c8d0e309cb26039c412cf86c9b7eeafd8670fe99f9cdda88
+size 69333
models/vocabulary/ann_vocabulary_metadata.json CHANGED
@@ -1,16 +1,16 @@
 {
   "language": "ann",
-  "vocabulary_size": 4397,
+  "vocabulary_size": 4243,
+  "variant": "full",
   "statistics": {
-    "type_token_ratio": 0.10227580737453947,
+    "type_token_ratio": 0.09944472997349618,
     "coverage": {
-      "top_100": 0.5540742674148956,
-      "top_1000": 0.8211076867477136,
-      "top_5000": 0.9479184443797496,
-      "top_10000": 0.9981126961340387
+      "top_100": 0.562737450998176,
+      "top_1000": 0.8278562142878737,
+      "top_5000": 0.9509427497455433
     },
-    "hapax_count": 5791,
-    "hapax_ratio": 0.5684138201806046,
+    "hapax_count": 5625,
+    "hapax_ratio": 0.5700243210376976,
     "total_documents": 493
   }
 }
models/word_markov/ann_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d2e556d1fd7d9a59f1dffeb5b564dd904143106170ef739f02c6f80b56b5172c
-size 341591
+oid sha256:0ba0963e3a66739914b91e29d2fbde25bf375a4973d157084f09f818b169e344
+size 334500
models/word_markov/ann_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "word",
   "language": "ann",
-  "unique_contexts": 10250,
-  "total_transitions": 133901
+  "unique_contexts": 9818,
+  "total_transitions": 98738
 }
models/word_markov/ann_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8281341dbefc67d701918a39a9e93bc92ab9c2e2461b6a982a5c37d16ed5ab04
-size 866375
+oid sha256:32de02b53a9ca990e598e15d5ee6ad630df14d4ec653fb740b4a7d42bdb26a5f
+size 845652
models/word_markov/ann_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "word",
   "language": "ann",
-  "unique_contexts": 46339,
-  "total_transitions": 133408
+  "unique_contexts": 45714,
+  "total_transitions": 98245
 }
models/word_markov/ann_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:93e13f3f7c176e0031e47e25b899f5aed4bbd76fe9a9ac1d9c2f76379c8b8fd4
-size 1357580
+oid sha256:ed08c5a00857486794ad9d25afd118b795f050f0abf1cd60420d0f37b096aa05
+size 1239826
models/word_markov/ann_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "word",
   "language": "ann",
-  "unique_contexts": 82397,
-  "total_transitions": 132915
+  "unique_contexts": 73222,
+  "total_transitions": 97752
 }
models/word_markov/ann_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2cbd65a9ec2496a97b6bb243bb584b5dd545fdfd674798917387d2da66487463
-size 1704982
+oid sha256:9952b21c610e474602739d4608bcdf47886b1e13139eb17d44e2d73ff21aff7f
+size 1490446
models/word_markov/ann_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "word",
   "language": "ann",
-  "unique_contexts": 104013,
-  "total_transitions": 132423
+  "unique_contexts": 86066,
+  "total_transitions": 97259
 }
models/word_ngram/ann_2gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:14fbde219525c990b739a22b458dc11166917bc6789d7b73d11e5231eb380678
-size 45646
+oid sha256:2c61da640686bd6bfeec4d6576ef9e29d217c1fa800dd2af2b9880484a2384b7
+size 34828
models/word_ngram/ann_2gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "word",
   "language": "ann",
-  "unique_ngrams": 3472,
-  "total_ngrams": 133901
+  "unique_ngrams": 2498,
+  "total_ngrams": 98738
 }
models/word_ngram/ann_3gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c41b88becd4a74c9903a966352910559008a2ae297ba677e163bdd3c91de9298
-size 84226
+oid sha256:e5769309a279012b1cb2f9f14506c918160ad7147e5ffe7fbb92c676234d85cc
+size 51996
models/word_ngram/ann_3gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "word",
   "language": "ann",
-  "unique_ngrams": 5764,
-  "total_ngrams": 133408
+  "unique_ngrams": 3289,
+  "total_ngrams": 98245
 }
models/word_ngram/ann_4gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c947edc75b3e415ed42a680d46a1cbe7325d564a49d11895f12e31dba3adef32
-size 146597
+oid sha256:cf708333454bbbf754ad5ccdfbe88eb025359ee3adeafccce332299eea699ef1
+size 82752
models/word_ngram/ann_4gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "word",
   "language": "ann",
-  "unique_ngrams": 9244,
-  "total_ngrams": 132915
+  "unique_ngrams": 4802,
+  "total_ngrams": 97752
 }
visualizations/embedding_isotropy.png CHANGED
visualizations/embedding_norms.png CHANGED
visualizations/embedding_similarity.png CHANGED

Git LFS Details (before)

  • SHA256: d2a151642e08c0daf7ef03d43418d91d8e7d1e943498650fd4d2b86b92182519
  • Pointer size: 131 Bytes
  • Size of remote file: 154 kB

Git LFS Details (after)

  • SHA256: 919f9711b7e9da651da331bd214571c59b4e719ea224e71e8166970d50296435
  • Pointer size: 131 Bytes
  • Size of remote file: 153 kB
visualizations/markov_branching.png CHANGED
visualizations/markov_contexts.png CHANGED
visualizations/markov_entropy.png CHANGED
visualizations/model_sizes.png CHANGED
visualizations/ngram_coverage.png CHANGED
visualizations/ngram_entropy.png CHANGED