omarkamali committed on
Commit 420c8c0 · verified · 1 Parent(s): 83fdf8f

Upload all models and assets for ami (20251001)

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. README.md +314 -137
  2. models/embeddings/monolingual/ami_128d.bin +2 -2
  3. models/embeddings/monolingual/ami_128d_metadata.json +5 -3
  4. models/embeddings/monolingual/ami_32d.bin +2 -2
  5. models/embeddings/monolingual/ami_32d_metadata.json +5 -3
  6. models/embeddings/monolingual/ami_64d.bin +2 -2
  7. models/embeddings/monolingual/ami_64d_metadata.json +5 -3
  8. models/subword_markov/ami_markov_ctx1_subword.parquet +2 -2
  9. models/subword_markov/ami_markov_ctx1_subword_metadata.json +2 -2
  10. models/subword_markov/ami_markov_ctx2_subword.parquet +2 -2
  11. models/subword_markov/ami_markov_ctx2_subword_metadata.json +2 -2
  12. models/subword_markov/ami_markov_ctx3_subword.parquet +2 -2
  13. models/subword_markov/ami_markov_ctx3_subword_metadata.json +2 -2
  14. models/subword_markov/ami_markov_ctx4_subword.parquet +2 -2
  15. models/subword_markov/ami_markov_ctx4_subword_metadata.json +2 -2
  16. models/subword_ngram/ami_2gram_subword.parquet +2 -2
  17. models/subword_ngram/ami_2gram_subword_metadata.json +2 -2
  18. models/subword_ngram/ami_3gram_subword.parquet +2 -2
  19. models/subword_ngram/ami_3gram_subword_metadata.json +2 -2
  20. models/subword_ngram/ami_4gram_subword.parquet +2 -2
  21. models/subword_ngram/ami_4gram_subword_metadata.json +2 -2
  22. models/tokenizer/ami_tokenizer_16k.model +2 -2
  23. models/tokenizer/ami_tokenizer_16k.vocab +0 -0
  24. models/tokenizer/ami_tokenizer_32k.model +2 -2
  25. models/tokenizer/ami_tokenizer_32k.vocab +0 -0
  26. models/tokenizer/ami_tokenizer_64k.model +2 -2
  27. models/tokenizer/ami_tokenizer_64k.vocab +0 -0
  28. models/tokenizer/ami_tokenizer_8k.model +2 -2
  29. models/tokenizer/ami_tokenizer_8k.vocab +0 -0
  30. models/vocabulary/ami_vocabulary.parquet +2 -2
  31. models/vocabulary/ami_vocabulary_metadata.json +10 -9
  32. models/word_markov/ami_markov_ctx1_word.parquet +2 -2
  33. models/word_markov/ami_markov_ctx1_word_metadata.json +2 -2
  34. models/word_markov/ami_markov_ctx2_word.parquet +2 -2
  35. models/word_markov/ami_markov_ctx2_word_metadata.json +2 -2
  36. models/word_markov/ami_markov_ctx3_word.parquet +2 -2
  37. models/word_markov/ami_markov_ctx3_word_metadata.json +2 -2
  38. models/word_markov/ami_markov_ctx4_word.parquet +2 -2
  39. models/word_markov/ami_markov_ctx4_word_metadata.json +2 -2
  40. models/word_ngram/ami_2gram_word.parquet +2 -2
  41. models/word_ngram/ami_2gram_word_metadata.json +2 -2
  42. models/word_ngram/ami_3gram_word.parquet +2 -2
  43. models/word_ngram/ami_3gram_word_metadata.json +2 -2
  44. models/word_ngram/ami_4gram_word.parquet +2 -2
  45. models/word_ngram/ami_4gram_word_metadata.json +2 -2
  46. visualizations/embedding_isotropy.png +0 -0
  47. visualizations/embedding_norms.png +0 -0
  48. visualizations/embedding_similarity.png +2 -2
  49. visualizations/markov_branching.png +0 -0
  50. visualizations/markov_contexts.png +0 -0
README.md CHANGED
@@ -23,14 +23,14 @@ dataset_info:
 metrics:
 - name: best_compression_ratio
   type: compression
-  value: 3.626
 - name: best_isotropy
   type: isotropy
-  value: 0.8477
 - name: vocabulary_size
   type: vocab
-  value: 31948
-generated: 2025-12-27
 ---

 # AMI - Wikilangs Models
@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ### Models & Assets

 - Tokenizers (8k, 16k, 32k, 64k)
-- N-gram models (2, 3, 4-gram)
-- Markov chains (context of 1, 2, 3 and 4)
 - Subword N-gram and Markov chains
-- Embeddings in various sizes and dimensions
 - Language Vocabulary
 - Language Statistics
 ![Performance Dashboard](visualizations/performance_dashboard.png)

 ### Analysis and Evaluation
@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
 - [4. Vocabulary Analysis](#4-vocabulary-analysis)
 - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
-- [6. Summary & Recommendations](#6-summary--recommendations)
 - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
 - [Visualizations Index](#visualizations-index)
@@ -68,56 +70,57 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ![Tokenizer Compression](visualizations/tokenizer_compression.png)

 ### Results

 | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
 |------------|-------------|---------------|----------|--------------|
-| **8k** | 3.080x | 3.05 | 0.1304% | 771,216 |
-| **16k** | 3.309x | 3.28 | 0.1401% | 717,947 |
-| **32k** | 3.478x | 3.45 | 0.1473% | 682,898 |
-| **64k** | 3.626x 🏆 | 3.59 | 0.1536% | 655,126 |

 ### Tokenization Examples

 Below are sample sentences tokenized with each vocabulary size:

-**Sample 1:** `sapal(幼苗)
-
-Maripa' no mako ko sapal no panay.
-
-
-Kasasiwasiw:Siwkulang 'Amis`

 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁sapal ( ) ▁mar ipa ' nomako ... (+10 more)` | 20 |
-| 16k | `▁sapal ( ) ▁mar ipa 'nomako ... (+10 more)` | 20 |
-| 32k | `▁sapal ( ) ▁mar ipa 'nomako ... (+10 more)` | 20 |
-| 64k | `▁sapal ( ) ▁maripa 'nomakoko ... (+9 more)` | 19 |

-**Sample 2:** `Talip(Sekato;Hakama^;Sukun;Sokato;Tarip、kuwaping a sowal: )`

 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁tal ip ( s ek ato ; hak ama ^ ... (+19 more)` | 29 |
-| 16k | `▁tal ip ( s ek ato ; hak ama ^ ... (+19 more)` | 29 |
-| 32k | `▁talip ( sek ato ; hak ama ^ ; s ... (+15 more)` | 25 |
-| 64k | `▁talip ( sek ato ; hak ama ^ ; s ... (+14 more)` | 24 |

-**Sample 3:** `taylin o pacaliwen no paylang a caciyaw, o Amilika no sowal i, plice mahaenay!`

 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁tay lino ▁pac aliw en ▁no ▁pay lang ▁a ... (+12 more)` | 22 |
-| 16k | `▁tay lin o ▁pacaliw en ▁no ▁pay lang ▁acaciyaw ... (+11 more)` | 21 |
-| 32k | `▁tay lin ▁o ▁pacaliwen ▁no ▁paylang ▁acaciyaw , o ... (+9 more)` | 19 |
-| 64k | `▁taylin ▁o ▁pacaliwen ▁no ▁paylang ▁acaciyaw , oamilika ... (+8 more)` | 18 |

 ### Key Findings

-- **Best Compression:** 64k achieves 3.626x compression
-- **Lowest UNK Rate:** 8k with 0.1304% unknown tokens
 - **Trade-off:** Larger vocabularies improve compression but increase model size
 - **Recommendation:** 32k vocabulary provides optimal balance for production use
@@ -126,57 +129,89 @@ Kasasiwasiw:Siwkulang 'Amis`

 ![N-gram Perplexity](visualizations/ngram_perplexity.png)

 ![N-gram Coverage](visualizations/ngram_coverage.png)

 ### Results

-| N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
-|--------|------------|---------|----------------|------------------|-------------------|
-| **2-gram** | 6,797 🏆 | 12.73 | 30,025 | 21.0% | 48.9% |
-| **2-gram** | 185 🏆 | 7.54 | 8,598 | 77.5% | 97.7% |
-| **3-gram** | 17,575 | 14.10 | 58,959 | 14.4% | 34.9% |
-| **3-gram** | 982 | 9.94 | 32,049 | 42.6% | 80.9% |
-| **4-gram** | 43,585 | 15.41 | 125,762 | 12.4% | 26.1% |
-| **4-gram** | 3,737 | 11.87 | 117,566 | 27.2% | 56.9% |

 ### Top 5 N-grams by Size

-**2-grams:**

 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `, o` | 10,654 |
-| 2 | `i ,` | 9,935 |
-| 3 | `. o` | 6,491 |
-| 4 | `ira ko` | 5,079 |
-| 5 | `’ ad` | 4,361 |

-**3-grams:**

 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `romi ’ ad` | 4,026 |
-| 2 | `ka ’ aloman` | 2,293 |
-| 3 | `’ aloman no` | 2,134 |
-| 4 | `] . (` | 2,065 |
-| 5 | `sa ’ osi` | 1,873 |

-**4-grams:**

 | Rank | N-gram | Count |
 |------|--------|-------|
-| 1 | `ka aloman no` | 2,121 |
-| 2 | `a romi ’ ad` | 1,630 |
-| 3 | `ko ka ’ aloman` | 1,534 |
-| 4 | `sa osi no` | 1,530 |
-| 5 | `ko sa ’ osi` | 1,509 |

 ### Key Findings

-- **Best Perplexity:** 2-gram with 185
 - **Entropy Trend:** Decreases with larger n-grams (more predictable)
-- **Coverage:** Top-1000 patterns cover ~57% of corpus
 - **Recommendation:** 4-gram or 5-gram for best predictive performance

 ---
@@ -184,55 +219,86 @@ Kasasiwasiw:Siwkulang 'Amis`

 ![Markov Entropy](visualizations/markov_entropy.png)

 ![Markov Branching](visualizations/markov_branching.png)

 ### Results

-| Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
-|---------|-------------|------------|------------------|-----------------|----------------|
-| **1** | 0.5042 | 1.418 | 4.24 | 80,205 | 49.6% |
-| **1** | 1.6851 | 3.216 | 12.15 | 4,318 | 0.0% |
-| **2** | 0.3377 | 1.264 | 2.05 | 339,828 | 66.2% |
-| **2** | 0.4256 | 1.343 | 2.40 | 52,454 | 57.4% |
-| **3** | 0.1611 | 1.118 | 1.34 | 695,243 | 83.9% |
-| **3** | 0.3826 | 1.304 | 2.20 | 125,822 | 61.7% |
-| **4** | 0.0722 🏆 | 1.051 | 1.13 | 934,421 | 92.8% |
-| **4** | 0.3723 🏆 | 1.294 | 1.92 | 276,161 | 62.8% |

-### Generated Text Samples

-Below are text samples generated from each Markov chain model:

 **Context Size 1:**

-1. `, 75 % , kimolmolay dadingo sinpon i sra apong , nikawrira , - tinsikiw ,`
-2. `’ atomo , senpitopaw si misaakoako misatapang ko 1 , likakawa haw i singko saadihay sato`
-3. `a sofitay no harana asay amipa iked misingkiwan tamdaw mangalefay ko pakayraan ko roma`

 **Context Size 2:**

-1. `, o congli tapang no naci - toic to sapifaolawaw to yotaya tamdaw mikapot to amilika sifo`
-2. `i , pakawas . onini ko sakasaan no tiawcaci konini a satefoc 100 liyad pisalofan i`
-3. `. o so elinay mafalic , halo tamdaw sato cangra a miharateng to nga ay`

 **Context Size 3:**

-1. `romi ad nai inkiris misiiked . o iraq mihayda to nai 1913 mihecaan a misatatad`
-2. `ka aloman no yincomin ( 原住民 ) , polong han i , 274 ko tamdaw . o`
-3. `’ aloman no tamdaw no kasafinacadan ( 族群 ) i , ko bunun ( 布農族 ) 1 %`

 **Context Size 4:**

-1. `ka aloman no roma a finacadan , polong han i , 71 ko tamdaw . o pa -`
-2. `a romi ad . no papotalay a kakafit list of current heads of state and government kasasiwasiw :`
-3. `ko ka aloman no yincomin ( 原住民 ) , polong han i , 838 ko tamdaw . o`

 ### Key Findings

-- **Best Predictability:** Context-4 with 92.8% predictability
 - **Branching Factor:** Decreases with context size (more deterministic)
-- **Memory Trade-off:** Larger contexts require more storage (276,161 contexts)
 - **Recommendation:** Context-3 or Context-4 for text generation

 ---
@@ -248,64 +314,64 @@ Below are text samples generated from each Markov chain model:

 | Metric | Value |
 |--------|-------|
-| Vocabulary Size | 31,948 |
-| Total Tokens | 962,770 |
-| Mean Frequency | 30.14 |
 | Median Frequency | 3 |
-| Frequency Std Dev | 634.99 |

 ### Most Common Words

 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | a | 59,912 |
-| 2 | no | 48,183 |
-| 3 | ko | 44,638 |
-| 4 | to | 40,008 |
-| 5 | i | 38,103 |
-| 6 | o | 30,368 |
-| 7 | ato | 10,842 |
-| 8 | tamdaw | 10,833 |
-| 9 | miheca | 6,862 |
-| 10 | sa | 6,789 |

 ### Least Common Words (from vocabulary)

 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | hahihay | 2 |
-| 2 | hiay | 2 |
-| 3 | pasitenokay | 2 |
-| 4 | satsuma | 2 |
-| 5 | pisamawmaw | 2 |
-| 6 | saigo | 2 |
-| 7 | tsumoru | 2 |
-| 8 | vetoma | 2 |
-| 9 | mitingting | 2 |
-| 10 | kalosaasik | 2 |

 ### Zipf's Law Analysis

 | Metric | Value |
 |--------|-------|
-| Zipf Coefficient | 1.1668 |
-| R² (Goodness of Fit) | 0.995322 |
 | Adherence Quality | **excellent** |

 ### Coverage Analysis

 | Top N Words | Coverage |
 |-------------|----------|
-| Top 100 | 51.0% |
-| Top 1,000 | 75.7% |
-| Top 5,000 | 89.3% |
-| Top 10,000 | 93.7% |

 ### Key Findings

 - **Zipf Compliance:** R²=0.9953 indicates excellent adherence to Zipf's law
-- **High Frequency Dominance:** Top 100 words cover 51.0% of corpus
-- **Long Tail:** 21,948 words needed for remaining 6.3% coverage

 ---
 ## 5. Word Embeddings Evaluation
@@ -318,24 +384,132 @@ Below are text samples generated from each Markov chain model:

 ![t-SNE Sentences](visualizations/tsne_sentences.png)

-### Model Comparison

-| Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
-|-------|------------|-----------|----------|----------|----------|
-| **mono_32d** | 12,970 | 32 | 3.455 | 0.855 | 0.8477 🏆 |
-| **mono_64d** | 12,970 | 64 | 3.941 | 0.764 | 0.8135 |
-| **mono_128d** | 12,970 | 128 | 4.314 | 0.719 | 0.5720 |
-| **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |

 ### Key Findings

-- **Best Isotropy:** mono_32d with 0.8477 (more uniform distribution)
-- **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
-- **Vocabulary Coverage:** All models cover 12,970 words
-- **Recommendation:** 100d for balanced semantic capture and efficiency

 ---
-## 6. Summary & Recommendations

 ![Performance Dashboard](visualizations/performance_dashboard.png)

@@ -343,11 +517,12 @@ Below are text samples generated from each Markov chain model:

 | Component | Recommended | Rationale |
 |-----------|-------------|-----------|
-| Tokenizer | **32k BPE** | Best compression (3.63x) with low UNK rate |
-| N-gram | **5-gram** | Lowest perplexity (185) |
-| Markov | **Context-4** | Highest predictability (92.8%) |
 | Embeddings | **100d** | Balanced semantic capture and isotropy |

 ---
 ## Appendix: Metrics Glossary & Interpretation Guide

@@ -537,7 +712,8 @@ If you use these models in your research, please cite:
 author = {Kamali, Omar},
 title = {Wikilangs: Open NLP Models for Wikipedia Languages},
 year = {2025},
-publisher = {HuggingFace},
 url = {https://huggingface.co/wikilangs}
 institution = {Omneity Labs}
 }
@@ -553,7 +729,8 @@ MIT License - Free for academic and commercial use.
 - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
 - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
 - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
 ---
 *Generated by Wikilangs Models Pipeline*

-*Report Date: 2025-12-27 05:44:49*

 metrics:
 - name: best_compression_ratio
   type: compression
+  value: 3.608
 - name: best_isotropy
   type: isotropy
+  value: 0.8374
 - name: vocabulary_size
   type: vocab
+  value: 0
+generated: 2026-01-03
 ---

 # AMI - Wikilangs Models
 
 ### Models & Assets

 - Tokenizers (8k, 16k, 32k, 64k)
+- N-gram models (2, 3, 4, 5-gram)
+- Markov chains (context of 1, 2, 3, 4 and 5)
 - Subword N-gram and Markov chains
+- Embeddings in various sizes and dimensions (aligned and unaligned)
 - Language Vocabulary
 - Language Statistics
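
To make the asset list above concrete, here is a minimal sketch of fetching one file with `huggingface_hub`. The repo id `wikilangs/ami` is an assumption based on the project naming, not something confirmed by this page; adjust `repo_id` (and `repo_type`, if the assets live in a dataset repo) to the actual repository.

```python
# Minimal sketch: download one listed asset from the Hub.
# NOTE: repo_id "wikilangs/ami" is hypothetical; use the real repo name.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="wikilangs/ami",  # hypothetical
    filename="models/tokenizer/ami_tokenizer_32k.model",
)
print(path)
```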
+
 ![Performance Dashboard](visualizations/performance_dashboard.png)

 ### Analysis and Evaluation
 
 - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
 - [4. Vocabulary Analysis](#4-vocabulary-analysis)
 - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
+- [6. Morphological Analysis (Experimental)](#6-morphological-analysis)
+- [7. Summary & Recommendations](#7-summary--recommendations)
 - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
 - [Visualizations Index](#visualizations-index)
 
 ![Tokenizer Compression](visualizations/tokenizer_compression.png)

+![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
+
+![Tokenizer OOV](visualizations/tokenizer_oov.png)
+
+![Total Tokens](visualizations/tokenizer_total_tokens.png)
+
 ### Results

 | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
 |------------|-------------|---------------|----------|--------------|
+| **8k** | 3.161x | 3.16 | 0.4493% | 709,382 |
+| **16k** | 3.338x | 3.34 | 0.4744% | 671,788 |
+| **32k** | 3.486x | 3.49 | 0.4954% | 643,295 |
+| **64k** | 3.608x 🏆 | 3.61 | 0.5128% | 621,527 |

 ### Tokenization Examples

 Below are sample sentences tokenized with each vocabulary size:

+**Sample 1:** `makomod(統治) I a mihecaan, misatapang a makomod ko Ripon to Taywan tangasa i a mi...`

 | Vocab | Tokens | Count |
 |-------|--------|-------|
+| 8k | `▁makomod ( ) ▁i ▁amihecaan , misatapang ... (+11 more)` | 21 |
+| 16k | `▁makomod ( 統治 ) ▁i ▁a ▁mihecaan , misatapanga ... (+10 more)` | 20 |
+| 32k | `▁makomod ( 統治 ) ▁i ▁a ▁mihecaan , misatapanga ... (+10 more)` | 20 |
+| 64k | `▁makomod ( 統治 ) ▁i ▁amihecaan , misatapanga ... (+10 more)` | 20 |

+**Sample 2:** `malitengay(老人家) Romadiw ci malitengay. (老人家在唱歌) 'Amis`

 | Vocab | Tokens | Count |
 |-------|--------|-------|
+| 8k | `▁malitengay ( ) ▁romadiw ▁ci ▁malitengay . ... (+10 more)` | 20 |
+| 16k | `▁malitengay ( 老人家 ) ▁romadiw ▁ci ▁malitengay . ▁( 老人家 ... (+6 more)` | 16 |
+| 32k | `▁malitengay ( 老人家 ) ▁romadiw ▁ci ▁malitengay . ▁( 老人家在 ... (+5 more)` | 15 |
+| 64k | `▁malitengay ( 老人家 ) ▁romadiw ▁ci ▁malitengay . ▁( 老人家在 ... (+5 more)` | 15 |

+**Sample 3:** `Sokoy 木鱉果 縮圖|sokoy Caay to ka'aloman ko mipaloma'ay to matiniay a sokay, carekah...`

 | Vocab | Tokens | Count |
 |-------|--------|-------|
+| 8k | `▁so koy ▁縮圖 | so koy ... (+21 more)` | 31 |
+| 16k | `▁sokoy ▁縮圖 | so koy caay ... (+19 more)` | 29 |
+| 32k | `▁sokoy ▁木 ▁縮圖 | so koy caayto ... (+17 more)` | 27 |
+| 64k | `▁sokoy ▁木 ▁縮圖 | sokoy caaytoka ... (+16 more)` | 26 |

 ### Key Findings

+- **Best Compression:** 64k achieves 3.608x compression
+- **Lowest UNK Rate:** 8k with 0.4493% unknown tokens
 - **Trade-off:** Larger vocabularies improve compression but increase model size
 - **Recommendation:** 32k vocabulary provides optimal balance for production use
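
The `.model`/`.vocab` file pairs and the `▁` word-boundary marker in the samples above suggest SentencePiece tokenizers; under that assumption, the compression column can be approximated as characters per token:

```python
# Sketch: characters-per-token compression, assuming the .model files
# are SentencePiece models (suggested by the "▁" pieces above).
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="models/tokenizer/ami_tokenizer_32k.model")

text = "Romadiw ci malitengay."
pieces = sp.encode(text, out_type=str)
print(pieces)
# The report's exact compression definition may differ (e.g. bytes/token).
print(f"compression ≈ {len(text) / len(pieces):.2f}x")
```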
 
 ![N-gram Perplexity](visualizations/ngram_perplexity.png)

+![N-gram Unique](visualizations/ngram_unique.png)
+
 ![N-gram Coverage](visualizations/ngram_coverage.png)

 ### Results

+| N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
+|--------|---------|------------|---------|----------------|------------------|-------------------|
+| **2-gram** | Word | 6,678 | 12.71 | 22,550 | 20.3% | 47.3% |
+| **2-gram** | Subword | 207 🏆 | 7.69 | 6,731 | 78.5% | 98.2% |
+| **3-gram** | Word | 12,757 | 13.64 | 35,948 | 17.2% | 36.4% |
+| **3-gram** | Subword | 1,373 | 10.42 | 25,440 | 36.9% | 81.6% |
+| **4-gram** | Word | 30,756 | 14.91 | 77,159 | 15.4% | 26.9% |
+| **4-gram** | Subword | 6,401 | 12.64 | 95,881 | 18.2% | 53.7% |

 ### Top 5 N-grams by Size

+**2-grams (Word):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `ira ko` | 5,064 |
+| 2 | `romi ad` | 4,019 |
+| 3 | `i miheca` | 2,827 |
+| 4 | `a tamdaw` | 2,806 |
+| 5 | `a sowal` | 2,768 |
+
+**3-grams (Word):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `ka aloman no` | 2,123 |
+| 2 | `a romi ad` | 1,671 |
+| 3 | `ko tamdaw o` | 1,565 |
+| 4 | `sa osi no` | 1,535 |
+| 5 | `ko ka aloman` | 1,534 |
+
+**4-grams (Word):**

 | Rank | N-gram | Count |
 |------|--------|-------|
+| 1 | `ko sa osi no` | 1,482 |
+| 2 | `ko ka aloman no` | 1,395 |
+| 3 | `nina angan tilid i` | 853 |
+| 4 | `nano nina angan tilid` | 845 |
+| 5 | `o roma sato i` | 766 |

+**2-grams (Subword):**

 | Rank | N-gram | Count |
 |------|--------|-------|
+| 1 | `o _` | 200,857 |
+| 2 | `a _` | 143,109 |
+| 3 | `a n` | 139,584 |
+| 4 | `_ k` | 106,296 |
+| 5 | `a y` | 96,390 |

+**3-grams (Subword):**

 | Rank | N-gram | Count |
 |------|--------|-------|
+| 1 | `a y _` | 60,395 |
+| 2 | `_ a _` | 58,815 |
+| 3 | `a n _` | 54,544 |
+| 4 | `n o _` | 54,458 |
+| 5 | `t o _` | 53,668 |
+
+**4-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `_ n o _` | 47,644 |
+| 2 | `_ k o _` | 44,141 |
+| 3 | `_ t o _` | 37,131 |
+| 4 | `o _ k a` | 18,566 |
+| 5 | `a y _ a` | 15,366 |

 ### Key Findings

+- **Best Perplexity:** 2-gram (subword) with 207
 - **Entropy Trend:** Decreases with larger n-grams (more predictable)
+- **Coverage:** Top-1000 patterns cover ~54% of corpus
 - **Recommendation:** 4-gram or 5-gram for best predictive performance
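
The coverage figures above can be reproduced from the released parquet tables. A sketch, assuming columns named `ngram` and `count` (the actual schema is not documented on this page):

```python
# Sketch: top-N coverage of a corpus by its most frequent n-grams.
# Column names "ngram" and "count" are assumptions about the schema.
import pandas as pd

df = pd.read_parquet("models/word_ngram/ami_2gram_word.parquet")
counts = df["count"].sort_values(ascending=False)

total = counts.sum()
for n in (100, 1000):
    print(f"Top-{n} coverage: {counts.head(n).sum() / total:.1%}")
```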

 ---
 
 ![Markov Entropy](visualizations/markov_entropy.png)

+![Markov Contexts](visualizations/markov_contexts.png)
+
 ![Markov Branching](visualizations/markov_branching.png)

 ### Results

+| Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
+|---------|---------|-------------|------------|------------------|-----------------|----------------|
+| **1** | Word | 0.6170 | 1.534 | 4.54 | 72,606 | 38.3% |
+| **1** | Subword | 1.5133 | 2.855 | 9.96 | 4,060 | 0.0% |
+| **2** | Word | 0.3021 | 1.233 | 1.87 | 329,508 | 69.8% |
+| **2** | Subword | 0.4126 | 1.331 | 2.39 | 40,428 | 58.7% |
+| **3** | Word | 0.1213 | 1.088 | 1.23 | 614,195 | 87.9% |
+| **3** | Subword | 0.3832 | 1.304 | 2.24 | 96,490 | 61.7% |
+| **4** | Word | 0.0415 🏆 | 1.029 | 1.07 | 756,623 | 95.8% |
+| **4** | Subword | 0.3927 | 1.313 | 2.01 | 215,619 | 60.7% |

+### Generated Text Samples (Word-based)

+Below are text samples generated from each word-based Markov chain model:

 **Context Size 1:**

+1. `a niyaro ira itiya mitahidang ko nani sera ira ko tamdaw no i likisi kingkiwso a`
+2. `no kalingko posong kowan 395 satoko cilafas to sapitoripes paysin hananay a tayni i lalan matengil`
+3. `ko sowal 波札那共和國 o no switzerland 瑞士 anini a tapolo malowid no sici misatapang romakat cira`

 **Context Size 2:**

+1. `ira ko piawniya taerniya maciton ato seroys etal a cakoma tamdaw chakma o sangco fociyaw theravāda k...`
+2. `romi ad pi arawan a patefoc ano ca i kiwkay ato mimokongay foksi ci cang congmin zhang`
+3. `i miheca saka 4 folad 22 romi ad no papotalay a kakafit list of current heads of`

 **Context Size 3:**

+1. `ka aloman no tamdaw no kasafinacadan i ko ira ko picodadan 台東專科 原住民族部落大學 空中大學 i niyaro ira ko`
+2. `a romi ad pawsa sato kiya wina niya wawa a pasowal jiya wina ningra ya saan ya wina`
+3. `ko tamdaw o roma sato i 31 ko tamdaw o pasinto no ka aloman no roma a finacadan`

 **Context Size 4:**

+1. `ko sa osi no parod no loma 921 ko sa osi no tamdaw 98 ko ka aloman no roma`
+2. `ko ka aloman no yincomin polong han i 97 ko tamdaw o roma sato i 9 ko ka aloman`
+3. `nina angan tilid i 18 南アフリカ共和国 日本外務省 nano nina angan tilid pdf i 24 7 government of ireland article`
+
+
+### Generated Text Samples (Subword-based)
+
+Below are text samples generated from each subword-based Markov chain model:
+
+**Context Size 1:**
+
+1. `ayayah_n_n_ka._k`
+2. `_no_(池上田部—“f_nc_`
+3. `omicecakoli_para`
+
+**Context Size 2:**
+
+1. `o_lay_tan_a_kitay`
+2. `a_ko_cininay_to_a`
+3. `analay_tok_atoker`
+
+**Context Size 3:**
+
+1. `ay_a_honti”_ni_kit`
+2. `_a_roman_no_maka,_`
+3. `an_of_stas_no_paka`
+
+**Context Size 4:**
+
+1. `_no_opi_lilay._sa’o`
+2. `_ko_pikinko-’aloma’`
+3. `_to_tasiya_finaca_a`

 ### Key Findings

+- **Best Predictability:** Context-4 (word) with 95.8% predictability
 - **Branching Factor:** Decreases with context size (more deterministic)
+- **Memory Trade-off:** Larger contexts require more storage (215,619 contexts)
 - **Recommendation:** Context-3 or Context-4 for text generation
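
A sketch of the sampling loop such a chain implies: look up the current context, draw the next word in proportion to its transition count, then slide the window. The column names `context`, `next` and `count` are assumptions, not documented on this page:

```python
# Sketch: sample from a context-2 word Markov chain.
# Columns "context", "next", "count" are assumed, not documented here.
import random
import pandas as pd

df = pd.read_parquet("models/word_markov/ami_markov_ctx2_word.parquet")

def generate(seed, steps=15):
    out = list(seed)
    for _ in range(steps):
        rows = df[df["context"] == " ".join(out[-2:])]
        if rows.empty:          # unseen context: stop (no backoff here)
            break
        out.append(random.choices(rows["next"].tolist(),
                                  weights=rows["count"].tolist())[0])
    return " ".join(out)

print(generate(("ira", "ko")))
```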

 ---
 
 | Metric | Value |
 |--------|-------|
+| Vocabulary Size | 29,996 |
+| Total Tokens | 911,467 |
+| Mean Frequency | 30.39 |
 | Median Frequency | 3 |
+| Frequency Std Dev | 650.13 |

 ### Most Common Words

 | Rank | Word | Frequency |
 |------|------|-----------|
+| 1 | a | 59,636 |
+| 2 | no | 47,923 |
+| 3 | ko | 44,308 |
+| 4 | to | 39,595 |
+| 5 | i | 37,830 |
+| 6 | o | 30,176 |
+| 7 | ato | 10,792 |
+| 8 | tamdaw | 10,688 |
+| 9 | miheca | 6,765 |
+| 10 | sa | 6,716 |

 ### Least Common Words (from vocabulary)

 | Rank | Word | Frequency |
 |------|------|-----------|
+| 1 | paiyo | 2 |
+| 2 | parangalan | 2 |
+| 3 | 對豐年祭的一些看法 | 2 |
+| 4 | kalikowatan | 2 |
+| 5 | pisifat | 2 |
+| 6 | suise | 2 |
+| 7 | pililafangan | 2 |
+| 8 | sapikomod | 2 |
+| 9 | piselong | 2 |
+| 10 | ekelay | 2 |

 ### Zipf's Law Analysis

 | Metric | Value |
 |--------|-------|
+| Zipf Coefficient | 1.1663 |
+| R² (Goodness of Fit) | 0.995345 |
 | Adherence Quality | **excellent** |

 ### Coverage Analysis

 | Top N Words | Coverage |
 |-------------|----------|
+| Top 100 | 52.9% |
+| Top 1,000 | 76.5% |
+| Top 5,000 | 89.8% |
+| Top 10,000 | 94.1% |

 ### Key Findings

 - **Zipf Compliance:** R²=0.9953 indicates excellent adherence to Zipf's law
+- **High Frequency Dominance:** Top 100 words cover 52.9% of corpus
+- **Long Tail:** 19,996 words needed for remaining 5.9% coverage
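
The Zipf coefficient and R² above are a least-squares fit of log frequency against log rank. A sketch, assuming the vocabulary parquet exposes a `frequency` column (an assumption about the schema):

```python
# Sketch: Zipf fit log(freq) ≈ -s·log(rank) + c, plus R² of the fit.
import numpy as np
import pandas as pd

freq = (pd.read_parquet("models/vocabulary/ami_vocabulary.parquet")["frequency"]
          .sort_values(ascending=False).to_numpy(dtype=float))
log_rank = np.log(np.arange(1, len(freq) + 1))
log_freq = np.log(freq)

slope, intercept = np.polyfit(log_rank, log_freq, 1)
resid = log_freq - (slope * log_rank + intercept)
r2 = 1 - (resid @ resid) / ((log_freq - log_freq.mean()) @ (log_freq - log_freq.mean()))
print(f"Zipf coefficient: {-slope:.4f}, R²: {r2:.6f}")
```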

 ---
 ## 5. Word Embeddings Evaluation
 
 ![t-SNE Sentences](visualizations/tsne_sentences.png)

+### 5.1 Cross-Lingual Alignment
+
+> *Note: Multilingual alignment visualization not available for this language.*
+
+### 5.2 Model Comparison
+
+| Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
+|-------|-----------|----------|------------------|---------------|----------------|
+| **mono_32d** | 32 | 0.8374 🏆 | 0.3299 | N/A | N/A |
+| **mono_64d** | 64 | 0.7849 | 0.2563 | N/A | N/A |
+| **mono_128d** | 128 | 0.4896 | 0.2197 | N/A | N/A |

 ### Key Findings

+- **Best Isotropy:** mono_32d with 0.8374 (more uniform distribution)
+- **Semantic Density:** Average pairwise similarity of 0.2686. Lower values indicate better semantic separation.
+- **Alignment Quality:** No aligned models evaluated in this run.
+- **Recommendation:** 128d aligned for best cross-lingual performance
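
The page does not say which isotropy estimator the pipeline uses; one common proxy scores how evenly variance spreads across principal directions (1.0 = perfectly uniform). A sketch under that assumption only:

```python
# Sketch: isotropy proxy = min/max eigenvalue of the covariance matrix.
# Whether this matches the pipeline's estimator is an assumption.
import numpy as np

def isotropy(vectors: np.ndarray) -> float:
    centered = vectors - vectors.mean(axis=0)
    eig = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
    return float(max(eig.min(), 0.0) / eig.max())

rng = np.random.default_rng(0)
vecs = rng.normal(size=(5000, 32))
print(isotropy(vecs))   # high: all directions carry similar variance
vecs[:, 0] *= 10        # exaggerate one direction
print(isotropy(vecs))   # much lower: anisotropic
```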

 ---
+## 6. Morphological Analysis (Experimental)
+
+> ⚠️ **Warning:** This language shows low morphological productivity. The statistical signals used for this analysis may be noisy or less reliable than for morphologically rich languages.
+
+This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
+
+### 6.1 Productivity & Complexity
+
+| Metric | Value | Interpretation | Recommendation |
+|--------|-------|----------------|----------------|
+| Productivity Index | **0.000** | Low morphological productivity | ⚠️ Likely unreliable |
+| Idiomaticity Gap | **-1.000** | Low formulaic content | - |
+
+### 6.2 Affix Inventory (Productive Units)
+
+These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts.
+
+#### Productive Prefixes
+| Prefix | Examples |
+|--------|----------|
+| `ma-` | mapateli, matoasto, macekelay |
+| `mi-` | milika, misahiraterateng, milafo |
+| `ka-` | kariponan, kaorira, kalapaliw |
+| `pa-` | pafatisay, paherekan, palecapu |
+| `sa-` | safaniyotan, sarosaros, sakiikoray |
+| `pi-` | piaw, pidemak, pirayray |
+| `ta-` | tahaf, tanetekay, tadamarorayay |
+| `mal-` | malamisiieday, malikiday, malatoloay |
+
+#### Productive Suffixes
+| Suffix | Examples |
+|--------|----------|
+| `-n` | balkan, iskawalian, otoman |
+| `-y` | elay, qehuy, macekelay |
+| `-ay` | elay, macekelay, tanetekay |
+| `-an` | balkan, iskawalian, otoman |
+| `-ng` | jinfeng, kopitahidang, arawang |
+| `-en` | haratengen, adihayen, tatayalen |
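
A toy version of the substitutability criterion just described: a candidate prefix counts as productive when stripping it leaves a stem that is independently attested. This illustrates the stated idea only; it is not the pipeline's actual implementation.

```python
# Sketch: words supporting a candidate prefix, i.e. stripping the
# prefix leaves a stem that also occurs in the vocabulary on its own.
def prefix_support(prefix: str, vocab: set[str]) -> list[str]:
    hits = []
    for word in vocab:
        stem = word.removeprefix(prefix)
        if stem != word and stem in vocab:
            hits.append(word)
    return sorted(hits)

toy_vocab = {"demak", "pidemak", "tilid", "pitilid", "loma", "paloma"}
print(prefix_support("pi", toy_vocab))  # ['pidemak', 'pitilid']
```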
+
+### 6.3 Bound Stems (Lexical Roots)
+
+Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid.
+
+| Stem | Cohesion | Substitutability | Examples |
+|------|----------|------------------|----------|
+| `emak` | 2.36x | 36 contexts | demak, hemak, ademak |
+| `alom` | 1.99x | 51 contexts | alomi, aloma, paloma |
+| `ilid` | 2.22x | 32 contexts | tilid, atilid, pitilid |
+| `dema` | 2.18x | 33 contexts | demak, ademak, odemak |
+| `olon` | 1.98x | 47 contexts | kolon, tolon, polon |
+| `iren` | 2.34x | 25 contexts | ireng, yiren, sairen |
+| `ihec` | 2.13x | 28 contexts | niheca, kiheca, ciheci |
+| `taki` | 2.23x | 15 contexts | takid, takimi, kitaki |
+| `ngra` | 2.05x | 19 contexts | ingra, angra, cngra |
+| `onga` | 1.49x | 55 contexts | fonga, ongay, tonga |
+| `mihe` | 2.10x | 14 contexts | mihea, miheca, mihemek |
+| `itak` | 1.81x | 22 contexts | kitakt, mitaka, kitaki |
+
+### 6.4 Affix Compatibility (Co-occurrence)
+
+This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
+
+| Prefix | Suffix | Frequency | Examples |
+|--------|--------|-----------|----------|
+| `ma-` | `-y` | 239 words | masamaanay, malekoay |
+| `ma-` | `-ay` | 238 words | masamaanay, malekoay |
+| `ka-` | `-n` | 174 words | kalamkamen, kasopedan |
+| `mi-` | `-y` | 173 words | misamoraday, mipelengay |
+| `mi-` | `-ay` | 169 words | misamoraday, mipelengay |
+| `ka-` | `-an` | 154 words | kasopedan, kacitiyadan |
+| `pa-` | `-n` | 122 words | paecasan, pahapingan |
+| `pi-` | `-n` | 122 words | pitokadan, pitengilan |
+| `pi-` | `-an` | 117 words | pitokadan, pitengilan |
+| `pa-` | `-y` | 81 words | papaysoay, pakaenay |
+
+### 6.5 Recursive Morpheme Segmentation
+
+Using **Recursive Hierarchical Substitutability**, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., `prefix-prefix-root-suffix`).
+
+| Word | Suggested Split | Confidence | Stem |
+|------|-----------------|------------|------|
+| masataporoay | **`ma-sa-ta-poro-ay`** | 9.0 | `poro` |
+| pipafilongan | **`pi-pa-filo-ng-an`** | 9.0 | `filo` |
+| masamaamaanay | **`ma-sa-ma-amaan-ay`** | 9.0 | `amaan` |
+| papisatoronen | **`pa-pi-sa-toron-en`** | 9.0 | `toron` |
+| milinganganay | **`mi-linga-ng-an-ay`** | 9.0 | `linga` |
+| mikapolongan | **`mi-ka-polo-ng-an`** | 9.0 | `polo` |
+| kasakakitaan | **`ka-sa-ka-kita-an`** | 9.0 | `kita` |
+| mapanganganay | **`ma-pa-ngang-an-ay`** | 9.0 | `ngang` |
+| talolongay | **`ta-lolo-ng-ay`** | 7.5 | `lolo` |
+| mipalawacoay | **`mi-pa-lawaco-ay`** | 7.5 | `lawaco` |
+| sakalaloodan | **`sa-ka-lalood-an`** | 7.5 | `lalood` |
+| masakapahay | **`ma-sa-ka-pahay`** | 7.5 | `pahay` |
+| pisaomahan | **`pi-sa-omah-an`** | 7.5 | `omah` |
+| pakapatayay | **`pa-ka-pa-tayay`** | 7.5 | `tayay` |
+| mamipadoedo | **`ma-mi-pa-doedo`** | 7.5 | `doedo` |
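
A toy sketch of the recursive peeling behind these splits: strip affixes from §6.2 while the residue stays long enough to be a stem. The stopping rule (`min_stem`) is invented for illustration; the real scorer uses substitutability statistics rather than a length threshold.

```python
# Sketch: recursively peel known affixes, keeping the residue as stem.
# Affix lists from section 6.2; min_stem is an illustrative threshold.
PREFIXES = ("ma", "mi", "ka", "pa", "sa", "pi", "ta")
SUFFIXES = ("ay", "an", "en", "ng", "n", "y")

def segment(word: str, min_stem: int = 4) -> list[str]:
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= min_stem:
            return [p] + segment(word[len(p):], min_stem)
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= min_stem:
            return segment(word[:-len(s)], min_stem) + [s]
    return [word]

print("-".join(segment("masataporoay")))  # ma-sa-ta-poro-ay
```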
+
+### 6.6 Linguistic Interpretation
+
+> **Automated Insight:**
+> The language AMI appears to be more isolating or to have a highly fixed vocabulary. Word-level models perform nearly as well as subword models, indicating fewer productive morphological processes.
+
+---
+## 7. Summary & Recommendations

 ![Performance Dashboard](visualizations/performance_dashboard.png)

 | Component | Recommended | Rationale |
 |-----------|-------------|-----------|
+| Tokenizer | **64k BPE** | Best compression (3.61x) |
+| N-gram | **2-gram** | Lowest perplexity (207) |
+| Markov | **Context-4** | Highest predictability (95.8%) |
 | Embeddings | **100d** | Balanced semantic capture and isotropy |

+
 ---
 ## Appendix: Metrics Glossary & Interpretation Guide

 author = {Kamali, Omar},
 title = {Wikilangs: Open NLP Models for Wikipedia Languages},
 year = {2025},
+doi = {10.5281/zenodo.18073153},
+publisher = {Zenodo},
 url = {https://huggingface.co/wikilangs}
 institution = {Omneity Labs}
 }
 
 - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
 - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
 - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
+- 🤝 Sponsor: [Featherless AI](https://featherless.ai)
 ---
 *Generated by Wikilangs Models Pipeline*

+*Report Date: 2026-01-03 05:06:08*
models/embeddings/monolingual/ami_128d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:996aff94cf3bb9c728be627b571a1edeead0f09cda7df9bb07117bf42ce1e8f6
-size 1037511856
+oid sha256:15905bdf3ef2a2c33080d911e43499747f02a98b908fe2fd8f60829e51b06e3b
+size 1036991498

models/embeddings/monolingual/ami_128d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 128,
   "version": "monolingual",
   "training_params": {
-    "dim": 128,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 128
   },
-  "vocab_size": 12970
+  "vocab_size": 12471
 }
models/embeddings/monolingual/ami_32d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5d60b5f427ae174c4924ceeb6752f7bab8acecf6c833dd0975f135d607bb103a
-size 259550896
+oid sha256:1637a213277140817cb0abdc86b03667570ab2c62b90fadaab2d4f67dd7d34c1
+size 259413770

models/embeddings/monolingual/ami_32d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 32,
   "version": "monolingual",
   "training_params": {
-    "dim": 32,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 32
   },
-  "vocab_size": 12970
+  "vocab_size": 12471
 }
models/embeddings/monolingual/ami_64d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f6d0645c29630c896eba9768226cbbdb875b22ef317a5545e1a401c3e2661be0
-size 518871216
+oid sha256:d86b494753732d38bf3c59ecf55e84334333d1b828be2bb495f8f20346d41fe7
+size 518606346

models/embeddings/monolingual/ami_64d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 64,
   "version": "monolingual",
   "training_params": {
-    "dim": 64,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 64
  },
-  "vocab_size": 12970
+  "vocab_size": 12471
 }
models/subword_markov/ami_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:71450850ae7c3385727c3b3747d4c83449ea0f95fc516e229eaa814f5c4fcccd
-size 294376
+oid sha256:bd6c5414c5d99f255c9a24fd799f5715c61d98151829429da1c891c950c71b6c
+size 237122

models/subword_markov/ami_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "subword",
   "language": "ami",
-  "unique_contexts": 4318,
-  "total_transitions": 6575362
+  "unique_contexts": 4060,
+  "total_transitions": 5416535
 }

models/subword_markov/ami_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ee631c071b1253baacaf7694f59e48c9b7771120278456a30f46ba270682f4a7
-size 1143059
+oid sha256:c95bebddb17ca6bb55229dc5f814136974b0cdf0bec76337ee41e0101f6d951d
+size 896649

models/subword_markov/ami_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "subword",
   "language": "ami",
-  "unique_contexts": 52454,
-  "total_transitions": 6573527
+  "unique_contexts": 40428,
+  "total_transitions": 5414744
 }

models/subword_markov/ami_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:db3b0de53581d8bf1696e13a9088efb66d3b9928df878c5ad2b30d560b51efb9
-size 2579415
+oid sha256:3bdee46d76e034d877f43a9063eb7dced0030fd717bea264027433f451c9d30b
+size 2005109

models/subword_markov/ami_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "subword",
   "language": "ami",
-  "unique_contexts": 125822,
-  "total_transitions": 6571692
+  "unique_contexts": 96490,
+  "total_transitions": 5412953
 }

models/subword_markov/ami_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7c988764c01283eb4b190b72c06246b2298f1ef22bf8c14d81f60b7b568a0393
-size 5188374
+oid sha256:5583564cfdde5d8d793971d0b83f3491b51f34500300b4550db8c3f95b407fdc
+size 4098231

models/subword_markov/ami_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "subword",
   "language": "ami",
-  "unique_contexts": 276161,
-  "total_transitions": 6569857
+  "unique_contexts": 215619,
+  "total_transitions": 5411162
 }
models/subword_ngram/ami_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:869792fa908529a263bfe81ecdcfe359b7706221509eaab21f94c7873a71c428
-size 113262
+oid sha256:e1026db4cb824d7fb2d17207e6726e192995fccc42121a6deb342c7b4330714a
+size 88989

models/subword_ngram/ami_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "subword",
   "language": "ami",
-  "unique_ngrams": 8598,
-  "total_ngrams": 6575362
+  "unique_ngrams": 6731,
+  "total_ngrams": 5416535
 }

models/subword_ngram/ami_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3a3549ed31695504b9278dc147e5534a52e6d5fc06f9304ad730ba717a3f50a3
-size 422836
+oid sha256:75fd09f81b66ca76353d7986bbb6cbcf010ea89e9f05da2205a5da78535f5399
+size 336416

models/subword_ngram/ami_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "subword",
   "language": "ami",
-  "unique_ngrams": 32049,
-  "total_ngrams": 6573527
+  "unique_ngrams": 25440,
+  "total_ngrams": 5414744
 }

models/subword_ngram/ami_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:40f67c05d8d68f78e02e07923b387ce8896954839d12c21b3f122450c3feb7b0
-size 1419899
+oid sha256:2dba14c53ee7a60b255aeac3e25837701b86d507511cedb46513b241b52709d2
+size 1158460

models/subword_ngram/ami_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "subword",
   "language": "ami",
-  "unique_ngrams": 117566,
-  "total_ngrams": 6571692
+  "unique_ngrams": 95881,
+  "total_ngrams": 5412953
 }
models/tokenizer/ami_tokenizer_16k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:92cc2985510da9d8ea3f08c9c2fe0987e1317dd39fdb442e977d73465247ddc8
-size 497352
+oid sha256:717608566d5035e11e0b6a237c9dc8de306a618502b3c9c2d163cff3d89a1fd3
+size 504074

models/tokenizer/ami_tokenizer_16k.vocab CHANGED
The diff for this file is too large to render.

models/tokenizer/ami_tokenizer_32k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:84774ac22437cd71ddffff17b52c20adc0fc83af168f378c75ec102a479a4062
-size 772651
+oid sha256:65580b9ce13917e013a326ec049db42b294a2d5b301ede6cece658a66bf01a32
+size 812779

models/tokenizer/ami_tokenizer_32k.vocab CHANGED
The diff for this file is too large to render.

models/tokenizer/ami_tokenizer_64k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cf4b9eb22cd684b3c74f359de1d2da1c756857c17d5ae812e601cbcd70722174
-size 1376238
+oid sha256:cc91af1f68cf46d19a5cf0946b2630e7d893f6743a951a7b586d0bab9b304a41
+size 1350707

models/tokenizer/ami_tokenizer_64k.vocab CHANGED
The diff for this file is too large to render.

models/tokenizer/ami_tokenizer_8k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:62a8450b8350a9a3296893cca86f7be81850b877d639aa20e246afe3827fce8f
-size 363501
+oid sha256:299585e3fae7c7c16e32939fafacf44061bf52adbb735a7cc3118845a7421988
+size 367877

models/tokenizer/ami_tokenizer_8k.vocab CHANGED
The diff for this file is too large to render.
models/vocabulary/ami_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:56d1101428b77726e336991569026e53477c19502cf4df6ee4b3ef0446ea2fd6
-size 556457
+oid sha256:5b2554bb753aa34783bd74f1d3bc80f61a566f9e2273bab9bfdd7d5526d3aa32
+size 519354

models/vocabulary/ami_vocabulary_metadata.json CHANGED
@@ -1,16 +1,17 @@
 {
   "language": "ami",
-  "vocabulary_size": 31948,
+  "vocabulary_size": 29996,
+  "variant": "full",
   "statistics": {
-    "type_token_ratio": 0.07917903648473924,
+    "type_token_ratio": 0.07616371567334841,
     "coverage": {
-      "top_100": 0.4856909110154611,
-      "top_1000": 0.7209151406573209,
-      "top_5000": 0.8507747355966844,
-      "top_10000": 0.8927231340411788
+      "top_100": 0.5051533209941497,
+      "top_1000": 0.7309960152681676,
+      "top_5000": 0.8578964137413508,
+      "top_10000": 0.8988609661874228
     },
-    "hapax_count": 48091,
-    "hapax_ratio": 0.6008445882632216,
-    "total_documents": 1835
+    "hapax_count": 42675,
+    "hapax_ratio": 0.5872356235637325,
+    "total_documents": 1791
   }
 }
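
Several README numbers (vocabulary size, coverage, hapax ratio) come straight from this metadata file, so the simplest way to consume them is to read the JSON directly:

```python
# Sketch: read the vocabulary statistics shipped with the parquet file.
import json

with open("models/vocabulary/ami_vocabulary_metadata.json") as f:
    meta = json.load(f)

print(meta["vocabulary_size"])                     # 29996
print(meta["statistics"]["hapax_ratio"])           # ≈ 0.587
print(meta["statistics"]["coverage"]["top_1000"])  # ≈ 0.731
```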
models/word_markov/ami_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:497e8c15f7f0df3064b06bc6079d7fe2457f423038e4205aa304880db1c508cd
-size 3074954
+oid sha256:82994f99e1f273b3e05d36bc2663a6668f77084bae001566586edc6fa521ceac
+size 2810453

models/word_markov/ami_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "word",
   "language": "ami",
-  "unique_contexts": 80205,
-  "total_transitions": 1354574
+  "unique_contexts": 72606,
+  "total_transitions": 952351
 }

models/word_markov/ami_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bf92aafa3f3d2a40c2d9ca33400c1d42bd30f230689e2bd061dff62c9a9b7c3c
-size 7158620
+oid sha256:8c3d060cf7c94ddcc5b80137ae2381c825554392a922b3f095d2bc6335335c03
+size 6568432

models/word_markov/ami_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "word",
   "language": "ami",
-  "unique_contexts": 339828,
-  "total_transitions": 1352740
+  "unique_contexts": 329508,
+  "total_transitions": 950560
 }

models/word_markov/ami_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:692511fad35e939d0e112742c632a842165e2fe7c3b2a8b5216f6a86f7f9dc7c
-size 11871433
+oid sha256:7c36780ba30648979c6f36aa779a8ceda3c56142eba2cb087811e6d38efb2225
+size 10206751

models/word_markov/ami_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "word",
   "language": "ami",
-  "unique_contexts": 695243,
-  "total_transitions": 1350906
+  "unique_contexts": 614195,
+  "total_transitions": 948769
 }

models/word_markov/ami_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:caa72e38f0bd63ad61b98e3fa42473f895753d6215e73dc0853470cde5ad829d
-size 15253116
+oid sha256:649cf9d20bbed0ad2be2491e00a9bbbf30767d7fa1d44c83bfa1d10c47e620e5
+size 12409860

models/word_markov/ami_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "word",
   "language": "ami",
-  "unique_contexts": 934421,
-  "total_transitions": 1349074
+  "unique_contexts": 756623,
+  "total_transitions": 946978
 }
models/word_ngram/ami_2gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a6f3d72babd69a9005eb7c3f9ad5ca842fbc6123aa093f970b7cd777994d73f3
-size 402572
+oid sha256:34a3cb7ef204727a0d03453df07384a8f62a18d7f3dba653e2b66ce2ca76ea57
+size 304726

models/word_ngram/ami_2gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "word",
   "language": "ami",
-  "unique_ngrams": 30025,
-  "total_ngrams": 1354574
+  "unique_ngrams": 22550,
+  "total_ngrams": 952351
 }

models/word_ngram/ami_3gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cf66cc0eda4cc22ffc8385a31de3e3e3052521973994d2e6f28427203ab4a9f1
-size 835271
+oid sha256:c0d4d3ce7e45b0c943854efac16203bc64aa247604e0b004c3f885c1d84b995a
+size 531269

models/word_ngram/ami_3gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "word",
   "language": "ami",
-  "unique_ngrams": 58959,
-  "total_ngrams": 1352740
+  "unique_ngrams": 35948,
+  "total_ngrams": 950560
 }

models/word_ngram/ami_4gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:60863da19abbe4a1ebb1e300b0413297c9f13f369d4db484bbb83d656705dac2
-size 1759231
+oid sha256:95d6030bf37507ad465d99f9e8f7515ad90e1dbd38c9548f4093150077b6b398
+size 1107511

models/word_ngram/ami_4gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "word",
   "language": "ami",
-  "unique_ngrams": 125762,
-  "total_ngrams": 1350906
+  "unique_ngrams": 77159,
+  "total_ngrams": 948769
 }
visualizations/embedding_isotropy.png CHANGED
visualizations/embedding_norms.png CHANGED
visualizations/embedding_similarity.png CHANGED

Git LFS Details (before)

  • SHA256: 87115212bd0057207027962744a2d3ea8902fa62131409512ce9c1fae4441047
  • Pointer size: 131 Bytes
  • Size of remote file: 144 kB

Git LFS Details (after)

  • SHA256: 25d13f3212f57056a27405d98964f444f9c22f9a6bf7720eb483656134c41688
  • Pointer size: 131 Bytes
  • Size of remote file: 146 kB
visualizations/markov_branching.png CHANGED
visualizations/markov_contexts.png CHANGED