omarkamali committed
Commit 6a2d0c9 · verified · 1 Parent(s): c80ebcd

Upload all models and assets for cdo (20251001)

This view is limited to 50 files because it contains too many changes. See raw diff.

Files changed (50)
  1. README.md +254 -131
  2. models/embeddings/monolingual/cdo_128d.bin +2 -2
  3. models/embeddings/monolingual/cdo_128d_metadata.json +5 -3
  4. models/embeddings/monolingual/cdo_32d.bin +2 -2
  5. models/embeddings/monolingual/cdo_32d_metadata.json +5 -3
  6. models/embeddings/monolingual/cdo_64d.bin +2 -2
  7. models/embeddings/monolingual/cdo_64d_metadata.json +5 -3
  8. models/subword_markov/cdo_markov_ctx1_subword.parquet +2 -2
  9. models/subword_markov/cdo_markov_ctx1_subword_metadata.json +2 -2
  10. models/subword_markov/cdo_markov_ctx2_subword.parquet +2 -2
  11. models/subword_markov/cdo_markov_ctx2_subword_metadata.json +2 -2
  12. models/subword_markov/cdo_markov_ctx3_subword.parquet +2 -2
  13. models/subword_markov/cdo_markov_ctx3_subword_metadata.json +2 -2
  14. models/subword_markov/cdo_markov_ctx4_subword.parquet +2 -2
  15. models/subword_markov/cdo_markov_ctx4_subword_metadata.json +2 -2
  16. models/subword_ngram/cdo_2gram_subword.parquet +2 -2
  17. models/subword_ngram/cdo_2gram_subword_metadata.json +2 -2
  18. models/subword_ngram/cdo_3gram_subword.parquet +2 -2
  19. models/subword_ngram/cdo_3gram_subword_metadata.json +2 -2
  20. models/subword_ngram/cdo_4gram_subword.parquet +2 -2
  21. models/subword_ngram/cdo_4gram_subword_metadata.json +2 -2
  22. models/tokenizer/cdo_tokenizer_32k.model +2 -2
  23. models/tokenizer/cdo_tokenizer_32k.vocab +0 -0
  24. models/tokenizer/cdo_tokenizer_64k.model +2 -2
  25. models/tokenizer/cdo_tokenizer_64k.vocab +0 -0
  26. models/vocabulary/cdo_vocabulary.parquet +2 -2
  27. models/vocabulary/cdo_vocabulary_metadata.json +10 -9
  28. models/word_markov/cdo_markov_ctx1_word.parquet +2 -2
  29. models/word_markov/cdo_markov_ctx1_word_metadata.json +2 -2
  30. models/word_markov/cdo_markov_ctx2_word.parquet +2 -2
  31. models/word_markov/cdo_markov_ctx2_word_metadata.json +2 -2
  32. models/word_markov/cdo_markov_ctx3_word.parquet +2 -2
  33. models/word_markov/cdo_markov_ctx3_word_metadata.json +2 -2
  34. models/word_markov/cdo_markov_ctx4_word.parquet +2 -2
  35. models/word_markov/cdo_markov_ctx4_word_metadata.json +2 -2
  36. models/word_ngram/cdo_2gram_word.parquet +2 -2
  37. models/word_ngram/cdo_2gram_word_metadata.json +2 -2
  38. models/word_ngram/cdo_3gram_word.parquet +2 -2
  39. models/word_ngram/cdo_3gram_word_metadata.json +2 -2
  40. models/word_ngram/cdo_4gram_word.parquet +2 -2
  41. models/word_ngram/cdo_4gram_word_metadata.json +2 -2
  42. visualizations/embedding_isotropy.png +0 -0
  43. visualizations/embedding_norms.png +0 -0
  44. visualizations/embedding_similarity.png +2 -2
  45. visualizations/markov_branching.png +0 -0
  46. visualizations/markov_contexts.png +0 -0
  47. visualizations/markov_entropy.png +0 -0
  48. visualizations/model_sizes.png +0 -0
  49. visualizations/nearest_neighbors.png +0 -0
  50. visualizations/ngram_coverage.png +0 -0
README.md CHANGED
@@ -23,14 +23,14 @@ dataset_info:
  metrics:
  - name: best_compression_ratio
  type: compression
- value: 2.796
+ value: 2.892
  - name: best_isotropy
  type: isotropy
- value: 0.5460
+ value: 0.5551
  - name: vocabulary_size
  type: vocab
- value: 12714
- generated: 2025-12-28
+ value: 0
+ generated: 2026-01-03
  ---

  # CDO - Wikilangs Models
@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
  ### Models & Assets

  - Tokenizers (8k, 16k, 32k, 64k)
- - N-gram models (2, 3, 4-gram)
- - Markov chains (context of 1, 2, 3 and 4)
+ - N-gram models (2, 3, 4, 5-gram)
+ - Markov chains (context of 1, 2, 3, 4 and 5)
  - Subword N-gram and Markov chains
- - Embeddings in various sizes and dimensions
+ - Embeddings in various sizes and dimensions (aligned and unaligned)
  - Language Vocabulary
  - Language Statistics
+
  ![Performance Dashboard](visualizations/performance_dashboard.png)

  ### Analysis and Evaluation
@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
  - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
  - [4. Vocabulary Analysis](#4-vocabulary-analysis)
  - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
- - [6. Summary & Recommendations](#6-summary--recommendations)
+ - [6. Morphological Analysis (Experimental)](#6-morphological-analysis)
+ - [7. Summary & Recommendations](#7-summary--recommendations)
  - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
  - [Visualizations Index](#visualizations-index)

@@ -68,50 +70,49 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and

  ![Tokenizer Compression](visualizations/tokenizer_compression.png)

+ ![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
+
+ ![Tokenizer OOV](visualizations/tokenizer_oov.png)
+
+ ![Total Tokens](visualizations/tokenizer_total_tokens.png)
+
  ### Results

  | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
  |------------|-------------|---------------|----------|--------------|
- | **32k** | 2.562x | 2.54 | 0.0007% | 298,320 |
- | **64k** | 2.796x 🏆 | 2.77 | 0.0007% | 273,367 |
+ | **32k** | 2.757x | 2.76 | 0.1034% | 257,325 |
+ | **64k** | 2.892x 🏆 | 2.90 | 0.1084% | 245,327 |

  ### Tokenization Examples

  Below are sample sentences tokenized with each vocabulary size:

- **Sample 1:** `Pender Gông (Ĭng-ngṳ̄: Pender County) Mī-guók North Carolina siŏh ciáh gôn...`
+ **Sample 1:** `Bashkortostan Ngò̤-lò̤-sṳ̆ siŏh ciáh gê̤ṳng-huò-guók.gê̤ṳng-huò-guók`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 32k | `▁pen der ▁gông ▁( ĭng - ngṳ̄ : pen der ... (+19 more)` | 29 |
- | 64k | `▁pender ▁gông ▁( ĭng - ngṳ̄ : ▁pender ▁county ) ... (+17 more)` | 27 |
+ | 32k | `▁b ash k ort os tan ▁sêngò ̤- ... (+17 more)` | 27 |
+ | 64k | `▁bash k ortos tan ▁sê ▁ngò ̤- ̤- sṳ̆ ... (+15 more)` | 25 |

- **Sample 2:** `Duâi dâi
-
- Chók-sié
-
- Guó-sié
-
-
- 分類:1170 nièng-dâi`
+ **Sample 2:** `Montague Gông (Ĭng-ngṳ̄: Montague County) sê Mī-guók Texas gì siŏh ciáh gông. gì...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 32k | `▁duâidâichók - sié ▁guó - sié ▁分類 : ... (+7 more)` | 17 |
- | 64k | `▁duâidâichók - siéguó - sié ▁分類 : ... (+7 more)` | 17 |
+ | 32k | `▁mont a gue gông( ĭng - ngṳ̄ : ▁mont ... (+16 more)` | 26 |
+ | 64k | `▁montaguegông( ĭng - ngṳ̄ : montague ▁county ) ... (+12 more)` | 22 |

- **Sample 3:** `1000 nièng-dâi téng 1000 nièng 1 nguŏk 1 hô̤ kăi-sṳ̄, gáu 1009 nièng 12 nguŏk 31...`
+ **Sample 3:** `Ochiltree Gông (Ĭng-ngṳ̄: Ochiltree County) Mī-guók Texas siŏh ciáh gông. ...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 32k | `▁ 1 0 0 0nièng - dâi ▁téng ... (+36 more)` | 46 |
- | 64k | `▁ 1 0 0 0 ▁nièng - dâi téng ▁ ... (+36 more)` | 46 |
+ | 32k | `▁o chi l t re e gông ▁( ĭng - ... (+22 more)` | 32 |
+ | 64k | `▁ochiltree ▁gông ▁( ĭng - ngṳ̄ :ochiltreecounty ) ... (+12 more)` | 22 |


  ### Key Findings

- - **Best Compression:** 64k achieves 2.796x compression
- - **Lowest UNK Rate:** 32k with 0.0007% unknown tokens
+ - **Best Compression:** 64k achieves 2.892x compression
+ - **Lowest UNK Rate:** 32k with 0.1034% unknown tokens
  - **Trade-off:** Larger vocabularies improve compression but increase model size
  - **Recommendation:** 32k vocabulary provides optimal balance for production use

@@ -120,57 +121,89 @@ Guó-sié

  ![N-gram Perplexity](visualizations/ngram_perplexity.png)

+ ![N-gram Unique](visualizations/ngram_unique.png)
+
  ![N-gram Coverage](visualizations/ngram_coverage.png)

  ### Results

- | N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
- |--------|------------|---------|----------------|------------------|-------------------|
- | **2-gram** | 2,092 🏆 | 11.03 | 13,738 | 34.2% | 70.6% |
- | **2-gram** | 517 🏆 | 9.01 | 13,773 | 57.4% | 92.0% |
- | **3-gram** | 6,902 | 12.75 | 35,914 | 23.0% | 49.3% |
- | **3-gram** | 2,154 | 11.07 | 33,837 | 33.3% | 72.5% |
- | **4-gram** | 16,500 | 14.01 | 75,913 | 16.0% | 37.8% |
- | **4-gram** | 6,830 | 12.74 | 94,271 | 22.4% | 53.8% |
+ | N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
+ |--------|---------|------------|---------|----------------|------------------|-------------------|
+ | **2-gram** | Word | 3,105 | 11.60 | 11,679 | 27.7% | 59.2% |
+ | **2-gram** | Subword | 342 🏆 | 8.42 | 6,912 | 63.5% | 95.8% |
+ | **3-gram** | Word | 4,698 | 12.20 | 17,954 | 23.8% | 52.2% |
+ | **3-gram** | Subword | 1,659 | 10.70 | 21,000 | 36.1% | 75.8% |
+ | **4-gram** | Word | 8,483 | 13.05 | 30,938 | 18.6% | 45.4% |
+ | **4-gram** | Subword | 5,744 | 12.49 | 69,193 | 23.7% | 55.8% |

  ### Top 5 N-grams by Size

- **2-grams:**
+ **2-grams (Word):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `gì siŏh` | 6,258 |
+ | 2 | `siŏh ciáh` | 6,232 |
+ | 3 | `mī guók` | 3,385 |
+ | 4 | `sê mī` | 3,191 |
+ | 5 | `gì gông` | 3,000 |
+
+ **3-grams (Word):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `gì siŏh ciáh` | 5,413 |
+ | 2 | `sê mī guók` | 3,173 |
+ | 3 | `siŏh ciáh gông` | 3,000 |
+ | 4 | `ciáh gông gì` | 2,557 |
+ | 5 | `gông gì gông` | 2,557 |
+
+ **4-grams (Word):**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `分類 :` | 17,792 |
- | 2 | ng` | 9,653 |
- | 3 | `. 分類` | 8,000 |
- | 4 | `- guók` | 7,750 |
- | 5 | `- sié` | 7,747 |
+ | 1 | `gì siŏh ciáh gông` | 3,000 |
+ | 2 | `ciáh gông gì gông` | 2,557 |
+ | 3 | `siŏh ciáh gông gì` | 2,557 |
+ | 4 | `county sê mī guók` | 1,971 |
+ | 5 | `gông mī guók` | 1,029 |

- **3-grams:**
+ **2-grams (Subword):**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `. 分類 :` | 8,000 |
- | 2 | ` siŏh ciáh` | 5,565 |
- | 3 | `- ngṳ ̄` | 4,336 |
- | 4 | ` - guók` | 3,641 |
- | 5 | `gâe ̤ ng` | 3,480 |
+ | 1 | `n g` | 146,797 |
+ | 2 | `_ g` | 59,970 |
+ | 3 | `g -` | 55,946 |
+ | 4 | `g _` | 55,139 |
+ | 5 | `_ s` | 41,311 |

- **4-grams:**
+ **3-grams (Subword):**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | ` - guók` | 3,211 |
- | 2 | ` siŏh ciáh gông` | 3,000 |
- | 3 | `ciáh gông . 分類` | 3,000 |
- | 4 | `gông . 分類 :` | 3,000 |
- | 5 | `siŏh ciáh gông .` | 3,000 |
+ | 1 | `n g -` | 55,920 |
+ | 2 | `n g _` | 55,025 |
+ | 3 | `_ g ì` | 23,090 |
+ | 4 | `g ì _` | 22,312 |
+ | 5 | `_ s i` | 14,134 |
+
+ **4-grams (Subword):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `_ g ì _` | 22,161 |
+ | 2 | `_ s ê _` | 13,231 |
+ | 3 | `n g _ g` | 11,336 |
+ | 4 | `i ŏ h _` | 10,632 |
+ | 5 | `_ s i ŏ` | 9,391 |


  ### Key Findings

- - **Best Perplexity:** 2-gram with 517
+ - **Best Perplexity:** 2-gram (subword) with 342
  - **Entropy Trend:** Decreases with larger n-grams (more predictable)
- - **Coverage:** Top-1000 patterns cover ~54% of corpus
+ - **Coverage:** Top-1000 patterns cover ~56% of corpus
  - **Recommendation:** 4-gram or 5-gram for best predictive performance

  ---
@@ -178,55 +211,86 @@ Guó-sié

  ![Markov Entropy](visualizations/markov_entropy.png)

+ ![Markov Contexts](visualizations/markov_contexts.png)
+
  ![Markov Branching](visualizations/markov_branching.png)

  ### Results

- | Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
- |---------|-------------|------------|------------------|-----------------|----------------|
- | **1** | 0.2803 | 1.214 | 3.49 | 48,699 | 72.0% |
- | **1** | 0.3942 | 1.314 | 4.02 | 31,614 | 60.6% |
- | **2** | 0.1991 | 1.148 | 1.83 | 169,503 | 80.1% |
- | **2** | 0.3616 | 1.285 | 2.00 | 127,156 | 63.8% |
- | **3** | 0.1556 | 1.114 | 1.42 | 308,939 | 84.4% |
- | **3** | 0.2179 | 1.163 | 1.54 | 253,902 | 78.2% |
- | **4** | 0.0983 🏆 | 1.071 | 1.21 | 437,205 | 90.2% |
- | **4** | 0.1764 🏆 | 1.130 | 1.38 | 389,634 | 82.4% |
+ | Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
+ |---------|---------|-------------|------------|------------------|-----------------|----------------|
+ | **1** | Word | 0.4882 | 1.403 | 4.73 | 29,670 | 51.2% |
+ | **1** | Subword | 0.3461 | 1.271 | 2.92 | 25,622 | 65.4% |
+ | **2** | Word | 0.3187 | 1.247 | 1.80 | 139,308 | 68.1% |
+ | **2** | Subword | 0.2753 | 1.210 | 1.79 | 74,780 | 72.5% |
+ | **3** | Word | 0.1201 | 1.087 | 1.23 | 249,012 | 88.0% |
+ | **3** | Subword | 0.2343 | 1.176 | 1.69 | 133,665 | 76.6% |
+ | **4** | Word | 0.0526 🏆 | 1.037 | 1.09 | 301,670 | 94.7% |
+ | **4** | Subword | 0.2290 | 1.172 | 1.53 | 225,577 | 77.1% |

- ### Generated Text Samples
+ ### Generated Text Samples (Word-based)

- Below are text samples generated from each Markov chain model:
+ Below are text samples generated from each word-based Markov chain model:

  **Context Size 1:**

- 1. `- siék siŏh ciáh gông . 分類 : chĭng - uăng - -`
- 2. k sáng ĕu - ngiòng ( 螺洲路 ) guōng - dŏng - dōi -`
- 3. ` ̤ 18 ngiê - guók - guó - dók . chók -`
+ 1. `gì hiŏng 蘆洋鄉 bìng dēng có̤i kăi sṳ̄ háng nè̤ng gó̤ lō̤ 法老 gái`
+ 2. `sê dṳ̆ng huà ìng chê 邢臺市 lòng 退之 ĕng diŏh adelaide ô 2 nguŏk`
+ 3. `siŏh bĭk cék éng sáuk īng lĭk â̤ dé̤ṳng buŏng nàng áng turkic ngṳ̄`

  **Context Size 2:**

- 1. `分類 : 1370 nièng - dâi lùng - dŭng - ngŏk liù - giù - dôi`
- 2. ng hók - gióng , dâi - biēu ̤ ṳng - sāng - dōng gâe`
- 3. `. 分類 : 200 nièng - dâi - mā 分類 : 1300年代`
+ 1. `gì siŏh cṳ̄ng lòi ĭng ôi sĭng ô 79 ciáh mŭk sṳ̆ 佳木斯 dṳ̆ng guók sĭng`
+ 2. `siŏh ciáh chê hăk kṳ̆ 市轄區 lĭk sṳ̄ diē sié siŏh cṳ̄ng ciŏng muòng dò̤ lā̤`
+ 3. `mī guók montana siŏh ciáh gông gông`

  **Context Size 3:**

- 1. `. 分類 : minnesotagông`
- 2. ` siŏh ciáh - ngék - chê . 分類 : - báe ̤ k - chiă`
- 3. `- ngṳ ̄ : lafayette county ) sê mī - guók buô - hông gì sṳ ̆`
+ 1. `gì siŏh ciáh duâi kṳ̆ duâi kṳ̆`
+ 2. ` guók kansas siŏh ciáh mō̤ diŏh eta piăng âu gâe̤ng sigma sèng`
+ 3. `siŏh ciáh gônggông`

  **Context Size 4:**

- 1. `sê mī - guók colorado gì siŏh ciáh gông . 分類 : florida gì gông`
- 2. `gông . 分類 : michigan gì gông`
- 3. `siŏh ciáh gông . 分類 : indiana gì gông`
+ 1. `gì siŏh ciáh gông gì gông`
+ 2. `siŏh ciáh gông gì gông`
+ 3. `county guók nebraska siŏh ciáh gông gì gông`
+
+
+ ### Generated Text Samples (Subword-based)
+
+ Below are text samples generated from each subword-based Markov chain model:
+
+ **Context Size 1:**
+
+ 1. `_ônià_hô̤_sênty)_`
+ 2. `gì_ciônièng,_sṳ̄:`
+ 3. `ngăngôner_g-ccáh`
+
+ **Context Size 2:**
+
+ 1. `nguô_ka_cĭ_gì_dâ̤_`
+ 2. `_gì_sié-sê_hŏk-cê`
+ 3. `g-hĭ_(獨聯體)_gông_n`
+
+ **Context Size 3:**
+
+ 1. `ng-hèng_biéng,_mac`
+ 2. `ng_adahoma_gì_(兩個聲`
+ 3. `_gì_siōng-dĕ̤ng-ŭk_`
+
+ **Context Size 4:**
+
+ 1. `_gì_«sṳ̀ng-kṳ̆_dĕk-bi`
+ 2. `_sê_„發現更大的世界“)_có̤_c`
+ 3. `ng_găk_chăng-muò_(𧋘`


  ### Key Findings

- - **Best Predictability:** Context-4 with 90.2% predictability
+ - **Best Predictability:** Context-4 (word) with 94.7% predictability
  - **Branching Factor:** Decreases with context size (more deterministic)
- - **Memory Trade-off:** Larger contexts require more storage (389,634 contexts)
+ - **Memory Trade-off:** Larger contexts require more storage (225,577 contexts)
  - **Recommendation:** Context-3 or Context-4 for text generation

  ---
@@ -242,64 +306,64 @@ Below are text samples generated from each Markov chain model:

  | Metric | Value |
  |--------|-------|
- | Vocabulary Size | 12,714 |
- | Total Tokens | 590,881 |
- | Mean Frequency | 46.47 |
+ | Vocabulary Size | 9,559 |
+ | Total Tokens | 467,385 |
+ | Mean Frequency | 48.89 |
  | Median Frequency | 3 |
- | Frequency Std Dev | 447.20 |
+ | Frequency Std Dev | 395.71 |

  ### Most Common Words

  | Rank | Word | Frequency |
  |------|------|-----------|
- | 1 | gì | 24,268 |
- | 2 | 分類 | 17,794 |
- | 3 | ng | 16,472 |
- | 4 | | 15,967 |
- | 5 | siŏh | 9,713 |
- | 6 | guók | 9,302 |
- | 7 | gông | 9,087 |
- | 8 | sié | 8,595 |
- | 9 | nièng | 7,825 |
- | 10 | dâi | 7,699 |
+ | 1 | gì | 23,295 |
+ | 2 | | 14,068 |
+ | 3 | siŏh | 9,247 |
+ | 4 | gông | 9,087 |
+ | 5 | guók | 8,549 |
+ | 6 | ciáh | 7,131 |
+ | 7 | nièng | 5,854 |
+ | 8 | ngṳ̄ | 5,277 |
+ | 9 | sié | 4,616 |
+ | 10 | gáu | 4,179 |

  ### Least Common Words (from vocabulary)

  | Rank | Word | Frequency |
  |------|------|-----------|
- | 1 | 燈泡厰 | 2 |
- | 2 | 搪瓷厰 | 2 |
- | 3 | 保溫瓶厰 | 2 |
- | 4 | 啤酒厰 | 2 |
- | 5 | 福大機械厰 | 2 |
- | 6 | 抗生素厰 | 2 |
- | 7 | kbo | 2 |
- | 8 | 우주항공청 | 2 |
- | 9 | cho | 2 |
- | 10 | chit | 2 |
+ | 1 | woolridge | 2 |
+ | 2 | imperiyası | 2 |
+ | 3 | abş | 2 |
+ | 4 | çox | 2 |
+ | 5 | dünyada | 2 |
+ | 6 | bütün | 2 |
+ | 7 | 嘉祿 | 2 |
+ | 8 | 六一路 | 2 |
+ | 9 | 神壇樹 | 2 |
+ | 10 | 신단수 | 2 |

  ### Zipf's Law Analysis

  | Metric | Value |
  |--------|-------|
  | Zipf Coefficient | 1.3995 |
- | R² (Goodness of Fit) | 0.979429 |
+ | R² (Goodness of Fit) | 0.957431 |
  | Adherence Quality | **excellent** |

  ### Coverage Analysis

  | Top N Words | Coverage |
  |-------------|----------|
- | Top 100 | 55.6% |
- | Top 1,000 | 90.8% |
- | Top 5,000 | 97.1% |
- | Top 10,000 | 99.1% |
+ | Top 100 | 52.2% |
+ | Top 1,000 | 91.7% |
+ | Top 5,000 | 98.0% |
+ | Top 10,000 | 0.0% |

  ### Key Findings

- - **Zipf Compliance:** R²=0.9794 indicates excellent adherence to Zipf's law
- - **High Frequency Dominance:** Top 100 words cover 55.6% of corpus
- - **Long Tail:** 2,714 words needed for remaining 0.9% coverage
+ - **Zipf Compliance:** R²=0.9574 indicates excellent adherence to Zipf's law
+ - **High Frequency Dominance:** Top 100 words cover 52.2% of corpus
+ - **Long Tail:** -441 words needed for remaining 100.0% coverage

  ---
  ## 5. Word Embeddings Evaluation
@@ -312,24 +376,80 @@ Below are text samples generated from each Markov chain model:

  ![t-SNE Sentences](visualizations/tsne_sentences.png)

- ### Model Comparison
+ ### 5.1 Cross-Lingual Alignment
+
+ > *Note: Multilingual alignment visualization not available for this language.*
+
+
+ ### 5.2 Model Comparison

- | Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
- |-------|------------|-----------|----------|----------|----------|
- | **mono_32d** | 7,009 | 32 | 4.149 | 1.118 | 0.5460 🏆 |
- | **mono_64d** | 7,009 | 64 | 4.243 | 1.106 | 0.2037 |
- | **mono_128d** | 7,009 | 128 | 4.233 | 1.119 | 0.0381 |
- | **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |
+ | Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
+ |-------|-----------|----------|------------------|---------------|----------------|
+ | **mono_32d** | 32 | 0.5551 🏆 | 0.4156 | N/A | N/A |
+ | **mono_64d** | 64 | 0.1856 | 0.4055 | N/A | N/A |
+ | **mono_128d** | 128 | 0.0279 | 0.4128 | N/A | N/A |

  ### Key Findings

- - **Best Isotropy:** mono_32d with 0.5460 (more uniform distribution)
- - **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
- - **Vocabulary Coverage:** All models cover 7,009 words
- - **Recommendation:** 100d for balanced semantic capture and efficiency
+ - **Best Isotropy:** mono_32d with 0.5551 (more uniform distribution)
+ - **Semantic Density:** Average pairwise similarity of 0.4113. Lower values indicate better semantic separation.
+ - **Alignment Quality:** No aligned models evaluated in this run.
+ - **Recommendation:** 128d aligned for best cross-lingual performance
+
+ ---
+ ## 6. Morphological Analysis (Experimental)
+
+ > ⚠️ **Warning:** This language shows low morphological productivity. The statistical signals used for this analysis may be noisy or less reliable than for morphologically rich languages.
+
+ This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
+
+ ### 6.1 Productivity & Complexity
+
+ | Metric | Value | Interpretation | Recommendation |
+ |--------|-------|----------------|----------------|
+ | Productivity Index | **0.000** | Low morphological productivity | ⚠️ Likely unreliable |
+ | Idiomaticity Gap | **-1.000** | Low formulaic content | - |
+
+ ### 6.2 Affix Inventory (Productive Units)
+
+ These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts.
+
+ *No productive affixes detected.*
+
+
+ ### 6.3 Bound Stems (Lexical Roots)
+
+ Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid.
+
+ | Stem | Cohesion | Substitutability | Examples |
+ |------|----------|------------------|----------|
+ | `áung` | 1.97x | 9 contexts | táung, láung, dáung |
+ | `âung` | 1.96x | 9 contexts | câung, bâung, hâung |
+ | `iăng` | 1.80x | 7 contexts | hiăng, siăng, giăng |
+ | `iāng` | 1.55x | 8 contexts | liāng, biāng, ciāng |
+
+ ### 6.4 Affix Compatibility (Co-occurrence)
+
+ This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
+
+ *No significant affix co-occurrences detected.*
+
+
+ ### 6.5 Recursive Morpheme Segmentation
+
+ Using **Recursive Hierarchical Substitutability**, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., `prefix-prefix-root-suffix`).
+
+ *Insufficient data for recursive segmentation.*
+
+
+ ### 6.6 Linguistic Interpretation
+
+ > **Automated Insight:**
+ The language CDO appears to be more isolating or has a highly fixed vocabulary. Word-level models perform nearly as well as subword models, indicating fewer productive morphological processes.

  ---
- ## 6. Summary & Recommendations
+ ## 7. Summary & Recommendations

  ![Performance Dashboard](visualizations/performance_dashboard.png)

@@ -337,11 +457,12 @@ Below are text samples generated from each Markov chain model:

  | Component | Recommended | Rationale |
  |-----------|-------------|-----------|
- | Tokenizer | **32k BPE** | Best compression (2.80x) with low UNK rate |
- | N-gram | **5-gram** | Lowest perplexity (517) |
- | Markov | **Context-4** | Highest predictability (90.2%) |
+ | Tokenizer | **64k BPE** | Best compression (2.89x) |
+ | N-gram | **2-gram** | Lowest perplexity (342) |
+ | Markov | **Context-4** | Highest predictability (94.7%) |
  | Embeddings | **100d** | Balanced semantic capture and isotropy |

+
  ---
  ## Appendix: Metrics Glossary & Interpretation Guide

@@ -531,7 +652,8 @@ If you use these models in your research, please cite:
    author = {Kamali, Omar},
    title = {Wikilangs: Open NLP Models for Wikipedia Languages},
    year = {2025},
-   publisher = {HuggingFace},
+   doi = {10.5281/zenodo.18073153},
+   publisher = {Zenodo},
    url = {https://huggingface.co/wikilangs}
    institution = {Omneity Labs}
  }
@@ -547,7 +669,8 @@ MIT License - Free for academic and commercial use.
  - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
  - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
  - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
+ - 🤝 Sponsor: [Featherless AI](https://featherless.ai)
  ---
  *Generated by Wikilangs Models Pipeline*

- *Report Date: 2025-12-28 16:25:16*
+ *Report Date: 2026-01-03 09:43:04*
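
For context on the tokenizer table above: the Compression column appears to be a simple characters-per-token ratio (it closely tracks the Avg Token Len column). A minimal sketch of that computation; the sample tokens below are invented for illustration, not pipeline output:

```python
# Sketch only: compression ratio as raw characters per emitted token.
# The tokens below are invented; real figures come from the pipeline.
def compression_ratio(text: str, tokens: list[str]) -> float:
    return len(text) / len(tokens)

text = "Mī-guók Texas gì siŏh ciáh gông."
tokens = ["▁mī", "-", "guók", "▁texas", "▁gì", "▁siŏh", "▁ciáh", "▁gông", "."]
print(f"{compression_ratio(text, tokens):.3f} characters per token")
```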
models/embeddings/monolingual/cdo_128d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:87273b834175c8ffca7ff96a3a21f5890c33eb81fd8bd600e3f7749a3c1efde6
- size 1031314655
+ oid sha256:8c366500bae149b287db9cc735aecf709e56990e0f1dd949aa01f743c5b542bf
+ size 1030106435

models/embeddings/monolingual/cdo_128d_metadata.json CHANGED
@@ -3,11 +3,13 @@
  "dimension": 128,
  "version": "monolingual",
  "training_params": {
- "dim": 128,
+ "algorithm": "skipgram",
  "min_count": 5,
  "window": 5,
  "negative": 5,
- "epochs": 5
+ "epochs": 5,
+ "encoding_method": "rope",
+ "dim": 128
  },
- "vocab_size": 7009
+ "vocab_size": 5854
  }

models/embeddings/monolingual/cdo_32d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d65b7a0893696382e676534c0654d0fa33b57a4337091da15492cd7c0abf3b48
- size 257931743
+ oid sha256:9ee34c3106d7321309452b983807ebe84c3a153af734404b31b03ac0f6076d5c
+ size 257610563

models/embeddings/monolingual/cdo_32d_metadata.json CHANGED
@@ -3,11 +3,13 @@
  "dimension": 32,
  "version": "monolingual",
  "training_params": {
- "dim": 32,
+ "algorithm": "skipgram",
  "min_count": 5,
  "window": 5,
  "negative": 5,
- "epochs": 5
+ "epochs": 5,
+ "encoding_method": "rope",
+ "dim": 32
  },
- "vocab_size": 7009
+ "vocab_size": 5854
  }

models/embeddings/monolingual/cdo_64d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:70424ab1f058bf1eec92ffa8c17229b3451d02319f7295d7a82908645320bdb6
- size 515726047
+ oid sha256:aad5fc0dca0e268941025dbedc1373061213d3a2602f82f5fb36a6a8ac4c5600
+ size 515109187

models/embeddings/monolingual/cdo_64d_metadata.json CHANGED
@@ -3,11 +3,13 @@
  "dimension": 64,
  "version": "monolingual",
  "training_params": {
- "dim": 64,
+ "algorithm": "skipgram",
  "min_count": 5,
  "window": 5,
  "negative": 5,
- "epochs": 5
+ "epochs": 5,
+ "encoding_method": "rope",
+ "dim": 64
  },
- "vocab_size": 7009
+ "vocab_size": 5854
  }
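
The README ranks these embedding files by isotropy (mono_32d best at 0.5551). One common way to compute an isotropy score is the ratio of smallest to largest principal-component variance; the report's exact formula is not documented here, so treat this as a sketch:

```python
import numpy as np

def isotropy(vectors: np.ndarray) -> float:
    """Ratio of smallest to largest PC variance; 1.0 = perfectly uniform."""
    centered = vectors - vectors.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)  # singular values
    var = s ** 2                                   # proportional to PC variances
    return float(var.min() / var.max())

rng = np.random.default_rng(0)
emb = rng.normal(size=(5854, 32))  # stand-in for the 5,854 x 32d vectors
print(isotropy(emb))               # close to 1 for isotropic Gaussian noise
```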
models/subword_markov/cdo_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0f235b61b31e910d9f0a63bff6431ca1b061d7d60c93f009c3efb2d4dd4d8d94
- size 859391
+ oid sha256:41c23bda939ee89e9ea22c7d220f77ade97cf29b5706a8ccea6a5a1649c35d5c
+ size 588804

models/subword_markov/cdo_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 1,
  "variant": "subword",
  "language": "cdo",
- "unique_contexts": 31614,
- "total_transitions": 2905389
+ "unique_contexts": 25622,
+ "total_transitions": 2209713
  }

models/subword_markov/cdo_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:95c5781711001d90b8d98035bad77dc2eafa9ca4d93f92cbe4e187b3ca6f467c
- size 2273669
+ oid sha256:635b5359d40f1fb9f72a9835efb0cf9b06c7ece40db0ae41691aa56bcf42a68f
+ size 1373275

models/subword_markov/cdo_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 2,
  "variant": "subword",
  "language": "cdo",
- "unique_contexts": 127156,
- "total_transitions": 2888681
+ "unique_contexts": 74780,
+ "total_transitions": 2199286
  }

models/subword_markov/cdo_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0112fc9e03100831bb45b13b125d60693af2b6111630b553ae0b8802b325840d
- size 4278912
+ oid sha256:edbc276446953ce254374098ddc144b562d74f64b6dc6b037712625c3474bcc7
+ size 2410456

models/subword_markov/cdo_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 3,
  "variant": "subword",
  "language": "cdo",
- "unique_contexts": 253902,
- "total_transitions": 2871973
+ "unique_contexts": 133665,
+ "total_transitions": 2188859
  }

models/subword_markov/cdo_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:863a8d123baa32acad56367516fba40e4df9fe4d98e2a22ce9e27a3dcadf37ff
- size 6583946
+ oid sha256:73b83a95594a9de41178ee150977a694d8fe0fc9528952db1f4037278640135b
+ size 3940722

models/subword_markov/cdo_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 4,
  "variant": "subword",
  "language": "cdo",
- "unique_contexts": 389634,
- "total_transitions": 2855265
+ "unique_contexts": 225577,
+ "total_transitions": 2178432
  }
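
These parquet files hold the subword Markov transition tables behind the generated samples in the README. A hypothetical sampler; the column names `context`, `next` and `count` are assumptions for illustration, not a documented schema:

```python
import random
import pandas as pd

df = pd.read_parquet("models/subword_markov/cdo_markov_ctx1_subword.parquet")

def sample_next(context: str):
    rows = df[df["context"] == context]  # assumed column names
    if rows.empty:
        return None
    return random.choices(rows["next"].tolist(),
                          weights=rows["count"].tolist(), k=1)[0]

piece, out = "g", ["g"]
for _ in range(20):          # ctx1: the state is just the previous piece
    piece = sample_next(piece)
    if piece is None:
        break
    out.append(piece)
print("".join(out))
```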
models/subword_ngram/cdo_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:429757e59b20addd0ca423c66011cf245936cb430796406a7e56ac432fbc78ff
- size 180569
+ oid sha256:07438dbbdf7df68795d2567c55c6704ba9e2aa37a1e974f3796653e4ee46d715
+ size 93187

models/subword_ngram/cdo_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 2,
  "variant": "subword",
  "language": "cdo",
- "unique_ngrams": 13773,
- "total_ngrams": 2905389
+ "unique_ngrams": 6912,
+ "total_ngrams": 2209713
  }

models/subword_ngram/cdo_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:eb21e2e89d34f0d79e0ad314920dfc3371d0dbdc72b20e5448b52c64d94e3daf
- size 481494
+ oid sha256:9bc12fdb56f63acfec879929e218e1b08b69ed6bc6caaf23de5405818299a15a
+ size 290607

models/subword_ngram/cdo_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 3,
  "variant": "subword",
  "language": "cdo",
- "unique_ngrams": 33837,
- "total_ngrams": 2888681
+ "unique_ngrams": 21000,
+ "total_ngrams": 2199286
  }

models/subword_ngram/cdo_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3f82d92e0227f12077424a14d88f6d50fbac2bc1d12b803abe7c34c6bc8a1e7e
- size 1234369
+ oid sha256:d7bbf6e008db71c8cab5f01f7b94dece221cc12e30b6c0bbde60267e28ad3d1b
+ size 893634

models/subword_ngram/cdo_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 4,
  "variant": "subword",
  "language": "cdo",
- "unique_ngrams": 94271,
- "total_ngrams": 2871973
+ "unique_ngrams": 69193,
+ "total_ngrams": 2188859
  }
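
The Top-100/Top-1000 coverage columns in the README's n-gram table come from frequency counts like the ones stored in these files. A sketch, assuming the parquet has a `count` column (an illustrative assumption):

```python
import pandas as pd

df = pd.read_parquet("models/subword_ngram/cdo_2gram_subword.parquet")
counts = df["count"].sort_values(ascending=False)  # assumed column name

for n in (100, 1000):
    coverage = counts.head(n).sum() / counts.sum()
    print(f"Top-{n} coverage: {coverage:.1%}")     # README reports 63.5% / 95.8%
```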
models/tokenizer/cdo_tokenizer_32k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:29cc26ce42341f87bf600b104422987383bb16aab206d714d8025b627a81318a
- size 652498
+ oid sha256:e4ea04b7b5ac9eb6c0c47f8743b9ec573da4a66b11329b62767916c896096283
+ size 659402

models/tokenizer/cdo_tokenizer_32k.vocab CHANGED
The diff for this file is too large to render. See raw diff.

models/tokenizer/cdo_tokenizer_64k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fd9f7c3b27cd6d59d2600b4da0253681a1c8987f94a736a1458e336f86ae632c
- size 1220234
+ oid sha256:543fc6291e9f68ca726ae3c62da2bf8a9f2e8964f61ac8c8f47e7620b30680d4
+ size 1252522

models/tokenizer/cdo_tokenizer_64k.vocab CHANGED
The diff for this file is too large to render. See raw diff.
models/vocabulary/cdo_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f67094c35332c0e243a9e0fa3b227560d5e7cbeb597f1f5f91eeaae915d87402
- size 220074
+ oid sha256:13d428e7f371fae4d1817f7fd0cad404bb08c4eebfeba6b19d2159ec0858966d
+ size 161421

models/vocabulary/cdo_vocabulary_metadata.json CHANGED
@@ -1,16 +1,17 @@
  {
  "language": "cdo",
- "vocabulary_size": 12714,
+ "vocabulary_size": 9559,
+ "variant": "full",
  "statistics": {
- "type_token_ratio": 0.07780709085921002,
+ "type_token_ratio": 0.06153581253100364,
  "coverage": {
- "top_100": 0.5244804991801553,
- "top_1000": 0.8557711325341176,
- "top_5000": 0.9148462073409597,
- "top_10000": 0.9338142876283201
+ "top_100": 0.4998278145152364,
+ "top_1000": 0.8788593121599849,
+ "top_5000": 0.9388721030817102,
+ "top_10000": 0.9589624594646672
  },
- "hapax_count": 36067,
- "hapax_ratio": 0.7393657366597651,
- "total_documents": 16708
+ "hapax_count": 20461,
+ "hapax_ratio": 0.6815789473684211,
+ "total_documents": 10427
  }
  }
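
The Zipf coefficient and R² in the README are typically obtained by fitting a least-squares line to log-frequency against log-rank. A sketch against this vocabulary file; the `frequency` column name is an assumption:

```python
import numpy as np
import pandas as pd

vocab = pd.read_parquet("models/vocabulary/cdo_vocabulary.parquet")
freq = np.sort(vocab["frequency"].to_numpy())[::-1]  # assumed column name
rank = np.arange(1, len(freq) + 1)

log_r, log_f = np.log(rank), np.log(freq)
slope, intercept = np.polyfit(log_r, log_f, 1)       # Zipf: log f ≈ -s·log r + c
pred = slope * log_r + intercept
r2 = 1 - np.sum((log_f - pred) ** 2) / np.sum((log_f - log_f.mean()) ** 2)
print(f"Zipf coefficient: {-slope:.4f}, R²: {r2:.4f}")  # README: 1.3995, 0.9574
```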
models/word_markov/cdo_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:216d791b7daa8b64656b1cba78bbbc02aa7f5ae7d9bf1ce3a3acf8fc6ca30ad8
- size 2218064
+ oid sha256:b7c52cf701393a62ef55eb76e071bcc146cb575dcba8d7de8f629c6f59a0ffa9
+ size 1378282

models/word_markov/cdo_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 1,
  "variant": "word",
  "language": "cdo",
- "unique_contexts": 48699,
- "total_transitions": 989207
+ "unique_contexts": 29670,
+ "total_transitions": 477419
  }

models/word_markov/cdo_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:59c0afaec27cafdf472078114f0c1e6330cbc442a0cf86a0de739eb654009647
- size 4081907
+ oid sha256:4e2d34224b73bb7d088188c6354a0a9b2ed3b721bfecbe369bd09f27bc4bf8eb
+ size 3117924

models/word_markov/cdo_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 2,
  "variant": "word",
  "language": "cdo",
- "unique_contexts": 169503,
- "total_transitions": 972499
+ "unique_contexts": 139308,
+ "total_transitions": 466992
  }

models/word_markov/cdo_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9a9fad10b27aa485ee443b1b3fb11cf1c68941f562e62e4a5d392900045e177d
- size 6371893
+ oid sha256:751ecd29db06623ef008eb11136a6de55e6177de5d5106326042cf155bf67096
+ size 4876570

models/word_markov/cdo_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 3,
  "variant": "word",
  "language": "cdo",
- "unique_contexts": 308939,
- "total_transitions": 955791
+ "unique_contexts": 249012,
+ "total_transitions": 456565
  }

models/word_markov/cdo_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e117de2ccb82cee04ee59547125507223f105436e0ac8437360575ade999d100
- size 8441712
+ oid sha256:be6fc8ceb5ad611ae4227c2c3e9eb01458860d29280986707129fb49af2ed1fc
+ size 5983737

models/word_markov/cdo_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 4,
  "variant": "word",
  "language": "cdo",
- "unique_contexts": 437205,
- "total_transitions": 939084
+ "unique_contexts": 301670,
+ "total_transitions": 446138
  }
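
The Avg Entropy, Branching Factor and Predictability columns in the README's Markov table are per-context statistics over transition tables like these. A sketch, again assuming hypothetical `context` and `count` columns:

```python
import numpy as np
import pandas as pd

df = pd.read_parquet("models/word_markov/cdo_markov_ctx4_word.parquet")

def context_stats(group: pd.DataFrame) -> pd.Series:
    p = group["count"] / group["count"].sum()    # transition probabilities
    return pd.Series({
        "entropy": -(p * np.log2(p)).sum(),      # bits per transition
        "branching": float(len(p)),              # distinct next tokens
        "predictability": p.max(),               # probability of the top guess
    })

print(df.groupby("context").apply(context_stats).mean())
```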
models/word_ngram/cdo_2gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b2869b3ed62a803296d4b64cbd8fba21a1774ee3355a39d11e555520e9c5161b
- size 191655
+ oid sha256:1665b5acdc0653c9ed337ac32fede9cd787b10dd4061bd94b629fa966ee9aaf1
+ size 178246

models/word_ngram/cdo_2gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 2,
  "variant": "word",
  "language": "cdo",
- "unique_ngrams": 13738,
- "total_ngrams": 989207
+ "unique_ngrams": 11679,
+ "total_ngrams": 477419
  }

models/word_ngram/cdo_3gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5bfe056eb57d732812afef695d46bb07dbebf064ff917dfb6ec89e7a9cdec64d
- size 547998
+ oid sha256:0dbe777e25174fb6b91ddeadf4d3f6969ac19328aeeb43492d068cc47fffedb9
+ size 311342

models/word_ngram/cdo_3gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 3,
  "variant": "word",
  "language": "cdo",
- "unique_ngrams": 35914,
- "total_ngrams": 972499
+ "unique_ngrams": 17954,
+ "total_ngrams": 466992
  }

models/word_ngram/cdo_4gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c9d589fedb9dc7dce0fa306987fb5585c1f152679ee6c6bf37390de81fc147cc
- size 1141850
+ oid sha256:11d7a87e924b2a61e7e8acb9a1eb51754eb4c044273e29e85c59b97269584a38
+ size 556300

models/word_ngram/cdo_4gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 4,
  "variant": "word",
  "language": "cdo",
- "unique_ngrams": 75913,
- "total_ngrams": 955791
+ "unique_ngrams": 30938,
+ "total_ngrams": 456565
  }
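
One sanity check that holds across the README's n-gram table: the Perplexity column equals 2 raised to the Entropy column (entropy in bits):

```python
# Spot-check: perplexity = 2 ** entropy for the README's n-gram rows.
for name, entropy, reported in [
    ("2-gram word",    11.60, 3_105),
    ("2-gram subword",  8.42,   342),
    ("4-gram word",    13.05, 8_483),
]:
    print(f"{name}: 2**{entropy} ≈ {2 ** entropy:,.0f} (reported {reported:,})")
```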
visualizations/embedding_isotropy.png CHANGED
visualizations/embedding_norms.png CHANGED
visualizations/embedding_similarity.png CHANGED

Git LFS Details (before)

  • SHA256: 3e51e8ea19f71accbe5593b11cf0f52d9fe4b345336d587687e26e6f454a002f
  • Pointer size: 131 Bytes
  • Size of remote file: 159 kB

Git LFS Details (after)

  • SHA256: 733a6b4b4bd92ce6a5cc804dff4d4c7fbbde09c8b805df67722bd54ad5b1c68c
  • Pointer size: 131 Bytes
  • Size of remote file: 163 kB

visualizations/markov_branching.png CHANGED
visualizations/markov_contexts.png CHANGED
visualizations/markov_entropy.png CHANGED
visualizations/model_sizes.png CHANGED
visualizations/nearest_neighbors.png CHANGED
visualizations/ngram_coverage.png CHANGED