omarkamali committed
Commit 02b9bba · verified · 1 Parent(s): 1bcc2f6

Upload all models and assets for as (20251001)

Files changed (50 shown; this view is limited to 50 files because the commit contains too many changes; see the raw diff for the full set):
  1. README.md +293 -142
  2. models/embeddings/monolingual/as_128d.bin +2 -2
  3. models/embeddings/monolingual/as_128d_metadata.json +5 -3
  4. models/embeddings/monolingual/as_32d.bin +2 -2
  5. models/embeddings/monolingual/as_32d_metadata.json +5 -3
  6. models/embeddings/monolingual/as_64d.bin +2 -2
  7. models/embeddings/monolingual/as_64d_metadata.json +5 -3
  8. models/subword_markov/as_markov_ctx1_subword.parquet +2 -2
  9. models/subword_markov/as_markov_ctx1_subword_metadata.json +2 -2
  10. models/subword_markov/as_markov_ctx2_subword.parquet +2 -2
  11. models/subword_markov/as_markov_ctx2_subword_metadata.json +2 -2
  12. models/subword_markov/as_markov_ctx3_subword.parquet +2 -2
  13. models/subword_markov/as_markov_ctx3_subword_metadata.json +2 -2
  14. models/subword_markov/as_markov_ctx4_subword.parquet +2 -2
  15. models/subword_markov/as_markov_ctx4_subword_metadata.json +2 -2
  16. models/subword_ngram/as_2gram_subword.parquet +2 -2
  17. models/subword_ngram/as_2gram_subword_metadata.json +2 -2
  18. models/subword_ngram/as_3gram_subword.parquet +2 -2
  19. models/subword_ngram/as_3gram_subword_metadata.json +2 -2
  20. models/subword_ngram/as_4gram_subword.parquet +2 -2
  21. models/subword_ngram/as_4gram_subword_metadata.json +2 -2
  22. models/tokenizer/as_tokenizer_16k.model +2 -2
  23. models/tokenizer/as_tokenizer_16k.vocab +0 -0
  24. models/tokenizer/as_tokenizer_32k.model +2 -2
  25. models/tokenizer/as_tokenizer_32k.vocab +0 -0
  26. models/tokenizer/as_tokenizer_64k.model +2 -2
  27. models/tokenizer/as_tokenizer_64k.vocab +0 -0
  28. models/tokenizer/as_tokenizer_8k.model +2 -2
  29. models/tokenizer/as_tokenizer_8k.vocab +0 -0
  30. models/vocabulary/as_vocabulary.parquet +2 -2
  31. models/vocabulary/as_vocabulary_metadata.json +10 -9
  32. models/word_markov/as_markov_ctx1_word.parquet +2 -2
  33. models/word_markov/as_markov_ctx1_word_metadata.json +2 -2
  34. models/word_markov/as_markov_ctx2_word.parquet +2 -2
  35. models/word_markov/as_markov_ctx2_word_metadata.json +2 -2
  36. models/word_markov/as_markov_ctx3_word.parquet +2 -2
  37. models/word_markov/as_markov_ctx3_word_metadata.json +2 -2
  38. models/word_markov/as_markov_ctx4_word.parquet +2 -2
  39. models/word_markov/as_markov_ctx4_word_metadata.json +2 -2
  40. models/word_ngram/as_2gram_word.parquet +2 -2
  41. models/word_ngram/as_2gram_word_metadata.json +2 -2
  42. models/word_ngram/as_3gram_word.parquet +2 -2
  43. models/word_ngram/as_3gram_word_metadata.json +2 -2
  44. models/word_ngram/as_4gram_word.parquet +2 -2
  45. models/word_ngram/as_4gram_word_metadata.json +2 -2
  46. visualizations/embedding_isotropy.png +0 -0
  47. visualizations/embedding_norms.png +0 -0
  48. visualizations/embedding_similarity.png +2 -2
  49. visualizations/markov_branching.png +0 -0
  50. visualizations/markov_contexts.png +0 -0
README.md CHANGED
@@ -23,14 +23,14 @@ dataset_info:
   metrics:
   - name: best_compression_ratio
     type: compression
-    value: 4.519
+    value: 4.534
   - name: best_isotropy
     type: isotropy
-    value: 0.8484
+    value: 0.8566
   - name: vocabulary_size
     type: vocab
-    value: 69383
-  generated: 2025-12-27
+    value: 0
+  generated: 2026-01-03
 ---
 
 # AS - Wikilangs Models
@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 ### Models & Assets
 
 - Tokenizers (8k, 16k, 32k, 64k)
-- N-gram models (2, 3, 4-gram)
-- Markov chains (context of 1, 2, 3 and 4)
+- N-gram models (2, 3, 4, 5-gram)
+- Markov chains (context of 1, 2, 3, 4 and 5)
 - Subword N-gram and Markov chains
-- Embeddings in various sizes and dimensions
+- Embeddings in various sizes and dimensions (aligned and unaligned)
 - Language Vocabulary
 - Language Statistics
+
 ![Performance Dashboard](visualizations/performance_dashboard.png)
 
 ### Analysis and Evaluation
@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
 - [4. Vocabulary Analysis](#4-vocabulary-analysis)
 - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
-- [6. Summary & Recommendations](#6-summary--recommendations)
+- [6. Morphological Analysis (Experimental)](#6-morphological-analysis)
+- [7. Summary & Recommendations](#7-summary--recommendations)
 - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
 - [Visualizations Index](#visualizations-index)
 
@@ -68,60 +70,57 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
 
 ![Tokenizer Compression](visualizations/tokenizer_compression.png)
 
+![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
+
+![Tokenizer OOV](visualizations/tokenizer_oov.png)
+
+![Total Tokens](visualizations/tokenizer_total_tokens.png)
+
 ### Results
 
 | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
 |------------|-------------|---------------|----------|--------------|
-| **8k** | 3.390x | 3.36 | 0.0640% | 1,495,826 |
-| **16k** | 3.830x | 3.80 | 0.0723% | 1,324,012 |
-| **32k** | 4.212x | 4.18 | 0.0795% | 1,203,836 |
-| **64k** | 4.519x 🏆 | 4.48 | 0.0853% | 1,122,074 |
+| **8k** | 3.446x | 3.45 | 0.0759% | 1,436,355 |
+| **16k** | 3.889x | 3.89 | 0.0856% | 1,272,728 |
+| **32k** | 4.259x | 4.26 | 0.0938% | 1,162,147 |
+| **64k** | 4.534x 🏆 | 4.53 | 0.0999% | 1,091,630 |
 
 ### Tokenization Examples
 
 Below are sample sentences tokenized with each vocabulary size:
 
-**Sample 1:** `ইউৰেনাচ নামে তলৰ প্ৰৱন্ধসমূহ বুজাব পাৰে:
-ইউৰেনাচ: সৌৰজগতৰ সপ্তম গ্ৰহ
-ইউৰেনাচ (...`
+**Sample 1:** `জয়নগৰ মজিলপুৰ ভাৰতৰ পশ্চিমবংগ ৰাজ্যৰ দক্ষিণ চব্বিশ পৰগনা জিলাত অৱস্থিত এখন চহৰ।...`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁ইউ ৰে না ▁নামে ▁তলৰ ▁প্ৰৱ ন্ধ সমূহ ▁বুজাব ... (+28 more)` | 38 |
-| 16k | `▁ইউৰে না ▁নামে ▁তলৰ ▁প্ৰৱ ন্ধ সমূহ ▁বুজাব ▁পাৰে ... (+23 more)` | 33 |
-| 32k | `▁ইউৰে নাচ ▁নামে ▁তলৰ ▁প্ৰৱন্ধ সমূহ ▁বুজাব ▁পাৰে : ▁ইউৰে ... (+18 more)` | 28 |
-| 64k | `▁ইউৰে নাচ ▁নামে ▁তলৰ ▁প্ৰৱন্ধ সমূহ ▁বুজাব ▁পাৰে : ▁ইউৰে ... (+17 more)` | 27 |
+| 8k | `▁জ য়ন গৰ ▁মজ িল পুৰ ▁ভাৰতৰ ▁পশ্চিমব ংগ ▁ৰাজ্যৰ ... (+14 more)` | 24 |
+| 16k | `▁জ য়ন গৰ ▁মজ িল পুৰ ▁ভাৰতৰ ▁পশ্চিমবংগ ▁ৰাজ্যৰ ▁দক্ষিণ ... (+12 more)` | 22 |
+| 32k | `▁জয়ন গৰ ▁মজ িল পুৰ ▁ভাৰতৰ ▁পশ্চিমবংগ ▁ৰাজ্যৰ ▁দক্ষিণ ▁চব্বিশ ... (+8 more)` | 18 |
+| 64k | `▁জয়নগৰ ▁মজ িল পুৰ ▁ভাৰতৰ ▁পশ্চিমবংগ ▁ৰাজ্যৰ ▁দক্ষিণ ▁চব্বিশ ▁পৰগনা ... (+7 more)` | 17 |
 
-**Sample 2:** `শ্ৰেণী:ভাৰতীয় অভিনেত্ৰী
-শ্ৰেণী:জীৱিত ব্যক্তি
-শ্ৰেণী:ভাৰতীয় ৰাজনীতিবিদ
-শ্ৰেণী:ভ...`
+**Sample 2:** `প্ৰদীপ আচাৰ্য্য একবিংশ শতাব্দীৰ অসমৰ এগৰাকী প্ৰসিদ্ধ লেখক, সমালোচক । সংক্ষিপ্ত জ...`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁শ্ৰেণী : ভাৰতীয় ▁অভিনেত্ৰী ▁শ্ৰেণী : জীৱিত ▁ব্যক্তি ▁শ্ৰেণী : ... (+9 more)` | 19 |
-| 16k | `▁শ্ৰেণী : ভাৰতীয় ▁অভিনেত্ৰী ▁শ্ৰেণী : জীৱিত ▁ব্যক্তি ▁শ্ৰেণী : ... (+8 more)` | 18 |
-| 32k | `▁শ্ৰেণী : ভাৰতীয় ▁অভিনেত্ৰী ▁শ্ৰেণী : জীৱিত ▁ব্যক্তি ▁শ্ৰেণী : ... (+8 more)` | 18 |
-| 64k | `▁শ্ৰেণী : ভাৰতীয় ▁অভিনেত্ৰী ▁শ্ৰেণী : জীৱিত ▁ব্যক্তি ▁শ্ৰেণী : ... (+7 more)` | 17 |
+| 8k | `▁প্ৰদীপ ▁আচাৰ্য ্য ▁এক বিংশ ▁শতা ব্দ ীৰ ▁অসমৰ ▁এগৰাকী ... (+10 more)` | 20 |
+| 16k | `▁প্ৰদীপ ▁আচাৰ্য ্য ▁একবিংশ ▁শতাব্দীৰ ▁অসমৰ ▁এগৰাকী ▁প্ৰসিদ্ধ ▁লেখক , ... (+7 more)` | 17 |
+| 32k | `▁প্ৰদীপ ▁আচাৰ্য ্য ▁একবিংশ ▁শতাব্দীৰ ▁অসমৰ ▁এগৰাকী ▁প্ৰসিদ্ধ ▁লেখক , ... (+7 more)` | 17 |
+| 64k | `▁প্ৰদীপ ▁আচাৰ্য ্য ▁একবিংশ ▁শতাব্দীৰ ▁অসমৰ ▁এগৰাকী ▁প্ৰসিদ্ধ ▁লেখক , ... (+7 more)` | 17 |
 
-**Sample 3:** `তাল বুলিলে তলৰ পৃষ্ঠাসমূহ বুজাব পাৰে:
-
-তাল (বাদ্যযন্ত্ৰ)
-তাল (সংগীত)
-তাল (ফল)`
+**Sample 3:** `মাটিকালি অৱস্থান কৰ্মচাৰী সা-সুবিধা তথ্যসূত্ৰ বিদ্যালয়সমূহ`
 
 | Vocab | Tokens | Count |
 |-------|--------|-------|
-| 8k | `▁তাল ▁বুল িলে ▁তলৰ ▁পৃষ্ঠ াসমূহ ▁বুজাব ▁পাৰে : ▁তাল ... (+13 more)` | 23 |
-| 16k | `▁তাল ▁বুলিলে ▁তলৰ ▁পৃষ্ঠ াসমূহ ▁বুজাব ▁পাৰে : ▁তাল ▁( ... (+11 more)` | 21 |
-| 32k | `▁তাল ▁বুলিলে ▁তলৰ ▁পৃষ্ঠ াসমূহ ▁বুজাব ▁পাৰে : ▁তাল ▁( ... (+11 more)` | 21 |
-| 64k | `▁তাল ▁বুলিলে ▁তলৰ ▁পৃষ্ঠাসমূহ ▁বুজাব ▁পাৰে : ▁তাল ▁( বাদ্যযন্ত্ৰ ... (+9 more)` | 19 |
+| 8k | `▁মাটিকালি ▁অৱস্থান ▁কৰ্মচাৰী ▁সা - সু বিধা ▁তথ্যসূত্ৰ ▁বিদ্যালয় সমূহ` | 10 |
+| 16k | `▁মাটিকালি ▁অৱস্থান ▁কৰ্মচাৰী ▁সা - সু বিধা ▁তথ্যসূত্ৰ ▁বিদ্যালয়সমূহ` | 9 |
+| 32k | `▁মাটিকালি ▁অৱস্থান ▁কৰ্মচাৰী ▁সা - সুবিধা ▁তথ্যসূত্ৰ ▁বিদ্যালয়সমূহ` | 8 |
+| 64k | `▁মাটিকালি ▁অৱস্থান ▁কৰ্মচাৰী ▁সা - সুবিধা ▁তথ্যসূত্ৰ ▁বিদ্যালয়সমূহ` | 8 |
 
 ### Key Findings
 
-- **Best Compression:** 64k achieves 4.519x compression
-- **Lowest UNK Rate:** 8k with 0.0640% unknown tokens
+- **Best Compression:** 64k achieves 4.534x compression
+- **Lowest UNK Rate:** 8k with 0.0759% unknown tokens
 - **Trade-off:** Larger vocabularies improve compression but increase model size
 - **Recommendation:** 32k vocabulary provides optimal balance for production use
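The `▁` word-boundary markers in the samples above and the paired `.model`/`.vocab` files suggest these tokenizers are SentencePiece models. A minimal sketch of loading the 32k model shipped in this commit, under that assumption:

```python
# A minimal sketch, assuming the .model files are SentencePiece models.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="models/tokenizer/as_tokenizer_32k.model")

text = "মাটিকালি অৱস্থান কৰ্মচাৰী"           # start of Sample 3 above
pieces = sp.encode(text, out_type=str)       # subword pieces, e.g. ['▁মাটিকালি', ...]
ids = sp.encode(text, out_type=int)          # integer ids for model input

print(pieces, len(pieces))
# If compression is measured as characters per token (the Avg Token Len
# column tracks the ratio closely), this approximates the table above:
print(len(text) / len(ids))
```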
 
@@ -130,57 +129,89 @@ Below are sample sentences tokenized with each vocabulary size:
 
 ![N-gram Perplexity](visualizations/ngram_perplexity.png)
 
+![N-gram Unique](visualizations/ngram_unique.png)
+
 ![N-gram Coverage](visualizations/ngram_coverage.png)
 
 ### Results
 
-| N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
-|--------|------------|---------|----------------|------------------|-------------------|
-| **2-gram** | 2,255 🏆 | 11.14 | 144,083 | 37.8% | 74.6% |
-| **2-gram** | 712 🏆 | 9.48 | 14,933 | 47.2% | 91.5% |
-| **3-gram** | 20,803 | 14.34 | 558,435 | 14.3% | 40.2% |
-| **3-gram** | 6,279 | 12.62 | 135,792 | 17.8% | 52.9% |
-| **4-gram** | 109,133 | 16.74 | 1,749,119 | 7.8% | 23.0% |
-| **4-gram** | 34,064 | 15.06 | 727,754 | 9.4% | 29.5% |
+| N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
+|--------|---------|------------|---------|----------------|------------------|-------------------|
+| **2-gram** | Word | 60,931 | 15.89 | 198,049 | 8.3% | 21.5% |
+| **2-gram** | Subword | 2,317 🏆 | 11.18 | 62,544 | 34.0% | 69.3% |
+| **3-gram** | Word | 105,867 | 16.69 | 226,215 | 4.9% | 14.7% |
+| **3-gram** | Subword | 21,008 | 14.36 | 364,128 | 13.2% | 35.4% |
+| **4-gram** | Word | 237,754 | 17.86 | 355,974 | 2.4% | 7.8% |
+| **4-gram** | Subword | 113,775 | 16.80 | 1,477,005 | 7.8% | 20.9% |
 
 ### Top 5 N-grams by Size
 
-**2-grams:**
-
-| Rank | N-gram | Count |
-|------|--------|-------|
-| 1 | `া ৰ` | 608,846 |
-| 2 | `য ়` | 511,382 |
-| 3 | `্ ৰ` | 409,497 |
-| 4 | `প ্` | 302,441 |
-| 5 | `ি ল` | 296,895 |
-
-**3-grams:**
-
-| Rank | N-gram | Count |
-|------|--------|-------|
-| 1 | `য া` | 192,871 |
-| 2 | `ি ়` | 190,336 |
-| 3 | `ী ়` | 164,267 |
-| 4 | `ত ও` | 138,798 |
-| 5 | `ে ঁ` | 138,171 |
-
-**4-grams:**
-
-| Rank | N-gram | Count |
-|------|--------|-------|
-| 1 | `ত ঁ` | 137,993 |
-| 2 | `ি া` | 118,996 |
-| 3 | `ছ ি ।` | 81,488 |
-| 4 | `ি ি ল` | 71,229 |
-| 5 | `শ ে` | 58,374 |
+**2-grams (Word):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `কৰা হয়` | 27,116 |
+| 2 | `কৰা হৈছিল` | 11,596 |
+| 3 | `হ ল` | 10,746 |
+| 4 | `লাভ কৰে` | 10,053 |
+| 5 | `কৰা হৈছে` | 9,448 |
+
+**3-grams (Word):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `ব্যৱহাৰ কৰা হয়` | 3,039 |
+| 2 | `হ ব পাৰে` | 3,023 |
+| 3 | `বুলি কোৱা হয়` | 2,966 |
+| 4 | `গণ্য কৰা হয়` | 2,121 |
+| 5 | `ডিগ্ৰী লাভ কৰে` | 1,927 |
+
+**4-grams (Word):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `তথ্য সংগ্ৰহ বাহ্যিক সংযোগ` | 1,636 |
+| 2 | `বুলি গণ্য কৰা হয়` | 1,147 |
+| 3 | `স্নাতক ডিগ্ৰী লাভ কৰে` | 819 |
+| 4 | `তথ্য উৎস বাহ্যিক সংযোগ` | 772 |
+| 5 | `হিচাপে গণ্য কৰা হয়` | 749 |
+
+**2-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `ৰ _` | 1,253,155 |
+| 2 | `ত _` | 617,790 |
+| 3 | `_ আ` | 557,646 |
+| 4 | `। _` | 441,423 |
+| 5 | `_ ক` | 431,976 |
+
+**3-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `আ ৰু _` | 234,191 |
+| 2 | `_ ৰু` | 234,020 |
+| 3 | `_ ৰি` | 132,035 |
+| 4 | `_ তে ওঁ` | 130,105 |
+| 5 | `ন_` | 119,581 |
+
+**4-grams (Subword):**
+
+| Rank | N-gram | Count |
+|------|--------|-------|
+| 1 | `_ আ ৰু _` | 233,600 |
+| 2 | `ছি ল । _` | 95,977 |
+| 3 | `_ ক ৰা _` | 84,715 |
+| 4 | `_ তে ওঁ _` | 61,201 |
+| 5 | `_ এ ই _` | 61,142 |
 
 ### Key Findings
 
-- **Best Perplexity:** 2-gram with 712
+- **Best Perplexity:** 2-gram (subword) with 2,317
 - **Entropy Trend:** Decreases with larger n-grams (more predictable)
-- **Coverage:** Top-1000 patterns cover ~29% of corpus
+- **Coverage:** Top-1000 patterns cover ~21% of corpus
 - **Recommendation:** 4-gram or 5-gram for best predictive performance
 
 ---
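The n-gram tables ship as parquet files (see the file list at the top of this commit), so the coverage and entropy columns above can be re-derived directly. A sketch, assuming columns named `ngram` and `count` (inspect the actual schema first); note that the reported Perplexity column is consistent with 2^Entropy:

```python
# A sketch of recomputing the n-gram statistics above from a parquet table.
# The `count` column name is an assumption; check df.columns before relying on it.
import numpy as np
import pandas as pd

df = pd.read_parquet("models/word_ngram/as_2gram_word.parquet")
counts = df["count"].sort_values(ascending=False).to_numpy(dtype=float)
total = counts.sum()

print(f"unique n-grams: {len(counts):,}")                       # table: 198,049
print(f"top-1000 coverage: {counts[:1000].sum() / total:.1%}")  # table: 21.5%

p = counts / total
entropy = -(p * np.log2(p)).sum()          # Shannon entropy of the distribution
print(f"entropy: {entropy:.2f} bits, perplexity: {2 ** entropy:,.0f}")
# For word 2-grams the table reports 15.89 bits and 60,931, i.e. 2^15.89.
```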
@@ -188,55 +219,86 @@ Below are text samples generated from each Markov chain model:
 
 ![Markov Entropy](visualizations/markov_entropy.png)
 
+![Markov Contexts](visualizations/markov_contexts.png)
+
 ![Markov Branching](visualizations/markov_branching.png)
 
 ### Results
 
-| Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
-|---------|-------------|------------|------------------|-----------------|----------------|
-| **1** | 0.5873 | 1.502 | 6.01 | 178,570 | 41.3% |
-| **1** | 1.0627 | 2.089 | 7.96 | 5,555 | 0.0% |
-| **2** | 0.3625 | 1.286 | 2.70 | 1,073,346 | 63.7% |
-| **2** | 0.9491 | 1.931 | 6.70 | 44,224 | 5.1% |
-| **3** | 0.2925 | 1.225 | 2.07 | 2,893,182 | 70.7% |
-| **3** | 0.8311 | 1.779 | 4.51 | 296,427 | 16.9% |
-| **4** | 0.2472 🏆 | 1.187 | 1.70 | 5,984,484 | 75.3% |
-| **4** | 0.6476 🏆 | 1.567 | 2.96 | 1,337,967 | 35.2% |
+| Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
+|---------|---------|-------------|------------|------------------|-----------------|----------------|
+| **1** | Word | 0.8462 | 1.798 | 7.80 | 533,621 | 15.4% |
+| **1** | Subword | 0.8352 | 1.784 | 12.14 | 14,852 | 16.5% |
+| **2** | Word | 0.2679 | 1.204 | 1.70 | 4,157,114 | 73.2% |
+| **2** | Subword | 0.7097 | 1.635 | 5.34 | 180,337 | 29.0% |
+| **3** | Word | 0.0819 | 1.058 | 1.15 | 7,059,669 | 91.8% |
+| **3** | Subword | 0.5596 | 1.474 | 3.48 | 962,394 | 44.0% |
+| **4** | Word | 0.0273 🏆 | 1.019 | 1.04 | 8,117,676 | 97.3% |
+| **4** | Subword | 0.4358 | 1.353 | 2.26 | 3,350,979 | 56.4% |
 
-### Generated Text Samples
+### Generated Text Samples (Word-based)
 
-Below are text samples generated from each Markov chain model:
+Below are text samples generated from each word-based Markov chain model:
 
 **Context Size 1:**
 
-1. `া লয যদ ি পৰ ি া উৰ`
-2. `্ চত ১ট ল ্`
-3. `ি , ইয কৰ ��কসকলক প`
+1. `আৰু আন্তঃৰাষ্ট্ৰীয় ত্ৰিবৰ্ষীয় নতুন আলোচনী বিভাগ আৰু তিনিটা ভাগত কেইটামান ষ্ট্ৰ মেল ফেৰাৰ হেলেনা দ্...`
+2. `কৰা আয়াতসমূহক সাধাৰণতে এই লিংগ জাতীয় উৎসৱ পৰ্ব অনুষ্ঠান হিচাপে ব্যৱহাৰ সংস্কৃতিক কেন্দ্ৰৰ centre s...`
+3. `হয় মিনিক থিয়েম ৩৬ চৌৰঙ্গী উপন্যাসৰ নতুন ব্ৰডগজ ইঞ্জিন বিদ্যুতৰ অনুমতি দিযা নি পকড় কাফীৰ`
 
 **Context Size 2:**
 
-1. `া ি ৱত কচ ২০০৫ ২০১০ চনত`
-2. `য ি ি ছত anthropological survey of hinduism today , march 2009`
-3. `্ থম ি ি া ৰ ব ি`
+1. `কৰা হয় পিছলৈ কানৱ ঋষিৰ আশ্ৰমত বাস কৰে স্থিতি তথা সংৰক্ষণ তথ্যসূত্ৰ বহিঃসংযোগ ইণ্টাৰনেট মুভি ডাটাবেছ...`
+2. `কৰা হৈছিল ৰডবোৰে এটা সংখ্যাৰ অংকৰ যোগফলৰ দ্বাৰা পোৱা গৈছিল উদ্ভিদ ৰসায়ন টিনোস্প ৰা কৰ্ডিফ লিয়াত এল...`
+3. `হ ছিৰাম চিক্‌নেছ সদৃশ লক্ষণ প্ৰদৰ্শনকাৰী মধ্যমীয়া আৰু যুক্তিসংগত বিশ্লেষণৰ জৰিয়তে শ্ৰীকৃষ্ণ কীৰ্...`
 
 **Context Size 3:**
 
-1. `য চনত আশ ৰফ চৰ১৯৬৮ফ ি ল ্ ল ী`
-2. `ি হয আৰ মহ ি া ৰ দ ৌ`
-3. `ী কগ ি হৰ ব`
+1. `ব্যৱহাৰ কৰা হয় এটা জনপ্ৰিয় কিংবদন্তি অনুসৰি বৈষ্ণৱ পণ্ডিতসকলে শৈৱ বুলি প্ৰত্যাখ্যান কৰাৰ পিছত তেওঁ...`
+2. `হ পাৰে ৰচনাৰ তাৰিখ ঐতৰেয় ব্ৰাহ্মণ কিছু নিশ্চিতভাৱে খ্ৰীষ্টপূৰ্ব ১ম সহস্ৰাব্দৰ সম্ভৱতঃ ইয়াৰ প্ৰথম...`
+3. `বুলি কোৱা হয় চনৰ ১১ ফেব্ৰুৱাৰীত বাংলাদেশত আলিৰ মৃত্যু হয় তেওঁৰ মৃত্যুৰ পিছত নিউয়ৰ্ক টাইমছে তেওঁক ...`
 
 **Context Size 4:**
 
-1. `ত ৰণ ি ি মৰ গছৰ পৰ া প ্ ৰ`
-2. `ি ি বছৰত ে ও ঁ`
-3. `ছ ি ইয অন ্ ঘদ ি ন`
+1. `তথ্য সংগ্ৰহ বাহ্যিক সংযোগ আনুষ্ঠানিক mamata banerjee official all india trinamool congress party pro...`
+2. `বুলি গণ্য কৰা হয় একশৰণ নাম ধৰ্মৰ অনুগামীসকলে গুণমালা পুথিখনক অতি পৱিত্ৰ জ্ঞান কৰি গুৰু আসনত প্ৰতিষ্...`
+3. `স্নাতক ডিগ্ৰী লাভ কৰে আৰু বেৰিষ্টাৰ ইজাজ হুছেইন বাটালৱীৰ চেম্বাৰত যোগদান কৰে চনত লাহোৰ উচ্চ ন্যায়াল...`
+
+### Generated Text Samples (Subword-based)
+
+Below are text samples generated from each subword-based Markov chain model:
+
+**Context Size 1:**
+
+1. `_ভাৱেশনত-_পলবাদলকৰা_`
+2. `ৰ_বিয়া_usis_ই_সেই_ধ্ব`
+3. `কবৰু_।_মহিলাৰশ্বি_ম_৩৬`
+
+**Context Size 2:**
+
+1. `ৰ_ডে'_+_(spishaver`
+2. `ত_উপন্যাসৰ_ওপৰ_ওৰে_চক্ৰ`
+3. `_আৰু_পাৰে।_চলন_বোমা-কাশ্মী`
+
+**Context Size 3:**
+
+1. `আৰু_মৰলৰ_মেক্সিমফৰ_এটা-আ`
+2. `_আৰু_লাইকা"_এটা_উত্তৰ_শ্ৰেষ্ঠ`
+3. `_কৰিবলৈ_গঢ়_লৈ_যোৱা_সমসাম`
+
+**Context Size 4:**
+
+1. `_আৰু_সামাজিক_অৱ_ছিংগাপুৰত_২`
+2. `ছিল।_ইভান্সে_প্ৰাণীবিজ্ঞান,_ক্ৰমবৰ্ধ`
+3. `_কৰা_হয়।_ষ্টাফ_ৰিপৰ্টাৰ_২৪_`
 
 ### Key Findings
 
-- **Best Predictability:** Context-4 with 75.3% predictability
+- **Best Predictability:** Context-4 (word) with 97.3% predictability
 - **Branching Factor:** Decreases with context size (more deterministic)
-- **Memory Trade-off:** Larger contexts require more storage (1,337,967 contexts)
+- **Memory Trade-off:** Larger contexts require more storage (3,350,979 contexts)
 - **Recommendation:** Context-3 or Context-4 for text generation
 
 ---
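A sketch of how these tables could drive generation like the samples above. The parquet schema is assumed (columns `context`, `next_token`, `count`), not documented in this commit, so adapt after inspecting the file:

```python
# Illustrative Markov sampler over the context-2 word table; not the
# pipeline's own generator. Schema (`context`, `next_token`, `count`) is a guess.
import random

import pandas as pd

df = pd.read_parquet("models/word_markov/as_markov_ctx2_word.parquet")

def sample_next(context_words):
    # Assumes contexts are stored as space-joined strings.
    rows = df[df["context"] == " ".join(context_words)]
    if rows.empty:
        return None
    # Draw the next token proportionally to its transition count.
    return random.choices(rows["next_token"].tolist(),
                          weights=rows["count"].tolist(), k=1)[0]

tokens = ["কৰা", "হয়"]                  # seed taken from the top word 2-grams
for _ in range(30):
    nxt = sample_next(tokens[-2:])
    if nxt is None:                      # unseen context: stop (or back off)
        break
    tokens.append(nxt)
print(" ".join(tokens))
```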
@@ -252,64 +314,64 @@ Below are text samples generated from each Markov chain model:
 
 | Metric | Value |
 |--------|-------|
-| Vocabulary Size | 69,383 |
-| Total Tokens | 22,699,994 |
-| Mean Frequency | 327.17 |
+| Vocabulary Size | 219,027 |
+| Total Tokens | 8,615,852 |
+| Mean Frequency | 39.34 |
 | Median Frequency | 4 |
-| Frequency Std Dev | 13137.34 |
+| Frequency Std Dev | 763.95 |
 
 ### Most Common Words
 
 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | | 1,751,445 |
-| 2 | | 1,226,468 |
-| 3 | | 988,164 |
-| 4 | | 983,360 |
-| 5 | | 951,103 |
-| 6 | | 878,240 |
-| 7 | | 815,336 |
-| 8 | | 750,050 |
-| 9 | | 610,982 |
-| 10 | | 521,058 |
+| 1 | আৰু | 234,273 |
+| 2 | কৰা | 88,269 |
+| 3 | হয় | 83,006 |
+| 4 | কৰে | 74,637 |
+| 5 | এই | 61,800 |
+| 6 | তেওঁ | 61,727 |
+| 7 | পৰা | 52,844 |
+| 8 | কৰিছিল | 48,735 |
+| 9 | বাবে | 48,165 |
+| 10 | চনত | 47,181 |
 
 ### Least Common Words (from vocabulary)
 
 | Rank | Word | Frequency |
 |------|------|-----------|
-| 1 | zns | 2 |
-| 2 | tuklas | 2 |
-| 3 | bilqees | 2 |
-| 4 | surbhi | 2 |
-| 5 | ullasamga | 2 |
-| 6 | utsahamga | 2 |
-| 7 | manchস | 2 |
-| 8 | swfl | 2 |
-| 9 | manhunt | 2 |
-| 10 | megamodel | 2 |
+| 1 | চেকিজাং | 2 |
+| 2 | জটৱানী | 2 |
+| 3 | জটৱানীৰ | 2 |
+| 4 | ভিটাইৰ | 2 |
+| 5 | সিন্ধীজ | 2 |
+| 6 | দেৱচন্দ্ৰৰ | 2 |
+| 7 | দেৱচন্দ্ৰ | 2 |
+| 8 | প্ৰাণনাথৰ | 2 |
+| 9 | প্ৰাণনাথে | 2 |
+| 10 | গুৰদ্বাৰ | 2 |
 
 ### Zipf's Law Analysis
 
 | Metric | Value |
 |--------|-------|
-| Zipf Coefficient | 1.5303 |
-| R² (Goodness of Fit) | 0.997399 |
+| Zipf Coefficient | 1.0086 |
+| R² (Goodness of Fit) | 0.989742 |
 | Adherence Quality | **excellent** |
 
 ### Coverage Analysis
 
 | Top N Words | Coverage |
 |-------------|----------|
-| Top 100 | 78.6% |
-| Top 1,000 | 93.9% |
-| Top 5,000 | 97.8% |
-| Top 10,000 | 98.6% |
+| Top 100 | 25.4% |
+| Top 1,000 | 50.8% |
+| Top 5,000 | 71.8% |
+| Top 10,000 | 79.6% |
 
 ### Key Findings
 
-- **Zipf Compliance:** R²=0.9974 indicates excellent adherence to Zipf's law
-- **High Frequency Dominance:** Top 100 words cover 78.6% of corpus
-- **Long Tail:** 59,383 words needed for remaining 1.4% coverage
+- **Zipf Compliance:** R²=0.9897 indicates excellent adherence to Zipf's law
+- **High Frequency Dominance:** Top 100 words cover 25.4% of corpus
+- **Long Tail:** 209,027 words needed for remaining 20.4% coverage
 
 ---
 ## 5. Word Embeddings Evaluation
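The Zipf coefficient and R² above can be re-derived from the vocabulary parquet: the coefficient is the negative slope of a least-squares fit on the log-log rank-frequency curve. A sketch, assuming a `frequency` column (the actual column name may differ):

```python
# A sketch of the Zipf fit; the `frequency` column name is an assumption.
import numpy as np
import pandas as pd

vocab = pd.read_parquet("models/vocabulary/as_vocabulary.parquet")
freq = np.sort(vocab["frequency"].to_numpy(dtype=float))[::-1]
rank = np.arange(1, len(freq) + 1)

# Zipf's law: f(r) ∝ r^(-s), so log f = -s·log r + c.
slope, intercept = np.polyfit(np.log(rank), np.log(freq), 1)
resid = np.log(freq) - (slope * np.log(rank) + intercept)
r2 = 1 - (resid ** 2).sum() / ((np.log(freq) - np.log(freq).mean()) ** 2).sum()
print(f"s ≈ {-slope:.4f}, R² ≈ {r2:.6f}")   # report: 1.0086 and 0.989742
```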
@@ -322,24 +384,110 @@ Below are text samples generated from each Markov chain model:
 
 ![t-SNE Sentences](visualizations/tsne_sentences.png)
 
-### Model Comparison
+### 5.1 Cross-Lingual Alignment
+
+> *Note: Multilingual alignment visualization not available for this language.*
+
+### 5.2 Model Comparison
 
-| Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
-|-------|------------|-----------|----------|----------|----------|
-| **mono_32d** | 113,429 | 32 | 3.011 | 0.609 | 0.8484 🏆 |
-| **mono_64d** | 113,429 | 64 | 3.498 | 0.644 | 0.8482 |
-| **mono_128d** | 113,429 | 128 | 4.089 | 0.704 | 0.8254 |
-| **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |
+| Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
+|-------|-----------|----------|------------------|---------------|----------------|
+| **mono_32d** | 32 | 0.8476 | 0.3643 | N/A | N/A |
+| **mono_64d** | 64 | 0.8566 🏆 | 0.2729 | N/A | N/A |
+| **mono_128d** | 128 | 0.8399 | 0.2134 | N/A | N/A |
 
 ### Key Findings
 
-- **Best Isotropy:** mono_32d with 0.8484 (more uniform distribution)
-- **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
-- **Vocabulary Coverage:** All models cover 113,429 words
-- **Recommendation:** 100d for balanced semantic capture and efficiency
+- **Best Isotropy:** mono_64d with 0.8566 (more uniform distribution)
+- **Semantic Density:** Average pairwise similarity of 0.2836. Lower values indicate better semantic separation.
+- **Alignment Quality:** No aligned models evaluated in this run.
+- **Recommendation:** 128d aligned for best cross-lingual performance
+
+---
+## 6. Morphological Analysis (Experimental)
+
+> ⚠️ **Warning:** This language shows low morphological productivity. The statistical signals used for this analysis may be noisy or less reliable than for morphologically rich languages.
+
+This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
+
+### 6.1 Productivity & Complexity
+
+| Metric | Value | Interpretation | Recommendation |
+|--------|-------|----------------|----------------|
+| Productivity Index | **0.000** | Low morphological productivity | ⚠️ Likely unreliable |
+| Idiomaticity Gap | **-1.000** | Low formulaic content | - |
+
+### 6.2 Affix Inventory (Productive Units)
+
+These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts.
+
+#### Productive Prefixes
+| Prefix | Examples |
+|--------|----------|
+
+#### Productive Suffixes
+| Suffix | Examples |
+|--------|----------|
+| `-ৰ` | নিৰ্ধাৰনৰ, স্পঞ্জৰ, চেমেলেৰ |
+| `-াৰ` | শীতলাৰ, গুজ্জাৰ, ভেঁ‌ড়াৰ |
+
+### 6.3 Bound Stems (Lexical Roots)
+
+Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid.
+
+| Stem | Cohesion | Substitutability | Examples |
+|------|----------|------------------|----------|
+| `ther` | 3.32x | 64 contexts | other, theri, there |
+| `ight` | 3.29x | 55 contexts | bight, tight, might |
+| `ress` | 3.32x | 41 contexts | dress, press, presse |
+| `indi` | 3.32x | 39 contexts | hindi, indie, pindi |
+| `vers` | 3.14x | 46 contexts | evers, overs, verse |
+| `nter` | 3.21x | 38 contexts | inter, enter, hunter |
+| `olog` | 3.28x | 34 contexts | oology, biology, zoology |
+| `ment` | 3.17x | 38 contexts | cement, mentor, mentha |
+| `ctio` | 3.34x | 31 contexts | action, diction, section |
+| `atio` | 3.18x | 37 contexts | fatio, ratio, nation |
+| `stor` | 3.19x | 33 contexts | storm, jstor, story |
+| `iver` | 3.17x | 26 contexts | liver, river, giver |
+
+### 6.4 Affix Compatibility (Co-occurrence)
+
+This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
+
+*No significant affix co-occurrences detected.*
+
+### 6.5 Recursive Morpheme Segmentation
+
+Using **Recursive Hierarchical Substitutability**, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., `prefix-prefix-root-suffix`).
+
+| Word | Suggested Split | Confidence | Stem |
+|------|-----------------|------------|------|
+| লিথুৱানিয়াৰ | **`লিথুৱানিয়-াৰ`** | 1.5 | `লিথুৱানিয়` |
+| পুনৰ্ব্যৱহাৰ | **`পুনৰ্ব্যৱহ-াৰ`** | 1.5 | `পুনৰ্ব্যৱহ` |
+| প্লাছিয়াৰ | **`প্লাছিয়-াৰ`** | 1.5 | `প্লাছিয়` |
+| ক্ষেত্ৰাধিকাৰ | **`ক্ষেত্ৰাধিক-াৰ`** | 1.5 | `ক্ষেত্ৰাধিক` |
+| বিন্ধোৱাৰ | **`বিন্ধোৱ-াৰ`** | 1.5 | `বিন্ধোৱ` |
+| চুলিক্‌ফাৰ | **`চুলিক্‌ফ-াৰ`** | 1.5 | `চুলিক্‌ফ` |
+| ইউনিলিভাৰ | **`ইউনিলিভ-াৰ`** | 1.5 | `ইউনিলিভ` |
+| চিৰস্তাদাৰ | **`চিৰস্তাদ-াৰ`** | 1.5 | `চিৰস্তাদ` |
+| লাখটকীয়াৰ | **`লাখটকীয়-াৰ`** | 1.5 | `লাখটকীয়` |
+| জাতিসত্তাৰ | **`জাতিসত্ত-াৰ`** | 1.5 | `জাতিসত্ত` |
+| দৰিদ্ৰতাৰ | **`দৰিদ্ৰত-াৰ`** | 1.5 | `দৰিদ্ৰত` |
+| ছিলভেষ্টাৰ | **`ছিলভেষ্ট-াৰ`** | 1.5 | `ছিলভেষ্ট` |
+| চিলভেষ্টাৰ | **`চিলভেষ্ট-াৰ`** | 1.5 | `চিলভেষ্ট` |
+| বাগ্মীতাৰ | **`বাগ্মীত-াৰ`** | 1.5 | `বাগ্মীত` |
+| নিয়ন্ত্ৰণহীনতাৰ | **`নিয়ন্ত্ৰণহীনত-াৰ`** | 1.5 | `নিয়ন্ত্ৰণহীনত` |
+
+### 6.6 Linguistic Interpretation
+
+> **Automated Insight:**
+> The language AS appears to be more isolating or has a highly fixed vocabulary. Word-level models perform nearly as well as subword models, indicating fewer productive morphological processes.
 
 ---
-## 6. Summary & Recommendations
+## 7. Summary & Recommendations
 
 ![Performance Dashboard](visualizations/performance_dashboard.png)
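The report does not state which definition its Isotropy column uses. One common estimate (Mu & Viswanath, 2018) is the ratio of the smallest to the largest partition-function value over the top principal directions; a self-contained sketch on random vectors, since the `.bin` embedding format is not documented in this commit:

```python
# Illustrative isotropy estimate; not necessarily the pipeline's exact metric.
import numpy as np

def isotropy(vectors: np.ndarray, k: int = 10) -> float:
    centered = vectors - vectors.mean(axis=0)
    _, _, pcs = np.linalg.svd(centered, full_matrices=False)  # rows = directions
    z = np.exp(centered @ pcs[:k].T).sum(axis=0)   # Z(c) = sum_w exp(<v_w, c>)
    return float(z.min() / z.max())                # 1.0 = perfectly isotropic

rng = np.random.default_rng(0)
print(isotropy(rng.normal(size=(5000, 64))))       # close to 1 for Gaussian data
```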
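The affix test described in section 6.2 above ("stripping it leaves a valid stem that appears in other contexts") can also be sketched directly against the vocabulary. Candidate suffix lengths and the threshold below are illustrative, not the pipeline's actual parameters:

```python
# Toy version of the suffix-productivity test from section 6.2.
# Operates on Unicode code points; real morphology would need
# grapheme-aware handling for Assamese combining marks.
from collections import Counter

def productive_suffixes(words, min_stems=25):
    hits = Counter()
    for w in words:
        for k in (1, 2, 3):                    # candidate suffix lengths
            if len(w) > k + 2 and w[:-k] in words:
                hits[w[-k:]] += 1              # stem survives without the suffix
    return [(s, n) for s, n in hits.most_common() if n >= min_stems]

vocab = {"নিৰ্ধাৰন", "নিৰ্ধাৰনৰ", "স্পঞ্জ", "স্পঞ্জৰ"}   # toy vocabulary
print(productive_suffixes(vocab, min_stems=1))         # -> [('ৰ', 2)]
```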
 
@@ -347,11 +495,12 @@ Below are text samples generated from each Markov chain model:
 
 | Component | Recommended | Rationale |
 |-----------|-------------|-----------|
-| Tokenizer | **32k BPE** | Best compression (4.52x) with low UNK rate |
-| N-gram | **5-gram** | Lowest perplexity (712) |
-| Markov | **Context-4** | Highest predictability (75.3%) |
+| Tokenizer | **64k BPE** | Best compression (4.53x) |
+| N-gram | **2-gram** | Lowest perplexity (2,317) |
+| Markov | **Context-4** | Highest predictability (97.3%) |
 | Embeddings | **100d** | Balanced semantic capture and isotropy |
 
+
 ---
 ## Appendix: Metrics Glossary & Interpretation Guide
 
@@ -541,7 +690,8 @@ If you use these models in your research, please cite:
   author = {Kamali, Omar},
   title = {Wikilangs: Open NLP Models for Wikipedia Languages},
   year = {2025},
-  publisher = {HuggingFace},
+  doi = {10.5281/zenodo.18073153},
+  publisher = {Zenodo},
   url = {https://huggingface.co/wikilangs}
   institution = {Omneity Labs}
 }
@@ -557,7 +707,8 @@ MIT License - Free for academic and commercial use.
 - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
 - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
 - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
+- 🤝 Sponsor: [Featherless AI](https://featherless.ai)
 ---
 *Generated by Wikilangs Models Pipeline*
 
-*Report Date: 2025-12-27 18:45:07*
+*Report Date: 2026-01-03 05:56:00*
models/embeddings/monolingual/as_128d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7294ee96f3b0c61ba7c46af8e348b551645a26d74537adc45861ba9171026d24
-size 1143382361
+oid sha256:0ac87d06ad44c5f86054b2229ad119a6da7bf6318454bddbdc55c7518d8c4fed
+size 1134845578

models/embeddings/monolingual/as_128d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 128,
   "version": "monolingual",
   "training_params": {
-    "dim": 128,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 128
   },
-  "vocab_size": 113429
+  "vocab_size": 105317
 }

models/embeddings/monolingual/as_32d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8c0125149319853f09b65df93575e1bf1665cf0cf98a0a4e52df3248b94a776d
-size 288268889
+oid sha256:4d92a65e15cacfccf9d549fee6b6a0ce808975a3c742f0ba231c9f7d4e6b2be4
+size 285962122

models/embeddings/monolingual/as_32d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 32,
   "version": "monolingual",
   "training_params": {
-    "dim": 32,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 32
   },
-  "vocab_size": 113429
+  "vocab_size": 105317
 }

models/embeddings/monolingual/as_64d.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1b271a32a3a16aa6c5905bd4ef669972e7fa9d2fcad872330bcbfe8211a6cbf6
-size 573306713
+oid sha256:105129e3ce95f6d925453fd6d253ba75069fd36adaf12abf53d323c8a32ee134
+size 568923274

models/embeddings/monolingual/as_64d_metadata.json CHANGED
@@ -3,11 +3,13 @@
   "dimension": 64,
   "version": "monolingual",
   "training_params": {
-    "dim": 64,
+    "algorithm": "skipgram",
     "min_count": 5,
     "window": 5,
     "negative": 5,
-    "epochs": 5
+    "epochs": 5,
+    "encoding_method": "rope",
+    "dim": 64
   },
-  "vocab_size": 113429
+  "vocab_size": 105317
 }
models/subword_markov/as_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:304b81823be6914953bac63df3fb3c2d3d5e01a1f9b4b8f4a8878717e903a2e3
-size 319610
+oid sha256:9c80c8a3d77091dfe2da29d02e66a930b780f77dce7362509695133b8da1ea80
+size 1182447

models/subword_markov/as_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "subword",
   "language": "as",
-  "unique_contexts": 5555,
-  "total_transitions": 65140856
+  "unique_contexts": 14852,
+  "total_transitions": 39716105
 }

models/subword_markov/as_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:072a0a77d5f9b149b82195090a2f7ec01b569c51640f1c61e11c74d314891b20
-size 2244616
+oid sha256:885badaceaa6a76e70eec40f59892815b915e219ae00559881567d3b989d3ef2
+size 7340754

models/subword_markov/as_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "subword",
   "language": "as",
-  "unique_contexts": 44224,
-  "total_transitions": 65120064
+  "unique_contexts": 180337,
+  "total_transitions": 39696289
 }

models/subword_markov/as_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d7c22336529a0ded174b0241383329d3154a9858bd06639bd78f20c1e786cf00
-size 10618800
+oid sha256:fa1ba33509e5c790fdf467b449165c1f4accd50771e716b68c45863725e8653c
+size 28776160

models/subword_markov/as_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "subword",
   "language": "as",
-  "unique_contexts": 296427,
-  "total_transitions": 65099272
+  "unique_contexts": 962394,
+  "total_transitions": 39676473
 }

models/subword_markov/as_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fba617be292f432879d2b6f88a252644b452cbe07fc436c4d4b98898b0dd2f33
-size 34480970
+oid sha256:f3c3fed04aa29c942baa4ca8eff754698cf97cb5a32b6ba2a80b814a80269e95
+size 82473540

models/subword_markov/as_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "subword",
   "language": "as",
-  "unique_contexts": 1337967,
-  "total_transitions": 65078480
+  "unique_contexts": 3350979,
+  "total_transitions": 39656657
 }

models/subword_ngram/as_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:13f2a80a67d004f512051da3fb0f08c9a906b943f4d8c6e170756448689a3095
-size 206387
+oid sha256:305bff8b4892b68a20e77689ddf5bbd66be7385e136f504ab07ed4ec09661b44
+size 935875

models/subword_ngram/as_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "subword",
   "language": "as",
-  "unique_ngrams": 14933,
-  "total_ngrams": 65140856
+  "unique_ngrams": 62544,
+  "total_ngrams": 39716105
 }

models/subword_ngram/as_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:566ad2045c2c52180f92dc29b097fad25c84b062a65319808f9431fad7f538cb
-size 1714165
+oid sha256:67472e7d9683c3e8802771c68e0afe1ec9b79cf06fa64d62f682208c1565fa8f
+size 5569851

models/subword_ngram/as_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "subword",
   "language": "as",
-  "unique_ngrams": 135792,
-  "total_ngrams": 65120064
+  "unique_ngrams": 364128,
+  "total_ngrams": 39696289
 }

models/subword_ngram/as_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:645ec172809e1ba4475428602980c6d539af311c472d34ab195bc24111837333
-size 9556409
+oid sha256:5c4153dc76e2de363499d7501db843a99bc9685c1f78f1712d49767c2d3edb5e
+size 23557608

models/subword_ngram/as_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "subword",
   "language": "as",
-  "unique_ngrams": 727754,
-  "total_ngrams": 65099272
+  "unique_ngrams": 1477005,
+  "total_ngrams": 39676473
 }
models/tokenizer/as_tokenizer_16k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:92c4a0d6d3435730960b337cbe00d6561bb754c82a66840b10b3effdf114a9c0
-size 627322
+oid sha256:75647425d4994600927e38e762eb4e12bd76a7d759b998a6b27d3ef72fb92134
+size 629826

models/tokenizer/as_tokenizer_16k.vocab CHANGED
The diff for this file is too large to render. See raw diff.

models/tokenizer/as_tokenizer_32k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cfd4440400274db9e0e200c32c56c9b2f8b5d1e4a80cd88b665224da4685840d
-size 1042352
+oid sha256:d675824377e9be300eedb755a0221aa145be09b86d891593a1c2428484b63864
+size 1041081

models/tokenizer/as_tokenizer_32k.vocab CHANGED
The diff for this file is too large to render. See raw diff.

models/tokenizer/as_tokenizer_64k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f1a070ac3d05309f6fc98145f97c0714ee5da0ecebf4b99231acd5001077e67f
-size 1892860
+oid sha256:b3a5aae89065887f7e21c46af47bbb046edc46cdf2b663af9a2c1ae34034851f
+size 1870438

models/tokenizer/as_tokenizer_64k.vocab CHANGED
The diff for this file is too large to render. See raw diff.

models/tokenizer/as_tokenizer_8k.model CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a3ac9ae44b7976a9a581b8c12e38d04180fcbc38165d660d6b96b537c7954700
-size 427062
+oid sha256:dd471a819995e56dd4aefb6525b4fb7cb54d7cb5e996fa7e85acdd00b8586c74
+size 429491

models/tokenizer/as_tokenizer_8k.vocab CHANGED
The diff for this file is too large to render. See raw diff.
models/vocabulary/as_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:afd7c512db097eb79bf2751927085a424d4d4be0b149ff56f3b19b0846f20e25
-size 1236046
+oid sha256:55714b8ff99e197cb70a99feadc4f2eb8b928c63ec225a2ec5cce487f9b36d0d
+size 3925398

models/vocabulary/as_vocabulary_metadata.json CHANGED
@@ -1,16 +1,17 @@
 {
   "language": "as",
-  "vocabulary_size": 69383,
+  "vocabulary_size": 219027,
+  "variant": "full",
   "statistics": {
-    "type_token_ratio": 0.007789323101723796,
+    "type_token_ratio": 0.059760376668044714,
     "coverage": {
-      "top_100": 0.7824795758310844,
-      "top_1000": 0.9345050339631166,
-      "top_5000": 0.9733725115168742,
-      "top_10000": 0.9816343824731659
+      "top_100": 0.2446919080599598,
+      "top_1000": 0.48975445539765006,
+      "top_5000": 0.6928702663989404,
+      "top_10000": 0.7681790167555828
     },
-    "hapax_count": 108278,
-    "hapax_ratio": 0.6094640917252521,
-    "total_documents": 20792
+    "hapax_count": 314664,
+    "hapax_ratio": 0.5895995997684053,
+    "total_documents": 19816
   }
 }
models/word_markov/as_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9d6086bc53947c43f34edcda8e0a034c9779c3d6c44de35c31cc2ba88f8aa774
-size 9625358
+oid sha256:6dee794f4598493fa43d8881826c834158cd1924e7bf01f8b09883966c2c4551
+size 51785769

models/word_markov/as_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 1,
   "variant": "word",
   "language": "as",
-  "unique_contexts": 178570,
-  "total_transitions": 41886369
+  "unique_contexts": 533621,
+  "total_transitions": 8910700
 }

models/word_markov/as_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1542cf7905c52aefaff0db643b5107e37e6e10fccc17a863ba95e7ee172740fd
-size 27241048
+oid sha256:3e48a73c55b3245526277a461699b91064b48048043751385238d1834df5c660
+size 152841623

models/word_markov/as_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 2,
   "variant": "word",
   "language": "as",
-  "unique_contexts": 1073346,
-  "total_transitions": 41865578
+  "unique_contexts": 4157114,
+  "total_transitions": 8890884
 }

models/word_markov/as_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:09d22e9d5414b02524a41246b9410c78ab3bf52bcade25ce445168e3445ee4d8
-size 63382094
+oid sha256:922936a8c2bed9b31b58c6a39ab1ac62bbb87ba920bce8481b34b94b18e54d6c
+size 221316941

models/word_markov/as_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 3,
   "variant": "word",
   "language": "as",
-  "unique_contexts": 2893182,
-  "total_transitions": 41844788
+  "unique_contexts": 7059669,
+  "total_transitions": 8871068
 }

models/word_markov/as_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:05e3849b2bcb7ebe1f9ae63c37ed5ea9973336b012b7a2c60020150cd53b6a7d
-size 124869442
+oid sha256:ec28d09ad62e65c263cebddcd4aa013d595cde5d3a84b87806a2678142794293
+size 264956691

models/word_markov/as_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "context_size": 4,
   "variant": "word",
   "language": "as",
-  "unique_contexts": 5984484,
-  "total_transitions": 41824000
+  "unique_contexts": 8117676,
+  "total_transitions": 8851252
 }
models/word_ngram/as_2gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:46ca72b889b191be977809c46f9e5c601830c691249878aa061223811c6c065a
-size 1981156
+oid sha256:fedc9b5fb35b14c97554711e0c15f091fd551fc13bd0f58c65e3736a27873618
+size 4559205

models/word_ngram/as_2gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 2,
   "variant": "word",
   "language": "as",
-  "unique_ngrams": 144083,
-  "total_ngrams": 41886369
+  "unique_ngrams": 198049,
+  "total_ngrams": 8910700
 }

models/word_ngram/as_3gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9b16c54cf2957c0d5c96809a5a7c39da4fe7c1dd6c13b3345c3073e0cc4a9da1
-size 8106622
+oid sha256:4d4d06c699b30c3af5e1b2c5d4e9d86cf2bcddb46b24cacb7d1cdac931cdb9c0
+size 6432611

models/word_ngram/as_3gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 3,
   "variant": "word",
   "language": "as",
-  "unique_ngrams": 558435,
-  "total_ngrams": 41865578
+  "unique_ngrams": 226215,
+  "total_ngrams": 8890884
 }

models/word_ngram/as_4gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c567cb651bd6273d82124f43b6ac4426b9ece8a16302648d995617ab25da8d18
-size 26163353
+oid sha256:4cdc71a6fa753ca07969e931702f409591c86bbbe430018acfacdd2b66258619
+size 10901034

models/word_ngram/as_4gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
   "n": 4,
   "variant": "word",
   "language": "as",
-  "unique_ngrams": 1749119,
-  "total_ngrams": 41844788
+  "unique_ngrams": 355974,
+  "total_ngrams": 8871068
 }
visualizations/embedding_isotropy.png CHANGED
visualizations/embedding_norms.png CHANGED
visualizations/embedding_similarity.png CHANGED

Git LFS Details (old pointer, then new)

  • SHA256: 895fe35c6e5be9fd47380ce45e17bcbb0d42cb11dba0d6bc1ad4f595e4bd2c29 · Pointer size: 131 Bytes · Size of remote file: 138 kB
  • SHA256: 2e27f01231cb262e90cb3752a8ecb4485a0e8479f69d16eccc8bead471abf29a · Pointer size: 131 Bytes · Size of remote file: 139 kB

visualizations/markov_branching.png CHANGED
visualizations/markov_contexts.png CHANGED