omarkamali committed
Commit 297bfad · verified · 1 Parent(s): 5a9ff35

Upload all models and assets for awa (20251001)

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full change set.
Files changed (50)
  1. README.md +256 -134
  2. models/embeddings/monolingual/awa_128d.bin +2 -2
  3. models/embeddings/monolingual/awa_128d_metadata.json +5 -3
  4. models/embeddings/monolingual/awa_32d.bin +2 -2
  5. models/embeddings/monolingual/awa_32d_metadata.json +5 -3
  6. models/embeddings/monolingual/awa_64d.bin +2 -2
  7. models/embeddings/monolingual/awa_64d_metadata.json +5 -3
  8. models/subword_markov/awa_markov_ctx1_subword.parquet +2 -2
  9. models/subword_markov/awa_markov_ctx1_subword_metadata.json +2 -2
  10. models/subword_markov/awa_markov_ctx2_subword.parquet +2 -2
  11. models/subword_markov/awa_markov_ctx2_subword_metadata.json +2 -2
  12. models/subword_markov/awa_markov_ctx3_subword.parquet +2 -2
  13. models/subword_markov/awa_markov_ctx3_subword_metadata.json +2 -2
  14. models/subword_markov/awa_markov_ctx4_subword.parquet +2 -2
  15. models/subword_markov/awa_markov_ctx4_subword_metadata.json +2 -2
  16. models/subword_ngram/awa_2gram_subword.parquet +2 -2
  17. models/subword_ngram/awa_2gram_subword_metadata.json +2 -2
  18. models/subword_ngram/awa_3gram_subword.parquet +2 -2
  19. models/subword_ngram/awa_3gram_subword_metadata.json +2 -2
  20. models/subword_ngram/awa_4gram_subword.parquet +2 -2
  21. models/subword_ngram/awa_4gram_subword_metadata.json +2 -2
  22. models/tokenizer/awa_tokenizer_16k.model +2 -2
  23. models/tokenizer/awa_tokenizer_16k.vocab +0 -0
  24. models/tokenizer/awa_tokenizer_32k.model +2 -2
  25. models/tokenizer/awa_tokenizer_32k.vocab +0 -0
  26. models/tokenizer/awa_tokenizer_8k.model +2 -2
  27. models/tokenizer/awa_tokenizer_8k.vocab +0 -0
  28. models/vocabulary/awa_vocabulary.parquet +2 -2
  29. models/vocabulary/awa_vocabulary_metadata.json +10 -9
  30. models/word_markov/awa_markov_ctx1_word.parquet +2 -2
  31. models/word_markov/awa_markov_ctx1_word_metadata.json +2 -2
  32. models/word_markov/awa_markov_ctx2_word.parquet +2 -2
  33. models/word_markov/awa_markov_ctx2_word_metadata.json +2 -2
  34. models/word_markov/awa_markov_ctx3_word.parquet +2 -2
  35. models/word_markov/awa_markov_ctx3_word_metadata.json +2 -2
  36. models/word_markov/awa_markov_ctx4_word.parquet +2 -2
  37. models/word_markov/awa_markov_ctx4_word_metadata.json +2 -2
  38. models/word_ngram/awa_2gram_word.parquet +2 -2
  39. models/word_ngram/awa_2gram_word_metadata.json +2 -2
  40. models/word_ngram/awa_3gram_word.parquet +2 -2
  41. models/word_ngram/awa_3gram_word_metadata.json +2 -2
  42. models/word_ngram/awa_4gram_word.parquet +2 -2
  43. models/word_ngram/awa_4gram_word_metadata.json +2 -2
  44. visualizations/embedding_isotropy.png +0 -0
  45. visualizations/embedding_norms.png +0 -0
  46. visualizations/embedding_similarity.png +2 -2
  47. visualizations/markov_branching.png +0 -0
  48. visualizations/markov_contexts.png +0 -0
  49. visualizations/markov_entropy.png +0 -0
  50. visualizations/model_sizes.png +0 -0
README.md CHANGED
@@ -23,14 +23,14 @@ dataset_info:
  metrics:
  - name: best_compression_ratio
    type: compression
- value: 4.121
  - name: best_isotropy
    type: isotropy
- value: 0.7531
  - name: vocabulary_size
    type: vocab
- value: 6221
- generated: 2025-12-27
  ---

  # AWA - Wikilangs Models
@@ -44,12 +44,13 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
  ### Models & Assets

  - Tokenizers (8k, 16k, 32k, 64k)
- - N-gram models (2, 3, 4-gram)
- - Markov chains (context of 1, 2, 3 and 4)
  - Subword N-gram and Markov chains
- - Embeddings in various sizes and dimensions
  - Language Vocabulary
  - Language Statistics
  ![Performance Dashboard](visualizations/performance_dashboard.png)

  ### Analysis and Evaluation
@@ -59,7 +60,8 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and
  - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
  - [4. Vocabulary Analysis](#4-vocabulary-analysis)
  - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
- - [6. Summary & Recommendations](#6-summary--recommendations)
  - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
  - [Visualizations Index](#visualizations-index)
@@ -68,51 +70,53 @@ We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and

  ![Tokenizer Compression](visualizations/tokenizer_compression.png)

  ### Results

  | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
  |------------|-------------|---------------|----------|--------------|
- | **8k** | 3.216x | 3.14 | 0.1062% | 117,684 |
- | **16k** | 3.545x | 3.46 | 0.1171% | 106,780 |
- | **32k** | 3.841x | 3.75 | 0.1268% | 98,555 |
- | **64k** | 4.121x 🏆 | 4.02 | 0.1361% | 91,849 |

  ### Tokenization Examples

  Below are sample sentences tokenized with each vocabulary size:

- **Sample 1:** `हिंदीनेस्ट डॉट कॉम हिन्दी भाषा कय एक्ठु पत्रिका होय ।`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 8k | `▁हिंदी ने स्ट ▁डॉ ▁कॉ ▁हिन्दी ▁भाषा ▁कय ... (+4 more)` | 14 |
- | 16k | `▁हिंदी नेस्ट ▁डॉ ▁कॉम ▁हिन्दी ▁भाषा ▁कय ▁एक्ठु ▁पत्रिका ... (+2 more)` | 12 |
- | 32k | `▁हिंदी नेस्ट ▁डॉ ▁कॉम ▁हिन्दी ▁भाषा ▁कय ▁एक्ठु ▁पत्रिका ... (+2 more)` | 12 |
- | 64k | `▁हिंदीनेस्ट ▁डॉट ▁कॉम ▁हिन्दी ▁भाषा ▁कय ▁एक्ठु ▁पत्रिका ▁होय ▁।` | 10 |

- **Sample 2:** `इशांत शर्मा भारतीय क्रिकेट खिलाड़ी होयँ। इशांत शर्मा कय जनम सितम्बर १९८८ को द...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 8k | `▁इ ांत ▁शर्मा ▁भारतीय ▁क्रिकेट ▁खिलाड़ी ▁होयँ ▁इ ... (+15 more)` | 25 |
- | 16k | `▁इश ांत ▁शर्मा ▁भारतीय ▁क्रिकेट ▁खिलाड़ी ▁होयँ ▁इश ांत ... (+12 more)` | 22 |
- | 32k | `▁इशांत ▁शर्मा ▁भारतीय ▁क्रिकेट ▁खिलाड़ी ▁होयँ ▁इशांत ▁शर्मा ▁कय ... (+10 more)` | 20 |
- | 64k | `▁इशांत ▁शर्मा ▁भारतीय ▁क्रिकेट ▁खिलाड़ी ▁होयँ । ▁इशांत ▁शर्मा ▁कय ... (+10 more)` | 20 |

- **Sample 3:** `बीरबल साहनी (नवंबर, 1891 - 10 अप्रैल, 1949) पुरावनस्पति वैज्ञानिक रहे।`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
- | 8k | `▁बी बल ▁साह नी ▁( वंबर , ... (+22 more)` | 32 |
- | 16k | `▁बीर बल ▁साहनी ▁( वंबर , 1 8 ... (+20 more)` | 30 |
- | 32k | `▁बीर बल ▁साहनी ▁( नवंबर , 1 8 9 ... (+18 more)` | 28 |
- | 64k | `▁बीरबल ▁साहनी ▁( नवंबर , ▁ 1 8 9 1 ... (+16 more)` | 26 |

  ### Key Findings

- - **Best Compression:** 64k achieves 4.121x compression
- - **Lowest UNK Rate:** 8k with 0.1062% unknown tokens
  - **Trade-off:** Larger vocabularies improve compression but increase model size
  - **Recommendation:** 32k vocabulary provides optimal balance for production use

@@ -121,57 +125,89 @@ Below are sample sentences tokenized with each vocabulary size:

  ![N-gram Perplexity](visualizations/ngram_perplexity.png)

  ![N-gram Coverage](visualizations/ngram_coverage.png)

  ### Results

- | N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
- |--------|------------|---------|----------------|------------------|-------------------|
- | **2-gram** | 1,385 🏆 | 10.44 | 12,281 | 39.7% | 79.7% |
- | **2-gram** | 675 🏆 | 9.40 | 4,587 | 48.8% | 91.0% |
- | **3-gram** | 8,332 | 13.02 | 41,199 | 17.5% | 46.9% |
- | **3-gram** | 5,112 | 12.32 | 31,251 | 20.4% | 54.9% |
- | **4-gram** | 26,093 | 14.67 | 106,862 | 12.1% | 31.7% |
- | **4-gram** | 20,452 | 14.32 | 106,978 | 12.4% | 33.6% |

  ### Top 5 N-grams by Size

- **2-grams:**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `् र` | 15,730 |
- | 2 | `क ा` | 10,109 |
- | 3 | `र ा` | 9,823 |
- | 4 | `क े` | 9,329 |
- | 5 | `ा र` | 9,129 |

- **3-grams:**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `् े` | 5,246 |
- | 2 | `श र` | 4,638 |
- | 3 | `म ं` | 4,298 |
- | 4 | `र ण` | 4,146 |
- | 5 | `े ी` | 4,113 |

- **4-grams:**

  | Rank | N-gram | Count |
  |------|--------|-------|
- | 1 | `श े` | 4,151 |
- | 2 | `् ण` | 4,138 |
- | 3 | `र ी` | 4,112 |
- | 4 | `े :` | 3,973 |
- | 5 | `ह ।` | 2,934 |

  ### Key Findings

- - **Best Perplexity:** 2-gram with 675
  - **Entropy Trend:** Decreases with larger n-grams (more predictable)
- - **Coverage:** Top-1000 patterns cover ~34% of corpus
  - **Recommendation:** 4-gram or 5-gram for best predictive performance

  ---
@@ -179,55 +215,86 @@

  ![Markov Entropy](visualizations/markov_entropy.png)

  ![Markov Branching](visualizations/markov_branching.png)

  ### Results

- | Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
- |---------|-------------|------------|------------------|-----------------|----------------|
- | **1** | 0.5922 | 1.508 | 5.79 | 13,918 | 40.8% |
- | **1** | 1.5439 | 2.916 | 14.22 | 676 | 0.0% |
- | **2** | 0.4119 | 1.330 | 2.61 | 80,553 | 58.8% |
- | **2** | 1.1858 | 2.275 | 6.79 | 9,605 | 0.0% |
- | **3** | 0.3135 | 1.243 | 1.85 | 210,504 | 68.7% |
- | **3** | 0.7675 | 1.702 | 3.28 | 65,205 | 23.2% |
- | **4** | 0.2012 🏆 | 1.150 | 1.41 | 389,838 | 79.9% |
- | **4** | 0.4559 🏆 | 1.372 | 2.00 | 214,041 | 54.4% |

- ### Generated Text Samples

- Below are text samples generated from each Markov chain model:

  **Context Size 1:**

- 1. `ा । स े ं : उत ् व ा ब म ा कहत ि स`
- 2. `् ग ृ ष ् र ा गय ा न ौ क े ं द ्`
- 3. `े ट ा ल और 17 फ ा थ / localbodies . htm श ् ट`

  **Context Size 2:**

- 1. `् र ि श ी व ् द मह ा द े श क े अरब प`
- 2. `क ा स ् त ा थ खतम करय अउर सब ध ा न कय च ौ`
- 3. `र ा जन ै त ू र ् णय क ि ह ् मणन कय सबस े`

  **Context Size 3:**

- 1. `् र े ण ी : उत ् तर प ् रद े श कय नगर प ं`
- 2. `श ् र े ण ी : र ा ष ् ट ् र े ज ़ ी`
- 3. `म े ं प ं ज ा ब , स ि खय व ा ल ि आर ि`

  **Context Size 4:**

- 1. `श ् र े ण ी : नगर प ं च ा यत श ् र े ण ी`
- 2. `् र े ण ी : र ा जन ी त ि क दल ह ो य । स`
- 3. `र े ण ी : उत ् तर प ् रद े श स ं दर ् भ http`

  ### Key Findings

- - **Best Predictability:** Context-4 with 79.9% predictability
  - **Branching Factor:** Decreases with context size (more deterministic)
- - **Memory Trade-off:** Larger contexts require more storage (214,041 contexts)
  - **Recommendation:** Context-3 or Context-4 for text generation

  ---
@@ -243,64 +310,64 @@ Below are text samples generated from each Markov chain model:

  | Metric | Value |
  |--------|-------|
- | Vocabulary Size | 6,221 |
- | Total Tokens | 624,335 |
- | Mean Frequency | 100.36 |
- | Median Frequency | 4 |
- | Frequency Std Dev | 1247.50 |

  ### Most Common Words

  | Rank | Word | Frequency |
  |------|------|-----------|
- | 1 | | 42,910 |
- | 2 | | 40,728 |
- | 3 | | 27,096 |
- | 4 | | 26,749 |
- | 5 | | 24,046 |
- | 6 | | 23,543 |
- | 7 | | 23,175 |
- | 8 | | 22,215 |
- | 9 | | 20,708 |
- | 10 | | 20,269 |

  ### Least Common Words (from vocabulary)

  | Rank | Word | Frequency |
  |------|------|-----------|
- | 1 | dish | 2 |
- | 2 | uzbekistan | 2 |
- | 3 | travel | 2 |
- | 4 | lagman | 2 |
- | 5 | उबलत | 2 |
- | 6 | रगमन | 2 |
- | 7 | एमई | 2 |
- | 8 | उचक | 2 |
- | 9 | शबरक | 2 |
- | 10 | एसएन | 2 |

  ### Zipf's Law Analysis

  | Metric | Value |
  |--------|-------|
- | Zipf Coefficient | 1.4504 |
- | R² (Goodness of Fit) | 0.991083 |
  | Adherence Quality | **excellent** |

  ### Coverage Analysis

  | Top N Words | Coverage |
  |-------------|----------|
- | Top 100 | 81.6% |
- | Top 1,000 | 95.9% |
- | Top 5,000 | 99.6% |
- | Top 10,000 | 0.0% |

  ### Key Findings

- - **Zipf Compliance:** R²=0.9911 indicates excellent adherence to Zipf's law
- - **High Frequency Dominance:** Top 100 words cover 81.6% of corpus
- - **Long Tail:** -3,779 words needed for remaining 100.0% coverage

  ---
  ## 5. Word Embeddings Evaluation
@@ -313,24 +380,76 @@

  ![t-SNE Sentences](visualizations/tsne_sentences.png)

- ### Model Comparison

- | Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
- |-------|------------|-----------|----------|----------|----------|
- | **mono_32d** | 6,567 | 32 | 3.456 | 0.807 | 0.7531 🏆 |
- | **mono_64d** | 6,567 | 64 | 3.529 | 0.788 | 0.3641 |
- | **mono_128d** | 6,567 | 128 | 3.540 | 0.789 | 0.0857 |
- | **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |

  ### Key Findings

- - **Best Isotropy:** mono_32d with 0.7531 (more uniform distribution)
- - **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
- - **Vocabulary Coverage:** All models cover 6,567 words
- - **Recommendation:** 100d for balanced semantic capture and efficiency

  ---
- ## 6. Summary & Recommendations
  ![Performance Dashboard](visualizations/performance_dashboard.png)

@@ -338,11 +457,12 @@

  | Component | Recommended | Rationale |
  |-----------|-------------|-----------|
- | Tokenizer | **32k BPE** | Best compression (4.12x) with low UNK rate |
- | N-gram | **5-gram** | Lowest perplexity (675) |
- | Markov | **Context-4** | Highest predictability (79.9%) |
  | Embeddings | **100d** | Balanced semantic capture and isotropy |

  ---
  ## Appendix: Metrics Glossary & Interpretation Guide

@@ -532,7 +652,8 @@ If you use these models in your research, please cite:
  author = {Kamali, Omar},
  title = {Wikilangs: Open NLP Models for Wikipedia Languages},
  year = {2025},
- publisher = {HuggingFace},
  url = {https://huggingface.co/wikilangs}
  institution = {Omneity Labs}
  }
@@ -548,7 +669,8 @@ MIT License - Free for academic and commercial use.
  - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
  - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
  - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
  ---
  *Generated by Wikilangs Models Pipeline*

- *Report Date: 2025-12-27 20:46:37*

  metrics:
  - name: best_compression_ratio
    type: compression
+ value: 3.897
  - name: best_isotropy
    type: isotropy
+ value: 0.7129
  - name: vocabulary_size
    type: vocab
+ value: 0
+ generated: 2026-01-03
  ---

  # AWA - Wikilangs Models
 
  ### Models & Assets

  - Tokenizers (8k, 16k, 32k, 64k)
+ - N-gram models (2, 3, 4, 5-gram)
+ - Markov chains (context of 1, 2, 3, 4 and 5)
  - Subword N-gram and Markov chains
+ - Embeddings in various sizes and dimensions (aligned and unaligned)
  - Language Vocabulary
  - Language Statistics
+
  ![Performance Dashboard](visualizations/performance_dashboard.png)

  ### Analysis and Evaluation
 
  - [3. Markov Chain Evaluation](#3-markov-chain-evaluation)
  - [4. Vocabulary Analysis](#4-vocabulary-analysis)
  - [5. Word Embeddings Evaluation](#5-word-embeddings-evaluation)
+ - [6. Morphological Analysis (Experimental)](#6-morphological-analysis)
+ - [7. Summary & Recommendations](#7-summary--recommendations)
  - [Metrics Glossary](#appendix-metrics-glossary--interpretation-guide)
  - [Visualizations Index](#visualizations-index)
 
 
  ![Tokenizer Compression](visualizations/tokenizer_compression.png)

+ ![Tokenizer Fertility](visualizations/tokenizer_fertility.png)
+
+ ![Tokenizer OOV](visualizations/tokenizer_oov.png)
+
+ ![Total Tokens](visualizations/tokenizer_total_tokens.png)
+
  ### Results

  | Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
  |------------|-------------|---------------|----------|--------------|
+ | **8k** | 3.327x | 3.33 | 0.1233% | 131,409 |
+ | **16k** | 3.614x | 3.62 | 0.1339% | 120,950 |
+ | **32k** | 3.897x 🏆 | 3.91 | 0.1444% | 112,178 |

  ### Tokenization Examples

  Below are sample sentences tokenized with each vocabulary size:

+ **Sample 1:** `मानवशास्त्र या नृविज्ञान (:en:Anthropology) मनईन, वनकय जेनेटिक्स, संस्कृति अउर स...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
+ | 8k | `▁मानव शास्त्र ▁या ▁न विज्ञान ▁(: en : an ... (+25 more)` | 35 |
+ | 16k | `▁मानवशास्त्र ▁या ▁नृ विज्ञान ▁(: en : an throp ology ... (+23 more)` | 33 |
+ | 32k | `▁मानवशास्त्र ▁या ▁नृविज्ञान ▁(: en : anthropology ) ▁मनईन , ... (+16 more)` | 26 |

+ **Sample 2:** `सिरसा, भारत देश के हरियाणा राज्य कय एक्ठु जिला अव नगर परिषद होय । कय नगर परिषद म...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
+ | 8k | `▁सिरसा , ▁भारत ▁देश ▁के ▁हरियाणा ▁राज्य ▁कय ▁एक्ठु ▁जिला ... (+11 more)` | 21 |
+ | 16k | `▁सिरसा , ▁भारत ▁देश ▁के ▁हरियाणा ▁राज्य ▁कय ▁एक्ठु ▁जिला ... (+11 more)` | 21 |
+ | 32k | `▁सिरसा , ▁भारत ▁देश ▁के ▁हरियाणा ▁राज्य ▁कय ▁एक्ठु ▁जिला ... (+11 more)` | 21 |

+ **Sample 3:** `अनूपशहर, भारत देश के उत्तर प्रदेश प्रान्त के बुलंदशहर जिला कय एक्ठु नगर पालिका प...`

  | Vocab | Tokens | Count |
  |-------|--------|-------|
+ | 8k | `▁अन ूप हर , ▁भारत ▁देश ▁के ▁उत्तर ▁प्रदेश ... (+21 more)` | 31 |
+ | 16k | `▁अनूपश हर , ▁भारत ▁देश ▁के ▁उत्तर ▁प्रदेश ▁प्रान्त ▁के ... (+19 more)` | 29 |
+ | 32k | `▁अनूपशहर , ▁भारत ▁देश ▁के ▁उत्तर ▁प्रदेश ▁प्रान्त ▁के ▁बुलंदशहर ... (+18 more)` | 28 |

  ### Key Findings

+ - **Best Compression:** 32k achieves 3.897x compression
+ - **Lowest UNK Rate:** 8k with 0.1233% unknown tokens
  - **Trade-off:** Larger vocabularies improve compression but increase model size
  - **Recommendation:** 32k vocabulary provides optimal balance for production use
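
For reference, the released tokenizers can be exercised directly. The sketch below assumes the `.model` files are standard SentencePiece models (the `▁` pieces in the samples above suggest as much); verify against your checkout before relying on it.

```python
# Minimal sketch, assuming the released .model files are SentencePiece models.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="models/tokenizer/awa_tokenizer_32k.model")

text = "भारत देश के उत्तर प्रदेश प्रान्त"
pieces = sp.encode(text, out_type=str)  # subword pieces, e.g. ["▁भारत", ...]
ids = sp.encode(text, out_type=int)     # the corresponding integer ids

print(pieces)
print(f"{len(ids)} tokens, {len(text) / len(ids):.2f} chars per token")
```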
 
 
  ![N-gram Perplexity](visualizations/ngram_perplexity.png)

+ ![N-gram Unique](visualizations/ngram_unique.png)
+
  ![N-gram Coverage](visualizations/ngram_coverage.png)

  ### Results

+ | N-gram | Variant | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
+ |--------|---------|------------|---------|----------------|------------------|-------------------|
+ | **2-gram** | Word | 2,211 | 11.11 | 5,376 | 29.5% | 59.6% |
+ | **2-gram** | Subword | 1,584 | 10.63 | 11,871 | 40.0% | 73.5% |
+ | **3-gram** | Word | 1,558 🏆 | 10.61 | 4,851 | 36.7% | 66.9% |
+ | **3-gram** | Subword | 9,994 | 13.29 | 42,588 | 17.4% | 41.6% |
+ | **4-gram** | Word | 3,905 | 11.93 | 12,076 | 28.3% | 51.2% |
+ | **4-gram** | Subword | 29,097 | 14.83 | 105,286 | 12.1% | 28.9% |

  ### Top 5 N-grams by Size

+ **2-grams (Word):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `प्रदेश कय` | 1,241 |
+ | 2 | `कय एक्ठु` | 1,217 |
+ | 3 | `नगर पंचायत` | 932 |
+ | 4 | `शहरी निकाय` | 837 |
+ | 5 | `उत्तर प्रदेश` | 773 |
+
+ **3-grams (Word):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `कय एक्ठु नगर` | 700 |
+ | 2 | `भारत देश के` | 696 |
+ | 3 | `जिला कय एक्ठु` | 680 |
+ | 4 | `कय शहरी निकाय` | 667 |
+ | 5 | `के उत्तर प्रदेश` | 586 |
+
+ **4-grams (Word):**

  | Rank | N-gram | Count |
  |------|--------|-------|
+ | 1 | `जिला कय एक्ठु नगर` | 661 |
+ | 2 | `के उत्तर प्रदेश प्रान्त` | 582 |
+ | 3 | `निकाय प्रदेश कय नगर` | 581 |
+ | 4 | `शहरी निकाय प्रदेश कय` | 581 |
+ | 5 | `कय शहरी निकाय प्रदेश` | 581 |

+ **2-grams (Subword):**

  | Rank | N-gram | Count |
  |------|--------|-------|
+ | 1 | `र _` | 18,112 |
+ | 2 | `य _` | 17,719 |
+ | 3 | `_ क` | 16,272 |
+ | 4 | `न _` | 12,852 |
+ | 5 | `। _` | 11,559 |

+ **3-grams (Subword):**

  | Rank | N-gram | Count |
  |------|--------|-------|
+ | 1 | `क _` | 10,797 |
+ | 2 | `_ य` | 10,549 |
+ | 3 | `_ के _` | 6,719 |
+ | 4 | `_ से _` | 3,956 |
+ | 5 | `_ में _` | 3,886 |
+
+ **4-grams (Subword):**
+
+ | Rank | N-gram | Count |
+ |------|--------|-------|
+ | 1 | `_ क य _` | 10,506 |
+ | 2 | `_ प्र दे श` | 2,241 |
+ | 3 | `प्र दे श _` | 2,190 |
+ | 4 | `_ है । _` | 2,071 |
+ | 5 | `भा र त _` | 2,019 |

  ### Key Findings

+ - **Best Perplexity:** 3-gram (word) with 1,558
  - **Entropy Trend:** Decreases with larger n-grams (more predictable)
+ - **Coverage:** Top-1000 patterns cover ~29% of corpus
  - **Recommendation:** 4-gram or 5-gram for best predictive performance
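
For reference, a sketch of how the released n-gram tables can be queried to reproduce the top-5 lists and coverage figures above. The parquet column names (`ngram`, `count`) are assumptions for illustration; inspect the schema before relying on them.

```python
# Minimal sketch: top-5 word 3-grams and top-1000 coverage.
# Column names ("ngram", "count") are assumed, not documented.
import pandas as pd

df = pd.read_parquet("models/word_ngram/awa_3gram_word.parquet")
df = df.sort_values("count", ascending=False)

print(df.head(5))  # top-5 3-grams, as in the table above

total = df["count"].sum()
print(f"top-1000 coverage: {df['count'].head(1000).sum() / total:.1%}")
```
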
  ---
 
  ![Markov Entropy](visualizations/markov_entropy.png)

+ ![Markov Contexts](visualizations/markov_contexts.png)
+
  ![Markov Branching](visualizations/markov_branching.png)

  ### Results

+ | Context | Variant | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
+ |---------|---------|-------------|------------|------------------|-----------------|----------------|
+ | **1** | Word | 0.7301 | 1.659 | 4.17 | 37,356 | 27.0% |
+ | **1** | Subword | 1.0434 | 2.061 | 10.70 | 3,632 | 0.0% |
+ | **2** | Word | 0.1929 | 1.143 | 1.36 | 155,195 | 80.7% |
+ | **2** | Subword | 0.5412 | 1.455 | 3.46 | 38,845 | 45.9% |
+ | **3** | Word | 0.0474 | 1.033 | 1.07 | 209,159 | 95.3% |
+ | **3** | Subword | 0.4514 | 1.367 | 2.30 | 134,413 | 54.9% |
+ | **4** | Word | 0.0142 🏆 | 1.010 | 1.02 | 221,759 | 98.6% |
+ | **4** | Subword | 0.2387 | 1.180 | 1.51 | 308,778 | 76.1% |
+
+ ### Generated Text Samples (Word-based)
+
+ Below are text samples generated from each word-based Markov chain model:
+
+ **Context Size 1:**
+
+ 1. `कय एक्ठु राजनीतिक पार्टी पाकिस्तान कय एक्ठु जिला चुराचांदपुर जिला कय एक्ठो नगरपालिका सप्तरी जिला कय`
+ 2. `के नाम से गुवाहाटी सिलचर एन यू कि सिक्‍ख गुरू योगी आदित्यनाथ होइ सन्दर्भ कय शहरी`
+ 3. `में मौसम रहत है शरीर का अड्डा bho इहो देखा जाय रहा साथ जोश और निम्नतम`
+
+ **Context Size 2:**
+
+ 1. `प्रदेश कय मंडल होय एहमा 05 जिला आवत हँय फतेहाबाद जींद हिसार महेंद्रगढ़ गुड़गांव रोहतक और हिसार`
+ 2. `कय एक्ठु नगर पंचायत के पार्षद चुनाव में ६ राष्ट्रीय दल चुनाव लड़ रहे हैं भारतीय जनता`
+ 3. `उत्तर प्रदेश प्रान्त के बिजनौर जिला कय मुख्यालय अहै एह समाज मा खुदै आंतरिक सुधार कइके आपन`
+
+ **Context Size 3:**
+
+ 1. `कय एक्ठु नगर पंचायत होय संदर्भ प्रदेश कय शहरी निकाय प्रदेश कय नगर पंचायत पंचायत कय शहरी निकाय`
+ 2. `भारत देश के उत्तर प्रदेश प्रान्त के आजमगढ़ जिला कय एक्ठु नगर पालिका होय संदर्भ कय नगर पालिका`
+ 3. `जिला कय एक्ठु नगर पालिका परिषद पालिका परिषद कय शहरी निकाय प्रदेश कय नगर पंचायत नगर`
+
+ **Context Size 4:**
+
+ 1. `जिला कय एक्ठु नगर पालिका परिषद होय संदर्भ प्रदेश कय शहरी निकाय प्रदेश कय नगर पालिका परिषद खीरी`
+ 2. `के उत्तर प्रदेश प्रान्त के आजमगढ़ जिला कय एक्ठु नगर पालिका परिषद होय संदर्भ प्रदेश कय शहरी निकाय प्र...`
+ 3. `शहरी निकाय प्रदेश कय नगर पालिका परिषद पालिका परिषद कय शहरी निकाय`

+ ### Generated Text Samples (Subword-based)

+ Below are text samples generated from each subword-based Markov chain model:

  **Context Size 1:**

+ 1. `_की_संग_कब्ज़ा.दिनसंलतः_इ`
+ 2. `र_५_के_कार_एथिरत_न,`
+ 3. `क_सिद्ध_भा_ए_आत्रेयत_प्रदे`

  **Context Size 2:**

+ 1. `र_पता_है।_विश्व_प्रथम_ई_`
+ 2. `य_छात्र_में_बने_मा_स्पेस),_`
+ 3. `_कय_राष्ट्रीय_है_सें._पुर_मा`

  **Context Size 3:**

+ 1. `कय_हाइड्रोकार्बन_कलायत_राज्य_`
+ 2. `_कय_लेकिन_वाली_एक_लोचनो_`
+ 3. `_के_प्रयाग,भारत_कय_क्रिस_मॉ`

  **Context Size 4:**

+ 1. `_कय_शहरी_निकाय_प्रदेश_कय_`
+ 2. `_प्रदेश_प्रान्त_के_झांसी_ललितपुर_`
+ 3. `प्रदेश_कय_लिए_रखे_गये_और_`

  ### Key Findings

+ - **Best Predictability:** Context-4 (word) with 98.6% predictability
  - **Branching Factor:** Decreases with context size (more deterministic)
+ - **Memory Trade-off:** Larger contexts require more storage (308,778 contexts)
  - **Recommendation:** Context-3 or Context-4 for text generation
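
To make the sampling procedure concrete, here is a minimal generator over the released transition tables. The column names (`context`, `next_token`, `count`) are illustrative assumptions; check the parquet schema first.

```python
# Minimal sketch: sample text from the word-level context-2 Markov chain.
# Assumed columns: "context", "next_token", "count".
import random
import pandas as pd

df = pd.read_parquet("models/word_markov/awa_markov_ctx2_word.parquet")
table = {ctx: (g["next_token"].tolist(), g["count"].tolist())
         for ctx, g in df.groupby("context")}

def generate(seed: str, steps: int = 15) -> str:
    out = seed.split()
    for _ in range(steps):
        ctx = " ".join(out[-2:])           # context size 2
        if ctx not in table:
            break                          # dead end: unseen context
        words, counts = table[ctx]
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("उत्तर प्रदेश"))
```
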
  ---
 
  | Metric | Value |
  |--------|-------|
+ | Vocabulary Size | 15,883 |
+ | Total Tokens | 248,637 |
+ | Mean Frequency | 15.65 |
+ | Median Frequency | 3 |
+ | Frequency Std Dev | 134.68 |

  ### Most Common Words

  | Rank | Word | Frequency |
  |------|------|-----------|
+ | 1 | कय | 10,552 |
+ | 2 | के | 6,740 |
+ | 3 | में | 4,036 |
+ | 4 | से | 4,015 |
+ | 5 | है | 3,785 |
+ | 6 | मा | 3,358 |
+ | 7 | होय | 2,646 |
+ | 8 | का | 2,496 |
+ | 9 | प्रदेश | 2,219 |
+ | 10 | भारत | 1,992 |

  ### Least Common Words (from vocabulary)

  | Rank | Word | Frequency |
  |------|------|-----------|
+ | 1 | दृश्यता | 2 |
+ | 2 | दुर्घटनाग्रस्त | 2 |
+ | 3 | परिवारन | 2 |
+ | 4 | फेडरल | 2 |
+ | 5 | टेरिटरी | 2 |
+ | 6 | कुआला | 2 |
+ | 7 | लुंपुर | 2 |
+ | 8 | सेतापाक | 2 |
+ | 9 | पेटलिंग | 2 |
+ | 10 | ब्रुनेई | 2 |

  ### Zipf's Law Analysis

  | Metric | Value |
  |--------|-------|
+ | Zipf Coefficient | 1.0489 |
+ | R² (Goodness of Fit) | 0.990725 |
  | Adherence Quality | **excellent** |

  ### Coverage Analysis

  | Top N Words | Coverage |
  |-------------|----------|
+ | Top 100 | 38.4% |
+ | Top 1,000 | 66.7% |
+ | Top 5,000 | 87.7% |
+ | Top 10,000 | 95.3% |

  ### Key Findings

+ - **Zipf Compliance:** R²=0.9907 indicates excellent adherence to Zipf's law
+ - **High Frequency Dominance:** Top 100 words cover 38.4% of corpus
+ - **Long Tail:** 5,883 words needed for remaining 4.7% coverage
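
The Zipf fit and coverage figures above can be recomputed from the vocabulary parquet. A minimal sketch follows; the column names (`word`, `frequency`) are assumptions, so check the file schema.

```python
# Minimal sketch: Zipf coefficient via log-log least squares, plus top-N coverage.
import numpy as np
import pandas as pd

df = pd.read_parquet("models/vocabulary/awa_vocabulary.parquet")
freq = np.sort(df["frequency"].to_numpy())[::-1]   # descending frequencies
rank = np.arange(1, len(freq) + 1)

# Zipf's law: freq ~ rank^(-s), i.e. log f = -s log r + c
s, c = np.polyfit(np.log(rank), np.log(freq), 1)
resid = np.log(freq) - (s * np.log(rank) + c)
r2 = 1 - (resid ** 2).sum() / ((np.log(freq) - np.log(freq).mean()) ** 2).sum()
print(f"zipf coefficient = {-s:.4f}, R^2 = {r2:.6f}")

total = freq.sum()
for n in (100, 1000, 5000, 10000):
    print(f"top {n:>6}: {freq[:n].sum() / total:.1%}")
```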
 
  ---
  ## 5. Word Embeddings Evaluation
 
  ![t-SNE Sentences](visualizations/tsne_sentences.png)

+ ### 5.1 Cross-Lingual Alignment
+
+ > *Note: Multilingual alignment visualization not available for this language.*
+
+ ### 5.2 Model Comparison
+
+ | Model | Dimension | Isotropy | Semantic Density | Alignment R@1 | Alignment R@10 |
+ |-------|-----------|----------|------------------|---------------|----------------|
+ | **mono_32d** | 32 | 0.7129 🏆 | 0.3782 | N/A | N/A |
+ | **mono_64d** | 64 | 0.3226 | 0.3543 | N/A | N/A |
+ | **mono_128d** | 128 | 0.0790 | 0.3513 | N/A | N/A |

  ### Key Findings

+ - **Best Isotropy:** mono_32d with 0.7129 (more uniform distribution)
+ - **Semantic Density:** Average pairwise similarity of 0.3612. Lower values indicate better semantic separation.
+ - **Alignment Quality:** No aligned models evaluated in this run.
+ - **Recommendation:** 128d aligned for best cross-lingual performance
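
The isotropy and semantic-density figures can be approximated as sketched below, using the partition-function isotropy estimator of Mu & Viswanath (2018) and mean pairwise cosine similarity. These may differ from the pipeline's exact definitions.

```python
# Illustrative estimators only; the report does not specify its exact formulas.
import numpy as np

def isotropy(W: np.ndarray) -> float:
    """Partition-function ratio; 1.0 = perfectly isotropic embedding space."""
    W = W - W.mean(axis=0)
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    Z = np.exp(W @ Vt.T).sum(axis=0)   # partition function per singular direction
    return float(Z.min() / Z.max())

def semantic_density(W: np.ndarray, n_pairs: int = 100_000, seed: int = 0) -> float:
    """Mean cosine similarity over random word pairs (lower = better separation)."""
    rng = np.random.default_rng(seed)
    U = W / np.linalg.norm(W, axis=1, keepdims=True)
    i, j = rng.integers(0, len(U), size=(2, n_pairs))
    return float((U[i] * U[j]).sum(axis=1).mean())

W = np.random.default_rng(0).normal(size=(5860, 32))  # stand-in for real vectors
print(isotropy(W), semantic_density(W))
```
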
  ---
+ ## 6. Morphological Analysis (Experimental)
+
+ > ⚠️ **Warning:** This language shows low morphological productivity. The statistical signals used for this analysis may be noisy or less reliable than for morphologically rich languages.
+
+ This section presents an automated morphological analysis derived from the statistical divergence between word-level and subword-level models. By analyzing where subword predictability spikes and where word-level coverage fails, we can infer linguistic structures without supervised data.
+
+ ### 6.1 Productivity & Complexity
+
+ | Metric | Value | Interpretation | Recommendation |
+ |--------|-------|----------------|----------------|
+ | Productivity Index | **0.000** | Low morphological productivity | ⚠️ Likely unreliable |
+ | Idiomaticity Gap | **-1.000** | Low formulaic content | - |
+
+ ### 6.2 Affix Inventory (Productive Units)
+
+ These are the most productive prefixes and suffixes identified by sampling the vocabulary for global substitutability patterns. A unit is considered an affix if stripping it leaves a valid stem that appears in other contexts.
+
+ *No productive affixes detected.*
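
To illustrate the substitutability criterion described in 6.2, here is a toy version of suffix detection: a candidate suffix counts as productive only if stripping it leaves a stem that is itself attested in the vocabulary. This is a simplified stand-in, not the pipeline's actual implementation.

```python
# Toy substitutability check; not the pipeline's actual algorithm.
from collections import Counter

def productive_suffixes(vocab: set[str], max_len: int = 4, min_stems: int = 2) -> Counter:
    hits = Counter()
    for word in vocab:
        for k in range(1, min(max_len, len(word) - 2) + 1):
            stem, suffix = word[:-k], word[-k:]
            if stem in vocab:              # stripped stem must be a valid word
                hits[suffix] += 1
    return Counter({s: n for s, n in hits.items() if n >= min_stems})

vocab = {"walk", "walks", "walked", "walking", "talk", "talks", "talked"}
print(productive_suffixes(vocab).most_common())  # [('s', 2), ('ed', 2)]
```
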
+ ### 6.3 Bound Stems (Lexical Roots)
+
+ Bound stems are high-frequency subword units that are semantically cohesive but rarely appear as standalone words. These often correspond to the 'core' of a word that requires inflection or derivation to be valid.
+
+ *No significant bound stems detected.*
+
+ ### 6.4 Affix Compatibility (Co-occurrence)
+
+ This table shows which prefixes and suffixes most frequently co-occur on the same stems, revealing the 'stacking' rules of the language's morphology.
+
+ *No significant affix co-occurrences detected.*
+
+ ### 6.5 Recursive Morpheme Segmentation
+
+ Using **Recursive Hierarchical Substitutability**, we decompose complex words into their constituent morphemes. This approach handles nested affixes (e.g., `prefix-prefix-root-suffix`).
+
+ *Insufficient data for recursive segmentation.*
+
+ ### 6.6 Linguistic Interpretation
+
+ > **Automated Insight:**
+ > The language AWA appears to be more isolating or has a highly fixed vocabulary. Word-level models perform nearly as well as subword models, indicating fewer productive morphological processes.
+
+ ---
+ ## 7. Summary & Recommendations
 
  ![Performance Dashboard](visualizations/performance_dashboard.png)

  | Component | Recommended | Rationale |
  |-----------|-------------|-----------|
+ | Tokenizer | **32k BPE** | Best compression (3.90x) |
+ | N-gram | **3-gram** | Lowest perplexity (1,558) |
+ | Markov | **Context-4** | Highest predictability (98.6%) |
  | Embeddings | **100d** | Balanced semantic capture and isotropy |
+
  ---
  ## Appendix: Metrics Glossary & Interpretation Guide
 
 
  author = {Kamali, Omar},
  title = {Wikilangs: Open NLP Models for Wikipedia Languages},
  year = {2025},
+ doi = {10.5281/zenodo.18073153},
+ publisher = {Zenodo},
  url = {https://huggingface.co/wikilangs}
  institution = {Omneity Labs}
  }

  - 🤗 Models: [huggingface.co/wikilangs](https://huggingface.co/wikilangs)
  - 📊 Data: [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
  - 👤 Author: [Omar Kamali](https://huggingface.co/omarkamali)
+ - 🤝 Sponsor: [Featherless AI](https://featherless.ai)
  ---
  *Generated by Wikilangs Models Pipeline*

+ *Report Date: 2026-01-03 05:27:10*
models/embeddings/monolingual/awa_128d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2134a3d39707a0e61b22d795679798fa20b422fdbb70c34dcdbafce442c55188
- size 1030890467
+ oid sha256:d8d0ee1e312f5e3791814211acf276b13b65aa885e4e5a95c71cac289f30625f
+ size 1030147731
models/embeddings/monolingual/awa_128d_metadata.json CHANGED
@@ -3,11 +3,13 @@
  "dimension": 128,
  "version": "monolingual",
  "training_params": {
- "dim": 128,
+ "algorithm": "skipgram",
  "min_count": 5,
  "window": 5,
  "negative": 5,
- "epochs": 5
+ "epochs": 5,
+ "encoding_method": "rope",
+ "dim": 128
  },
- "vocab_size": 6567
+ "vocab_size": 5860
  }
models/embeddings/monolingual/awa_32d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:03b687fb388d3d1b50f93a70036dcb1f05b12ed00918b4e3cdaedd42b14ce46f
- size 257847011
+ oid sha256:d216891b957241c5e5afaef2d177855d2cb1d66c5343414a5fc2f4199c807971
+ size 257647251
models/embeddings/monolingual/awa_32d_metadata.json CHANGED
@@ -3,11 +3,13 @@
  "dimension": 32,
  "version": "monolingual",
  "training_params": {
- "dim": 32,
+ "algorithm": "skipgram",
  "min_count": 5,
  "window": 5,
  "negative": 5,
- "epochs": 5
+ "epochs": 5,
+ "encoding_method": "rope",
+ "dim": 32
  },
- "vocab_size": 6567
+ "vocab_size": 5860
  }
models/embeddings/monolingual/awa_64d.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3760ee4214954055389216fc2c5c800072a4c4bd87d656c91aee6cf8a781448c
- size 515528163
+ oid sha256:5151750f712ffaa4369a74c3e8f893fe2f8fadabed61f754c75e3773fba76476
+ size 515147411
models/embeddings/monolingual/awa_64d_metadata.json CHANGED
@@ -3,11 +3,13 @@
  "dimension": 64,
  "version": "monolingual",
  "training_params": {
- "dim": 64,
+ "algorithm": "skipgram",
  "min_count": 5,
  "window": 5,
  "negative": 5,
- "epochs": 5
+ "epochs": 5,
+ "encoding_method": "rope",
+ "dim": 64
  },
- "vocab_size": 6567
+ "vocab_size": 5860
  }
models/subword_markov/awa_markov_ctx1_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:71e90ab50ff08b4a4acc795b7fe73231a08bab2bd54daf823bbd44866515fa80
- size 75933
+ oid sha256:271a40da9bb401252516694f1ddd0ed6e63fe5157677cf157cc751ab43de7160
+ size 249317
models/subword_markov/awa_markov_ctx1_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 1,
  "variant": "subword",
  "language": "awa",
- "unique_contexts": 676,
- "total_transitions": 1804933
+ "unique_contexts": 3632,
+ "total_transitions": 1016594
  }
models/subword_markov/awa_markov_ctx2_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:67bd149dfcaf7d3495dc092a09462870ca5d0acdf88fed781138c910ca04903a
- size 448892
+ oid sha256:98dd04397a244bec649c1467886404517928646d46bd56616366077081306470
+ size 1046154
models/subword_markov/awa_markov_ctx2_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 2,
  "variant": "subword",
  "language": "awa",
- "unique_contexts": 9605,
- "total_transitions": 1801177
+ "unique_contexts": 38845,
+ "total_transitions": 1013808
  }
models/subword_markov/awa_markov_ctx3_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:761708f5e394a72fdac5044131d7b0e80264ee9745dd751ff90dfabafbd77a70
- size 1500356
+ oid sha256:e59e71dcd671f534598174056219214277dba9269677f32b105b8f2259fd6891
+ size 2860240
models/subword_markov/awa_markov_ctx3_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 3,
  "variant": "subword",
  "language": "awa",
- "unique_contexts": 65205,
- "total_transitions": 1797421
+ "unique_contexts": 134413,
+ "total_transitions": 1011022
  }
models/subword_markov/awa_markov_ctx4_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:622ff6876bbeef23f56e6323ef2265f4c833469295162d61d2bab7972e7c25f6
- size 3938365
+ oid sha256:e144785343170a9509d1043785f2dd7bbb7df7fe73020aa276c66c93967a4d02
+ size 5625952
models/subword_markov/awa_markov_ctx4_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 4,
  "variant": "subword",
  "language": "awa",
- "unique_contexts": 214041,
- "total_transitions": 1793665
+ "unique_contexts": 308778,
+ "total_transitions": 1008236
  }
models/subword_ngram/awa_2gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9261e6dccd50b7a4d81414b6f34f22dd3cc9f7a9ac446f9bf930c93c8e87cc48
- size 57668
+ oid sha256:989cbc777e82e6057dffa1dead21d90a773e1d56a5764b6000da9a4e3b842dd1
+ size 167016
models/subword_ngram/awa_2gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 2,
  "variant": "subword",
  "language": "awa",
- "unique_ngrams": 4587,
- "total_ngrams": 1804933
+ "unique_ngrams": 11871,
+ "total_ngrams": 1016594
  }
models/subword_ngram/awa_3gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8f766ae156ebafb96c712347ee1e7482db9e1521b9ba78cb3b31186aa751a422
- size 401856
+ oid sha256:665623a662a43f315f72ad68154c03493d7df87f55876e69e0b3f72a6d291f5b
+ size 629586
models/subword_ngram/awa_3gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 3,
  "variant": "subword",
  "language": "awa",
- "unique_ngrams": 31251,
- "total_ngrams": 1801177
+ "unique_ngrams": 42588,
+ "total_ngrams": 1013808
  }
models/subword_ngram/awa_4gram_subword.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c54094aa0dc468f8b2ca51287495a31a126e91650ed7c84d3b1eab6859776be4
- size 1329394
+ oid sha256:bec67a4e37346930b803666f70ab5257ee999e9bd39016ba78195b60ab06622a
+ size 1555001
models/subword_ngram/awa_4gram_subword_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 4,
  "variant": "subword",
  "language": "awa",
- "unique_ngrams": 106978,
- "total_ngrams": 1797421
+ "unique_ngrams": 105286,
+ "total_ngrams": 1011022
  }
models/tokenizer/awa_tokenizer_16k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:549bf3a2260ee062c6eecdc391eb803651a68253dfcc2f2f71e68654141faf5e
- size 612493
+ oid sha256:864217b8d0193812cc5aee4fe1cab467d78e99fcae9dacce41a60305c3571ac8
+ size 614761
models/tokenizer/awa_tokenizer_16k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/tokenizer/awa_tokenizer_32k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c75db6c318fa63d3b1714875e57903d644a0773f7d91be210b5661b08ef4af33
- size 1019677
+ oid sha256:62f7c3149d9e99f10f8cb52328bf87eac31e23ec3a63f7d084f6baeadff4767b
+ size 1049188
models/tokenizer/awa_tokenizer_32k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/tokenizer/awa_tokenizer_8k.model CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0ecfd9f4200c7454b5e2a5f293587bb0d8938b384947708ba46f4a2771e724b4
- size 419840
+ oid sha256:20d7fcc006f3a667e8dd54ecedcd7aa6ed9e5eafd6c3a512d492148c0bd08496
+ size 423049
models/tokenizer/awa_tokenizer_8k.vocab CHANGED
The diff for this file is too large to render. See raw diff
 
models/vocabulary/awa_vocabulary.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4b185a179a70f7ce1d5211d042ffa3ecf2ac042388e23ce966a8ac6d8a5c8007
- size 98312
+ oid sha256:f5554520a8312b47703ae8b6767875831aa218ec61d7d682949e0bb40e61df61
+ size 285293
models/vocabulary/awa_vocabulary_metadata.json CHANGED
@@ -1,16 +1,17 @@
  {
  "language": "awa",
- "vocabulary_size": 6221,
+ "vocabulary_size": 15883,
+ "variant": "full",
  "statistics": {
- "type_token_ratio": 0.02175327090379341,
+ "type_token_ratio": 0.13851608939291873,
  "coverage": {
- "top_100": 0.8062510781677558,
- "top_1000": 0.9471891672034426,
- "top_5000": 0.9842274937921277,
- "top_10000": 0.994073044777395
+ "top_100": 0.3530968472636558,
+ "top_1000": 0.6138212585776784,
+ "top_5000": 0.806893973602588,
+ "top_10000": 0.8767220128952025
  },
- "hapax_count": 7524,
- "hapax_ratio": 0.5473990542015278,
- "total_documents": 3756
+ "hapax_count": 21541,
+ "hapax_ratio": 0.5755932022231723,
+ "total_documents": 2786
  }
  }
models/word_markov/awa_markov_ctx1_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:547a1d0bc04c5bdc57b9da50b12b1314c9e27ab160055111852a3e9b5f86815d
- size 568986
+ oid sha256:73e2ba019d301bda283de84c77aff4fd0b8a4ed584b5f2007f9f1446f3ef1ea5
+ size 1486796
models/word_markov/awa_markov_ctx1_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 1,
  "variant": "word",
  "language": "awa",
- "unique_contexts": 13918,
- "total_transitions": 1156757
+ "unique_contexts": 37356,
+ "total_transitions": 267392
  }
models/word_markov/awa_markov_ctx2_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9f4ecc3b5dd1ec25a7362262b1112c268585d46537378074b220fffb50274064
- size 1749043
+ oid sha256:e865dc423de365ded66b4a30914867dc27d054b14d58587d4a8bc7cb878c3b27
+ size 3675021
models/word_markov/awa_markov_ctx2_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 2,
  "variant": "word",
  "language": "awa",
- "unique_contexts": 80553,
- "total_transitions": 1153001
+ "unique_contexts": 155195,
+ "total_transitions": 264606
  }
models/word_markov/awa_markov_ctx3_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e8548a6290b983ef65f7e70de8617349551f2d343de81987a732f9aa2ffdfebc
- size 3946373
+ oid sha256:731b656111acaf4fe5a35a1088862f6130bc2ed9424e8943b4dd5a97d7d29f6a
+ size 4965463
models/word_markov/awa_markov_ctx3_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 3,
  "variant": "word",
  "language": "awa",
- "unique_contexts": 210504,
- "total_transitions": 1149246
+ "unique_contexts": 209159,
+ "total_transitions": 261820
  }
models/word_markov/awa_markov_ctx4_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:29431e78a3780d03cfd535f409c832282511511ed9034a9a7724e9b83dee3c4d
- size 6740784
+ oid sha256:e7bb59c8174451a6504aaa814138aecc464627ce55296f151295e8b7ee27816b
+ size 5813922
models/word_markov/awa_markov_ctx4_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "context_size": 4,
  "variant": "word",
  "language": "awa",
- "unique_contexts": 389838,
- "total_transitions": 1145491
+ "unique_contexts": 221759,
+ "total_transitions": 259034
  }
models/word_ngram/awa_2gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7205d14fde081500a186427394bfbbaedc11bd03fbf38f7ae2709da963be8a62
- size 163970
+ oid sha256:a54a18e0f5247b5c0bf4e50fe9f34313cecca88bc41ee7da5ddfdaf474e83d4c
+ size 107508
models/word_ngram/awa_2gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 2,
  "variant": "word",
  "language": "awa",
- "unique_ngrams": 12281,
- "total_ngrams": 1156757
+ "unique_ngrams": 5376,
+ "total_ngrams": 267392
  }
models/word_ngram/awa_3gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4d7fb7e5a2eb6cde2fc5e39665a97baa8467228e79b3fd1c7b9c908c0bcfa508
- size 558805
+ oid sha256:822c838751c508caa1e7185abe59e79cb3e1b95ef96676dc3b46acc2f1c3d855
+ size 120081
models/word_ngram/awa_3gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 3,
  "variant": "word",
  "language": "awa",
- "unique_ngrams": 41199,
- "total_ngrams": 1153001
+ "unique_ngrams": 4851,
+ "total_ngrams": 264606
  }
models/word_ngram/awa_4gram_word.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e8b7a81bf9eba4f02d1f34643df98122eecc61f76c303f73ae4798386433fc7c
- size 1432471
+ oid sha256:f925ca031e547cd749d95ce0076ff8c6e7a5d5cd632ba9e9345cea5d6377044f
+ size 316670
models/word_ngram/awa_4gram_word_metadata.json CHANGED
@@ -2,6 +2,6 @@
  "n": 4,
  "variant": "word",
  "language": "awa",
- "unique_ngrams": 106862,
- "total_ngrams": 1149246
+ "unique_ngrams": 12076,
+ "total_ngrams": 261820
  }
visualizations/embedding_isotropy.png CHANGED
visualizations/embedding_norms.png CHANGED
visualizations/embedding_similarity.png CHANGED

Git LFS Details (old)

  • SHA256: cf3c22b5505fe5f52959d40780c761b6873980fc2cb12be4238ee020d0519367
  • Pointer size: 131 Bytes
  • Size of remote file: 144 kB

Git LFS Details (new)

  • SHA256: 382a9623fa9566861b5c14dcfce204f8bd3e7c21a5c0ac0dda166eef6684e7c4
  • Pointer size: 131 Bytes
  • Size of remote file: 141 kB
visualizations/markov_branching.png CHANGED
visualizations/markov_contexts.png CHANGED
visualizations/markov_entropy.png CHANGED
visualizations/model_sizes.png CHANGED