---
language: en
language_name: English
language_family: germanic_west_anglofrisian
tags:
- wikilangs
- nlp
- tokenizer
- embeddings
- n-gram
- markov
- wikipedia
- feature-extraction
- sentence-similarity
- tokenization
- n-grams
- markov-chain
- text-mining
- fasttext
- babelvec
- vocabulous
- vocabulary
- monolingual
- family-germanic_west_anglofrisian
license: mit
library_name: wikilangs
pipeline_tag: text-generation
datasets:
- omarkamali/wikipedia-monthly
dataset_info:
name: wikipedia-monthly
description: Monthly snapshots of Wikipedia articles across 300+ languages
metrics:
- name: best_compression_ratio
type: compression
value: 4.699
- name: best_isotropy
type: isotropy
value: 0.7693
- name: vocabulary_size
type: vocab
value: 1867537
generated: 2026-03-03
---
# English — Wikilangs Models
Open-source tokenizers, n-gram & Markov language models, vocabulary stats, and word embeddings trained on **English** Wikipedia by [Wikilangs](https://wikilangs.org).
🌐 [Language Page](https://wikilangs.org/languages/en/) · 🎮 [Playground](https://wikilangs.org/playground/?lang=en) · 📊 [Full Research Report](RESEARCH_REPORT.md)
## Language Samples
Example sentences drawn from the English Wikipedia corpus:
> Alexander V may refer to: Alexander V of Macedon (died 294 BCE) Antipope Alexander V Alexander V of Imereti
> Alfonso IV may refer to: Alfonso IV of León (924–931) Afonso IV of Portugal Alfonso IV of Aragon Alfonso IV of Ribagorza Alfonso IV d'Este Duke of Modena and Regg
> Anastasius I or Anastasios I may refer to: Anastasius I Dicorus (–518), Roman emperor Anastasius I of Antioch (died 599), Patriarch of Antioch Pope Anastasius I (died 401), pope
> Angula may refer to: Aṅgula, a measure equal to a finger's breadth Eel, a biological order of fish Nahas Angula, former Prime Minister of Namibia Helmut Angula See also Angul (disambiguation)
> Two antipopes used the regnal name Victor IV: Antipope Victor IV Antipope Victor IV
## Quick Start
### Load the Tokenizer
```python
import sentencepiece as spm
sp = spm.SentencePieceProcessor()
sp.Load("en_tokenizer_32k.model")
text = "Albrecht Achilles may refer to: Albrecht III Achilles, Elector of Brandenburg Al"
tokens = sp.EncodeAsPieces(text)
ids = sp.EncodeAsIds(text)
print(tokens) # subword pieces
print(ids) # integer ids
# Decode back
print(sp.DecodeIds(ids))
```
<details>
<summary><b>Tokenization examples (click to expand)</b></summary>
**Sample 1:** `Albrecht Achilles may refer to: Albrecht III Achilles, Elector of Brandenburg Al…`
| Vocab | Tokens | Count |
|-------|--------|-------|
| 8k | `▁alb recht ▁ach illes ▁may ▁refer ▁to : ▁alb recht … (+27 more)` | 37 |
| 16k | `▁alb recht ▁ach illes ▁may ▁refer ▁to : ▁alb recht … (+26 more)` | 36 |
| 32k | `▁albrecht ▁achilles ▁may ▁refer ▁to : ▁albrecht ▁iii ▁achilles , … (+17 more)` | 27 |
| 64k | `▁albrecht ▁achilles ▁may ▁refer ▁to : ▁albrecht ▁iii ▁achilles , … (+16 more)` | 26 |
**Sample 2:** `Alexander V may refer to: Alexander V of Macedon (died 294 BCE) Antipope Alexand…`
| Vocab | Tokens | Count |
|-------|--------|-------|
| 8k | `▁alexander ▁v ▁may ▁refer ▁to : ▁alexander ▁v ▁of ▁maced … (+20 more)` | 30 |
| 16k | `▁alexander ▁v ▁may ▁refer ▁to : ▁alexander ▁v ▁of ▁macedon … (+18 more)` | 28 |
| 32k | `▁alexander ▁v ▁may ▁refer ▁to : ▁alexander ▁v ▁of ▁macedon … (+15 more)` | 25 |
| 64k | `▁alexander ▁v ▁may ▁refer ▁to : ▁alexander ▁v ▁of ▁macedon … (+15 more)` | 25 |
**Sample 3:** `Two antipopes used the regnal name Victor IV: Antipope Victor IV Antipope Victor…`
| Vocab | Tokens | Count |
|-------|--------|-------|
| 8k | `▁two ▁antip op es ▁used ▁the ▁reg nal ▁name ▁victor … (+8 more)` | 18 |
| 16k | `▁two ▁antip opes ▁used ▁the ▁reg nal ▁name ▁victor ▁iv … (+7 more)` | 17 |
| 32k | `▁two ▁antip opes ▁used ▁the ▁regnal ▁name ▁victor ▁iv : … (+6 more)` | 16 |
| 64k | `▁two ▁antipopes ▁used ▁the ▁regnal ▁name ▁victor ▁iv : ▁antipope … (+5 more)` | 15 |
</details>
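The compression ratios reported in the Metrics Summary below are characters per token on the evaluation corpus. Here is a minimal sketch for computing a comparable figure on your own text, assuming the other vocabulary sizes follow the same `en_tokenizer_{size}.model` naming pattern as the 32k model above:

```python
import sentencepiece as spm

sample = "Two antipopes used the regnal name Victor IV: Antipope Victor IV Antipope Victor IV"

# Characters per token for each vocabulary size; a rough proxy for the
# compression ratios in the Metrics Summary (exact values depend on the
# evaluation corpus, so these numbers will not match the table).
for size in ("8k", "16k", "32k", "64k"):
    sp = spm.SentencePieceProcessor()
    sp.Load(f"en_tokenizer_{size}.model")
    n_tokens = len(sp.EncodeAsIds(sample))
    print(f"{size}: {n_tokens} tokens, {len(sample) / n_tokens:.2f} chars/token")
```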
### Load Word Embeddings
```python
from gensim.models import KeyedVectors
# Aligned embeddings (cross-lingual, mapped to English vector space)
wv = KeyedVectors.load("en_embeddings_128d_aligned.kv")
similar = wv.most_similar("word", topn=5)
for word, score in similar:
    print(f" {word}: {score:.3f}")
```
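A lightweight way to use these vectors for sentence similarity is mean pooling over in-vocabulary words. The helper below is an illustrative sketch, not part of the shipped API; it lowercases the input (the tokenizer output above is lowercase) and skips out-of-vocabulary words.

```python
import numpy as np
from gensim.models import KeyedVectors

wv = KeyedVectors.load("en_embeddings_128d_aligned.kv")

def sentence_vector(sentence: str) -> np.ndarray:
    """Mean-pool the vectors of in-vocabulary, lowercased, whitespace-split tokens."""
    vectors = [wv[w] for w in sentence.lower().split() if w in wv]
    return np.mean(vectors, axis=0) if vectors else np.zeros(wv.vector_size)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print(cosine(sentence_vector("Alexander V of Macedon"),
             sentence_vector("Antipope Alexander V")))
```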
### Load N-gram Model
```python
import pyarrow.parquet as pq
df = pq.read_table("en_3gram_word.parquet").to_pandas()
print(df.head())
```
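The parquet file is a flat n-gram frequency table, so continuation lookups can be done directly in pandas. The column names used below (`ngram`, `count`) are assumptions for illustration; check `df.columns` for the actual schema and adapt.

```python
import pyarrow.parquet as pq

df = pq.read_table("en_3gram_word.parquet").to_pandas()
print(df.columns)  # inspect the real schema before relying on the names below

# Hypothetical continuation lookup, assuming an "ngram" column of space-joined
# words and a "count" column of raw frequencies.
context = "refer to"
matches = df[df["ngram"].str.startswith(context + " ")].nlargest(5, "count")
total = matches["count"].sum()
for _, row in matches.iterrows():
    print(f"{row['ngram'].split()[-1]}\t{row['count'] / total:.1%} of top-5 mass")
```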
## Models Overview

| Category | Assets |
|----------|--------|
| Tokenizers | BPE at 8k, 16k, 32k, 64k vocab sizes |
| N-gram models | 2 / 3 / 4 / 5-gram (word & subword) |
| Markov chains | Context 1–5 (word & subword) |
| Embeddings | 32d, 64d, 128d — mono & aligned |
| Vocabulary | Full frequency list + Zipf analysis (see the sketch after this table) |
| Statistics | Corpus & model statistics JSON |
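The Zipf analysis listed above (and the Zipf R² in the Metrics Summary) measures how well log-frequency falls off linearly with log-rank over the full frequency list. A sketch of that fit is below; the file name `en_vocabulary.parquet` and the `count` column are assumptions, so adjust them to the actual vocabulary artifact.

```python
import numpy as np
import pyarrow.parquet as pq

# Hypothetical file and column names; substitute the actual vocabulary artifact.
vocab = pq.read_table("en_vocabulary.parquet").to_pandas()
vocab = vocab.sort_values("count", ascending=False).reset_index(drop=True)

# Zipf's law: log(frequency) ~ a - s * log(rank). The R^2 of this linear fit is
# what the "Zipf R²" metric reports.
log_rank = np.log(np.arange(1, len(vocab) + 1))
log_freq = np.log(vocab["count"].to_numpy())
slope, intercept = np.polyfit(log_rank, log_freq, 1)
residuals = log_freq - (slope * log_rank + intercept)
r2 = 1 - np.sum(residuals ** 2) / np.sum((log_freq - log_freq.mean()) ** 2)
print(f"Zipf exponent ~ {-slope:.2f}, R^2 ~ {r2:.4f}")
```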
## Metrics Summary
| Component | Model | Key Metric | Value |
|-----------|-------|------------|-------|
| Tokenizer | 8k BPE | Compression | 3.84x |
| Tokenizer | 16k BPE | Compression | 4.22x |
| Tokenizer | 32k BPE | Compression | 4.51x |
| Tokenizer | 64k BPE | Compression | 4.70x 🏆 |
| N-gram | 2-gram (subword) | Perplexity | 257 🏆 |
| N-gram | 2-gram (word) | Perplexity | 386,225 |
| N-gram | 3-gram (subword) | Perplexity | 2,180 |
| N-gram | 3-gram (word) | Perplexity | 4,093,782 |
| N-gram | 4-gram (subword) | Perplexity | 12,758 |
| N-gram | 4-gram (word) | Perplexity | 14,465,722 |
| N-gram | 5-gram (subword) | Perplexity | 55,700 |
| N-gram | 5-gram (word) | Perplexity | 12,820,936 |
| Markov | ctx-1 (subword) | Predictability | 0.0% |
| Markov | ctx-1 (word) | Predictability | 6.2% |
| Markov | ctx-2 (subword) | Predictability | 46.4% |
| Markov | ctx-2 (word) | Predictability | 48.3% |
| Markov | ctx-3 (subword) | Predictability | 45.8% |
| Markov | ctx-3 (word) | Predictability | 75.9% |
| Markov | ctx-4 (subword) | Predictability | 36.8% |
| Markov | ctx-4 (word) | Predictability | 89.2% 🏆 |
| Vocabulary | full | Size | 1,867,537 |
| Vocabulary | full | Zipf R² | 0.9862 |
| Embeddings | mono_32d | Isotropy | 0.7693 🏆 |
| Embeddings | mono_64d | Isotropy | 0.7388 |
| Embeddings | mono_128d | Isotropy | 0.6687 |
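As a rough sanity check of the isotropy rows above, one common proxy (a Mu et al.-style partition-function ratio) can be computed from the raw vectors. The monolingual file name below is hypothetical, and the pipeline's exact isotropy definition may differ; see the research report.

```python
import numpy as np
from gensim.models import KeyedVectors

# Hypothetical file name for the monolingual 32d vectors; adjust to this repo's
# actual artifact.
wv = KeyedVectors.load("en_embeddings_32d.kv")
V = wv.vectors

# Isotropy proxy: I(V) = min_c Z(c) / max_c Z(c), with Z(c) = sum_w exp(w . c)
# and c ranging over the eigenvectors of V^T V. This may not be the exact
# definition used by the Wikilangs pipeline.
_, eigvecs = np.linalg.eigh(V.T @ V)
Z = np.exp(V @ eigvecs).sum(axis=0)
print(f"isotropy proxy: {Z.min() / Z.max():.4f}")
```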
📊 **[Full ablation study, per-model breakdowns, and interpretation guide →](RESEARCH_REPORT.md)**
---
## About
Trained on [wikipedia-monthly](https://huggingface.co/datasets/omarkamali/wikipedia-monthly) — monthly snapshots of 300+ Wikipedia languages.
A project by **[Wikilangs](https://wikilangs.org)** · Maintainer: [Omar Kamali](https://omarkamali.com) · [Omneity Labs](https://omneitylabs.com)
### Citation
```bibtex
@misc{wikilangs2025,
  author      = {Kamali, Omar},
  title       = {Wikilangs: Open NLP Models for Wikipedia Languages},
  year        = {2025},
  doi         = {10.5281/zenodo.18073153},
  publisher   = {Zenodo},
  url         = {https://huggingface.co/wikilangs},
  institution = {Omneity Labs}
}
```
### Links
- 🌐 [wikilangs.org](https://wikilangs.org)
- 🌍 [Language page](https://wikilangs.org/languages/en/)
- 🎮 [Playground](https://wikilangs.org/playground/?lang=en)
- 🤗 [HuggingFace models](https://huggingface.co/wikilangs)
- 📊 [wikipedia-monthly dataset](https://huggingface.co/datasets/omarkamali/wikipedia-monthly)
- 👤 [Omar Kamali](https://huggingface.co/omarkamali)
- 🤝 Sponsor: [Featherless AI](https://featherless.ai)
**License:** MIT — free for academic and commercial use.
---
*Generated by Wikilangs Pipeline · 2026-03-03 22:59:51*