Ash Vardanian committed
Commit cd0fc73 (0 parents: initial commit)

Docs: Initial plans

Files changed (3):

1. .gitignore +207 -0
2. LICENSE +201 -0
3. README.md +202 -0
.gitignore ADDED
@@ -0,0 +1,207 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[codz]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py.cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# UV
# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
#uv.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
#poetry.toml

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
# pdm recommends including project-wide configuration in pdm.toml, but excluding .pdm-python.
# https://pdm-project.org/en/latest/usage/project/#working-with-version-control
#pdm.lock
#pdm.toml
.pdm-python
.pdm-build/

# pixi
# Similar to Pipfile.lock, it is generally recommended to include pixi.lock in version control.
#pixi.lock
# Pixi creates a virtual environment in the .pixi directory, just like venv module creates one
# in the .venv directory. It is recommended not to include this directory in version control.
.pixi

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.envrc
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

# Abstra
# Abstra is an AI-powered process automation framework.
# Ignore directories containing user credentials, local state, and settings.
# Learn more at https://abstra.io/docs
.abstra/

# Visual Studio Code
# Visual Studio Code specific template is maintained in a separate VisualStudioCode.gitignore
# that can be found at https://github.com/github/gitignore/blob/main/Global/VisualStudioCode.gitignore
# and can be added to the global gitignore or merged into this file. However, if you prefer,
# you could uncomment the following to ignore the entire vscode folder
# .vscode/

# Ruff stuff:
.ruff_cache/

# PyPI configuration file
.pypirc

# Cursor
# Cursor is an AI-powered code editor. `.cursorignore` specifies files/directories to
# exclude from AI features like autocomplete and code analysis. Recommended for sensitive data
# refer to https://docs.cursor.com/context/ignore-files
.cursorignore
.cursorindexingignore

# Marimo
marimo/_static/
marimo/_lsp/
__marimo__/
LICENSE ADDED
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
README.md ADDED
@@ -0,0 +1,202 @@
# WikiVerse

Multi-model embedding dataset built on [HuggingFace FineWiki](https://huggingface.co/datasets/HuggingFaceFW/finewiki), designed for approximate nearest neighbor (ANN) search benchmarking with [USearch](https://github.com/unum-cloud/usearch) and other vector search engines.

The same Wikipedia corpus, cleaned and enriched with graph metadata, is embedded by multiple models spanning dense encoders, decoder-based LLMs, and late-interaction (ColBERT-style) architectures.
Each model's embeddings ship with precomputed ground-truth k-nearest neighbors, enabling reproducible recall and throughput benchmarks without re-running expensive exact search.
## Why WikiVerse?

Existing ANN benchmarks suffer from three gaps:

1. __Stale descriptors.__
   The most popular benchmarks (SIFT-1B, Deep-1B, GloVe) use features from 2014-2021 — image descriptors and word vectors, not modern text embeddings.
2. __Single-model datasets.__
   Each benchmark is produced by one model.
   You cannot compare how the _same_ retrieval engine handles different vector distributions without re-embedding.
3. __No decoder embeddings.__
   State-of-the-art embedding models (GTE-Qwen, Llama-Embed-Nemotron, Qwen3-Embedding) are decoder-based LLMs, yet no ANN benchmark uses their outputs.

WikiVerse fixes all three: one corpus, multiple models, modern architectures, with graph-structured metadata for filtered search.
## Source Corpus

[HuggingFaceFW/finewiki](https://huggingface.co/datasets/HuggingFaceFW/finewiki) — August 2025 snapshot, 325 languages, 61.6M articles.

FineWiki is extracted from Wikimedia's __Enterprise HTML dumps__ (not raw wikitext), so templates are fully rendered by MediaWiki's own engine.
This avoids the well-known content loss that plagues wikitext-based parsers such as `mwparserfromhell`, which cannot expand templates.
Section headings, tables, math, and lists are preserved as Markdown.
Bot-generated stubs, disambiguation pages, and cross-language leakage are filtered out.
### Text processing

No chunking is used.
Short-context models see articles truncated to their context window.
Long-context models are prioritized and receive the whole document in its original form.
### Scale

| Scope                     | Articles | Parquet, GB | Avg Bytes/Article |
| :------------------------ | -------: | ----------: | ----------------: |
| English                   |     6.6M |          38 |             5,700 |
| Top 5: EN, DE, FR, ES, RU |    15.7M |          86 |             5,460 |
| Top 10 by text volume¹    |    22.9M |         120 |             5,250 |
| Top 20                    |    41.6M |         149 |             3,580 |
| All 325 languages         |    61.6M |        ~170 |             2,740 |

¹ EN, DE, FR, ES, RU, IT, JA, ZH, PL, UK — excluding bot-generated wikis (Cebuano, Swedish, Waray, Egyptian Arabic) which inflate article counts with minimal text.

Parquet weight includes both `text` and `wikitext` columns; pure text is roughly half.
Average bytes/article drops at wider scope because smaller wikis are dominated by stubs.
## Embedding Models

Each model embeds the same article corpus independently.
No chunking is applied — short-context models see truncated articles, long-context models see the full text.
Dense models produce one vector per article.
ColBERT models produce one vector per token (~2,000 vectors per average English article).
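The per-article vs per-token distinction drives the storage budget. A quick sketch of the FP16 footprint, using the article and dimension counts from this README (the helper function is illustrative, not part of the pipeline):

```python
def fbin_size_gb(rows: int, dims: int, bytes_per_scalar: int = 2) -> float:
    """Size of a dense row-major embedding matrix in GB (10^9 bytes), FP16 by default."""
    return rows * dims * bytes_per_scalar / 1e9

articles = 61_600_000  # all 325 languages

# Dense models: one vector per article.
dense_1024 = fbin_size_gb(articles, 1024)  # Qwen3 / Arctic: ~126 GB
dense_768 = fbin_size_gb(articles, 768)    # nomic: ~95 GB
dense_4096 = fbin_size_gb(articles, 4096)  # e5-mistral: ~505 GB

# Late-interaction models: one vector per token, ~400 tokens per average article,
# which lands near the ~6.2 TB figure in the Compute Estimates table.
colbert_tb = fbin_size_gb(articles * 400, 128) / 1e3  # ~6.3 TB
```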
| Model                                                                                 | Year | Type              | Dims | Context | Params  | License    | Base / Fine-tuned by                     | Perf           |
| :------------------------------------------------------------------------------------ | ---: | :---------------- | ---: | ------: | ------: | :--------- | :--------------------------------------- | :------------- |
| [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B)              | 2025 | Dense (decoder)   | 1024 | 32 K    | 600 M   | Apache 2.0 | Qwen3 (Alibaba)                          | 70.7 MTEB v2   |
| [GTE-ModernColBERT-v1](https://huggingface.co/lightonai/GTE-ModernColBERT-v1)         | 2025 | ColBERT (encoder) | 128  | 8-32 K  | 139 M   | Apache 2.0 | ModernBERT (Answer.AI) / LightOn         | 88.4 LongEmbed |
| [arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) | 2024 | Dense (encoder)   | 1024 | 8 K     | 303 M ¹ | Apache 2.0 | XLM-R (Meta) → BGE-M3 (BAAI) / Snowflake | 55.6 BEIR      |
| [nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5)        | 2024 | Dense (encoder)   | 768  | 8 K     | 137 M   | Apache 2.0 | NomicBERT (Nomic)                        | 62.3 MTEB v1   |
| [e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct)      | 2023 | Dense (decoder)   | 4096 | 4 K     | 7.1 B   | MIT        | Mistral-7B (Mistral AI) / Microsoft      | 66.6 MTEB v1   |

¹ 568M total, 303M active (non-embedding parameters).
### Compute Estimates

All embeddings stored in FP16.
Decoder models use [vLLM](https://github.com/vllm-project/vllm) with `--task embed`.
Encoder models use [TEI](https://github.com/huggingface/text-embeddings-inference) with the Hopper Docker image.
FP8 quantization can improve throughput ~1.5× with negligible quality loss.

Token counts vary by tokenizer — CJK text produces ~1 token per 2-3 bytes, Latin/Cyrillic ~1 per 4-5 bytes.
Average article length across all languages is ~400 tokens, but this is dragged down by millions of stubs in smaller wikis; English articles average ~2,700 tokens.

| Model                  | Throughput | Total tokens | Time   | Vectors | Storage | Notes                          |
| :--------------------- | ---------: | -----------: | -----: | ------: | ------: | :----------------------------- |
| Qwen3-Embedding-0.6B   | 500 doc/s  | 24 B         | 1.4 d  | 61.6 M  | 126 GB  | Full articles                  |
| GTE-ModernColBERT-v1   | 800 doc/s  | 24 B         | 0.9 d  | 24.3 B  | 6.2 TB  | ~400 token vectors per article |
| arctic-embed-l-v2.0    | 800 doc/s  | 28 B         | 0.9 d  | 61.6 M  | 126 GB  | Truncated at 8K tokens         |
| nomic-embed-text-v1.5  | 1200 doc/s | 21 B         | 0.6 d  | 61.6 M  | 95 GB   | Truncated at 8K tokens         |
| e5-mistral-7b-instruct | 50 doc/s   | 21 B         | 14.3 d | 61.6 M  | 505 GB  | Truncated at 4K tokens         |

> Single H100 80 GB, full dataset — 61.6M articles, all 325 languages.
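The Time column above is plain division of corpus size by sustained throughput; a sketch, using the table's own doc/s figures:

```python
def embedding_days(articles: int, docs_per_second: float) -> float:
    """Wall-clock days to embed a corpus at a sustained throughput."""
    return articles / docs_per_second / 86_400  # 86,400 seconds per day

articles = 61_600_000

qwen3_days = embedding_days(articles, 500)  # ~1.4 days
e5_days = embedding_days(articles, 50)      # ~14.3 days — 10× slower throughput
```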
## Metadata Enrichment

### Compute Estimates
## Dataset Layout

Data is published on HuggingFace with one __config per model__, so users download only what they need.
Embedding vectors are stored as `.fbin` (row-major binary: `uint32 rows, uint32 cols, float16/float32 data`) for direct compatibility with [USearch bench_cpp](https://github.com/unum-cloud/usearch/blob/main/BENCHMARKS.md) and the Big-ANN benchmark ecosystem.

```
unum-cloud/WikiVerse/
├── README.md
│
├── corpus/
│   ├── passages-en-00000-of-00050.parquet   # doc_id, title, text, section
│   ├── passages-en-00001-of-00050.parquet
│   └── ...
│
├── graph/
│   ├── pagelinks-en.parquet                 # source_id, target_id
│   ├── categories-en.parquet                # page_id, category
│   ├── wikidata-en.parquet                  # page_id, qid, entity_type
│   └── ...
│
├── qwen3-embedding-0.6b/
│   ├── base.6.6M.f16bin                     # 6.6M × 1024, float16
│   ├── query.10K.f16bin                     # 10K × 1024, float16
│   └── groundtruth.10K.ibin                 # 10K × 100 neighbors
│
├── e5-mistral-7b-instruct/
│   ├── base.6.6M.f16bin                     # 6.6M × 4096, float16
│   ├── query.10K.f16bin
│   └── groundtruth.10K.ibin
│
├── arctic-embed-l-v2.0/
│   ├── base.6.6M.f16bin                     # 6.6M × 1024, float16
│   ├── query.10K.f16bin
│   └── groundtruth.10K.ibin
│
├── nomic-embed-text-v1.5/
│   ├── base.6.6M.f16bin                     # 6.6M × 768, float16
│   ├── query.10K.f16bin
│   └── groundtruth.10K.ibin
│
└── gte-moderncolbert-v1/
    ├── base.6.6M.f16bin                     # ~13.2B × 128, float16
    ├── query.10K.f16bin
    └── groundtruth.10K.ibin
```
Load the text corpus (or any per-model config) with 🤗 Datasets:

```python
from datasets import load_dataset
corpus = load_dataset("unum-cloud/WikiVerse", "corpus")
```
Or download binary vectors directly for C++/Rust benchmarking:

```sh
huggingface-cli download unum-cloud/WikiVerse --repo-type dataset \
    qwen3-embedding-0.6b/base.6.6M.f16bin \
    qwen3-embedding-0.6b/query.10K.f16bin \
    qwen3-embedding-0.6b/groundtruth.10K.ibin
```
### Workflow

The embedding pipeline is designed for multi-day runs on GPU servers with checkpoint/resume:

```sh
# 1. Download FineWiki articles
python corpus.py --lang en --output corpus/

# 2. Embed with each model (resume-safe — rerun after interruptions)
python embed.py --model qwen3-0.6b --input corpus/ --output embeddings/ --resume
python embed.py --model e5-mistral-7b --input corpus/ --output embeddings/ --resume
python embed.py --model arctic-embed-l-v2 --input corpus/ --output embeddings/ --resume
python embed.py --model nomic-v1.5 --input corpus/ --output embeddings/ --resume
python embed.py --model gte-moderncolbert --input corpus/ --output embeddings/ --resume

# 3. Extract graph metadata
python graph.py --lang en --output graph/

# 4. Compute ground truth for each model
python ground_truth.py --embeddings embeddings/qwen3-0.6b/ --k 100 --queries 10000
python ground_truth.py --embeddings embeddings/e5-mistral-7b/ --k 100 --queries 10000

# 5. Upload to HuggingFace
python upload.py --repo unum-cloud/WikiVerse
```

Each step is idempotent.
Progress is tracked in `state/*.json` files — if a job dies (OOM, SSH drop, GPU error), rerunning the same command picks up from the last checkpoint.
Adding a new embedding model requires only step 2 + step 4 — the corpus and graph are shared.

## Hosting

| Location                                                                           | Storage/mo (1 TB) | Egress/GB | Notes                                                            |
| ---------------------------------------------------------------------------------- | ----------------- | --------- | ---------------------------------------------------------------- |
| [HuggingFace Hub](https://huggingface.co/unum-cloud/WikiVerse)                     | Free              | Free      | Primary. Xet storage, unlimited public downloads                 |
| [AWS S3](https://aws.amazon.com/s3/pricing/) Standard                              | $23.00            | $0.09     | S3-compatible mirror. Egress adds up fast for popular datasets   |
| [Nebius Object Storage](https://docs.nebius.com/object-storage/resources/pricing/) | $15.05            | $0.015    | S3-compatible. ~35% cheaper storage, ~6× cheaper egress than AWS |

## License

The embedding pipeline code in this repository is licensed under [Apache 2.0](LICENSE).

Dataset licensing depends on the components:

- __Wikipedia text__: [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
- __FineWiki extraction__: [Apache 2.0](https://huggingface.co/datasets/HuggingFaceFW/finewiki)
- __Embeddings__: Governed by each model's license (see table above — all selected models use Apache 2.0 or MIT)
- __Graph metadata__: Derived from Wikimedia/Wikidata dumps ([CC0](https://creativecommons.org/publicdomain/zero/1.0/) for Wikidata, [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) for Wikipedia)