Commit e173ead by wagnercosta (verified; parent: a17f4e6): Update README.md

---
license: cc-by-sa-4.0
task_categories:
- text-retrieval
- feature-extraction
language:
- en
tags:
- wikipedia
- qdrant
- sparse-embeddings
- splade
- bm25
- vector-database
- semantic-search
size_categories:
- 10M<n<100M
---

# Wikipedia English Embeddings Dataset (Qdrant Snapshot)

This dataset contains a complete Qdrant vector database snapshot of English Wikipedia articles with SPLADE and BM25 sparse embeddings for advanced semantic search.

## Dataset Details

### Dataset Description

This dataset provides a ready-to-use Qdrant vector database containing the complete English Wikipedia, processed from the Cirrus dump. Each article section has been embedded with both SPLADE (SParse Lexical AnD Expansion) and BM25 sparse embedding techniques, enabling hybrid search that combines lexical and semantic matching.

- **Curated by:** [Emergent Methods](https://emergentmethods.ai)
- **Language(s):** English
- **License:** CC-BY-SA-4.0 (same as Wikipedia)
- **Vector Database:** Qdrant
- **Embedding Models:**
  - SPLADE: `naver/splade-v3` (note: the example code below uses `prithivida/Splade_PP_en_v1` for FastEmbed compatibility)
  - BM25: `Qdrant/bm25` (English)

### Dataset Sources

- **Source Data:** Wikipedia English Cirrus dump

## Uses

### Direct Use

This dataset is designed for:

- **Semantic Search Applications**: Build Wikipedia search engines with advanced semantic capabilities
- **Research in Information Retrieval**: Experiment with hybrid sparse retrieval methods
- **Question Answering Systems**: Use as a knowledge base for RAG (Retrieval-Augmented Generation) pipelines
- **Educational Tools**: Create learning applications that search and retrieve Wikipedia content contextually
- **Benchmarking**: Compare different retrieval approaches on a large-scale, real-world dataset

### Out-of-Scope Use

- **Real-time Wikipedia Updates**: This snapshot represents Wikipedia at a specific point in time and does not receive updates
- **Non-English Content**: This dataset contains only English Wikipedia articles
- **Use Without Attribution**: All use must comply with Wikipedia's CC-BY-SA-4.0 license requirements

## Dataset Structure

The Qdrant database contains vectors organized in a single collection with the following structure:

- **Collection Name**: `WIKIPEDIA_ENGLISH`
- **Vector Configurations**:
  - `text-sparse`: SPLADE sparse embeddings for semantic matching
  - `bm25`: BM25 sparse embeddings for lexical matching
- **Payload Fields**:
  - `title`: Article title
  - `title_section`: Section within the article
  - `content`: Text content of the section
  - `url`: Wikipedia URL
  - Additional metadata fields

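For illustration, the payload of a single retrieved point might look like the following Python dict. The values here are hypothetical; only the field names come from the schema above:

```python
# Hypothetical payload for one stored point. The field names follow the
# schema above; all values are illustrative, not taken from the dataset.
example_payload = {
    "title": "Albert Einstein",
    "title_section": "Scientific career",
    "content": "In 1905, Einstein published four groundbreaking papers...",
    "url": "https://en.wikipedia.org/wiki/Albert_Einstein",
}

# Downstream code can rely on these core keys being present:
core_fields = {"title", "title_section", "content", "url"}
print(core_fields <= set(example_payload))
```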
## Getting Started

### Download and Setup

1. **Download the dataset parts**:

   ```bash
   # Download all tar parts from Hugging Face
   huggingface-cli download EmergentMethods/en_qdrant_wikipedia --local-dir ./qdrant-snapshot
   ```

2. **Reconstruct the snapshot**:

   ```bash
   # Combine the tar parts, then unpack the snapshot
   cat qdrant-snapshot.tar.part* > qdrant-snapshot.tar
   tar -xf qdrant-snapshot.tar
   ```

3. **Restore to Qdrant**:

   Follow the [Qdrant Snapshot Recovery Documentation](https://qdrant.tech/documentation/concepts/snapshots/#recover-from-a-snapshot) to restore the snapshot to your Qdrant instance.

### Usage Example

Here's a complete example showing how to search the Wikipedia embeddings with SPLADE, BM25, and a hybrid (RRF-fused) approach:

```python
import os
from typing import List, Optional

from dotenv import load_dotenv
from fastembed import SparseTextEmbedding
from fastembed.sparse.bm25 import Bm25
from qdrant_client import QdrantClient, models

# ---------------- Configuration ----------------------------------------------
COLLECTION_NAME = 'WIKIPEDIA_ENGLISH'        # existing Qdrant collection
SPLADE_MODEL = 'prithivida/Splade_PP_en_v1'  # FastEmbed-compatible model (the dataset was built with naver/splade-v3)
BM25_MODEL = 'Qdrant/bm25'                   # FastEmbed BM25 model card
BM25_LANGUAGE = 'english'

# Result sizes for each search mode
TOPK_SPLADE = 3
TOPK_BM25 = 3
TOPK_FUSED = 5
PREFETCH_PER_MODEL = 20  # raw candidates each model contributes before fusion

load_dotenv()


def build_qdrant_client() -> QdrantClient:
    url = os.getenv('QDRANT_URL', 'http://localhost:6333')
    api_key = os.getenv('QDRANT_API_KEY')  # may be None / empty for local

    if 'localhost' in url or '127.0.0.1' in url:
        return QdrantClient(url=url, api_key=api_key)

    # Cloud-style deployment: enforce HTTPS and a generous timeout
    return QdrantClient(url=url, https=True, timeout=60, api_key=api_key)


class HybridSparseSearcher:
    """Encapsulates SPLADE, BM25, and hybrid (RRF) query logic."""

    def __init__(self) -> None:
        self.client = build_qdrant_client()
        self.splade = SparseTextEmbedding(model_name=SPLADE_MODEL)
        self.bm25 = Bm25(BM25_MODEL, language=BM25_LANGUAGE)

    # ------------------------ Query vector helpers -----------------------
    def _splade_query_vector(self, text: str) -> models.SparseVector:
        sparse_obj = next(self.splade.embed(text))
        return models.SparseVector(**sparse_obj.as_object())

    def _bm25_query_vector(self, text: str) -> models.SparseVector:
        sparse_obj = next(self.bm25.query_embed(text))
        return models.SparseVector(**sparse_obj.as_object())

    # ------------------------ Individual model searches ------------------
    def search_splade(self, query: str, limit: int = TOPK_SPLADE):
        vector = self._splade_query_vector(query)
        return self.client.query_points(
            collection_name=COLLECTION_NAME,
            query=vector,
            using='text-sparse',
            limit=limit,
        ).points

    def search_bm25(self, query: str, limit: int = TOPK_BM25):
        vector = self._bm25_query_vector(query)
        return self.client.query_points(
            collection_name=COLLECTION_NAME,
            query=vector,
            using='bm25',
            limit=limit,
        ).points

    def search_hybrid_rrf(
        self, query: str, limit: int = TOPK_FUSED, per_model: int = PREFETCH_PER_MODEL
    ):
        prefetch = [
            models.Prefetch(
                query=self._splade_query_vector(query),
                using='text-sparse',
                limit=per_model,
            ),
            models.Prefetch(
                query=self._bm25_query_vector(query), using='bm25', limit=per_model
            ),
        ]
        return self.client.query_points(
            collection_name=COLLECTION_NAME,
            prefetch=prefetch,
            query=models.FusionQuery(fusion=models.Fusion.RRF),
            limit=limit,
        ).points

    @staticmethod
    def _format(point) -> str:
        payload = point.payload or {}
        title = payload.get('title', '<no title>')
        section = payload.get('title_section', 'Main')
        url = payload.get('url')
        content = (payload.get('content') or '').strip().replace('\n', ' ')
        if len(content) > 220:
            content = content[:220] + '...'
        lines = [f'Score: {point.score:.4f}', f'Title: {title} [{section}]']
        if url:
            lines.append(f'URL: {url}')
        lines.append(f'Snippet: {content}')
        return '\n'.join(lines)

    def pretty_print(self, header: str, points: List, limit: Optional[int] = None):
        print('\n' + header)
        print('-' * len(header))
        if not points:
            print('(no results)')
            return
        for idx, p in enumerate(points[: limit or len(points)], 1):
            print(f'\n{idx}. {self._format(p)}')


def run(query: str):
    """Execute the full demo: SPLADE, BM25, then hybrid fusion."""
    searcher = HybridSparseSearcher()
    splade_points = searcher.search_splade(query)
    bm25_points = searcher.search_bm25(query)
    hybrid_points = searcher.search_hybrid_rrf(query)

    searcher.pretty_print(f'Top {TOPK_SPLADE} SPLADE Results', splade_points)
    searcher.pretty_print(f'Top {TOPK_BM25} BM25 Results', bm25_points)
    searcher.pretty_print(f'Top {TOPK_FUSED} Hybrid (RRF) Results', hybrid_points)


def main():
    run('Albert Einstein')


if __name__ == '__main__':
    main()
```

## Dataset Creation

### Curation Rationale

This dataset was created to provide a comprehensive, ready-to-use vector database for Wikipedia English content that supports both traditional lexical search (BM25) and modern semantic search (SPLADE) capabilities. The hybrid approach enables more accurate and diverse search results by combining the strengths of both methods.

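The fusion step behind the hybrid search is Reciprocal Rank Fusion (RRF). Qdrant performs it server-side, but the idea is easy to sketch in plain Python (the `k = 60` constant and the toy document IDs below are illustrative, not taken from Qdrant's internals):

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60):
    """Fuse ranked result lists: each doc scores sum(1 / (k + rank))."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: SPLADE and BM25 each return a ranked list of doc IDs.
splade_hits = ["doc_a", "doc_b", "doc_c"]
bm25_hits = ["doc_b", "doc_d", "doc_a"]

fused = rrf_fuse([splade_hits, bm25_hits])
print(fused[0])  # doc_b: ranks well in both lists, so it wins
```

Documents that appear near the top of *both* rankings accumulate the highest fused score, which is why the hybrid results tend to be more robust than either model alone.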
### Source Data

#### Data Collection and Processing

- **Source**: Wikipedia English Cirrus dump (JSON format)
- **Processing Pipeline**:
  1. Downloaded the Wikipedia Cirrus dump containing structured article data
  2. Extracted and split articles into chunks for granular search
  3. Generated SPLADE sparse embeddings using `naver/splade-v3`
  4. Generated BM25 sparse embeddings using `Qdrant/bm25` with English language settings
  5. Stored everything in a Qdrant vector database with optimized disk indexing
- **Tools**: Flowdapt orchestration platform, Qdrant vector database, FastEmbed library

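Step 2 of the pipeline (splitting articles into chunks) can be sketched roughly as follows. The exact chunking logic used to build the dataset is not published, so this greedy word-budget splitter is only an assumption of how such a step might look:

```python
def split_into_chunks(text, max_words=200):
    """Greedily pack paragraphs into chunks of at most max_words words.

    Assumption: paragraphs are separated by blank lines; a paragraph longer
    than max_words still becomes its own chunk rather than being split.
    """
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

# Synthetic article: four paragraphs of ~122 words each.
article = "\n\n".join(f"Paragraph {i} " + "word " * 120 for i in range(4))
chunks = split_into_chunks(article, max_words=200)
print(len(chunks))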
## Bias, Risks, and Limitations

### Known Limitations

- **Temporal Snapshot**: This dataset represents Wikipedia at a specific point in time and may not reflect current information
- **Language Limitation**: Only English Wikipedia content is included

### Recommendations

Users should be aware of these limitations and consider:

- Supplementing with real-time data sources for current information
- Being mindful of potential cultural and geographic biases in search results
- Validating critical information against primary sources
- Considering the vintage of the data when making time-sensitive queries

## Technical Details

### Requirements

- **Qdrant**: Version 1.15+ recommended
- **Python Dependencies**: `qdrant-client`, `fastembed`, `python-dotenv`
- **Hardware**: Minimum 16 GB RAM recommended for optimal performance
- **Storage**: Approximately 400 GB+ for the complete database

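Given the ~400 GB footprint, it is worth checking that the target volume has room before extracting the snapshot. A minimal sketch using the standard library (the byte threshold is taken from the requirements above):

```python
import shutil

REQUIRED_BYTES = 400 * 10**9  # approximate extracted size, per the requirements above

def has_room(path=".", required=REQUIRED_BYTES):
    """Return True if the filesystem containing `path` has enough free space."""
    free = shutil.disk_usage(path).free
    return free >= required

# Exercise the helper with a tiny threshold so the call itself is verified:
print(has_room(".", required=1))
```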
### Performance

- **Search Latency**: Sub-second response times for typical queries
- **Throughput**: Optimized for concurrent search operations
- **Scalability**: Can be distributed across multiple Qdrant nodes

## Citation

If you use this dataset in your research or applications, please cite:

```bibtex
@dataset{wikipedia_english_qdrant_2025,
  title={Wikipedia English Embeddings Dataset (Qdrant Snapshot)},
  author={Emergent Methods},
  year={2025},
  url={https://huggingface.co/datasets/EmergentMethods/en_qdrant_wikipedia},
  note={Wikipedia content under CC-BY-SA-4.0 license}
}
```

309
+
310
+ ## Dataset Card Contact
311
+
312
+ For questions, issues, or contributions, please contact [Emergent Methods](https://emergentmethods.ai).