Ram-G committed
Commit 45aec59 · verified · 1 Parent(s): df6d679

Update README.md

Files changed (1):
1. README.md +124 -3
README.md CHANGED
@@ -1,3 +1,124 @@
- ---
- license: mit
- ---

---
license: mit
task_categories:
- feature-extraction
- text-retrieval
- question-answering
language:
- en
tags:
- wikipedia
- embeddings
- faiss
- vector-database
- rag
- ivf
- pq
- gpu
size_categories:
- 10M<n<100M
dataset_info:
  features:
  - name: text
    dtype: string
  - name: embeddings
    dtype: float32
    shape: [384]
configs:
- config_name: default
  data_files: "*.parquet"
---

# Wikipedia IVF-OPQ-PQ Vector Database (GPU-Optimized)

A high-performance, GPU-accelerated FAISS vector database built from Wikipedia articles with pre-computed embeddings. This dataset contains approximately 35 million Wikipedia articles with 384-dimensional embeddings generated by the `all-MiniLM-L6-v2` model.

## Dataset Overview

This vector database uses advanced compression techniques (IVF + OPQ + PQ) to provide fast similarity search over Wikipedia content while maintaining high recall. The database is optimized for Retrieval-Augmented Generation (RAG) applications and large-scale semantic search; a sketch of how such an index is constructed follows the feature list below.

**Key Features:**
- **GPU-accelerated FAISS index** with IVF, OPQ, and Product Quantization
- **SQLite text storage** with aligned vector IDs
- **Memory-efficient** compression (~64 bytes per vector)
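
The three stages compose into a single FAISS index-factory string: OPQ rotates the 384-d vectors so product quantization loses less information, IVF partitions them into coarse clusters so only a few are scanned per query, and PQ compresses each vector to 64 one-byte codes. A minimal sketch of that construction, not the exact build script; `nlist` is scaled down here so the demo trains quickly, while the production index uses the 131k-262k range from the spec table below:

```python
import faiss
import numpy as np

d = 384       # all-MiniLM-L6-v2 embedding dimension
nlist = 1024  # demo value; the production index uses ~131k-262k clusters

# OPQ64 -> learned rotation into 64 subspaces
# IVF   -> inverted file with nlist coarse clusters
# PQ64  -> 64 sub-quantizers x 8 bits = 64 bytes per compressed vector
index = faiss.index_factory(d, f"OPQ64,IVF{nlist},PQ64")

# Train on a representative sample (random data as a stand-in here;
# k-means wants roughly 40+ training points per cluster), then add vectors
xt = np.random.rand(40 * nlist, d).astype("float32")
index.train(xt)
index.add(xt)
print(index.ntotal, index.is_trained)  # 40960 True
```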

## Dataset Structure

```
wikipedia_vector_index_DB/
├── index.faiss        # Main FAISS index (CPU-serialized)
├── meta.json          # Index metadata and parameters
├── docs.sqlite        # Text storage (rowid = vector id)
├── docs.sqlite-wal    # SQLite WAL file (if present)
└── docs.sqlite-shm    # SQLite shared memory (if present)
```

### File Descriptions

- **`index.faiss`**: Complete FAISS index containing the trained OPQ matrix, IVF centroids, PQ codebooks, and compressed vector codes
- **`meta.json`**: Checkpoint metadata including offset, ntotal, dimensions, and compression parameters
- **`docs.sqlite`**: SQLite database with schema `docs(id INTEGER PRIMARY KEY, text TEXT)`, where `id` matches the FAISS vector ID (see the consistency check after this list)
- **`*.parquet`**: Original embedding data in Parquet format, for verification and rebuilding
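
Because FAISS assigns vector IDs sequentially from 0 and `docs.id` mirrors them, the two stores can be cross-checked cheaply. A minimal sketch, assuming the files sit in the `wikipedia_vector_index_DB/` layout shown above:

```python
import faiss
import sqlite3

index = faiss.read_index("wikipedia_vector_index_DB/index.faiss")
# Open read-only so the WAL/SHM sidecar files are not modified
conn = sqlite3.connect("file:wikipedia_vector_index_DB/docs.sqlite?mode=ro", uri=True)

# docs.id mirrors the FAISS vector id, so row count and ntotal should agree
(n_docs,) = conn.execute("SELECT COUNT(*) FROM docs").fetchone()
assert n_docs == index.ntotal, f"mismatch: {n_docs} rows vs {index.ntotal} vectors"
```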

## Technical Specifications

| Parameter | Value | Description |
|-----------|-------|-------------|
| **Vectors** | ~35M | Total number of Wikipedia articles |
| **Dimensions** | 384 | Embedding dimensionality (all-MiniLM-L6-v2) |
| **Index Type** | IVF-OPQ-PQ | Inverted File + Optimized Product Quantization |
| **Compression** | ~64 bytes/vector | Memory-efficient storage |
| **nlist** | 131k-262k | Number of IVF clusters |
| **OPQ** | 64 subspaces | Optimized rotation matrix |
| **PQ** | 64×8 bits | Product quantization parameters |
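
The ~64 bytes/vector figure follows directly from the PQ parameters: 64 sub-quantizers × 8 bits = 64 bytes of codes per vector (plus small per-list IVF overhead). Back-of-envelope arithmetic for the full collection:

```python
n, d = 35_000_000, 384

raw_bytes = n * d * 4   # uncompressed float32: ~53.8 GB
pq_bytes = n * 64       # 64-byte PQ codes:     ~2.24 GB

print(f"raw: {raw_bytes / 1e9:.1f} GB")
print(f"pq:  {pq_bytes / 1e9:.2f} GB")
print(f"compression: {raw_bytes / pq_bytes:.0f}x")  # 24x
```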

## Usage

### Quick Start

```python
from huggingface_hub import snapshot_download
import faiss
import sqlite3
import json

# Download the complete vector database
dataset_path = snapshot_download(
    repo_id="your-username/wikipedia-vector-db",
    repo_type="dataset",
    cache_dir="./data",
)

# Load FAISS index
index = faiss.read_index(f"{dataset_path}/index.faiss")

# Load metadata
with open(f"{dataset_path}/meta.json", "r") as f:
    meta = json.load(f)

# Connect to text database
conn = sqlite3.connect(f"{dataset_path}/docs.sqlite")

print(f"Loaded index with {index.ntotal:,} vectors")
print(f"Index dimension: {index.d}")
```
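
`snapshot_download` fetches everything, including the Parquet shards that are only needed for verification and rebuilding. If only the searchable database is wanted, the call can skip them with `ignore_patterns` (same placeholder `repo_id` as above):

```python
from huggingface_hub import snapshot_download

# Download only the index, metadata, and text DB; skip the Parquet shards
dataset_path = snapshot_download(
    repo_id="your-username/wikipedia-vector-db",
    repo_type="dataset",
    ignore_patterns=["*.parquet"],
)
```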

### GPU-Accelerated Search

```python
import faiss
from sentence_transformers import SentenceTransformer

# Move index to GPU for faster queries
res = faiss.StandardGpuResources()
gpu_index = faiss.index_cpu_to_gpu(res, 0, index)

# Higher nprobe = better recall, slower search. If the OPQ pre-transform
# wrapper does not expose nprobe directly, set it via
# faiss.GpuParameterSpace().set_index_parameter(gpu_index, "nprobe", 128)
gpu_index.nprobe = 128

# Embed the query with the same model used to build the index
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
query_vector = model.encode(["your search query"])  # shape: (1, 384), float32

# Perform similarity search
distances, indices = gpu_index.search(query_vector, k=10)

# Retrieve corresponding text
cursor = conn.cursor()
for idx in indices[0]:
    result = cursor.execute("SELECT text FROM docs WHERE id = ?", (int(idx),)).fetchone()
    if result:
        print(f"ID {idx}: {result[0][:200]}...")
```
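
`nprobe` is the main recall/latency knob: it sets how many of the IVF clusters are scanned per query. A quick sweep (reusing `gpu_index` and `query_vector` from the block above; absolute timings depend on the GPU) can locate a good operating point:

```python
import time

for nprobe in (8, 32, 128, 512):
    gpu_index.nprobe = nprobe
    t0 = time.perf_counter()
    distances, indices = gpu_index.search(query_vector, k=10)
    print(f"nprobe={nprobe:4d}: {(time.perf_counter() - t0) * 1e3:7.2f} ms")
```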

## Original Dataset

This vector database is built from [maloyan/wikipedia-22-12-en-embeddings-all-MiniLM-L6-v2](https://huggingface.co/datasets/maloyan/wikipedia-22-12-en-embeddings-all-MiniLM-L6-v2), which contains pre-computed embeddings of Wikipedia articles generated with the `sentence-transformers/all-MiniLM-L6-v2` model.