---
license: mit
task_categories:
- feature-extraction
- text-retrieval
- question-answering
language:
- en
tags:
- wikipedia
- embeddings
- faiss
- vector-database
- rag
- ivf
- pq
- gpu
size_categories:
- 10M<n<100M
dataset_info:
  features:
  - name: text
    dtype: string
  - name: embeddings
    dtype: float32
    shape: [384]
configs:
- config_name: default
  data_files: "*.parquet"
---
# Wikipedia IVF-OPQ-PQ Vector Database (GPU-Optimized)
A high-performance, GPU-accelerated FAISS vector database built from Wikipedia articles with pre-computed embeddings. This dataset contains approximately 35 million Wikipedia articles with 384-dimensional embeddings using the `all-MiniLM-L6-v2` model.
## Dataset Overview
This vector database uses advanced compression techniques (IVF + OPQ + PQ) to provide fast similarity search over Wikipedia content while maintaining high recall. The database is optimized for Retrieval Augmented Generation (RAG) applications and large-scale semantic search.
**Key Features:**
- **GPU-accelerated FAISS index** with IVF, OPQ, and Product Quantization
- **SQLite text storage** with aligned vector IDs
- **Memory-efficient** compression (~64 bytes per vector; see the arithmetic below)
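
The ~64 bytes/vector figure follows directly from the quantization settings listed under Technical Specifications (64 PQ subquantizers × 8 bits each). A quick back-of-the-envelope check:

```python
raw_bytes = 384 * 4       # raw float32 vector: 384 dims x 4 bytes = 1536 bytes
code_bytes = 64 * 8 // 8  # PQ code: 64 subquantizers x 8 bits = 64 bytes
print(f"~{raw_bytes // code_bytes}x smaller")  # ~24x, before per-list IVF overhead
```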
## Dataset Structure
```
wikipedia_vector_index_DB/
├── index.faiss          # Main FAISS index (CPU-serialized)
├── meta.json            # Index metadata and parameters
├── docs.sqlite          # Text storage (rowid = vector id)
├── docs.sqlite-wal      # SQLite WAL file (if present)
└── docs.sqlite-shm      # SQLite shared-memory file (if present)
```
### File Descriptions
- **`index.faiss`**: Complete FAISS index containing trained OPQ matrices, IVF centroids, PQ codebooks, and compressed vector codes
- **`meta.json`**: Checkpoint metadata including offset, ntotal, dimensions, and compression parameters
- **`docs.sqlite`**: SQLite database with schema `docs(id INTEGER PRIMARY KEY, text TEXT)` where `id` matches FAISS vector IDs
- **`*.parquet`**: Original embedding data in Parquet format, kept for verification and rebuilding (see the sketch below)
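
As a minimal sketch of that verification use case (the `./data` glob is an assumption; point it wherever the Parquet shards were downloaded):

```python
import glob

import pyarrow.parquet as pq

# Hypothetical sanity pass: confirm every shard carries 384-dim embeddings.
for path in sorted(glob.glob("./data/*.parquet")):
    tbl = pq.read_table(path, columns=["embeddings"])
    dim = len(tbl.column("embeddings")[0].as_py())
    assert dim == 384, f"{path}: unexpected dimension {dim}"
    print(f"{path}: {tbl.num_rows:,} rows, dim {dim}")
```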
## Technical Specifications
| Parameter | Value | Description |
|-----------|-------|-------------|
| **Vectors** | ~35M | Total number of Wikipedia articles |
| **Dimensions** | 384 | Embedding dimensionality (all-MiniLM-L6-v2) |
| **Index Type** | IVF-OPQ-PQ | Inverted File + Optimized Product Quantization |
| **Compression** | ~64 bytes/vector | Memory-efficient storage |
| **nlist** | 131k-262k | Number of IVF clusters |
| **OPQ** | 64 subspaces | Learned rotation applied before quantization |
| **PQ** | 64×8 bits | 64 subquantizers, 8 bits each (64 bytes/code) |
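
For reference, this layout maps onto a FAISS factory string. A minimal construction sketch, assuming nlist = 131072 (the actual value for this index is recorded in `meta.json`):

```python
import faiss

# Hypothetical reconstruction of the index layout described above.
# "OPQ64": learned 64-block rotation; "IVF131072": 131,072 coarse clusters
# (an assumed value within the 131k-262k range); "PQ64x8": 64 subquantizers
# at 8 bits each, i.e. 64 bytes per vector code.
index = faiss.index_factory(384, "OPQ64,IVF131072,PQ64x8")

# Training requires a representative float32 sample of shape (n, 384):
# index.train(sample_vectors)
# index.add(all_vectors)
```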
## Usage
### Quick Start
```python
from huggingface_hub import snapshot_download
import faiss
import sqlite3
import json
# Download the complete vector database
dataset_path = snapshot_download(
    repo_id="your-username/wikipedia-vector-db",
    repo_type="dataset",
    cache_dir="./data"
)

# Load FAISS index
index = faiss.read_index(f"{dataset_path}/index.faiss")

# Load metadata
with open(f"{dataset_path}/meta.json", "r") as f:
    meta = json.load(f)

# Connect to text database
conn = sqlite3.connect(f"{dataset_path}/docs.sqlite")

print(f"Loaded index with {index.ntotal:,} vectors")
print(f"Index dimension: {index.d}")
```
### GPU-Accelerated Search

```python
import faiss
from sentence_transformers import SentenceTransformer

# Move index to GPU for faster queries
res = faiss.StandardGpuResources()
gpu_index = faiss.index_cpu_to_gpu(res, 0, index)

# Set search parameters. Because the index is wrapped in an OPQ pre-transform,
# nprobe is set through the parameter space rather than as a direct attribute.
faiss.GpuParameterSpace().set_index_parameter(gpu_index, "nprobe", 128)  # higher = better recall, slower search

# Embed the query with the same model used to build the database
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
query_vector = model.encode(["your search query"]).astype("float32")  # shape (1, 384)

# Perform similarity search
distances, indices = gpu_index.search(query_vector, k=10)

# Retrieve corresponding text
cursor = conn.cursor()
for idx in indices[0]:
    result = cursor.execute("SELECT text FROM docs WHERE id = ?", (int(idx),)).fetchone()
    if result:
        print(f"ID {idx}: {result[0][:200]}...")
```
## Original Dataset

This vector database is built from [maloyan/wikipedia-22-12-en-embeddings-all-MiniLM-L6-v2](https://huggingface.co/datasets/maloyan/wikipedia-22-12-en-embeddings-all-MiniLM-L6-v2), which contains pre-computed embeddings of Wikipedia articles generated with the `sentence-transformers/all-MiniLM-L6-v2` model.