---
task_categories:
- text-retrieval
- image-to-text
- sentence-similarity
language:
- en
tags:
- embeddings
- vector-database
- benchmark
---
# GAS Indexing Artifacts

## Dataset Description

This dataset contains pre-computed deterministic centroids and associated geometric metadata generated with our GAS (Geometry-Aware Selection) algorithm. These artifacts are designed for benchmarking Approximate Nearest Neighbor (ANN) search performance in privacy-preserving or dynamic vector database environments.
### Purpose

To serve as a standardized benchmark resource for evaluating the efficiency and recall of vector databases implementing the GAS architecture. It is specifically designed for integration with VectorDBBench.
### Dataset Summary

- Source Data:
  - Wikipedia (public dataset)
  - LAION-400M (public dataset)
- Embedding Models:
  - google/embeddinggemma-300m
  - sentence-transformers/clip-ViT-B-32
## Dataset Structure

For each embedding model, the directory contains the following key file:

| Data | Description |
|---|---|
| `centroids.npy` | Cluster centroids for the IVF index |
### Data Fields

Centroids: `centroids.npy`

- Purpose: Finding the nearest clusters for IVF (Inverted File Index)
- Type: NumPy array (`np.ndarray`)
- Shape: `[32768, 768]` or `[1024, 512]`
- Description: 768-dimensional vectors representing 32,768 cluster centroids (text model), or 512-dimensional vectors representing 1,024 cluster centroids (multi-modal model)
- Normalization: L2-normalized (unit norm)
- Format: `float32`
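The documented properties can be sanity-checked after loading. A minimal sketch, using a synthetic stand-in array in place of the real `np.load("centroids.npy")` call (the random data below is not part of the dataset):

```python
import numpy as np

# Synthetic stand-in for np.load("centroids.npy"): a 1,024 x 512
# L2-normalized float32 array, matching the CLIP-sized centroid file.
rng = np.random.default_rng(0)
centroids = rng.standard_normal((1024, 512)).astype(np.float32)
centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)

# Checks matching the documented fields: dtype, shape, and unit norm.
assert centroids.dtype == np.float32
assert centroids.shape == (1024, 512)
assert np.allclose(np.linalg.norm(centroids, axis=1), 1.0, atol=1e-5)
```

For the text-model file, substitute the `[32768, 768]` shape accordingly.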
## Dataset Creation

### Source Data

The source datasets are large public datasets:

- Wikipedia: mixedbread-ai/wikipedia-data-en-2023-11
- LAION: LAION-400M
### Preprocessing

Centroid creation (GAS approach):

- Description TBD

Chunking (for text), applied to texts exceeding 2048 tokens:

- Split into chunks with ~100-token overlap
- Embed each chunk separately
- Average the chunk embeddings for the final representation

Normalization: all embeddings are L2-normalized.
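The chunking and pooling steps above can be sketched as follows. This is an illustrative implementation under stated assumptions, not the exact preprocessing code: `embed_fn` is a placeholder for the real tokenizer-plus-model pipeline, and the default parameters mirror the 2048-token limit and ~100-token overlap described above.

```python
import numpy as np

def embed_long_text(tokens, embed_fn, max_len=2048, overlap=100):
    """Split a token sequence into overlapping chunks, embed each chunk,
    then mean-pool and L2-normalize, mirroring the preprocessing above."""
    step = max_len - overlap
    # Start a new chunk every `step` tokens so consecutive chunks share
    # `overlap` tokens; a short input yields a single chunk.
    chunks = [tokens[i:i + max_len]
              for i in range(0, max(len(tokens) - overlap, 1), step)]
    embeddings = np.stack([embed_fn(chunk) for chunk in chunks])
    pooled = embeddings.mean(axis=0)
    return pooled / np.linalg.norm(pooled)
```

With a 5,000-token input and the defaults, this produces chunks starting at tokens 0, 1948, and 3896, each sharing 100 tokens with its neighbor.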
### Embedding Generation

Text:
- Model: google/embeddinggemma-300m
- Dimension: 768
- Max Token Length: 2048
- Normalization: L2-normalized

Multi-Modal:
- Model: sentence-transformers/clip-ViT-B-32
- Dimension: 512
- Normalization: L2-normalized
## Usage

```python
import os
import wget

def download_centroids(embedding_model: str, dataset_dir: str) -> None:
    """Download pre-computed centroids for IVF_GAS into dataset_dir."""
    dataset_link = f"https://huggingface.co/datasets/cryptolab-playground/gas-centroids/resolve/main/{embedding_model}"
    wget.download(f"{dataset_link}/centroids.npy",
                  out=os.path.join(dataset_dir, "centroids.npy"))
```
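Once downloaded, the centroids can drive the IVF coarse-quantization step: because both centroids and query embeddings are L2-normalized, cosine similarity reduces to a dot product, and the nearest clusters are simply the top-scoring rows. A minimal sketch (the function name and the `nprobe` parameter are illustrative, not part of this dataset's API):

```python
import numpy as np

def nearest_clusters(query: np.ndarray, centroids: np.ndarray,
                     nprobe: int = 8) -> np.ndarray:
    """Return indices of the nprobe centroids closest to a unit-norm query.

    For unit vectors, cosine similarity equals the dot product, so the
    nearest clusters are the highest-scoring rows of centroids @ query.
    """
    scores = centroids @ query
    return np.argsort(scores)[::-1][:nprobe]
```

A query would then be searched only within the inverted lists of the returned clusters.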
## License

Apache 2.0
## Citation

If you use this dataset, please cite:

```bibtex
@dataset{gas-centroids,
  author    = {CryptoLab, Inc.},
  title     = {GAS Centroids},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/cryptolab-playground/gas-centroids}
}
```
### Source Dataset Citation

```bibtex
@dataset{wikipedia_data_en_2023_11,
  author    = {mixedbread-ai},
  title     = {Wikipedia Data EN 2023 11},
  year      = {2023},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/mixedbread-ai/wikipedia-data-en-2023-11}
}

@dataset{laion400m,
  author    = {Schuhmann, Christoph and others},
  title     = {LAION-400M},
  year      = {2021},
  publisher = {LAION},
  url       = {https://laion.ai/blog/laion-400-open-dataset}
}
```
### Embedding Model Citation

```bibtex
@misc{embeddinggemma,
  title  = {EmbeddingGemma},
  author = {Google},
  year   = {2024},
  url    = {https://huggingface.co/google/embeddinggemma-300m}
}

@misc{clipvitb32,
  title  = {CLIP ViT-B/32},
  author = {OpenAI},
  year   = {2021},
  url    = {https://huggingface.co/sentence-transformers/clip-ViT-B-32}
}
```
## Acknowledgments

- Original datasets:
  - mixedbread-ai/wikipedia-data-en-2023-11
  - LAION-400M
- Embedding models:
  - google/embeddinggemma-300m
  - sentence-transformers/clip-ViT-B-32
- Benchmark framework: VectorDBBench