---
license: other
task_categories:
- feature-extraction
tags:
- vector-search
- diskann
- nearest-neighbor
- benchmark
pretty_name: SIFT1B - Sharded DiskANN Indices
size_categories:
- 1B<n<10B
---
# SIFT1B - Sharded DiskANN Indices
Pre-built DiskANN indices for the SIFT1B (BigANN) dataset, sharded for distributed vector search.
## Dataset Info
- Source: BigANN Benchmarks
- Vectors: 1,000,000,000 (1 billion)
- Dimensions: 128
- Data type: uint8
- Queries: 10,000
- Distance: L2
## DiskANN Parameters
- R (graph degree): 64
- L (build beam width): 100
- PQ bytes: 32
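As a back-of-envelope check (these figures are derived from the parameters above, not stated in the source): 32 PQ bytes per vector means the in-memory PQ codes for 1B vectors take roughly 30 GiB, while the raw uint8 base data is 128 GB on disk.

```python
n = 1_000_000_000       # vectors
dim = 128               # uint8 components per vector
pq_bytes = 32           # PQ code size per vector

raw_gb = n * dim / 1e9        # raw base data, decimal GB
pq_gib = n * pq_bytes / 2**30 # PQ codes held in RAM, GiB

print(f"raw base: {raw_gb:.0f} GB, PQ codes: {pq_gib:.1f} GiB")
```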
## Shard Configurations
- shard_2: 2 shards x 500,000,000 vectors
- shard_3: 3 shards x ~333,333,333 vectors
- shard_5: 5 shards x 200,000,000 vectors
- shard_7: 7 shards x ~142,857,142 vectors
- shard_10: 10 shards x 100,000,000 vectors
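Assuming the shards partition the base vectors contiguously (the exact split policy is not specified here), the per-shard ranges implied by the table above can be sketched as:

```python
def shard_ranges(total: int, shards: int):
    """Split `total` vectors into `shards` contiguous (start, end) ranges.

    Each shard gets floor(total / shards) vectors; the last shard
    absorbs any remainder. This is an assumed convention, not taken
    from the dataset's build scripts.
    """
    base = total // shards
    ranges = []
    start = 0
    for i in range(shards):
        end = total if i == shards - 1 else start + base
        ranges.append((start, end))
        start = end
    return ranges

print(shard_ranges(1_000_000_000, 3))
```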
## File Structure

```
fbin/
  base.u8bin                 # Base vectors (1B x 128 uint8)
  queries.u8bin              # Query vectors (10K x 128 uint8)
diskann/
  gt_100.bin                 # Ground truth (100-NN)
  shard_N/                   # N-shard configuration
    sift1b_64_100_32.shardX_disk.index                   # DiskANN disk index
    sift1b_64_100_32.shardX_disk.index_512_none.indices  # MinIO graph indices
    sift1b_64_100_32.shardX_disk.index_base_none.vectors # MinIO vector data
    sift1b_base.shardX.fbin                              # Shard base data
```
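The `.u8bin` files follow the BigANN binary layout: an 8-byte header of two little-endian int32s (vector count, then dimension), followed by the raw uint8 values. A minimal reader sketch (requires NumPy; `read_u8bin` is an illustrative helper, not part of this dataset):

```python
import struct
import numpy as np

def read_u8bin(path, max_rows=None):
    """Read a BigANN-style .u8bin file into an (n, d) uint8 array.

    Layout: <int32 n><int32 d> followed by n * d uint8 values.
    `max_rows` lets you load a prefix instead of the full 128 GB file.
    """
    with open(path, "rb") as f:
        n, d = struct.unpack("<ii", f.read(8))
        if max_rows is not None:
            n = min(n, max_rows)
        data = np.fromfile(f, dtype=np.uint8, count=n * d)
    return data.reshape(n, d)
```

For the full 1B-vector base file, prefer `np.memmap` over loading into RAM.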
### Chunked Files

Files larger than 49 GB are split into chunks for upload:

- `*_512_none.indices.part00`, `.part01`, etc.
- `*_base_none.vectors.part00`, etc. (if applicable)
- `fbin/base.u8bin.part00`, etc.

To reassemble:

```bash
cat file.part00 file.part01 ... > file
```
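Reassembly can also be done portably in Python. This sketch (an illustrative helper, not shipped with the dataset) relies on the chunk names sorting lexicographically, which holds for the zero-padded `.part00`, `.part01`, ... scheme:

```python
import glob
import shutil

def reassemble(prefix, out_path):
    """Concatenate prefix.part00, prefix.part01, ... into out_path."""
    parts = sorted(glob.glob(glob.escape(prefix) + ".part*"))
    if not parts:
        raise FileNotFoundError(f"no chunks found for {prefix}")
    with open(out_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)
    return parts
```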
## Usage

### Download with huggingface_hub

```python
from huggingface_hub import hf_hub_download

# Download a specific shard file
index = hf_hub_download(
    repo_id="maknee/sift1b",
    filename="diskann/shard_10/sift1b_64_100_32.shard0_disk.index",
    repo_type="dataset",
)
```
### Download with git-lfs

```bash
git lfs install
git clone https://huggingface.co/datasets/maknee/sift1b
```
## License
Same as source dataset (BigANN Benchmarks).