---
license: apache-2.0
dataset_info:
  features:
  - name: parent_asin
    dtype: string
  - name: value
    list: float64
  - name: main_category
    dtype: string
  - name: title
    dtype: string
  - name: average_rating
    dtype: float64
  - name: rating_number
    dtype: float64
  - name: description
    dtype: string
  - name: price
    dtype: float64
  - name: categories
    dtype: string
  - name: image_url
    dtype: string
  splits:
  - name: train
    num_bytes: 3482499106
    num_examples: 100000
  download_size: 2309398330
  dataset_size: 3482499106
configs:
- config_name: 10k
  data_files:
  - split: train
    path: "benchmark-10k/*.parquet"
- config_name: 100k
  data_files:
  - split: train
    path: "benchmark-100k/*.parquet"
- config_name: 1M
  data_files:
  - split: train
    path: "benchmark-1M/*.parquet"
- config_name: 10M
  data_files:
  - split: train
    path: "benchmark-10M/*.parquet"
---
# Vector Search Benchmarks
This repo contains datasets for benchmarking vector search performance; Superlinked uses them to prioritize integration partners.
To run benchmarks against this dataset, see the [GitHub repository README](https://github.com/superlinked/external-benchmarks).
## Overview
We reviewed a number of publicly available datasets and identified three recurring problems; the table below shows how this dataset addresses each:

| Problems of other vector search benchmarks | How this dataset solves them |
|--------------------------------------------|------------------------------|
| Too little metadata of varied types, making it hard to test filter performance | 3 numeric, 1 categorical, 3 text, and 1 image column |
| Vectors too small, while SOTA models often output 2k+ or even 4k+ dimensions | 4154 dimensions |
| Dataset too small, especially when large vectors are used | 100k, 1M, and 10M item variants, all sampled from the large dataset |
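To make the first point concrete, filtered vector search means pre-filtering on a metadata column and then ranking only the survivors. The following is a toy-scale, brute-force sketch with NumPy on synthetic data (the real vectors are 4154-dim; the column name `price` follows this dataset's schema):

```python
import numpy as np

# Synthetic stand-ins for the real columns: `value` (vectors) and `price`.
rng = np.random.default_rng(0)
n, dims = 1_000, 64
vectors = rng.normal(size=(n, dims)).astype(np.float32)
prices = rng.uniform(1.0, 100.0, size=n)

query = rng.normal(size=dims).astype(np.float32)

# Pre-filter on metadata, then rank candidates by cosine similarity.
mask = prices < 20.0
candidates = vectors[mask]
sims = candidates @ query / (
    np.linalg.norm(candidates, axis=1) * np.linalg.norm(query)
)
# Map the top-5 local ranks back to indices in the full collection.
top5 = np.flatnonzero(mask)[np.argsort(sims)[::-1][:5]]
```

Real vector databases push the filter into the index rather than scanning; benchmarking how well they do that is exactly what the metadata columns here are for.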
## Available Datasets
### Product data
Each `data_dir` contains Parquet files with the metadata and vectors.
| Dataset        | Records    | # Files | Size    |
|----------------|------------|---------|---------|
| benchmark-10k  | 10,000     | 100     | ~230 MB |
| benchmark-100k | 100,000    | 100     | ~2.3 GB |
| benchmark-1M   | 1,000,000  | 100     | ~23 GB  |
| benchmark-10M  | 10,534,536 | 1000    | ~240 GB |
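A quick back-of-envelope check of these sizes: each record carries a 4154-dim float64 vector (8 bytes per dimension), so the vectors dominate storage, and Parquet compression accounts for the somewhat smaller on-disk figures.

```python
# Raw vector payload per record and per variant (ignoring metadata columns).
DIMS = 4154
raw_vector_bytes = DIMS * 8  # 33,232 bytes, ~33 KB per record

for records in (10_000, 100_000, 1_000_000):
    gb = records * raw_vector_bytes / 1e9
    print(f"{records:>9,} records -> ~{gb:.1f} GB of raw vectors")
```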
The structure of the files is the same throughout:
```
Schema([('parent_asin', String),     # the id
        ('value', List(Float64)),    # the vector
        ('main_category', String),
        ('title', String),
        ('average_rating', Float64),
        ('rating_number', Float64),
        ('description', String),
        ('price', Float64),
        ('categories', String),
        ('image_url', String)])
```
## Data Access
The product metadata and vectors can be loaded with [HF Datasets](https://huggingface.co/docs/datasets/en/index). For the larger variants, passing `streaming=True` to `load_dataset` avoids downloading the full dataset up front.
```python
from datasets import load_dataset
benchmark_10k = load_dataset("superlinked/external-benchmarking", data_dir="benchmark-10k")
benchmark_100k = load_dataset("superlinked/external-benchmarking", data_dir="benchmark-100k")
benchmark_1M = load_dataset("superlinked/external-benchmarking", data_dir="benchmark-1M")
benchmark_10M = load_dataset("superlinked/external-benchmarking", data_dir="benchmark-10M")
```
## Dataset Production
### Source Data
- **Origin**: [Amazon Reviews 2023 dataset](https://amazon-reviews-2023.github.io/)
- **Categories**: `["Books", "Automotive", "Tools and Home Improvement", "All Beauty", "Electronics", "Software", "Health and Household"]`
### Embeddings
The embeddings are created via a [Superlinked config](https://github.com/superlinked/external-benchmarks/tree/main/superlinked_app). The resulting 4154-dimensional vector is the concatenation of:
- 1 categorical embedding,
- 3 number embeddings,
- 3 text embeddings (`Qwen/Qwen3-Embedding-0.6B`),
- and 1 image embedding (`laion/CLIP-ViT-H-14-laion2B-s32B-b79K`).
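The per-space dimensions are not documented here, so the breakdown below is an assumption: 1024 dims each for the text and image embeddings (the default output sizes of these two models), with the remainder attributed to the categorical and number spaces.

```python
# HYPOTHETICAL breakdown of the 4154-dim concatenated vector.
# ASSUMPTION: 1024 dims per text/image embedding (the models' defaults);
# the leftover dims are attributed to the categorical + number spaces.
TEXT_DIMS = IMAGE_DIMS = 1024
parts = {
    "text (x3)": 3 * TEXT_DIMS,                                   # 3072
    "image": IMAGE_DIMS,                                          # 1024
    "categorical + numbers": 4154 - 3 * TEXT_DIMS - IMAGE_DIMS,   # 58
}
assert sum(parts.values()) == 4154
```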