morcosharcos committed · Commit a2e8c77 · 1 Parent(s): 332db01

feat: README update

README.md CHANGED
@@ -34,3 +34,75 @@ configs:
  - split: train
    path: data/train-*
---

# Vector Search Benchmarks

This repo contains datasets for benchmarking vector search performance, to help Superlinked prioritize integration partners.
To run the actual benchmarks against this dataset, see the [GitHub repository README](https://github.com/superlinked/external-benchmarks).

## Overview

We reviewed a number of publicly available datasets and identified 3 core problems; here is how this dataset addresses each of them:

| Problems of other vector search benchmarks | How this dataset solves them |
|--------------------------------------------|------------------------------|
| Not enough metadata of various types, which makes it hard to test filter performance | 3 number, 1 categorical, 3 text, and 1 image column |
| Vectors are too small, while SOTA models usually output 2k+ or even 4k+ dims | 4154 dims |
| Datasets are too small, especially when larger vectors are used | 100k, 1M and 10M item variants, all sampled from the large dataset |

## Available Datasets

### Product data

Each `data_dir` contains parquet files with the metadata and vectors.

| Dataset        | Records    | # Files | Size    |
|----------------|------------|---------|---------|
| benchmark_10k  | 10,000     | 100     | ~230 MB |
| benchmark_100k | 100,000    | 100     | ~2.3 GB |
| benchmark_1M   | 1,000,000  | 100     | ~23 GB  |
| benchmark_10M  | 10,534,536 | 1,000   | ~240 GB |

The structure of the files is the same throughout:

```
Schema([('parent_asin', String),      # the id
        ('main_category', String),
        ('title', String),
        ('average_rating', Float64),
        ('rating_number', Float64),
        ('description', String),
        ('price', Float64),
        ('categories', String),
        ('image_url', String),
        ('value', List(Float64))])    # the vectors
```
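
If you work with the parquet shards directly, a minimal sketch for inspecting one shard with polars (whose `Schema` repr the snippet above matches) could look like the following; the shard file name is hypothetical and depends on how you downloaded the data:

```python
import polars as pl

# Hypothetical path to one locally downloaded shard of benchmark_10k.
df = pl.read_parquet("benchmark-10k/part-00000.parquet")

print(df.schema)             # should match the Schema shown above
print(len(df["value"][0]))   # dimensionality of one vector, expected 4154
```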

## Data Access

The product metadata and vectors are available using [HF Datasets](https://huggingface.co/docs/datasets/en/index).

```python
from datasets import load_dataset

benchmark_10k = load_dataset("superlinked/external-benchmarking", data_dir="benchmark-10k")
benchmark_100k = load_dataset("superlinked/external-benchmarking", data_dir="benchmark-100k")
benchmark_1M = load_dataset("superlinked/external-benchmarking", data_dir="benchmark-1M")
benchmark_10M = load_dataset("superlinked/external-benchmarking", data_dir="benchmark-10M")
```
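
Because the larger variants are sizable (benchmark_10M is roughly 240 GB), streaming can be more practical than a full download. A minimal sketch, assuming the installed `datasets` version supports streaming for this repo:

```python
from datasets import load_dataset
import numpy as np

# Stream the largest variant instead of downloading every shard up front.
ds = load_dataset("superlinked/external-benchmarking", data_dir="benchmark-10M", streaming=True)

first = next(iter(ds["train"]))
vector = np.asarray(first["value"], dtype=np.float32)
print(first["parent_asin"], vector.shape)   # expected shape: (4154,)
```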

## Dataset Production

### Source Data

- **Origin**: [Amazon Reviews 2023 dataset](https://amazon-reviews-2023.github.io/)
- **Categories**: `["Books", "Automotive", "Tools and Home Improvement", "All Beauty", "Electronics", "Software", "Health and Household"]`

### Embeddings

The embeddings are created via a [superlinked config](superlinked_app). The resulting 4154-dim vector is a concatenation of:

- 1 categorical,
- 3 number,
- 3 text (`Qwen/Qwen3-Embedding-0.6B`),
- and 1 image (`laion/CLIP-ViT-H-14-laion2B-s32B-b79K`)

embeddings (see the illustrative sketch below).
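
For intuition only, here is a minimal sketch of how such a concatenated multi-space vector is assembled. The per-space dimensions are placeholders chosen so the total matches 4154; the actual split used by the superlinked config is not documented here, and only the model names and the 4154-dim total come from the section above.

```python
import numpy as np

# Placeholder embeddings for one record. The categorical and number dims are
# assumptions; Qwen3-Embedding-0.6B and CLIP ViT-H/14 both commonly output 1024 dims.
categorical_emb = np.zeros(34)                       # placeholder dim
number_embs     = [np.zeros(8) for _ in range(3)]    # placeholder dims, 3 number spaces
text_embs       = [np.zeros(1024) for _ in range(3)] # 3 text spaces
image_emb       = np.zeros(1024)                     # 1 image space

vector = np.concatenate([categorical_emb, *number_embs, *text_embs, image_emb])
print(vector.shape)   # (4154,), matching the dataset's vector column
```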