VDB Design Problem - Balanced Tier
===================================
Problem Setting
---------------
Design a Vector Database index optimized for **recall** subject to a **latency constraint**. This tier uses latency-gated scoring: solutions exceeding the latency threshold receive zero points, while solutions meeting the constraint are scored purely by recall@1.
**Optimization Goal**: Maximize recall@1 within latency constraint
$$
\text{score} = \begin{cases}
0 & \text{if } t_{\text{query}} > t_{\text{max}} \\
100 & \text{if } t_{\text{query}} \leq t_{\text{max}} \text{ and } r \geq r_{\text{baseline}} \\
100 \cdot \frac{r - r_{\text{min}}}{r_{\text{baseline}} - r_{\text{min}}} & \text{if } t_{\text{query}} \leq t_{\text{max}} \text{ and } r < r_{\text{baseline}}
\end{cases}
$$
Where:
- $r$: Your recall@1
- $t_{\text{query}}$: Your average query latency (ms)
- $r_{\text{baseline}} = 0.9914$ (baseline recall)
- $r_{\text{min}} = 0.6939$ (minimum acceptable recall, 70% of baseline)
- $t_{\text{max}} = 5.775\text{ms}$ (maximum allowed latency, 150% of baseline 3.85ms)
**Key Insight**: Latency is a hard constraint. Only recall determines your score within the constraint.
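The piecewise rule above can be sketched directly as a function (names are illustrative; the defaults mirror the constants defined above):

```python
def score(recall_at_1: float, avg_query_ms: float,
          r_baseline: float = 0.9914, r_min: float = 0.6939,
          t_max: float = 5.775) -> float:
    """Compute the latency-gated score from the formula above."""
    if avg_query_ms > t_max:       # latency gate fails: zero regardless of recall
        return 0.0
    if recall_at_1 >= r_baseline:  # at or above baseline recall: full score
        return 100.0
    # Linear interpolation: r_min maps to 0 points, r_baseline to 100 points
    frac = (recall_at_1 - r_min) / (r_baseline - r_min)
    return max(0.0, min(100.0, 100.0 * frac))
```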
Baseline Performance
--------------------
- Recall@1: **0.9914** (99.14%)
- Avg query time: **3.85ms**
- Baseline score: **100** (recall equals baseline within latency constraint)
Scoring Examples
----------------
Assuming all solutions meet latency constraint ($t \leq 5.775\text{ms}$):
| Recall@1 | Latency | Score Calculation | Score |
|----------|---------|-------------------|-------|
| 0.9914 | 3.85ms | $r = r_{\text{baseline}}$ → max score | **100** |
| 0.9950 | 3.00ms | $r > r_{\text{baseline}}$ → max score | **100** |
| 0.9500 | 2.50ms | $\frac{0.95 - 0.6939}{0.9914 - 0.6939} = 0.860$ | **86.0** |
| 0.8500 | 4.00ms | $\frac{0.85 - 0.6939}{0.9914 - 0.6939} = 0.524$ | **52.4** |
| 0.6939 | 5.00ms | $r = r_{\text{min}}$ → minimum score | **0** |
| 0.9900 | **6.00ms** | $t > t_{\text{max}}$ → latency gate fails | **0** |
**Note**: Lower latency does NOT increase your score; once the constraint is met, only recall matters.
API Specification
-----------------
Implement a class with the following interface:
```python
import numpy as np
from typing import Tuple
class YourIndexClass:
def __init__(self, dim: int, **kwargs):
"""
Initialize the index for vectors of dimension `dim`.
Args:
dim: Vector dimensionality (e.g., 128 for SIFT1M)
**kwargs: Optional parameters (e.g., M, ef_construction for HNSW)
Example:
index = YourIndexClass(dim=128, M=16, ef_search=64)
"""
pass
def add(self, xb: np.ndarray) -> None:
"""
Add vectors to the index.
Args:
xb: Base vectors, shape (N, dim), dtype float32
Notes:
- Can be called multiple times (cumulative)
- Must handle large N (e.g., 1,000,000 vectors)
Example:
index.add(xb) # xb.shape = (1000000, 128)
"""
pass
def search(self, xq: np.ndarray, k: int) -> Tuple[np.ndarray, np.ndarray]:
"""
Search for k nearest neighbors of query vectors.
Args:
xq: Query vectors, shape (nq, dim), dtype float32
k: Number of nearest neighbors to return
Returns:
(distances, indices):
- distances: shape (nq, k), dtype float32, L2 distances
- indices: shape (nq, k), dtype int64, indices into base vectors
Notes:
- Must return exactly k neighbors per query
- Indices should refer to positions in the vectors passed to add()
- Lower distance = more similar
Example:
D, I = index.search(xq, k=1) # xq.shape = (10000, 128)
# D.shape = (10000, 1), I.shape = (10000, 1)
"""
pass
```
**Implementation Requirements**:
- Class can have any name (evaluator auto-discovers classes with `add` and `search` methods)
- Must handle SIFT1M dataset: 1M base vectors, 10K queries, 128 dimensions
- Your `search` must return tuple `(distances, indices)` with shapes `(nq, k)`
- Distances should be L2 (Euclidean) or L2-squared
- No need to handle dataset loading - evaluator provides numpy arrays
Evaluation Process
------------------
The evaluator follows these steps:
### 1. Load Dataset
```python
from faiss.contrib.datasets import DatasetSIFT1M
ds = DatasetSIFT1M()
xb = ds.get_database() # (1000000, 128) float32
xq = ds.get_queries() # (10000, 128) float32
gt = ds.get_groundtruth() # (10000, 100) int64 - ground truth indices
```
### 2. Build Index
```python
from solution import YourIndexClass # Auto-discovered
d = xb.shape[1] # 128 for SIFT1M
index = YourIndexClass(d) # Pass dimension as first argument
index.add(xb) # Add all 1M base vectors
```
### 3. Measure Performance (Batch Queries)
```python
import time
t0 = time.time()
D, I = index.search(xq, k=1) # Search all 10K queries at once
t1 = time.time()
# Calculate metrics
recall_at_1 = (I[:, :1] == gt[:, :1]).sum() / len(xq)
avg_query_time_ms = (t1 - t0) * 1000.0 / len(xq)
```
**Important**: `avg_query_time_ms` measured over **batch queries** is used for scoring. Batch queries benefit from CPU caching and vectorization and are typically faster than issuing queries one at a time.
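The batch/single distinction matters when profiling locally. A small helper pair (a sketch; `index` is any object exposing the `search` interface above) measures both:

```python
import time
import numpy as np

def avg_ms_batch(index, xq, k=1):
    """Average per-query latency when all queries are submitted as one batch."""
    t0 = time.perf_counter()
    index.search(xq, k)
    return (time.perf_counter() - t0) * 1000.0 / len(xq)

def avg_ms_single(index, xq, k=1):
    """Average per-query latency when queries are submitted one at a time."""
    t0 = time.perf_counter()
    for q in xq:
        index.search(q[None, :], k)  # reshape to a batch of size 1
    return (time.perf_counter() - t0) * 1000.0 / len(xq)
```

Only the batch number is scored, so optimize for it.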
### 4. Calculate Score
```python
if avg_query_time_ms > 5.775:
score = 0.0
elif recall_at_1 >= 0.9914:
score = 100.0
else:
recall_range = 0.9914 - 0.6939
recall_proportion = (recall_at_1 - 0.6939) / recall_range
score = max(0.0, min(100.0, 100.0 * recall_proportion))
```
Dataset Details
---------------
- **Name**: SIFT1M
- **Base vectors**: 1,000,000 vectors of dimension 128
- **Query vectors**: 10,000 vectors
- **Ground truth**: Precomputed nearest neighbors (k=1)
- **Metric**: L2 (Euclidean distance)
- **Vector type**: float32
Runtime Platform
----------------
- **Infrastructure**: Evaluations run on SkyPilot-managed cloud instances (AWS, GCP, or Azure)
- **Compute**: CPU-only instances (no GPU required)
- **Environment**: Docker containerized execution with Python 3, NumPy ≥1.24, FAISS-CPU ≥1.7.4
Constraints
-----------
- **Timeout**: 1 hour for entire evaluation (index construction + queries)
- **Memory**: Use reasonable memory (index should fit in RAM)
- **Latency constraint**: avg_query_time_ms ≤ 5.775ms
- **Recall range**: 0.6939 ≤ recall@1 ≤ 1.0
Strategy Tips
-------------
1. **Focus on recall**: Latency only needs to meet the threshold; it does not improve your score beyond that
2. **Batch optimization is key**: Your `search` should handle batch queries efficiently
3. **Parameter tuning**: Small parameter changes (e.g., HNSW's `M` and `ef_search`) can significantly affect recall
4. **Don't over-optimize latency**: Meeting 5.775ms is enough; focus your effort on recall
Example: Simple Baseline
-------------------------
```python
import numpy as np
class SimpleIndex:
def __init__(self, dim: int, **kwargs):
self.dim = dim
self.xb = None
def add(self, xb: np.ndarray) -> None:
if self.xb is None:
self.xb = xb.copy()
else:
self.xb = np.vstack([self.xb, xb])
    def search(self, xq: np.ndarray, k: int) -> tuple:
        # Brute-force L2 distances via broadcasting.
        # WARNING: materializes an (nq, N, dim) intermediate array,
        # prohibitively large at SIFT1M scale (10K x 1M x 128 floats).
        distances = np.sqrt(((xq[:, np.newaxis, :] - self.xb[np.newaxis, :, :]) ** 2).sum(axis=2))
        # Select the k smallest distances per query, then sort just those k
        indices = np.argpartition(distances, k - 1, axis=1)[:, :k]
        sorted_order = np.argsort(distances[np.arange(len(xq))[:, None], indices], axis=1)
        final_indices = indices[np.arange(len(xq))[:, None], sorted_order]
        final_distances = distances[np.arange(len(xq))[:, None], final_indices]
        return final_distances, final_indices
```
**Note**: This baseline achieves perfect recall (100%), but at SIFT1M scale it is far too slow and memory-hungry. Use approximate methods such as HNSW, IVF, or LSH for better speed-recall tradeoffs.
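For exact search during local validation, a chunked variant (a sketch) avoids the `(nq, N, dim)` broadcast by using the expansion `||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2`, holding only one `(chunk, N)` distance block in memory at a time:

```python
import numpy as np

def exact_search_chunked(xb, xq, k, chunk=1024):
    """Exact k-NN over query chunks; returns (distances, indices) per the spec."""
    xb_sq = (xb ** 2).sum(axis=1)  # precompute ||y||^2, shape (N,)
    all_D, all_I = [], []
    for i in range(0, len(xq), chunk):
        q = xq[i:i + chunk]
        # Squared L2 distances for this chunk, shape (chunk, N)
        d2 = (q ** 2).sum(axis=1)[:, None] - 2.0 * (q @ xb.T) + xb_sq[None, :]
        # Top-k selection, then sort just the k candidates
        idx = np.argpartition(d2, k - 1, axis=1)[:, :k]
        order = np.argsort(np.take_along_axis(d2, idx, axis=1), axis=1)
        I = np.take_along_axis(idx, order, axis=1)
        D = np.take_along_axis(d2, I, axis=1)
        all_D.append(np.maximum(D, 0.0))  # clip tiny negatives from rounding
        all_I.append(I)
    return np.sqrt(np.vstack(all_D)).astype(np.float32), np.vstack(all_I).astype(np.int64)
```

The expansion trick can lose a little floating-point precision on near-duplicate vectors, which is acceptable for validation use.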
Debugging Tips
--------------
- **Test locally**: Use a subset of data (e.g., 10K vectors) for faster iteration
- **Verify shapes**: Ensure `search` returns `(nq, k)` shaped arrays
- **Check recall calculation**: `(I[:, :1] == gt[:, :1]).sum() / len(xq)`
- **Profile latency**: Measure batch vs single query performance separately
- **Validate before submit**: Run full 1M dataset locally if possible
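These tips combine into a small local harness (a sketch; random data stands in for a SIFT1M subset, and `index_cls` is a placeholder for your implementation class):

```python
import time
import numpy as np

def evaluate_locally(index_cls, n_base=10_000, n_query=100, dim=128, seed=0):
    """Build on a small random dataset, then report (recall@1, avg ms/query)."""
    rng = np.random.default_rng(seed)
    xb = rng.standard_normal((n_base, dim)).astype(np.float32)
    xq = rng.standard_normal((n_query, dim)).astype(np.float32)
    # Exact ground truth via squared-distance expansion
    d2 = (xq ** 2).sum(1)[:, None] - 2.0 * (xq @ xb.T) + (xb ** 2).sum(1)[None, :]
    gt = d2.argmin(axis=1)

    index = index_cls(dim)
    index.add(xb)
    t0 = time.perf_counter()
    D, I = index.search(xq, k=1)
    ms = (time.perf_counter() - t0) * 1000.0 / len(xq)

    assert D.shape == (n_query, 1) and I.shape == (n_query, 1)  # shape check
    recall = (I[:, 0] == gt).mean()
    return recall, ms
```

Note that latency on random data and a small subset will differ from the real SIFT1M numbers; treat it as a sanity check, not a prediction.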