Updated README to include task 2 and task 3.
#5 opened by maumueller

README.md CHANGED
@@ -32,6 +32,66 @@ Datasets for previous editions:
- This is a small version of the WIKIPEDIA database for testing and development purposes; more precisely, the `train` dataset is a 200k-vector database.
- File: benchmark-dev-wikipedia-bge-m3-small.h5

- LLAMA (Llama-3-8B-262k):
  - repo: https://huggingface.co/datasets/vector-index-bench/vibe
  - Model: Llama-3.2-8B
  - File: llama-dev.h5
  - similarity: Dot product (vectors are not normalized)
  - Content of the h5 file:
    - dataset `train`: a 256k-vector database, i.e., a matrix of size $128 \times 256921$ (f32)
    - group `test`: collection of development queries:
      - `test/queries`: a 1000-vector query set, i.e., a matrix of size $128 \times 1000$ (f32)
      - `test/knns`: the gold-standard identifiers of the 100 nearest neighbors of `test/queries` in `train`, i.e., a matrix of size $100 \times 1000$ (i64)
      - `test/dists`: the gold-standard distances (dot products) of the 100 nearest neighbors of `test/queries` in `train`, i.e., a matrix of size $100 \times 1000$ (f64)

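Loading the dense datasets is straightforward with `h5py`. The following is a minimal sketch of a brute-force dot-product search against a `train` matrix; the filename `llama-toy.h5` and the shrunken dimensions are illustrative stand-ins for the real `llama-dev.h5` layout described above:

```python
import h5py
import numpy as np

# Build a tiny synthetic file mimicking the llama-dev.h5 layout
# (dimensions shrunk for illustration; the real train matrix is 128 x 256921).
rng = np.random.default_rng(0)
with h5py.File("llama-toy.h5", "w") as f:
    f.create_dataset("train", data=rng.standard_normal((100, 16)).astype(np.float32))
    g = f.create_group("test")
    g.create_dataset("queries", data=rng.standard_normal((5, 16)).astype(np.float32))

# Read the vectors back into memory
with h5py.File("llama-toy.h5", "r") as f:
    train = f["train"][:]
    queries = f["test/queries"][:]

# Brute-force top-10 by dot product (the dataset's similarity;
# vectors are not normalized, so no cosine normalization is applied)
scores = queries @ train.T                      # (n_queries, n_train)
knns = np.argsort(-scores, axis=1)[:, :10]      # identifiers of the 10 most similar
dists = np.take_along_axis(scores, knns, axis=1)

print(knns.shape)  # (5, 10)
```

On the full-size data, the same brute-force scan is what the gold-standard `test/knns` and `test/dists` correspond to, just with $k = 100$.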
- NQ (Natural Questions):
  - repo: <https://github.com/beir-cellar/beir>
  - Model: SPLADE-v3 (sparse embeddings)
  - File: nq.h5
  - similarity: Dot product (vectors are not normalized)
  - Content of the h5 file:
    - group `train`: a 2.68-million sparse vector database, i.e., a sparse matrix (CSR) of size $30522 \times 2681468$ (f32). It contains the `data`, `indices`, and `indptr` datasets and a `shape` attribute.
    - group `otest`: collection of development queries:
      - `otest/queries`: 3452 query embeddings, i.e., a sparse matrix (CSR) of size $30522 \times 3452$ (f32). It contains the `data`, `indices`, and `indptr` datasets and a `shape` attribute.
      - `otest/knns`: the gold-standard identifiers of the 100 nearest neighbors of `otest/queries` in `train`, i.e., a matrix of size $100 \times 3452$ (i32).
      - `otest/dists`: the gold-standard distances (dot products) of the 100 nearest neighbors of `otest/queries` in `train`, i.e., a matrix of size $100 \times 3452$ (f32).
  - See the example below for how to work with this file.

- FIQA (Financial Question Answering):
  - repo: <https://github.com/beir-cellar/beir>
  - Model: SPLADE-v3 (sparse embeddings)
  - File: fiqa-dev.h5
  - similarity: Dot product (vectors are not normalized)
  - Content of the h5 file:
    - group `train`: a 57k sparse vector database, i.e., a sparse matrix (CSR) of size $30522 \times 57638$ (f32). It contains the `data`, `indices`, and `indptr` datasets and a `shape` attribute.
    - group `otest`: collection of development queries:
      - `otest/queries`: 6648 query embeddings, i.e., a sparse matrix (CSR) of size $30522 \times 6648$ (f32). It contains the `data`, `indices`, and `indptr` datasets and a `shape` attribute.
      - `otest/knns`: the gold-standard identifiers of the 100 nearest neighbors of `otest/queries` in `train`, i.e., a matrix of size $100 \times 6648$ (i32).
      - `otest/dists`: the gold-standard distances (dot products) of the 100 nearest neighbors of `otest/queries` in `train`, i.e., a matrix of size $100 \times 6648$ (f32).
  - See the example below for how to work with this file.

Note: The h5py and HDF5.jl packages read matrices in their platform's native order, so the dimensions may appear permuted with respect to what is described here; the resulting in-memory layout, however, is the one expected by fast implementations.

### Python Example (Loading Sparse Matrices)

Here is a small example of how to load the sparse matrices from `nq.h5` and `fiqa-dev.h5` using `scipy`:

```python
import h5py
from scipy.sparse import csr_matrix

def load_sparse_matrix(h5_group):
    indptr = h5_group['indptr'][:]
    indices = h5_group['indices'][:]
    data = h5_group['data'][:]
    shape = tuple(h5_group.attrs['shape'])
    return csr_matrix((data, indices, indptr), shape=shape)

with h5py.File('nq.h5', 'r') as f:
    train_matrix = load_sparse_matrix(f['train'])
    query_matrix = load_sparse_matrix(f['otest']['queries'])

print(f"Train shape: {train_matrix.shape}")
print(f"Query shape: {query_matrix.shape}")
```
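Once loaded, the gold-standard `otest/knns` and `otest/dists` correspond to an exact top-$k$ scan by dot product. A minimal sketch of that scan, using toy random CSR matrices (with shrunken sizes) standing in for the ones loaded from `nq.h5` above:

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Toy stand-ins for the matrices loaded from the h5 file;
# sizes are shrunk for illustration.
train = sparse_random(1000, 64, density=0.05, format="csr",
                      dtype=np.float32, random_state=0)
queries = sparse_random(8, 64, density=0.05, format="csr",
                        dtype=np.float32, random_state=1)

k = 10
# Sparse-sparse product, densified only at the (n_queries, n_train) score level
scores = (queries @ train.T).toarray()
knns = np.argsort(-scores, axis=1)[:, :k]        # row ids of the k best matches
dists = np.take_along_axis(scores, knns, axis=1)  # their dot-product scores

print(knns.shape, dists.shape)  # (8, 10) (8, 10)
```

On the real data this exact scan is expensive; it is only meant to show how the `knns`/`dists` pairs relate to the `train` and `otest/queries` matrices.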