This repository contains the development data files used in the SISAP2026 indexing challenge.
Datasets for previous editions:
Datasets
WIKIPEDIA (English articles):
- repo: https://huggingface.co/datasets/wikimedia/wikipedia
- BGE m3 model: https://huggingface.co/BAAI/bge-m3
- File: benchmark-dev-wikipedia-bge-m3.h5
- similarity: Cosine / dot product
- Content of the h5 file:
  - dataset `train`: a 6.35 million vector database, i.e., a matrix of size $1024 \times 6350000$ (f16)
  - group `itest`: collection of data related to in-distribution queries (articles removed from the English Wikipedia corpus):
    - `itest/queries`: a 10,000 vector database, i.e., a matrix of size $1024 \times 10000$ (f16)
    - `itest/knns`: the gold-standard identifiers for the 1000 nearest neighbors of `itest/queries` in `train`, i.e., a matrix $1000 \times 10000$ (i32)
    - `itest/dists`: the gold-standard distances (1 - dot) for the 1000 nearest neighbors of `itest/queries` in `train`, i.e., a matrix $1000 \times 10000$ (f32)
  - group `otest`: collection of data related to out-of-distribution queries (same model on random articles from the Spanish Wikipedia, i.e., cross-lingual retrieval):
    - `otest/queries`: a 10,000 vector database, i.e., a matrix of size $1024 \times 10000$ (f16)
    - `otest/knns`: the gold-standard identifiers for the 1000 nearest neighbors of `otest/queries` in `train`, i.e., a matrix $1000 \times 10000$ (i32)
    - `otest/dists`: the gold-standard distances (1 - dot) for the 1000 nearest neighbors of `otest/queries` in `train`, i.e., a matrix $1000 \times 10000$ (f32)
  - group `allknn`:
    - `allknn/knns`: the gold-standard identifiers for the all-knn graph of `train`, i.e., a matrix $32 \times 6350000$ (i32)
    - `allknn/dists`: the gold-standard distances (1 - dot) for the all-knn graph of `train`, i.e., a matrix $32 \times 6350000$ (f32)
WIKIPEDIA Small (English articles):
- This is a small version of the WIKIPEDIA database for testing and development purposes; more precisely, the `train` dataset is a 200k vector database.
- File: benchmark-dev-wikipedia-bge-m3-small.h5
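The dense files above can be read directly with h5py. The following is a minimal sketch, assuming the dataset/group layout described above; the helper name `load_wikipedia_dev` is our own, not part of the distribution:

```python
import h5py

def load_wikipedia_dev(path):
    """Hypothetical helper: load the dense WIKIPEDIA dev file.

    Assumes the layout described above (dataset `train`, group `itest`);
    `[:]` reads each HDF5 dataset fully into a NumPy array.
    """
    with h5py.File(path, 'r') as f:
        train = f['train'][:]            # database vectors (f16)
        queries = f['itest/queries'][:]  # in-distribution query vectors (f16)
        knns = f['itest/knns'][:]        # gold-standard neighbor identifiers (i32)
        dists = f['itest/dists'][:]      # gold-standard distances, 1 - dot (f32)
    return train, queries, knns, dists

# usage, e.g.:
# train, queries, knns, dists = load_wikipedia_dev('benchmark-dev-wikipedia-bge-m3-small.h5')
```

The small file is a convenient target while developing, since it fits comfortably in memory.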
LLAMA (Llama-3-8B-262k):
- repo: https://huggingface.co/datasets/vector-index-bench/vibe
- Model: Llama-3.2-8B
- File: llama-dev.h5
- similarity: Dot product (vectors are not normalized)
- Content of the h5 file:
  - dataset `train`: a 256k vector database, i.e., a matrix of size $128 \times 256921$ (f32)
  - group `test`: collection of development queries:
    - `test/queries`: a 1,000 vector database, i.e., a matrix of size $128 \times 1000$ (f32)
    - `test/knns`: the gold-standard identifiers for the 100 nearest neighbors of `test/queries` in `train`, i.e., a matrix $100 \times 1000$ (i64)
    - `test/dists`: the gold-standard distances (dot product) for the 100 nearest neighbors of `test/queries` in `train`, i.e., a matrix $100 \times 1000$ (f64)
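The gold-standard `knns` arrays exist so that the recall of an approximate index can be measured by intersecting its result lists with the gold identifiers. A minimal sketch with synthetic arrays; the function name `recall_at_k` is our own, and it assumes both arrays use the same identifier base (note that files written from Julia may use 1-based ids):

```python
import numpy as np

def recall_at_k(gold_knns, found_knns, k):
    """Average fraction of the k gold neighbors recovered per query.

    Hypothetical helper: both inputs are (n_queries, >= k) integer arrays
    whose rows list neighbor identifiers in the same id convention.
    """
    hits = 0
    for gold, found in zip(gold_knns[:, :k], found_knns[:, :k]):
        hits += len(np.intersect1d(gold, found))  # shared identifiers
    return hits / (gold_knns.shape[0] * k)

# toy check with synthetic identifiers: 5 of 6 gold neighbors recovered
gold = np.array([[1, 2, 3], [4, 5, 6]])
found = np.array([[1, 2, 9], [4, 5, 6]])
print(recall_at_k(gold, found, k=3))  # 5/6 ≈ 0.833
```

With the real files, `gold_knns` would come from the `knns` dataset and `found_knns` from the index under evaluation, truncated to the same k.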
NQ (Natural Questions):
- repo: https://github.com/beir-cellar/beir
- Model: SPLADE-v3 (sparse embeddings)
- File: nq.h5
- similarity: Dot product, vectors are not normalized
- Content of the h5 file:
  - group `train`: a 2.68 million sparse vector database, i.e., a sparse matrix (CSR) of size $30522 \times 2681468$ (f32). It contains `data`, `indices`, and `indptr` datasets and a `shape` attribute.
  - group `otest`: collection of development queries:
    - `otest/queries`: 3452 query embeddings, i.e., a sparse matrix (CSR) of size $30522 \times 3452$ (f32). It contains `data`, `indices`, and `indptr` datasets and a `shape` attribute.
    - `otest/knns`: the gold-standard identifiers for the 100 nearest neighbors of `otest/queries` in `train`, i.e., a matrix $100 \times 3452$ (i32)
    - `otest/dists`: the gold-standard distances (dot product) for the 100 nearest neighbors of `otest/queries` in `train`, i.e., a matrix $100 \times 3452$ (f32)
- See the example below for how to work with the file
FIQA (Financial Question Answering):
- repo: https://github.com/beir-cellar/beir
- Model: SPLADE-v3 (sparse embeddings)
- File: fiqa-dev.h5
- similarity: Dot product, vectors are not normalized
- Content of the h5 file:
  - group `train`: a 57k sparse vector database, i.e., a sparse matrix (CSR) of size $30522 \times 57638$ (f32). It contains `data`, `indices`, and `indptr` datasets and a `shape` attribute.
  - group `otest`: collection of development queries:
    - `otest/queries`: 6648 query embeddings, i.e., a sparse matrix (CSR) of size $30522 \times 6648$ (f32). It contains `data`, `indices`, and `indptr` datasets and a `shape` attribute.
    - `otest/knns`: the gold-standard identifiers for the 100 nearest neighbors of `otest/queries` in `train`, i.e., a matrix $100 \times 6648$ (i32)
    - `otest/dists`: the gold-standard distances (dot product) for the 100 nearest neighbors of `otest/queries` in `train`, i.e., a matrix $100 \times 6648$ (f32)
- See the example below for how to work with the file
Note: h5py (Python) and HDF5.jl (Julia) read matrices in their platform's native memory order, so the dimensions may appear permuted with respect to the sizes given above. The resulting layout is nevertheless the one expected by fast implementations.
Python Example (Loading Sparse Matrices)
Here is a small example of how to load the sparse matrices from nq.h5 and fiqa-dev.h5 using scipy:
```python
import h5py
from scipy.sparse import csr_matrix

def load_sparse_matrix(h5_group):
    indptr = h5_group['indptr'][:]
    indices = h5_group['indices'][:]
    data = h5_group['data'][:]
    shape = tuple(h5_group.attrs['shape'])
    return csr_matrix((data, indices, indptr), shape=shape)

with h5py.File('nq.h5', 'r') as f:
    train_matrix = load_sparse_matrix(f['train'])
    query_matrix = load_sparse_matrix(f['otest']['queries'])
    print(f"Train shape: {train_matrix.shape}")
    print(f"Query shape: {query_matrix.shape}")
```
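Once the CSR matrices are loaded, the gold-standard `knns`/`dists` for the dot-product datasets can in principle be reproduced by brute force. A sketch with random sparse data standing in for the real matrices; `topk_dot` is our own name, and it assumes each row is one vector (transpose first if the loaded orientation differs, per the note above):

```python
import numpy as np
from scipy.sparse import random as sparse_random

def topk_dot(train, queries, k):
    """Hypothetical brute-force dot-product k-NN for row-vector CSR matrices."""
    scores = (queries @ train.T).toarray()            # scores[i, j] = <query_i, train_j>
    ids = np.argsort(-scores, axis=1)[:, :k]          # indices of the k largest scores
    dists = np.take_along_axis(scores, ids, axis=1)   # the scores themselves, descending
    return ids, dists

# synthetic stand-ins: 100 database vectors, 5 queries, 50 dimensions
rng = np.random.default_rng(0)
train = sparse_random(100, 50, density=0.1, format='csr', random_state=rng)
queries = sparse_random(5, 50, density=0.1, format='csr', random_state=rng)
ids, dists = topk_dot(train, queries, k=10)
print(ids.shape, dists.shape)  # (5, 10) (5, 10)
```

Dense argsort over all scores is only feasible at dev scale; for the 2.68M-vector NQ `train` matrix, a blocked or indexed approach would be needed.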