---
license: mit
---

This repository contains the development data files used in the SISAP2026 indexing challenge.

Datasets for previous editions:

Datasets

  • WIKIPEDIA (English articles):

    • repo: https://huggingface.co/datasets/wikimedia/wikipedia
    • BGE m3 model: https://huggingface.co/BAAI/bge-m3
    • File: benchmark-dev-wikipedia-bge-m3.h5
    • similarity: Cosine / dot product
    • Content of the h5 file:
      • dataset train: a 6.35 million vector database, i.e., a matrix of size $1024 \times 6350000$ (f16)
      • group itest: collection of data related to in-distribution queries (articles removed from the English Wikipedia corpus):
        • itest/queries: a set of 10,000 query vectors, i.e., a matrix of size $1024 \times 10000$ (f16)
        • itest/knns: the gold-standard identifiers of the 1000 nearest neighbors of itest/queries in train, i.e., a matrix $1000 \times 10000$ (i32).
        • itest/dists: the gold-standard distances (1 - dot product) of the 1000 nearest neighbors of itest/queries in train, i.e., a matrix $1000 \times 10000$ (f32).
      • group otest: collection of data related to out-of-distribution queries (the same model applied to random articles from the Spanish Wikipedia, i.e., cross-lingual retrieval):
        • otest/queries: a set of 10,000 query vectors, i.e., a matrix of size $1024 \times 10000$ (f16)
        • otest/knns: the gold-standard identifiers of the 1000 nearest neighbors of otest/queries in train, i.e., a matrix $1000 \times 10000$ (i32).
        • otest/dists: the gold-standard distances (1 - dot product) of the 1000 nearest neighbors of otest/queries in train, i.e., a matrix $1000 \times 10000$ (f32).
      • group allknn:
        • allknn/knns: the gold-standard identifiers for the all-knn graph of train, i.e., a matrix $32 \times 6350000$ (i32).
        • allknn/dists: the gold-standard distances (1 - dot product) for the all-knn graph of train, i.e., a matrix $32 \times 6350000$ (f32).
  • WIKIPEDIA Small (English articles):

    • This is a small version of the WIKIPEDIA database for testing and development purposes; more precisely, its train dataset is a 200k-vector database.
    • File: benchmark-dev-wikipedia-bge-m3-small.h5
  • LLAMA (Llama-3-8B-262k):

    • repo: https://huggingface.co/datasets/vector-index-bench/vibe
    • Model: Llama-3.2-8B
    • File: llama-dev.h5
    • similarity: Dot product (vectors are not normalized)
    • Content of the h5 file:
      • dataset train: a 256k vector database, i.e., a matrix of size $128 \times 256921$ (f32)
      • group test: collection of development queries:
        • test/queries: a set of 1,000 query vectors, i.e., a matrix of size $128 \times 1000$ (f32)
        • test/knns: the gold-standard identifiers for the 100 nearest neighbors of test/queries in train, i.e., a matrix $100 \times 1000$ (i64).
        • test/dists: the gold-standard distances (dot product) for the 100 nearest neighbors of test/queries in train, i.e., a matrix $100 \times 1000$ (f64).
  • NQ (Natural Questions):

    • repo: https://github.com/beir-cellar/beir
    • Model: SPLADE-v3 (sparse embeddings)
    • File: nq.h5
    • similarity: Dot product (vectors are not normalized)
    • Content of the h5 file:
      • group train: a 2.68 million sparse vector database, i.e., a sparse matrix (CSR) of size $30522 \times 2681468$ (f32). It contains data, indices, indptr datasets and a shape attribute.
      • group otest: collection of development queries:
        • otest/queries: 3452 query embeddings, i.e., a sparse matrix (CSR) of size $30522 \times 3452$ (f32). It contains data, indices, indptr datasets and a shape attribute.
        • otest/knns: the gold-standard identifiers for the 100 nearest neighbors of otest/queries in train, i.e., a matrix $100 \times 3452$ (i32).
        • otest/dists: the gold-standard distances (dot product) for the 100 nearest neighbors of otest/queries in train, i.e., a matrix $100 \times 3452$ (f32).
    • See the example below for how to work with this file.
  • FIQA (Financial Question Answering):

    • repo: https://github.com/beir-cellar/beir
    • Model: SPLADE-v3 (sparse embeddings)
    • File: fiqa-dev.h5
    • similarity: Dot product (vectors are not normalized)
    • Content of the h5 file:
      • group train: a 57k sparse vector database, i.e., a sparse matrix (CSR) of size $30522 \times 57638$ (f32). It contains data, indices, indptr datasets and a shape attribute.
      • group otest: collection of development queries:
        • otest/queries: 6648 query embeddings, i.e., a sparse matrix (CSR) of size $30522 \times 6648$ (f32). It contains data, indices, indptr datasets and a shape attribute.
        • otest/knns: the gold-standard identifiers for the 100 nearest neighbors of otest/queries in train, i.e., a matrix $100 \times 6648$ (i32).
        • otest/dists: the gold-standard distances (dot product) for the 100 nearest neighbors of otest/queries in train, i.e., a matrix $100 \times 6648$ (f32).
    • See the example below for how to work with this file.

Note: h5py and HDF5.jl read matrices in their platform's native memory order, so the dimensions may appear permuted with respect to the sizes stated above; the resulting layout is nevertheless the one expected for fast implementations.
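
Loading the dense datasets (WIKIPEDIA, LLAMA) follows directly from h5py. The sketch below is illustrative only: it builds a tiny synthetic file that mimics the train/itest group layout described above (with made-up sizes, f32 instead of the real f16), then reads it back and runs a brute-force dot-product search. It also shows the dimension-order note in action: a matrix described as $d \times n$ above appears with shape (n, d) in h5py.

```python
import os
import tempfile

import h5py
import numpy as np

# Build a tiny synthetic file mimicking the dense layout described above.
# Illustrative sizes only: 100 database vectors, 5 queries, dimension 16, k=10.
d, n, nq, k = 16, 100, 5, 10
rng = np.random.default_rng(0)
path = os.path.join(tempfile.mkdtemp(), "toy-dense.h5")
with h5py.File(path, "w") as f:
    f.create_dataset("train", data=rng.standard_normal((n, d), dtype=np.float32))
    f.create_dataset("itest/queries",
                     data=rng.standard_normal((nq, d), dtype=np.float32))

with h5py.File(path, "r") as f:
    train = f["train"][:]            # h5py shape (n, d): one vector per row
    queries = f["itest/queries"][:]

# Brute-force search under the cosine/dot-product similarity: for normalized
# vectors, distance = 1 - dot, so nearest neighbors maximize the dot product.
scores = queries @ train.T                     # (nq, n) similarity matrix
knns = np.argsort(-scores, axis=1)[:, :k]      # top-k identifiers per query
dists = 1.0 - np.take_along_axis(scores, knns, axis=1)
print(knns.shape, dists.shape)                 # (5, 10) (5, 10)
```

The same pattern applies to the real files after swapping in the actual filename and groups; only the sizes and dtypes change.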

Python Example (Loading Sparse Matrices)

Here is a small example of how to load the sparse matrices from nq.h5 and fiqa-dev.h5 using scipy:

```python
import h5py
from scipy.sparse import csr_matrix

def load_sparse_matrix(h5_group):
    # Rebuild the CSR matrix from its three component datasets
    # and the shape stored as a group attribute.
    indptr = h5_group['indptr'][:]
    indices = h5_group['indices'][:]
    data = h5_group['data'][:]
    shape = tuple(h5_group.attrs['shape'])
    return csr_matrix((data, indices, indptr), shape=shape)

with h5py.File('nq.h5', 'r') as f:
    train_matrix = load_sparse_matrix(f['train'])
    query_matrix = load_sparse_matrix(f['otest']['queries'])

    print(f"Train shape: {train_matrix.shape}")
    print(f"Query shape: {query_matrix.shape}")
```
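
Once loaded, the CSR matrices support matrix multiplication, so a brute-force dot-product search works the same way as in the dense case. The sketch below uses small random matrices as stand-ins for the real train/otest data (the sizes are illustrative; the real SPLADE vocabulary is 30522) and also shows how recall@k can be computed against a gold standard, here trivially the exact result itself.

```python
import numpy as np
from scipy.sparse import random as sparse_random

vocab, n, nq, k = 1000, 500, 8, 10  # illustrative sizes only

# Stand-ins for the CSR matrices returned by load_sparse_matrix above.
train = sparse_random(n, vocab, density=0.02, format="csr",
                      dtype=np.float32, random_state=42)
queries = sparse_random(nq, vocab, density=0.02, format="csr",
                        dtype=np.float32, random_state=7)

# Brute-force top-k by dot product (larger score = closer for these datasets).
scores = (queries @ train.T).toarray()     # (nq, n) dense similarity matrix
knns = np.argsort(-scores, axis=1)[:, :k]  # identifiers of the k best matches

# Recall@k against a gold standard (here the exact result, so recall is 1.0).
gold = knns
recall = np.mean([len(set(a) & set(b)) / k for a, b in zip(knns, gold)])
print(round(recall, 2))
```

For the real files, substitute the gold standard with otest/knns and the matrices loaded above; the recall formula is unchanged.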