
# Polymorphic Sybil Benchmark: Retrieval Indexes

Companion retrieval artifacts for the Failure-Mode-Aware Evaluation Framework for Polymorphic Sybil Retrieval Poisoning in Grounded QA (NeurIPS 2026 Datasets & Benchmarks Track submission, under review). This repository hosts the large pre-built indexes that are impractical to distribute via the code repository.

Code, manifests, evaluator, and audit artifacts live in a separate repository: anonymous.4open.science/r/polymorphic-sybil-benchmark-code-8ED0

## ⚠️ Anonymity notice

This repository is an anonymous mirror for NeurIPS 2026 double-blind review. Author identity, affiliation, and acknowledgements are intentionally withheld. A de-anonymized release with full citation will be published after the review period.

## Contents

| Artifact | Size | Format | Use |
|---|---|---|---|
| `bm25/` | ~11 GB | Lucene index | BM25 top-1000 → ColBERTv2 rerank → top-10 |
| `e5_faiss/` | ~81 GB | FAISS `IndexFlatIP` (dim 1024) | E5-large-v2 top-200 → cross-encoder rerank → top-10 |
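Both pipelines share the same retrieve-then-rerank shape: a cheap first stage builds a large candidate pool, a stronger scorer reorders it, and only the top-10 passages reach the reader. A toy sketch of that pattern (the scoring functions below are placeholders, not the real BM25/ColBERTv2/cross-encoder models):

```python
from typing import Callable

def retrieve_then_rerank(
    query: str,
    corpus: dict[str, str],                    # docid -> passage text
    first_stage: Callable[[str, str], float],  # cheap scorer (BM25-like)
    reranker: Callable[[str, str], float],     # strong scorer (reranker-like)
    pool_size: int = 1000,                     # top-1000 for BM25, top-200 for E5
    k: int = 10,
) -> list[str]:
    """Two-stage retrieval: cheap pooling, then expensive reranking."""
    # Stage 1: score every passage cheaply, keep the top pool_size candidates.
    pool = sorted(
        corpus, key=lambda d: first_stage(query, corpus[d]), reverse=True
    )[:pool_size]
    # Stage 2: rerank only the pool with the expensive model, return top-k.
    return sorted(
        pool, key=lambda d: reranker(query, corpus[d]), reverse=True
    )[:k]
```

The point of the pool sizes (1000 vs. 200) is cost control: the reranker only ever sees the pool, never the full 21M-passage corpus.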

ColBERTv2 reranking uses weights downloaded at runtime from the official HuggingFace checkpoint (`colbert-ir/colbertv2.0`); no pre-built ColBERT index is stored in this repository.

The Wikipedia DPR 100-word corpus (21,015,324 passages) is embedded within the BM25 Lucene index; passage text is retrievable via Pyserini's LuceneSearcher.doc() interface.
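For example, a passage can be pulled straight out of the Lucene index (a sketch; it assumes `pyserini` is installed, `WIKI_INDEX` points at `bm25/`, and that each stored document's raw JSON carries the passage text under a `contents` field, which is Pyserini's default layout):

```python
import json

# Real usage (requires pyserini and the downloaded index):
#   from pyserini.search.lucene import LuceneSearcher
#   searcher = LuceneSearcher(os.environ["WIKI_INDEX"])

def fetch_passage(searcher, docid: str) -> str:
    """Look up one passage by ID and return its text.

    Works with any object exposing Pyserini's LuceneSearcher.doc()
    interface, where doc(docid).raw() returns the stored JSON document.
    """
    doc = searcher.doc(docid)
    if doc is None:
        raise KeyError(f"passage {docid!r} not in index")
    return json.loads(doc.raw())["contents"]
```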

## Quick start

```python
from huggingface_hub import snapshot_download

# Download only the BM25 index (~11 GB)
local = snapshot_download(
    repo_id="anon-neurips-ed-2026/polymorphic-sybil-benchmark-data",
    repo_type="dataset",
    allow_patterns=["bm25/*"],
)
print(f"BM25 index at: {local}/bm25/")
```

To download everything (~92 GB):

```python
snapshot_download(
    repo_id="anon-neurips-ed-2026/polymorphic-sybil-benchmark-data",
    repo_type="dataset",
)
```

Or via CLI:

```shell
huggingface-cli download anon-neurips-ed-2026/polymorphic-sybil-benchmark-data \
    --repo-type=dataset --local-dir ./hf_indexes
```

After download, point the runners (in the code repository) at the indexes:

```shell
export WIKI_INDEX=./hf_indexes/bm25
export FAISS_INDEX=./hf_indexes/e5_faiss
```
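Inside Python, these locations can be picked up via `os.environ`; a minimal sketch (the fallback paths are an assumption mirroring the CLI example above, not something the code repository mandates):

```python
import os

def index_paths() -> dict[str, str]:
    """Resolve index locations from the environment, with local-dir fallbacks."""
    return {
        "bm25": os.environ.get("WIKI_INDEX", "./hf_indexes/bm25"),
        "e5_faiss": os.environ.get("FAISS_INDEX", "./hf_indexes/e5_faiss"),
    }
```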

## Source data

The corpus is the Wikipedia DPR 100-word split (Karpukhin et al., 2020) — 21,015,324 passages. The benchmark draws questions from four open-domain QA datasets (the questions/gold answers themselves are in the code repository's manifest, not this index repository):

| Dataset | Split | Pool size |
|---|---|---|
| Natural Questions (NQ-open) | validation | 3,610 |
| HotpotQA | distractor dev | 7,405 |
| TriviaQA (unfiltered.nocontext) | validation | 11,313 |
| 2WikiMultiHopQA | dev | 12,576 |

## Reproducibility

These indexes are standard pre-built indexes over a public corpus — not a contribution of this work. They are reconstructible from upstream sources (Wikipedia DPR 100w corpus + Pyserini BM25 builder + E5-large-v2 + FAISS) and are redistributed here only as a reproducibility convenience to spare reviewers and users the ~10 GPU-hour index build.
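For reference, FAISS's `IndexFlatIP` is an exact (brute-force) maximum-inner-product index: it stores the raw embedding matrix and, per query, scores every stored vector by dot product. Its search semantics at toy scale in numpy (dim 4 here rather than the real dim 1024):

```python
import numpy as np

def flat_ip_search(index: np.ndarray, query: np.ndarray, k: int):
    """Exact inner-product top-k, as IndexFlatIP computes it.

    index: (n, d) float32 matrix of stored embeddings
    query: (d,) float32 query embedding
    """
    scores = index @ query            # inner product with every stored vector
    top = np.argsort(-scores)[:k]     # indices of the k highest scores
    return top, scores[top]
```

With E5-large-v2 embeddings, this exhaustive scan is what produces the top-200 candidate pool that the cross-encoder then reranks.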

For full reproduction (paper Tables 3, 5, 8) see the code repository.

## License

Hosted retrieval indexes (BM25, E5 FAISS): CC BY-SA 4.0, inheriting from the Wikipedia DPR 100-word corpus (Karpukhin et al., 2020), which is itself derived from Wikipedia under CC BY-SA 3.0/4.0.

The four QA datasets referenced above are not redistributed in this repository — only the underlying Wikipedia corpus is indexed here. Questions and gold answers are in the code repository's manifest, under their respective upstream licenses:

- Natural Questions: CC BY-SA 3.0
- HotpotQA: CC BY-SA 4.0
- TriviaQA: Apache 2.0
- 2WikiMultiHopQA: Apache 2.0

Users who redistribute these artifacts must comply with the applicable upstream licenses.

## Citation

```bibtex
@misc{polymorphic-sybil-2026-anon,
  title  = {Failure-Mode-Aware Evaluation Framework for Polymorphic Sybil
            Retrieval Poisoning in Grounded QA},
  author = {Anonymous},
  year   = {2026},
  note   = {NeurIPS 2026 Datasets and Benchmarks Track submission, under review;
            citation will be updated after de-anonymization}
}
```