---
license: cc-by-sa-4.0
language:
  - en
pretty_name: BEIR NFCorpus (Retrieval)
size_categories:
  - 1K<n<10K
tags:
  - biomedical
  - information-retrieval
  - beir
  - retrieval
  - rag
  - nfcorpus
---

# BEIR NFCorpus (orgrctera/beir_nfcorpus)

## Overview

This release packages NFCorpus from the BEIR (Benchmarking IR) benchmark as a single table-oriented dataset for retrieval evaluation and tooling (e.g. Langfuse-exported runs). NFCorpus is a biomedical information retrieval task: natural-language queries in plain English are matched to PubMed-style documents, with relevance judgments (qrels) indicating which documents support each query.

NFCorpus was introduced as a full-text learning-to-rank resource for medical IR: queries reflect how non-experts ask health questions (sourced from NutritionFacts.org content), while documents are scientific abstracts/articles—creating a deliberate lexical and stylistic gap between query and corpus that mirrors real consumer health search.

BEIR aggregates multiple heterogeneous IR datasets under one protocol so dense/sparse/neural retrievers can be compared—including in zero-shot settings where models are not trained on the target domain. NFCorpus is one of the Bio-Medical IR tasks in BEIR (alongside e.g. TREC-COVID and BioASQ).

This Hub dataset contains 3,237 query-level rows with train / dev / test splits, aligned with the standard BEIR NFCorpus split.

## Task

- **Task type**: Retrieval (document retrieval against an external corpus identified by BEIR IDs).
- **Input** (`input`): The user query text (a natural-language question or topic string).
- **Reference** (`expected_output`): A JSON string encoding the list of relevant document IDs with relevance scores (BEIR qrels; here typically binary `1` for relevant pairs), e.g. `[{"id": "MED-5002", "score": 1}, ...]`.
  Evaluators rank a candidate pool (the full NFCorpus corpus in BEIR) and score the overlap with these IDs using standard IR metrics (nDCG, MRR, Recall@k, etc.).
- **Metadata**: The original BEIR query identifier (`query_id`) and split name are preserved for traceability.

The retrieval system’s job is to return the correct MED-* (or corpus-specific) document IDs for each query when scored against the full NFCorpus corpus distributed with BEIR—not included row-wise in this table.
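This scoring loop can be sketched in a few lines. A minimal illustration, assuming binary qrels as described above; the helper names and the candidate ranking are made up for the example and are not part of this release:

```python
import json
import math

def parse_qrels(expected_output: str) -> dict[str, int]:
    """Parse the expected_output JSON string into {doc_id: relevance}."""
    return {item["id"]: item["score"] for item in json.loads(expected_output)}

def recall_at_k(ranking: list[str], qrels: dict[str, int], k: int) -> float:
    """Fraction of the relevant documents retrieved in the top k."""
    relevant = {doc for doc, score in qrels.items() if score > 0}
    hits = sum(1 for doc in ranking[:k] if doc in relevant)
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_k(ranking: list[str], qrels: dict[str, int], k: int) -> float:
    """Binary-gain nDCG@k against the qrels."""
    dcg = sum(
        1.0 / math.log2(rank + 2)
        for rank, doc in enumerate(ranking[:k])
        if qrels.get(doc, 0) > 0
    )
    n_rel = sum(1 for score in qrels.values() if score > 0)
    ideal = sum(1.0 / math.log2(rank + 2) for rank in range(min(n_rel, k)))
    return dcg / ideal if ideal else 0.0

# Hypothetical system ranking scored against an excerpt of real qrels.
expected = '[{"id": "MED-5002", "score": 1}, {"id": "MED-2215", "score": 1}]'
qrels = parse_qrels(expected)
ranking = ["MED-5002", "MED-9999", "MED-2215", "MED-0001"]
print(recall_at_k(ranking, qrels, 2))  # 0.5
print(ndcg_at_k(ranking, qrels, 4))
```

In practice the ranking would come from retrieving over the full BEIR NFCorpus corpus; only the scoring side is shown here.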

## Background

### NFCorpus (original dataset)

The NFCorpus paper (Boteva et al., ECIR 2016) describes building a dataset where queries come from consumer-facing health topics and documents from PubMed, with relevance labels derived from site structure (e.g. direct citations, indirect links, topic/tag relations). The goal is to study learning-to-rank and semantic retrieval when queries are in lay language and documents are technical.

### BEIR reformulation

BEIR (Thakur et al., NeurIPS 2021 Datasets & Benchmarks) re-hosts NFCorpus in a standardized layout: a corpus file (JSONL: `_id`, `title`, `text`), a queries file (JSONL: `_id`, `text`), and qrels (TSV: `query-id`, `corpus-id`, `score`). That common format enables cross-dataset benchmarks and zero-shot evaluation of neural retrieval models.
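Loading that layout takes only the standard library. A sketch, assuming the usual BEIR directory structure (`corpus.jsonl`, `queries.jsonl`, `qrels/test.tsv`); the records written below are tiny synthetic placeholders, not real NFCorpus data:

```python
import csv
import json
import tempfile
from collections import defaultdict
from pathlib import Path

def load_beir_dir(root: Path):
    """Load the corpus/queries JSONL files and the qrels TSV in the BEIR layout."""
    corpus = {}
    with open(root / "corpus.jsonl", encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            corpus[doc["_id"]] = {"title": doc["title"], "text": doc["text"]}
    queries = {}
    with open(root / "queries.jsonl", encoding="utf-8") as f:
        for line in f:
            q = json.loads(line)
            queries[q["_id"]] = q["text"]
    qrels = defaultdict(dict)  # query-id -> {corpus-id: score}
    with open(root / "qrels" / "test.tsv", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            qrels[row["query-id"]][row["corpus-id"]] = int(row["score"])
    return corpus, queries, dict(qrels)

# Synthetic example in the same layout (placeholder content only).
root = Path(tempfile.mkdtemp())
(root / "qrels").mkdir()
(root / "corpus.jsonl").write_text(
    json.dumps({"_id": "MED-1", "title": "Doc", "text": "Body."}) + "\n")
(root / "queries.jsonl").write_text(
    json.dumps({"_id": "PLAIN-1", "text": "tofu dementia"}) + "\n")
(root / "qrels" / "test.tsv").write_text(
    "query-id\tcorpus-id\tscore\nPLAIN-1\tMED-1\t1\n")

corpus, queries, qrels = load_beir_dir(root)
print(qrels["PLAIN-1"])  # {'MED-1': 1}
```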

### This release

Rows were exported from Langfuse (CTERA AI evaluation pipeline) in a flat, parquet-friendly schema: one row per query, with the gold relevant document IDs in `expected_output` for downstream scoring and observability.

## Data fields

| Column | Type | Description |
|---|---|---|
| `id` | string | Stable UUID for this row in this Hub release. |
| `input` | string | Query text (natural-language question or topic). |
| `expected_output` | string | JSON string: list of objects `{"id": "<corpus-doc-id>", "score": <int>}` (the qrels for that query). |
| `metadata.query_id` | string | BEIR NFCorpus query identifier (e.g. `PLAIN-3337`). |
| `metadata.split` | string | Split name: `train`, `dev`, or `test`. |
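Given that schema, a row converts into a query/qrels pair in a couple of lines. An illustrative sketch; the `row_to_qrels` helper is not part of this release, and the row below (including the placeholder UUID) merely follows the field layout described above:

```python
import json

def row_to_qrels(row: dict) -> tuple[str, dict[str, int]]:
    """Turn one exported row into (query_text, {doc_id: relevance})."""
    gold = {item["id"]: item["score"] for item in json.loads(row["expected_output"])}
    return row["input"], gold

# Illustrative row following the schema above (UUID is a placeholder).
row = {
    "id": "00000000-0000-0000-0000-000000000000",
    "input": "Does Tofu Cause Dementia?",
    "expected_output": '[{"id": "MED-5002", "score": 1}, {"id": "MED-2215", "score": 1}]',
    "metadata": {"query_id": "PLAIN-3337", "split": "train"},
}
query, gold = row_to_qrels(row)
print(query)         # Does Tofu Cause Dementia?
print(sorted(gold))  # ['MED-2215', 'MED-5002']
```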

## Splits

| Split | Rows |
|---|---|
| train | 2,590 |
| dev | 324 |
| test | 323 |
| **Total** | **3,237** |

## Examples

Illustrative rows (`expected_output` truncated where long).

### Example 1: lay query

- **input**: Does Tofu Cause Dementia?
- **metadata.query_id**: `PLAIN-3337`
- **metadata.split**: `train`
- **expected_output** (excerpt):

```json
[
  {"id": "MED-5002", "score": 1},
  {"id": "MED-2215", "score": 1},
  {"id": "MED-726", "score": 1},
  {"id": "MED-4548", "score": 1}
]
```

### Example 2: short topic query

- **input**: pancreatic cancer
- **metadata.query_id**: `PLAIN-1797`
- **metadata.split**: `train`
- **expected_output**: a JSON list of many `MED-*` documents with `"score": 1` (multi-document relevance for this query).

## References and citations

### BEIR benchmark (aggregation & protocol)

Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych. BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models. NeurIPS 2021 Datasets and Benchmarks Track.

```bibtex
@inproceedings{thakur2021beir,
  title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
  author={Thakur, Nandan and Reimers, Nils and R{\"u}ckl{\'e}, Andreas and Srivastava, Abhishek and Gurevych, Iryna},
  booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
  year={2021},
  url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```

### NFCorpus (original dataset)

Vera Boteva, Demian Gholipour Ghalandari, Artem Sokolov, Stefan Riezler. A Full-Text Learning to Rank Dataset for Medical Information Retrieval. ECIR 2016.

```bibtex
@inproceedings{boteva2016nfcorpus,
  author={Boteva, Vera and Gholipour Ghalandari, Demian and Sokolov, Artem and Riezler, Stefan},
  title={A Full-Text Learning to Rank Dataset for Medical Information Retrieval},
  booktitle={Advances in Information Retrieval: 38th European Conference on Information Retrieval (ECIR)},
  year={2016},
  pages={716--722},
  doi={10.1007/978-3-319-30671-1_58}
}
```

## License

NFCorpus and the BEIR preprocessing are distributed under CC BY-SA 4.0, as stated on the upstream BEIR Hugging Face dataset card. Verify the current terms on the official BEIR / NFCorpus sources before redistribution.

## Changelog

- **Dataset card**: Comprehensive README describing NFCorpus, the BEIR retrieval task, citations, schema, and examples.