---
language:
- en
license: cc-by-sa-4.0
tags:
- retrieval
- text-retrieval
- beir
- fact-checking
- claim-verification
- wikipedia
- benchmark
pretty_name: BEIR FEVER (retrieval)
size_categories:
- 100K<n<1M
task_categories:
- text-retrieval
---
# FEVER (BEIR) — Fact-checking retrieval
## Dataset description
**FEVER** (*Fact Extraction and VERification*) is a large-scale English dataset for **claim verification against textual sources**. Claims were produced by **altering sentences** drawn from **Wikipedia**; annotators then labeled each claim without knowing which source sentence it came from. Labels are **Supported**, **Refuted**, or **NotEnoughInfo** (with substantial inter-annotator agreement). For Supported and Refuted claims, annotators also identified the **sentence-level evidence** needed to justify the label.
**BEIR** (*Benchmarking IR*) repackaged FEVER—along with many other public corpora—as a standard **retrieval** benchmark for **zero-shot** evaluation of dense, sparse, and hybrid information retrieval models across heterogeneous tasks. In the BEIR formulation, **each claim acts as a query**, and the objective is to **retrieve relevant Wikipedia documents** (by title) that contain the evidence required for verification. This setting isolates **retrieval quality** as the variable of interest when paired with a fixed downstream verifier or when reporting standard IR metrics.
This repository (`orgrctera/beir_fever`) provides **train / validation / test** splits in **Parquet** form for retrieval evaluation pipelines. Each row is one **query** (a claim) with **relevance judgments** pointing at corpus document identifiers in the BEIR FEVER benchmark (Wikipedia article titles as used upstream).
### Scale and domain (BEIR FEVER)
The original FEVER release comprises on the order of **185k** verified claims; BEIR’s FEVER split follows the standard [BEIR](https://github.com/beir-cellar/beir) packaging. The **corpus** is Wikipedia-oriented text keyed by article identifiers (titles in the BEIR release). Exact counts for this Hub snapshot follow the upstream BEIR FEVER release—see the [BEIR repository](https://github.com/beir-cellar/beir) for version-precise figures.
## Task: retrieval (FEVER in BEIR)
The task is **ad hoc document retrieval** for **fact-checking**:
1. **Input:** a natural-language **claim** (the query).
2. **Output:** a ranked list of **document IDs** from the FEVER corpus (Wikipedia titles in the BEIR distribution), or scores over the full collection, such that **relevant** IDs—according to the official qrels—receive high rank.
Evaluation uses standard IR metrics (e.g., **nDCG@k**, **Recall@k**, **MRR**), computed with BEIR's evaluation scripts or frameworks such as Pyserini or the BEIR API.
> **Note:** Full retrieval evaluation also requires the **corpus** (passage or document text keyed by the same IDs). This card describes the **query + qrels** side as prepared for CTERA-style evaluation rows; align corpus IDs with the same **BEIR FEVER** corpus you use for indexing.
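As an illustrative sketch (not BEIR's official evaluator), per-query **Recall@k** and **nDCG@k** can be computed directly from a ranked list and the qrels. The system ranking below is invented for demonstration; the doc IDs follow the Wikipedia-title convention described above:

```python
import math

def recall_at_k(ranked, relevant, k):
    """Fraction of the relevant doc IDs that appear in the top k."""
    return len(set(ranked[:k]) & relevant) / len(relevant)

def ndcg_at_k(ranked, qrels, k):
    """nDCG@k with gains taken directly from the qrels scores."""
    dcg = sum(qrels.get(doc, 0) / math.log2(i + 2)
              for i, doc in enumerate(ranked[:k]))
    ideal = sorted(qrels.values(), reverse=True)[:k]
    idcg = sum(gain / math.log2(i + 2) for i, gain in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

# Qrels for one claim (doc IDs are Wikipedia titles, as in the BEIR FEVER
# corpus); the retrieved ranking below is hypothetical.
qrels = {"Reese_Witherspoon": 1, "Tennessee": 1, "New_Orleans": 1}
ranked = ["Reese_Witherspoon", "Nashville", "Tennessee",
          "Louisiana", "New_Orleans"]

recall = recall_at_k(ranked, set(qrels), k=5)  # all relevant docs in top-5
ndcg = ndcg_at_k(ranked, qrels, k=5)
```

For real runs, prefer the official tooling (e.g., BEIR's `EvaluateRetrieval` or `pytrec_eval`) over hand-rolled metrics.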
## Data format (this repository)
Each record includes:
| Field | Description |
|--------|-------------|
| `id` | UUID for this example row. |
| `input` | The **claim** text (query). |
| `expected_output` | JSON string: list of objects `{"id": "<corpus-doc-id>", "score": <relevance>}`. Document IDs are **Wikipedia article titles** as in the BEIR FEVER corpus; scores follow the BEIR qrels convention (typically `1` for relevant in binary settings). |
| `metadata.query_id` | Original BEIR / FEVER query identifier (string). |
| `metadata.split` | Split name: `train`, `dev` (validation), or `test`. |
### Example 1
```json
{
"id": "7e965799-c99c-46a6-95ab-91dae89ecd4f",
"input": "Robert Duvall has not won a BAFTA.",
"expected_output": "[{\"id\": \"Robert_Duvall\", \"score\": 1}]",
"metadata.query_id": "145027",
"metadata.split": "train"
}
```
### Example 2
```json
{
"id": "a2b4302d-b0cb-4917-ab1a-67622b3c9790",
"input": "Reese Witherspoon grew up in the United States.",
"expected_output": "[{\"id\": \"Tennessee\", \"score\": 1}, {\"id\": \"New_Orleans\", \"score\": 1}, {\"id\": \"Reese_Witherspoon\", \"score\": 1}]",
"metadata.query_id": "160148",
"metadata.split": "train"
}
```
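The example rows above can be folded into the `queries` / `qrels` dictionaries that BEIR-style evaluators expect. A minimal sketch, where `rows` stands in for records loaded from the Parquet splits:

```python
import json

# Two records in this repository's row format (values from the examples above).
rows = [
    {"input": "Robert Duvall has not won a BAFTA.",
     "expected_output": '[{"id": "Robert_Duvall", "score": 1}]',
     "metadata.query_id": "145027"},
    {"input": "Reese Witherspoon grew up in the United States.",
     "expected_output": ('[{"id": "Tennessee", "score": 1}, '
                         '{"id": "New_Orleans", "score": 1}, '
                         '{"id": "Reese_Witherspoon", "score": 1}]'),
     "metadata.query_id": "160148"},
]

# query_id -> claim text
queries = {r["metadata.query_id"]: r["input"] for r in rows}

# query_id -> {doc_id: relevance}, decoding the JSON-encoded judgments
qrels = {
    r["metadata.query_id"]: {j["id"]: j["score"]
                             for j in json.loads(r["expected_output"])}
    for r in rows
}
```

The resulting `qrels` nesting (`{query_id: {doc_id: score}}`) matches what BEIR's evaluation utilities and `pytrec_eval` consume.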
## References
### FEVER (original dataset)
**James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal**
*FEVER: a large-scale dataset for Fact Extraction and VERification*
Presented at **NAACL 2018**; extended version on arXiv.
**Abstract (from arXiv):** *“In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification. It consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as Supported, Refuted or NotEnoughInfo by annotators achieving 0.6841 in Fleiss κ. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment. To characterize the challenge of the dataset presented, we develop a pipeline approach and compare it to suitably designed oracles. The best accuracy we achieve on labeling a claim accompanied by the correct evidence is 31.87%, while if we ignore the evidence we achieve 50.91%. Thus we believe that FEVER is a challenging testbed that will help stimulate progress on claim verification against textual sources.”*
- Paper: [arXiv:1803.05355](https://arxiv.org/abs/1803.05355) — [PDF](https://arxiv.org/pdf/1803.05355.pdf).
### BEIR benchmark (FEVER as a subset)
**Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych**
*BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models*
NeurIPS 2021 (Datasets and Benchmarks Track).
**Abstract (from arXiv):** *“Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark. Our results show BM25 is a robust baseline and re-ranking and late-interaction-based models on average achieve the best zero-shot performances, however, at high computational costs. In contrast, dense and sparse-retrieval models are computationally more efficient but often underperform other approaches, highlighting the considerable room for improvement in their generalization capabilities.”*
- Paper: [arXiv:2104.08663](https://arxiv.org/abs/2104.08663) — [OpenReview](https://openreview.net/forum?id=wCu6T5xFjeJ); code and data: [BEIR on GitHub](https://github.com/beir-cellar/beir).
### Related resources
- **BEIR FEVER** mirrors on Hugging Face in classic BEIR layout, e.g. [`BeIR/fever`](https://huggingface.co/datasets/BeIR/fever) (corpus / queries / qrels).
- **IRDS** packaging: [`irds/beir_fever`](https://huggingface.co/datasets/irds/beir_fever) exposes FEVER via the ir-datasets tooling.
## Citation
If you use **FEVER**, cite the FEVER paper (Thorne et al., NAACL 2018 / arXiv:1803.05355). If you use the **BEIR** benchmark formulation, cite the BEIR paper (Thakur et al., NeurIPS 2021). BibTeX for BEIR is available in the [official repository](https://github.com/beir-cellar/beir).
## License
FEVER’s evidence and corpus trace to **Wikipedia**-derived text; Wikipedia content is typically licensed under **Creative Commons Attribution-ShareAlike** (version depends on the snapshot). This card marks **`cc-by-sa-4.0`** as a common umbrella for Wikipedia-derived redistribution; verify against your corpus snapshot and upstream terms if compliance is strict.
---
*Dataset card maintained for the `orgrctera/beir_fever` Hub repository.*