---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- fact-checking-retrieval
paperswithcode_id: hover
pretty_name: HoVer
dataset_info:
  features:
  - name: id
    dtype: int32
  - name: uid
    dtype: string
  - name: claim
    dtype: string
  - name: supporting_facts
    list:
    - name: key
      dtype: string
    - name: value
      dtype: int32
  - name: label
    dtype:
      class_label:
        names:
          '0': NOT_SUPPORTED
          '1': SUPPORTED
  - name: num_hops
    dtype: int32
  - name: hpqa_id
    dtype: string
  splits:
  - name: train
    num_bytes: 5532178
    num_examples: 18171
  - name: validation
    num_bytes: 1299252
    num_examples: 4000
  - name: test
    num_bytes: 927513
    num_examples: 4000
  download_size: 3428352
  dataset_size: 7758943
---

# Dataset Card for HoVer

> **Note**: This is a scriptless, Parquet-based version of the HoVer dataset for seamless integration with HuggingFace `datasets` library. No `trust_remote_code` required!

## Table of Contents
- [Quick Start](#quick-start)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Quick Start

```python
from datasets import load_dataset

# Load the dataset (no trust_remote_code needed!)
dataset = load_dataset("hover-nlp/hover")

# Access splits
train = dataset["train"]
validation = dataset["validation"]
test = dataset["test"]

# Example usage
print(train[0])
# {
#   'id': 0,
#   'uid': '330ca632-e83f-4011-b11b-0d0158145036',
#   'claim': 'Skagen Painter Peder Severin Krøyer favored naturalism...',
#   'supporting_facts': [{'key': 'Kristian Zahrtmann', 'value': 0}, ...],
#   'label': 1,  # 0: NOT_SUPPORTED, 1: SUPPORTED
#   'num_hops': 3,
#   'hpqa_id': '5ab7a86d5542995dae37e986'
# }
```

## Dataset Description

- **Homepage:** https://hover-nlp.github.io/
- **Repository:** https://github.com/hover-nlp/hover
- **Paper:** https://arxiv.org/abs/2011.03088
- **Leaderboard:** https://hover-nlp.github.io/

### Dataset Summary

HoVer (HOP VERification) is an open-domain, many-hop fact extraction and claim verification dataset built upon the Wikipedia corpus. The dataset contains claims that require reasoning over multiple documents (multi-hop) to verify whether they are supported or not supported by evidence.

The original two-hop claims are adapted from question-answer pairs in HotpotQA. The dataset was collected by a team of NLP researchers at UNC Chapel Hill and Verisk Analytics.

This version provides the dataset in Parquet format for efficient loading and compatibility with modern data processing pipelines, eliminating the need for custom loading scripts.

### Supported Tasks and Leaderboards

- **Fact Verification**: Determine whether a claim is SUPPORTED or NOT_SUPPORTED based on evidence from Wikipedia articles
- **Multi-hop Reasoning**: Claims require reasoning across multiple documents (indicated by `num_hops` field)
- **Evidence Retrieval**: Identify relevant supporting facts from source documents

The official leaderboard is available at https://hover-nlp.github.io/.

### Languages

English (en)

## Dataset Structure

### Data Instances

A sample training set example:

```json
{
  "id": 14856,
  "uid": "a0cf45ea-b5cd-4c4e-9ffa-73b39ebd78ce",
  "claim": "The park at which Tivolis Koncertsal is located opened on 15 August 1843.",
  "supporting_facts": [
    {"key": "Tivolis Koncertsal", "value": 0},
    {"key": "Tivoli Gardens", "value": 1}
  ],
  "label": 1,
  "num_hops": 2,
  "hpqa_id": "5abca1a55542993a06baf937"
}
```

**Note**: In the test set, only `id`, `uid`, and `claim` fields contain meaningful data. The `label` is set to `-1`, `num_hops` to `-1`, `hpqa_id` to `"None"`, and `supporting_facts` is an empty list, as these are withheld for evaluation purposes.
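Given the placeholder values described above, a small helper (hypothetical, not part of the dataset itself) can tell withheld test records apart from annotated ones:

```python
def has_gold_annotations(record: dict) -> bool:
    """Return True when the record carries real annotations (train/validation),
    False for test-set placeholders whose labels are withheld."""
    return record["label"] != -1

# A test-set record, shaped as described above
test_record = {
    "label": -1,
    "num_hops": -1,
    "hpqa_id": "None",
    "supporting_facts": [],
}

print(has_gold_annotations(test_record))           # False
print(has_gold_annotations({"label": 1}))          # True
```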

### Data Fields

- **id** (`int32`): Sequential identifier for the example within its split
- **uid** (`string`): Unique identifier (UUID) for the claim
- **claim** (`string`): The claim statement to be verified
- **supporting_facts** (`list`): List of evidence facts, where each fact contains:
  - **key** (`string`): Title of the Wikipedia article
  - **value** (`int32`): Sentence index within that article
- **label** (`ClassLabel`): Verification label with values:
  - `0`: NOT_SUPPORTED - The claim is not supported by the evidence
  - `1`: SUPPORTED - The claim is supported by the evidence
  - `-1`: Unknown (used in test set)
- **num_hops** (`int32`): Number of reasoning hops required (typically 2-4 for this dataset)
- **hpqa_id** (`string`): Original HotpotQA question ID from which the claim was derived
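As a sketch of how these fields fit together, the example below (using the sample record from the Data Instances section, with the `ClassLabel` order hard-coded as an assumption from the schema above) decodes the label and groups supporting-fact sentence indices by article title:

```python
from collections import defaultdict

# Sample record copied from the data instance above
example = {
    "claim": "The park at which Tivolis Koncertsal is located opened on 15 August 1843.",
    "supporting_facts": [
        {"key": "Tivolis Koncertsal", "value": 0},
        {"key": "Tivoli Gardens", "value": 1},
    ],
    "label": 1,
    "num_hops": 2,
}

# Matches the ClassLabel order in the schema: 0 -> NOT_SUPPORTED, 1 -> SUPPORTED
LABEL_NAMES = ["NOT_SUPPORTED", "SUPPORTED"]

def facts_by_article(record: dict) -> dict:
    """Group supporting-fact sentence indices by Wikipedia article title."""
    grouped = defaultdict(list)
    for fact in record["supporting_facts"]:
        grouped[fact["key"]].append(fact["value"])
    return dict(grouped)

print(LABEL_NAMES[example["label"]])  # SUPPORTED
print(facts_by_article(example))
# {'Tivolis Koncertsal': [0], 'Tivoli Gardens': [1]}
```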

### Data Splits

| Split | Examples |
|-------|----------|
| Train | 18,171 |
| Validation | 4,000 |
| Test | 4,000 |
| **Total** | **26,171** |

The splits maintain the original distribution from the HoVer dataset.

## Dataset Creation

### Curation Rationale

HoVer was created to address the challenge of multi-hop fact verification, where claims require reasoning across multiple documents. The dataset was built to push the boundaries of claim verification systems beyond single-document fact-checking.

### Source Data

The dataset is built upon Wikipedia as the knowledge source. Claims are adapted from HotpotQA question-answer pairs and modified to create verification statements that require multi-hop reasoning.

### Annotations

The dataset was annotated by expert annotators who identified supporting facts across multiple Wikipedia articles and determined whether claims were supported or not supported by the evidence.

## Additional Information

### Licensing Information

This dataset is licensed under the MIT License.

### Citation Information

```bibtex
@inproceedings{jiang2020hover,
  title={{HoVer}: A Dataset for Many-Hop Fact Extraction And Claim Verification},
  author={Yichen Jiang and Shikha Bordia and Zheng Zhong and Charles Dognin and Maneesh Singh and Mohit Bansal},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2020},
  year={2020}
}
```

### Contributions

Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding the original dataset and [@vincentkoc](https://github.com/vincentkoc) for creating this Parquet version.