Update README.md

  - split: test
    path: data/test-*
---

# 2WikiMultihopQA

**This repository only repackages the original 2WikiMultihopQA data so that every example follows the field layout used by [HotpotQA](https://hotpotqa.github.io/).** The content of the underlying questions, answers and contexts is **unaltered**.
All intellectual credit for creating 2WikiMultihopQA belongs to the authors of the paper *Constructing a Multi‑hop QA Dataset for Comprehensive Evaluation of Reasoning Steps* (COLING 2020) and the accompanying code/data in their GitHub project [https://github.com/Alab-NII/2wikimultihop](https://github.com/Alab-NII/2wikimultihop).
## Dataset Summary
* **Name:** 2WikiMultihopQA
* **What’s different:** only the JSON schema. Each instance now has `id`, `question`, `answer`, `type`, `evidences`, `supporting_facts`, and `context` keys arranged exactly like HotpotQA so that existing HotpotQA data pipelines work out‑of‑the‑box.
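For reference, the reshaping is mechanical. Below is a minimal sketch, assuming the upstream dump's field names (`_id`, with `supporting_facts` and `context` stored as lists of `[title, value]` pairs, as in raw HotpotQA-style JSON); the helper is illustrative, not the exact conversion script used here:

```python
def to_hotpot_format(rec):
    # Reshape one raw 2WikiMultihopQA record into the HotpotQA-style
    # dict-of-parallel-lists layout used by this repackaging.
    # Assumes upstream field names: `_id`, and `supporting_facts` /
    # `context` as lists of [title, value] pairs (illustrative only).
    return {
        "id": rec["_id"],
        "question": rec["question"],
        "answer": rec["answer"],
        "type": rec["type"],
        "evidences": rec.get("evidences", []),
        "supporting_facts": {
            "title": [title for title, _ in rec["supporting_facts"]],
            "sent_id": [sent_id for _, sent_id in rec["supporting_facts"]],
        },
        "context": {
            "title": [title for title, _ in rec["context"]],
            "sentences": [sents for _, sents in rec["context"]],
        },
    }
```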
The dataset still contains **multi‑hop question‑answer pairs** with supporting evidence chains drawn from Wikipedia. There are three splits:
| split      | #examples |
| ---------- | --------: |
| train      |   113,284 |
| validation |    12,981 |
| test       |    12,995 |
## Dataset Structure
Each JSON Lines file contains records like:
```json
{
  "id": "13f5ad2c088c11ebbd6fac1f6bf848b6",
  "question": "Are director of film Move (1970 Film) and director of film Méditerranée (1963 Film) from the same country?",
  "answer": "no",
  "type": "bridge_comparison",
  "level": "unknown",
  "supporting_facts": {
    "title": ["Move (1970 film)", "Méditerranée (1963 film)", "Stuart Rosenberg", "Jean-Daniel Pollet"],
    "sent_id": [0, 0, 0, 0]
  },
  "context": {
    "title": ["Stuart Rosenberg", "Méditerranée (1963 film)", "Move (1970 film)", ...],
    "sentences": [
      ["Stuart Rosenberg (August 11, 1927 – March 15, 2007) was an American film and television director ..."],
      ["Méditerranée is a 1963 French experimental film directed by Jean-Daniel Pollet ..."],
      ...
    ]
  }
}
```
### Field definitions
| Field                      | Type                 | Description                                                            |
| -------------------------- | -------------------- | ---------------------------------------------------------------------- |
| `id`                       | `string`             | Unique identifier (original `_id`).                                     |
| `question`                 | `string`             | Natural‑language question.                                              |
| `answer`                   | `string`             | Short answer span (may be "yes"/"no" for binary questions).             |
| `type`                     | `string`             | Original 2Wiki question type (e.g. `bridge_comparison`, `comparison`).  |
| `evidences`                | `List[List[string]]` | (Subject, property, object) evidence triples obtained from Wikidata.    |
| `supporting_facts.title`   | `List[string]`       | Wikipedia page titles that contain evidence sentences.                  |
| `supporting_facts.sent_id` | `List[int]`          | Zero‑based sentence indices within each page that support the answer.   |
| `context.title`            | `List[string]`       | Titles for every paragraph provided to the model.                       |
| `context.sentences`        | `List[List[string]]` | Tokenised sentences for each corresponding title.                       |
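Because `supporting_facts` and `context` use parallel lists keyed by the same page titles, supporting sentences are recovered by joining on the title. A small helper (the function name is ours, for illustration):

```python
def supporting_sentences(example):
    # Resolve each (title, sent_id) pair in `supporting_facts` to the
    # actual sentence stored in the parallel `context` lists.
    by_title = dict(zip(example["context"]["title"],
                        example["context"]["sentences"]))
    resolved = []
    for title, sent_id in zip(example["supporting_facts"]["title"],
                              example["supporting_facts"]["sent_id"]):
        paragraph = by_title.get(title)
        if paragraph is not None and sent_id < len(paragraph):
            resolved.append((title, paragraph[sent_id]))
    return resolved

example = {
    "supporting_facts": {"title": ["A", "B"], "sent_id": [0, 1]},
    "context": {"title": ["A", "B"],
                "sentences": [["A s0."], ["B s0.", "B s1."]]},
}
print(supporting_sentences(example))  # [('A', 'A s0.'), ('B', 'B s1.')]
```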
## Data Splits
The conversion keeps the same train/validation/test division as the original dataset. No documents or examples were removed or added.
## Source Data
* **Original repository:** [https://github.com/Alab-NII/2wikimultihop](https://github.com/Alab-NII/2wikimultihop)
  Contains data generation scripts, the Apache‑2.0 license and citation information.
* **Paper:** Ho et al., *Constructing A Multi-hop QA Dataset for Comprehensive Evaluation of Reasoning Steps*, COLING 2020.
  [\[ACL Anthology\]](https://aclanthology.org/2020.coling-main.580/) – [\[arXiv\]](https://arxiv.org/abs/2011.01060)
No new text was crawled; every paragraph is already present in the original dataset.
## License
The original 2WikiMultihopQA is released under the **Apache License 2.0**.
This redistribution keeps the same license. See [`LICENSE`](./LICENSE) copied verbatim from the upstream repo.
## How to Use
```python
from datasets import load_dataset

# after `ds.push_to_hub(...)` the dataset can be loaded like:
ds = load_dataset("framolfese/2wiki_multihopqa_hotpotfmt")
print(ds["train"][0]["question"])
```
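Since the layout matches HotpotQA, existing answer-scoring utilities apply unchanged. For instance, a HotpotQA-style normalized exact-match check (a simplified sketch of the usual normalization, not the official evaluation script):

```python
import re
import string

def normalize_answer(s):
    # Lowercase, drop punctuation, drop articles, squeeze whitespace,
    # as HotpotQA-style EM metrics typically do (simplified sketch).
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, gold):
    return normalize_answer(prediction) == normalize_answer(gold)

print(exact_match("The No.", "no"))  # True
```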