Tasks: Question Answering
Modalities: Text
Formats: parquet
Sub-tasks: extractive-qa
Languages: English
Size: 10K - 100K
ArXiv:
License:

add dataset_info in dataset metadata
README.md CHANGED
@@ -34,8 +34,34 @@ train-eval-index:
       text: text
       answer_start: answer_start
   metrics:
-  - type: squad
-    name: SQuAD
+  - type: squad
+    name: SQuAD
+dataset_info:
+  features:
+  - name: id
+    dtype: string
+  - name: title
+    dtype: string
+  - name: context
+    dtype: string
+  - name: question
+    dtype: string
+  - name: answers
+    sequence:
+    - name: text
+      dtype: string
+    - name: answer_start
+      dtype: int32
+  config_name: plain_text
+  splits:
+  - name: train
+    num_bytes: 79317110
+    num_examples: 87599
+  - name: validation
+    num_bytes: 10472653
+    num_examples: 10570
+  download_size: 35142551
+  dataset_size: 89789763
 ---
 
 # Dataset Card for "squad"
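The `dataset_info` block added in this hunk is the same metadata that the `datasets` library exposes at load time. As a minimal sketch of how to cross-check the declared schema and split sizes (assuming the `datasets` library is installed and the Hub is reachable; the variable names are illustrative):

```python
from datasets import load_dataset_builder

# Fetch only the builder/metadata for the "plain_text" config of squad;
# this does not download the full dataset.
builder = load_dataset_builder("squad", "plain_text")

# Feature schema: id, title, context, question, and answers
# (a sequence of text / answer_start pairs), as declared above.
print(builder.info.features)

# Split sizes: expected to match the declared num_examples values
# (87599 for train, 10570 for validation).
for name, split in (builder.info.splits or {}).items():
    print(name, split.num_examples)
```

Loading the data with `load_dataset("squad")` and checking `ds["train"].num_rows` is an equivalent check, at the cost of downloading the full dataset.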
@@ -211,4 +237,4 @@ archivePrefix = {arXiv},
 
 ### Contributions
 
-Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
+Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
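The `metrics` entry kept in the first hunk (`type: squad`) names the standard SQuAD metric (exact match and F1). One common way to compute it is with the `evaluate` library; this is a sketch with a single made-up prediction/reference pair, not part of the commit itself:

```python
import evaluate

# Load the metric declared as `type: squad` in the train-eval-index metadata.
squad_metric = evaluate.load("squad")

# A single illustrative example; real ids and answers come from the dataset.
predictions = [{"id": "example-0", "prediction_text": "Saint Bernadette Soubirous"}]
references = [{
    "id": "example-0",
    "answers": {"text": ["Saint Bernadette Soubirous"], "answer_start": [515]},
}]

results = squad_metric.compute(predictions=predictions, references=references)
print(results)  # e.g. {'exact_match': 100.0, 'f1': 100.0}
```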