---
pretty_name: '`antique`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `antique`
The `antique` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=403,666
This dataset is used by: [`antique_test`](https://huggingface.co/datasets/irds/antique_test), [`antique_test_non-offensive`](https://huggingface.co/datasets/irds/antique_test_non-offensive), [`antique_train`](https://huggingface.co/datasets/irds/antique_train), [`antique_train_split200-train`](https://huggingface.co/datasets/irds/antique_train_split200-train), [`antique_train_split200-valid`](https://huggingface.co/datasets/irds/antique_train_split200-valid)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/antique', 'docs')
for record in docs:
    record  # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
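Once loaded, the corpus records can be indexed by `doc_id` for random access. A minimal sketch of building such a lookup, using hypothetical sample records in the `{'doc_id': ..., 'text': ...}` schema shown above (in practice, iterate over the loaded `docs` split):

```python
# Hypothetical sample records standing in for real corpus entries;
# real records come from iterating over the loaded `docs` split.
records = [
    {"doc_id": "q1_0", "text": "First example passage."},
    {"doc_id": "q1_1", "text": "Second example passage."},
]

# Build a doc_id -> text lookup for random access by document ID.
lookup = {rec["doc_id"]: rec["text"] for rec in records}

print(lookup["q1_1"])  # Second example passage.
```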
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```