---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: MEDIQA RQE
homepage: https://sites.google.com/view/mediqa2019
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- TEXT_PAIRS_CLASSIFICATION
---
# Dataset Card for MEDIQA RQE
## Dataset Description
- **Homepage:** https://sites.google.com/view/mediqa2019
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXT2CLASS
The MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).
Mailing List: https://groups.google.com/forum/#!forum/bionlp-mediqa
The objective of the RQE task is to identify entailment between two questions in the context of QA. We use the following definition of question entailment: “a question A entails a question B if every answer to B is also a complete or partial answer to A” [1].
[1] A. Ben Abacha & D. Demner-Fushman. “Recognizing Question Entailment for Medical Question Answering”. AMIA 2016.
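The shape of one RQE example can be sketched as follows. The field names follow the BigBIO `pairs` schema (`id`, `document_id`, `text_1`, `text_2`, `label`); the two questions shown are hypothetical illustrations of the entailment definition above, not taken from the dataset.

```python
from dataclasses import dataclass

# Minimal sketch of one RQE example in the BigBIO `pairs` schema.
# The questions below are hypothetical, not drawn from MEDIQA RQE.
@dataclass
class QuestionPair:
    id: str
    document_id: str
    text_1: str  # question A (e.g. a detailed consumer health question)
    text_2: str  # question B (a candidate entailed question)
    label: str   # "true" if A entails B, "false" otherwise

example = QuestionPair(
    id="1",
    document_id="1",
    text_1="What are the side effects of taking ibuprofen daily?",
    text_2="Does ibuprofen have side effects?",
    label="true",  # every answer to B is also a partial answer to A
)
print(example.label)  # prints "true"
```

Here A entails B because any answer to B (a list of ibuprofen's side effects) is at least a partial answer to A.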
## Citation Information
```
@inproceedings{MEDIQA2019,
author = {Asma {Ben Abacha} and Chaitanya Shivade and Dina Demner{-}Fushman},
title = {Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering},
booktitle = {ACL-BioNLP 2019},
year = {2019}
}
```