Dataset metadata:
- Tasks: Text Classification
- Sub-tasks: natural-language-inference
- Formats: json
- Languages: English
- Size: 1K - 10K
- ArXiv:
- License:
update ReadMe

Files changed:
- .gitignore +3 -0
- README.md +2 -36
.gitignore CHANGED
@@ -1 +1,4 @@
 .DS_Store.DS_Store
+
+# macOS junk
+.DS_Store
README.md CHANGED
@@ -14,44 +14,10 @@ tags:
 ---
 
 
-# SciClaimEval Shared Task:
+## SciClaimEval Shared Task: All information is available at [sciclaimeval.github.io](https://sciclaimeval.github.io/)
 
-## Subtask 1: Claim Label Prediction Task
-Each sample includes the following information:
 
-
-- claim_id: the ID of the claim
-- claim: the claim for which the label needs to be predicted
-- label: there are two labels in our dataset: Supported and Refuted
-- caption: the caption of the evidence file
-- context: the preceding sentences from the same paragraph, provided as a short contextual field for each claim sentence
-- domain: three domains, ML, NLP, and PeerJ (medical domain)
-- use_context: No (the claim is understandable without context), Yes (short context is needed; information is taken from the context field), or Other sources (the full paper is needed to understand the claim)
-- operation: how the evidence is modified to obtain the modified evidence that pairs with the same claim to create a refuted sample
-- paper_path: the path to the paper
-- detail_others: if the operation is Other, a description is provided here
-- claim_id_pair: one claim is paired with two pieces of evidence, creating two labels: Supported and Refuted
-
-Please refer to the file [here](https://github.com/SciClaimEval/sciclaimeval-shared-task/blob/main/examples/task1_ground_truth.json) for an example.
-
-
-Please prepare your prediction file following the format in [this file](https://github.com/SciClaimEval/sciclaimeval-shared-task/blob/main/examples/task1_pred_format.json).
-
-
-### Information about the Test Set:
-
-- You will receive the input for the test set, but the gold labels are not available.
-The example format is the same as in the file [task1_ground_truth.json](https://github.com/SciClaimEval/sciclaimeval-shared-task/blob/main/examples/task1_ground_truth.json), except that the following keys are missing: label, operation, detail_others, and claim_id_pair.
-
-- This is also the case for the second subtask.
-
-
-## Subtask 2: Claim Evidence Prediction Task
-
-- Please refer to the file [here](https://github.com/SciClaimEval/sciclaimeval-shared-task/blob/main/examples/task2_ground_truth.json) for an example.
-
-
-- Please prepare your prediction file following the format in [this file](https://github.com/SciClaimEval/sciclaimeval-shared-task/blob/main/examples/task2_pred_format.json).
+
+## Evaluation scripts & examples: [github.com/SciClaimEval/sciclaimeval-shared-task](https://github.com/SciClaimEval/sciclaimeval-shared-task)
 
 
 ## Note on License Information
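The Subtask 1 schema described in the README (eleven keys per ground-truth sample, with label, operation, detail_others, and claim_id_pair withheld from the test set) can be sketched as follows. The key names and the hidden-key list come from the README; every field value below is purely illustrative, not real dataset content.

```python
# Keys of a Subtask 1 ground-truth sample, as listed in the README.
GROUND_TRUTH_KEYS = [
    "claim_id", "claim", "label", "caption", "context", "domain",
    "use_context", "operation", "paper_path", "detail_others", "claim_id_pair",
]

# Keys withheld in the released test set, per the README.
HIDDEN_KEYS = {"label", "operation", "detail_others", "claim_id_pair"}

# A hypothetical sample: all values are made up for illustration.
sample = {
    "claim_id": "nlp_0001",
    "claim": "Model X outperforms the baseline by 2 BLEU points.",
    "label": "Supported",            # "Supported" or "Refuted"
    "caption": "Table 3: BLEU scores on the test set.",
    "context": "We compare Model X against a Transformer baseline.",
    "domain": "NLP",                 # one of: ML, NLP, PeerJ
    "use_context": "No",             # "No", "Yes", or "Other sources"
    "operation": "Other",
    "paper_path": "papers/example_paper.pdf",
    "detail_others": "Numbers in the table were swapped.",
    "claim_id_pair": "nlp_0001_refuted",
}

def to_test_input(record: dict) -> dict:
    """Drop the keys that are hidden in the test split."""
    return {k: v for k, v in record.items() if k not in HIDDEN_KEYS}

test_sample = to_test_input(sample)
```

After `to_test_input`, the record keeps only the seven visible keys, matching the README's description of the test-set format.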