---
license: openrail
task_categories:
- text-classification
language:
- en
tags:
- science
- Argument Identification
pretty_name: AMSR
size_categories:
- 10K<n<100K
---

# Argument Mining in Scientific Reviews (AMSR)

We release AMSR (Argument Mining in Scientific Reviews), a new dataset of peer reviews from different computer science conferences with annotated arguments.
The dataset was crawled from the OpenReview platform (https://openreview.net/) using the OpenReview crawler (https://openreview-py.readthedocs.io/en/latest/).
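As an illustration of the crawling step, the sketch below filters fetched notes down to official reviews. It is only a sketch: the dataset's actual crawl uses openreview-py (e.g. an `openreview.Client` and `get_notes`), and the venue id and invitation strings here are hypothetical examples, not the ones used for AMSR.

```python
def filter_reviews(notes, venue_prefix):
    """Keep only notes whose invitation marks them as official reviews
    for the given venue (field names are illustrative)."""
    return [
        n for n in notes
        if n["invitation"].startswith(venue_prefix)
        and n["invitation"].endswith("/Official_Review")
    ]

# Example notes in simplified dict form, as a crawl might return them.
notes = [
    {"id": "a1", "invitation": "ICLR.cc/2019/Conference/-/Paper1/Official_Review"},
    {"id": "a2", "invitation": "ICLR.cc/2019/Conference/-/Paper1/Comment"},
]
reviews = filter_reviews(notes, "ICLR.cc/2019")  # keeps only the official review
```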
From the 12,135 collected papers and reviews, we sampled 77 for annotation. We use a simple argumentation scheme that distinguishes between non-arguments, supporting arguments, and attacking arguments, which we denote as NON, PRO, and CON accordingly.