debela-arg committed (verified) · Commit deaeeea · 1 Parent(s): 35ccd6f

Update README.md. Files changed (1): README.md (+163 -2)
---
task_categories:
- question-answering
language:
- en
pretty_name: Argument Reasoning Tasks (ART)
tags:
- reasoning
- llm_evaluation
- argument-mining
size_categories:
- 10K<n<100K
license: cc-by-nc-sa-4.0
---

# 🧠 Argument Reasoning Tasks (ART) Dataset

**Evaluating natural language argumentative reasoning in large language models.**

---

## 📖 Overview

The **Argument Reasoning Tasks (ART)** dataset is a **large-scale benchmark** designed to evaluate the ability of large language models (LLMs) to perform **natural language argumentative reasoning**.

It contains **multiple-choice questions** in which models must identify a missing argument component, given an argument context and a reasoning structure.

---
## 🧩 Argumentation Structures

ART covers **16 task types** derived from four core argumentation structures:

1. **Serial reasoning** – chained inference steps.
2. **Linked reasoning** – multiple premises jointly supporting a conclusion.
3. **Convergent reasoning** – independent premises each supporting a conclusion.
4. **Divergent reasoning** – a single premise leading to multiple possible conclusions.
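
The four structures can be pictured as small premise–conclusion graphs. The encoding below is purely illustrative (the dataset does not ship such a mapping; `P` and `C` labels are placeholders), but it makes the shapes concrete:

```python
# Illustrative sketch (not part of the dataset): the four core
# argumentation structures written as premise/conclusion edges.
STRUCTURES = {
    "serial":     ["P1 -> C1", "C1 -> C2"],  # chained inference steps
    "linked":     ["P1 + P2 -> C1"],         # premises jointly support C1
    "convergent": ["P1 -> C1", "P2 -> C1"],  # premises independently support C1
    "divergent":  ["P1 -> C1", "P1 -> C2"],  # one premise, several conclusions
}

for name, edges in STRUCTURES.items():
    print(f"{name:>10}: {'; '.join(edges)}")
```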

---

## 📄 Source & Reference

This dataset was introduced in:

> **Debela Gemechu, Ramon Ruiz-Dolz, Henrike Beyer, and Chris Reed. 2025.**
> *Natural Language Reasoning in Large Language Models: Analysis and Evaluation.*
> Findings of the Association for Computational Linguistics: ACL 2025, pp. 3717–3741.
> Vienna, Austria: Association for Computational Linguistics.
> [📄 Read the paper](https://aclanthology.org/2025.findings-acl.192/) | DOI: [10.18653/v1/2025.findings-acl.192](https://doi.org/10.18653/v1/2025.findings-acl.192)

---
## 📂 Dataset Details

* **Hugging Face repo:** [debela-arg/art](https://huggingface.co/datasets/debela-arg/art)
* **License:** CC BY-NC-SA 4.0 (non-commercial, ShareAlike)
* **Languages:** English
* **Domain:** Argumentative reasoning, question answering
* **File format:** JSON
* **Size:** ~482 MB
* **Splits:** single `train` split with **88,628 examples**

---
### 🗂 Example JSON Entry

```json
{
  "prompt": "Please answer the following multiple-choice question...",
  "task_type": "1H-C",
  "answer": ["just one of three children returning to school..."],
  "data_source": "qt30"
}
```

**Fields:**

* `prompt` – Question with context and multiple-choice options
* `task_type` – Argument reasoning task category
* `answer` – Correct answer(s)
* `data_source` – Original source corpus
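
Before use, a record can be sanity-checked against the fields above. A minimal sketch, assuming exactly the four documented fields with `answer` as a list and the rest strings:

```python
import json

# Expected schema, per the field list above (types are assumptions).
REQUIRED_FIELDS = {"prompt": str, "task_type": str, "answer": list, "data_source": str}

def is_valid_record(record: dict) -> bool:
    """True if every documented field is present with the expected type."""
    return all(
        isinstance(record.get(field), expected)
        for field, expected in REQUIRED_FIELDS.items()
    )

example = json.loads('''{
  "prompt": "Please answer the following multiple-choice question...",
  "task_type": "1H-C",
  "answer": ["just one of three children returning to school..."],
  "data_source": "qt30"
}''')

print(is_valid_record(example))            # True
print(is_valid_record({"prompt": "..."}))  # False
```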

---

## 📊 Statistics

| Attribute      | Value                                        |
| -------------- | -------------------------------------------- |
| Total examples | 88,628                                       |
| Task types     | 16                                           |
| Data sources   | MTC, AAEC, CDCP, ACSP, AbstRCT, US2016, QT30 |

---

## ⚡ How to Load the Dataset

Install the dependencies:

```bash
pip install datasets pandas
```

Load in Python:

```python
from datasets import load_dataset
import pandas as pd

# Load the single train split
dataset = load_dataset("debela-arg/art", split="train")

# Convert to a pandas DataFrame for inspection
df = pd.DataFrame(dataset)

print("Total examples:", len(df))
print("Available columns:", df.columns.tolist())
print("Task type distribution:")
print(df["task_type"].value_counts())
```
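
For per-structure evaluation it is often useful to restrict the split to one task type or source corpus. The sketch below uses small stand-in records so it runs offline; only `"1H-C"` and `"qt30"` appear in the documented example, the other values are placeholders. With the real data, the same predicate can be applied via `Dataset.filter` or boolean indexing on the DataFrame:

```python
# Stand-in records shaped like ART rows (values are illustrative).
records = [
    {"task_type": "1H-C", "data_source": "qt30"},
    {"task_type": "1H-C", "data_source": "us2016"},
    {"task_type": "2H-C", "data_source": "qt30"},
]

def filter_records(rows, task_type=None, data_source=None):
    """Keep rows matching every constraint that is not None."""
    return [
        r for r in rows
        if (task_type is None or r["task_type"] == task_type)
        and (data_source is None or r["data_source"] == data_source)
    ]

subset = filter_records(records, task_type="1H-C", data_source="qt30")
print(len(subset))  # 1
```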

---

## 🔍 Suggested Uses

* **LLM evaluation** – Benchmark reasoning capabilities
* **Few-shot prompting** – Create reasoning-based examples for instruction tuning
* **Error analysis** – Identify reasoning failure modes in models
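
For the few-shot use case, solved records can be concatenated into a prompt with the target question left unanswered. A minimal sketch with stand-in records; with the real dataset, the `prompt` and `answer` fields come straight from the examples:

```python
# Stand-in ART-style records; real ones carry full multiple-choice prompts.
shots = [
    {"prompt": "Question 1 ...", "answer": ["option A"]},
    {"prompt": "Question 2 ...", "answer": ["option B"]},
]
target = {"prompt": "Question 3 ...", "answer": ["option C"]}

def build_few_shot_prompt(shots, target):
    """Join solved examples, then pose the target question unanswered."""
    parts = [f"{r['prompt']}\nAnswer: {r['answer'][0]}" for r in shots]
    parts.append(f"{target['prompt']}\nAnswer:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(shots, target)
print(prompt.endswith("Answer:"))  # True
```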

---

## 📌 Citation

If you use ART in your work, please cite:

```bibtex
@inproceedings{gemechu-etal-2025-natural,
  title     = {Natural Language Reasoning in Large Language Models: Analysis and Evaluation},
  author    = {Gemechu, Debela and Ruiz-Dolz, Ramon and Beyer, Henrike and Reed, Chris},
  booktitle = {Findings of the Association for Computational Linguistics: ACL 2025},
  pages     = {3717--3741},
  year      = {2025},
  address   = {Vienna, Austria},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2025.findings-acl.192/},
  doi       = {10.18653/v1/2025.findings-acl.192}
}
```

---

## 🛠 Maintainers

* **Author:** Debela Gemechu