---
task_categories:
- question-answering
language:
- en
pretty_name: Argument Reasoning Tasks (ART)
tags:
- reasoning
- llm_evaluation
- argument-mining
size_categories:
- 10K<n<100K
license: cc-by-nc-sa-4.0
---
# Argument Reasoning Tasks (ART) Dataset
**Evaluating natural language argumentative reasoning in large language models.**
---
## Overview
The **Argument Reasoning Tasks (ART)** dataset is a **large-scale benchmark** designed to evaluate the ability of large language models (LLMs) to perform **natural language argumentative reasoning**.
It contains **multiple-choice questions** where models must identify missing argument components, given an argument context and reasoning structure.
---
## Argumentation Structures
ART covers **16 task types** derived from four core argumentation structures:
1. **Serial reasoning** – chained inference steps.
2. **Linked reasoning** – multiple premises jointly supporting a conclusion.
3. **Convergent reasoning** – independent premises supporting a conclusion.
4. **Divergent reasoning** – a single premise leading to multiple possible conclusions.
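As an illustrative sketch (not part of the dataset schema), the four structures can be written down as premise–conclusion edge lists; the labels `p1`, `p2`, `c1`, `c2` are hypothetical:

```python
# Each structure as inference edges between hypothetical premises (p*)
# and conclusions (c*); this mirrors the prose above, not the dataset format.
structures = {
    # Serial: p1 supports c1, which in turn supports c2 (chained steps).
    "serial": [("p1", "c1"), ("c1", "c2")],
    # Linked: p1 and p2 only jointly support c1 (one combined edge).
    "linked": [(("p1", "p2"), "c1")],
    # Convergent: p1 and p2 each independently support c1.
    "convergent": [("p1", "c1"), ("p2", "c1")],
    # Divergent: a single premise supports two separate conclusions.
    "divergent": [("p1", "c1"), ("p1", "c2")],
}

# Convergent reasoning shows up as two independent edges into one conclusion:
targets = [target for _, target in structures["convergent"]]
print(targets)
```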
---
## Source & Reference
This dataset was introduced in:
> **Debela Gemechu, Ramon Ruiz-Dolz, Henrike Beyer, and Chris Reed. 2025.**
> *Natural Language Reasoning in Large Language Models: Analysis and Evaluation.*
> Findings of the Association for Computational Linguistics: ACL 2025, pp. 3717β3741.
> Vienna, Austria: Association for Computational Linguistics.
> [Read the paper](https://aclanthology.org/2025.findings-acl.192/) | DOI: [10.18653/v1/2025.findings-acl.192](https://doi.org/10.18653/v1/2025.findings-acl.192)
---
## Dataset Details
* **Hugging Face repo:** [debela-arg/art](https://huggingface.co/datasets/debela-arg/art)
* **License:** CC BY-NC-SA 4.0 (non-commercial, share alike)
* **Languages:** English
* **Domain:** Argumentative reasoning, question answering
* **File format:** JSON
* **Size:** ~482 MB
* **Splits:** Single `train` split with **88,628 examples**
---
### Example JSON Entry
```json
{
"prompt": "Please answer the following multiple-choice question...",
"task_type": "1H-C",
"answer": ["just one of three children returning to school..."],
"data_source": "qt30"
}
```
**Fields:**
* `prompt` – Question with context and multiple-choice options
* `task_type` – Argument reasoning task category
* `answer` – Correct answer(s)
* `data_source` – Original source corpus
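Given the fields above, a record can be sanity-checked before use. A minimal sketch, assuming each JSON record is a flat dict with exactly these four fields:

```python
# Expected top-level fields of an ART record (from the example entry above).
REQUIRED_FIELDS = {"prompt", "task_type", "answer", "data_source"}

def validate(record: dict) -> bool:
    """Check that a record has the expected ART fields and basic types."""
    return (
        REQUIRED_FIELDS <= record.keys()
        and isinstance(record["prompt"], str)
        and isinstance(record["answer"], list)  # answers are lists, even when single
    )

example = {
    "prompt": "Please answer the following multiple-choice question...",
    "task_type": "1H-C",
    "answer": ["just one of three children returning to school..."],
    "data_source": "qt30",
}
print(validate(example))
```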
---
## Statistics
| Attribute | Value |
| -------------- | -------------------------------------------- |
| Total examples | 88,628 |
| Task types | 16 |
| Data sources | MTC, AAEC, CDCP, ACSP, AbstRCT, US2016, QT30 |
---
## How to Load the Dataset
Install the dependencies:
```bash
pip install datasets pandas
```
Load in Python:
```python
from datasets import load_dataset
import pandas as pd
# Load the train split
dataset = load_dataset("debela-arg/art", split="train")
# Convert to DataFrame
df = pd.DataFrame(dataset)
print("Total examples:", len(df))
print("Available columns:", df.columns.tolist())
print("Task type distribution:")
print(df["task_type"].value_counts())
```
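Since `answer` is a list, a simple exact-match scorer should check membership rather than plain string equality. A hedged sketch (this scoring rule is an illustration, not the paper's official metric):

```python
def score(predictions, references):
    """Exact-match accuracy; `references` holds the dataset's `answer` lists."""
    correct = sum(
        pred.strip().lower() in {a.strip().lower() for a in ref}
        for pred, ref in zip(predictions, references)
    )
    return correct / len(predictions)

# Toy illustration; real references come from dataset["answer"].
preds = ["option a", "option c"]
refs = [["Option A"], ["option b"]]
print(score(preds, refs))  # 0.5
```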
---
## Suggested Uses
* **LLM evaluation** – Benchmark reasoning capabilities
* **Few-shot prompting** – Create reasoning-based examples for instruction tuning
* **Error analysis** – Identify reasoning failure modes in models
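For the few-shot prompting use above, solved examples can be concatenated ahead of a new question. A minimal sketch; the field names match the dataset, but the prompt template itself is an assumption:

```python
def build_fewshot_prompt(examples, query):
    """Concatenate solved ART examples ahead of a new query prompt."""
    shots = [f"{ex['prompt']}\nAnswer: {ex['answer'][0]}" for ex in examples]
    return "\n\n".join(shots + [query])

# Toy records standing in for real dataset rows.
demo = [{"prompt": "Q1 ...", "answer": ["A1"],
         "task_type": "1H-C", "data_source": "qt30"}]
text = build_fewshot_prompt(demo, "Q2 ...")
print(text)
```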
---
## Citation
If you use ART in your work, please cite:
```bibtex
@inproceedings{gemechu-etal-2025-natural,
title = {Natural Language Reasoning in Large Language Models: Analysis and Evaluation},
author = {Gemechu, Debela and Ruiz-Dolz, Ramon and Beyer, Henrike and Reed, Chris},
booktitle = {Findings of the Association for Computational Linguistics: ACL 2025},
pages = {3717--3741},
year = {2025},
address = {Vienna, Austria},
publisher = {Association for Computational Linguistics},
url = {https://aclanthology.org/2025.findings-acl.192/},
doi = {10.18653/v1/2025.findings-acl.192}
}
```
---
## Maintainers
* **Authors:** Debela Gemechu, Ramon Ruiz-Dolz, Henrike Beyer, and Chris Reed
---