---
task_categories:
- question-answering
language:
- en
pretty_name: Argument Reasoning Tasks (ART)
tags:
- reasoning
- llm_evaluation
- argument-mining
size_categories:
- 100K<n<1M
license: cc-by-nc-sa-4.0
---
# Argument Reasoning Tasks (ART) Dataset

*Evaluating natural language argumentative reasoning in large language models.*
## Overview
The Argument Reasoning Tasks (ART) dataset is a large-scale benchmark designed to evaluate the ability of large language models (LLMs) to perform natural language argumentative reasoning.
It contains multiple-choice questions where models must identify missing argument components, given an argument context and reasoning structure.
## Argumentation Structures

ART covers 16 task types derived from four core argumentation structures:

- Serial reasoning – chained inference steps.
- Linked reasoning – multiple premises jointly supporting a conclusion.
- Convergent reasoning – independent premises supporting a conclusion.
- Divergent reasoning – a single premise leading to multiple possible conclusions.
## Source & Reference

This dataset was introduced in:

> Debela Gemechu, Ramon Ruiz-Dolz, Henrike Beyer, and Chris Reed. 2025.
> *Natural Language Reasoning in Large Language Models: Analysis and Evaluation.*
> Findings of the Association for Computational Linguistics: ACL 2025, pages 3717–3741.
> Vienna, Austria: Association for Computational Linguistics.

[Read the paper](https://aclanthology.org/2025.findings-acl.192/) | DOI: 10.18653/v1/2025.findings-acl.192
## Dataset Details
- Hugging Face repo: debela-arg/art
- License: CC BY-NC-SA 4.0 (non-commercial, share alike)
- Languages: English
- Domain: Argumentative reasoning, question answering
- File format: JSON
- Size: ~482 MB
- Splits: single `train` split with 88,628 examples
## Example JSON Entry

```json
{
  "prompt": "Please answer the following multiple-choice question...",
  "task_type": "1H-C",
  "answer": ["just one of three children returning to school..."],
  "data_source": "qt30"
}
```
Fields:

- `prompt` – question with context and multiple-choice options
- `task_type` – argument reasoning task category
- `answer` – correct answer(s)
- `data_source` – original source corpus
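The documented schema can be checked programmatically. A minimal sketch, assuming only the four fields listed above (the `validate_record` helper and the example values are illustrative, not part of the dataset):

```python
# Expected fields and their types, taken from the field list above.
REQUIRED_FIELDS = {"prompt": str, "task_type": str, "answer": list, "data_source": str}

def validate_record(record: dict) -> bool:
    """Return True if the record has every documented field with the expected type."""
    return all(
        field in record and isinstance(record.get(field), expected)
        for field, expected in REQUIRED_FIELDS.items()
    )

example = {
    "prompt": "Please answer the following multiple-choice question...",
    "task_type": "1H-C",
    "answer": ["just one of three children returning to school..."],
    "data_source": "qt30",
}
print(validate_record(example))  # True
```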
## Statistics
| Attribute | Value |
|---|---|
| Total examples | 88,628 |
| Task types | 16 |
| Data sources | MTC, AAEC, CDCP, ACSP, AbstRCT, US2016, QT30 |
## How to Load the Dataset

Install the dependencies:

```bash
pip install datasets pandas
```
Load in Python:

```python
from datasets import load_dataset
import pandas as pd

# Load the single train split
dataset = load_dataset("debela-arg/art", split="train")

# Convert to a DataFrame for quick inspection
df = pd.DataFrame(dataset)

print("Total examples:", len(df))
print("Available columns:", df.columns.tolist())
print("Task type distribution:")
print(df["task_type"].value_counts())
```
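Once in a DataFrame, the data can be sliced by task type for per-category reporting. A minimal sketch using illustrative records in the documented schema (so it runs without downloading the full ~482 MB dataset; the record contents are placeholders):

```python
import pandas as pd

# Placeholder records following the documented schema; the real dataset
# has 88,628 rows spanning 16 task types.
records = [
    {"prompt": "Q1 ...", "task_type": "1H-C", "answer": ["a"], "data_source": "qt30"},
    {"prompt": "Q2 ...", "task_type": "1H-C", "answer": ["b"], "data_source": "us2016"},
    {"prompt": "Q3 ...", "task_type": "2H-S", "answer": ["c"], "data_source": "aaec"},
]
df = pd.DataFrame(records)

# Per-task-type subsets, e.g. for reporting accuracy by category.
for task_type, group in df.groupby("task_type"):
    print(task_type, len(group))
```

The same `groupby` pattern applies unchanged to the full dataset loaded via `load_dataset`.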
## Suggested Uses

- LLM evaluation – benchmark reasoning capabilities
- Few-shot prompting – create reasoning-based examples for instruction tuning
- Error analysis – identify reasoning failure modes in models
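For the few-shot prompting use case, solved examples can be formatted ahead of a new question. A sketch assuming the documented `prompt` and `answer` fields; the `build_few_shot_prompt` helper and the surrounding template text are assumptions for illustration, not part of the dataset:

```python
def build_few_shot_prompt(examples, query):
    """Format solved ART-style examples followed by a new question.

    `examples` are dicts with the documented `prompt` and `answer` fields;
    the "Question:/Answer:" template is an assumed convention.
    """
    parts = []
    for ex in examples:
        # Use the first listed correct answer as the demonstration target.
        parts.append(f"Question: {ex['prompt']}\nAnswer: {ex['answer'][0]}")
    parts.append(f"Question: {query}\nAnswer:")
    return "\n\n".join(parts)

demo = [{"prompt": "Which premise is missing?", "answer": ["Premise A"]}]
print(build_few_shot_prompt(demo, "Which conclusion follows?"))
```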
## Citation

If you use ART in your work, please cite:

```bibtex
@inproceedings{gemechu-etal-2025-natural,
  title     = {Natural Language Reasoning in Large Language Models: Analysis and Evaluation},
  author    = {Gemechu, Debela and Ruiz-Dolz, Ramon and Beyer, Henrike and Reed, Chris},
  booktitle = {Findings of the Association for Computational Linguistics: ACL 2025},
  pages     = {3717--3741},
  year      = {2025},
  address   = {Vienna, Austria},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2025.findings-acl.192/},
  doi       = {10.18653/v1/2025.findings-acl.192}
}
```
## Maintainers

- Authors: Debela Gemechu, Ramon Ruiz-Dolz, Henrike Beyer, and Chris Reed