---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- reasoning
- math
- commonsense
- compositional
- generalisation
- agents
pretty_name: AgentCoMa
size_categories:
- n<1K
---

# <img src="https://agentcoma.github.io/images/agent.png" alt="Agent icon" width="65"/> AgentCoMa Benchmark

[**Paper**](https://arxiv.org/abs/2508.19988) | [**GitHub**](https://github.com/lisaalaz/agentcoma-benchmark) | [**Leaderboard**](https://agentcoma.github.io/)

Dataset repository for the paper [AgentCoMa: A Compositional Benchmark Mixing Commonsense and Mathematical Reasoning in Real-World Scenarios](https://arxiv.org/abs/2508.19988).

To submit to the [Leaderboard](https://agentcoma.github.io/), follow the instructions in this [README](https://github.com/lisaalaz/agentcoma-benchmark/blob/main/README.md).

AgentCoMa is an **Agent**ic **Co**mmonsense and **Ma**th benchmark in which every compositional task requires both commonsense and mathematical reasoning to be solved. The tasks are set in real-world scenarios: *house working*, *web shopping*, *science experiments*, *smart assistant* and *travel agent*. The benchmark is designed to test the mixed-type compositional reasoning abilities of LLMs. Contemporary LLMs perform well on commonsense and math reasoning in isolation, but are far less effective at solving AgentCoMa tasks that require their composition. See some dev set example questions below.

<img src="https://agentcoma.github.io/images/question_examples.svg" alt="Question examples" width="900"/>

For each compositional task, we also provide its underlying reasoning steps as individual questions. Performance on AgentCoMa is measured as the *compositionality gap* — i.e., the difference between the accuracy on the compositional tasks and the proportion of samples where all individual reasoning steps are answered correctly in isolation.
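As a rough illustration (this is a sketch, not the official evaluation script — the function and variable names below are hypothetical), the compositionality gap described above can be computed from per-sample correctness flags:

```python
def compositionality_gap(step_correct, comp_correct):
    """Gap between the rate at which *all* of a sample's individual
    reasoning steps are answered correctly in isolation and the
    accuracy on the corresponding compositional tasks.

    step_correct: one list of booleans per sample (one flag per step).
    comp_correct: one boolean per sample (compositional task solved?).
    """
    assert len(step_correct) == len(comp_correct)
    all_steps_rate = sum(all(s) for s in step_correct) / len(step_correct)
    comp_accuracy = sum(comp_correct) / len(comp_correct)
    return all_steps_rate - comp_accuracy

# Example: 3 of 4 samples have every individual step correct (0.75),
# but only 1 of 4 compositional tasks is solved (0.25) -> gap of 0.5.
print(compositionality_gap(
    [[True, True], [True, True], [True, False], [True, True]],
    [True, False, False, False],
))  # -> 0.5
```

A large positive gap means the model answers the individual steps correctly far more often than it solves their composition.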

## Citation

If you use this dataset, please cite our work:

````bibtex
@misc{alazraki2025agentcomacompositionalbenchmarkmixing,
  title={AgentCoMa: A Compositional Benchmark Mixing Commonsense and Mathematical Reasoning in Real-World Scenarios},
  author={Lisa Alazraki and Lihu Chen and Ana Brassard and Joe Stacey and Hossein A. Rahmani and Marek Rei},
  year={2025},
  eprint={2508.19988},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2508.19988},
}
````