Strategic Navigation or Stochastic Search? How Agents and Humans Reason Over Document Collections
Abstract
The MADQA benchmark evaluates multimodal agents' strategic reasoning through diverse questions grounded in PDF documents, revealing a gap between matching human accuracy and reasoning efficiently.
Multimodal agents offer a promising path to automating complex, document-intensive workflows. Yet a critical question remains: do these agents demonstrate genuine strategic reasoning, or merely stochastic trial-and-error search? To address this, we introduce MADQA, a benchmark of 2,250 human-authored questions grounded in 800 heterogeneous PDF documents. Guided by Classical Test Theory, we design it to maximize discriminative power across varying levels of agentic ability. To evaluate agentic behaviour, we propose a novel evaluation protocol that measures the accuracy-effort trade-off. Using this framework, we show that while the best agents can match human searchers in raw accuracy, they succeed on largely different questions and rely on brute-force search to compensate for weak strategic planning. They fail to close the nearly 20% gap to oracle performance, persisting in unproductive loops. We release the dataset and evaluation harness to facilitate the transition from brute-force retrieval to calibrated, efficient reasoning.
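Neither the evaluation protocol nor the Classical Test Theory procedure is spelled out on this page, so the following is only a minimal sketch under stated assumptions, not the paper's actual implementation: it assumes effort is measured as the number of retrieval/tool-use steps per question, that item discrimination is computed as the classical point-biserial item-total correlation, and it runs on simulated results rather than MADQA data.

```python
import numpy as np

# Hypothetical results matrix: rows = agents/searchers, cols = questions.
# correct[i, j] = 1 if agent i answered question j correctly.
# steps[i, j]   = number of retrieval/tool-use steps agent i spent on question j.
rng = np.random.default_rng(0)
n_agents, n_questions = 12, 2250
ability = rng.normal(size=n_agents)                # latent ability per agent
difficulty = rng.normal(size=n_questions)          # latent difficulty per question
p = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
correct = (rng.random((n_agents, n_questions)) < p).astype(float)
steps = rng.integers(1, 40, size=(n_agents, n_questions)).astype(float)

# --- Classical Test Theory: item discrimination -------------------------
# Point-biserial correlation between each question's correctness and the
# rest-of-test score. Low-discrimination items separate strong and weak
# agents poorly, so a CTT-guided design would prune or revise them.
total = correct.sum(axis=1)

def item_discrimination(j: int) -> float:
    rest = total - correct[:, j]                   # exclude item j from the total
    item = correct[:, j]
    if item.std() == 0 or rest.std() == 0:         # degenerate item: everyone same
        return 0.0
    return float(np.corrcoef(item, rest)[0, 1])

disc = np.array([item_discrimination(j) for j in range(n_questions)])
print(f"mean item discrimination: {disc.mean():.3f}")

# --- Accuracy-effort trade-off ------------------------------------------
# One point per agent: overall accuracy vs. mean steps per question.
# A strategic agent sits in the upper-left (high accuracy, low effort);
# a brute-force searcher drifts right, buying accuracy with extra steps.
for i in range(n_agents):
    acc = correct[i].mean()
    effort = steps[i].mean()
    print(f"agent {i:2d}: accuracy={acc:.3f}  mean steps={effort:5.1f}")
```

Under these assumptions, two agents with identical accuracy can still be separated by effort, which is the distinction between strategic navigation and stochastic search that the abstract draws.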
Community
Dataset: https://huggingface.co/datasets/OxRML/MADQA (see the loading sketch after these links)
Baseline code: https://github.com/OxRML/MADQA
Leaderboard: https://huggingface.co/spaces/Snowflake/MADQA-Leaderboard
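For quick inspection, the dataset can be pulled from the Hugging Face Hub. The repo id comes from the dataset link above, but the available splits and column names are not stated on this page, so this sketch prints them rather than hard-coding assumptions.

```python
from datasets import load_dataset

# Minimal loading sketch; repo id taken from the dataset link above.
ds = load_dataset("OxRML/MADQA")
print(ds)                        # shows the available splits and columns

first_split = next(iter(ds))     # first split name, e.g. "train" (assumption)
example = ds[first_split][0]
print(example.keys())            # presumably question/answer/document fields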