arxiv:2602.22953

General Agent Evaluation

Published on Feb 26 · Submitted by Leshem Choshen on Feb 27

Abstract

General-purpose agents remain underdeveloped despite promising implementations, necessitating systematic evaluation frameworks and benchmarks to assess their true versatility across diverse environments.

AI-generated summary

The promise of general-purpose agents - systems that perform tasks in unfamiliar environments without domain-specific engineering - remains largely unrealized. Existing agents are predominantly specialized, and while emerging implementations like OpenAI SDK Agent and Claude Code hint at broader capabilities, no systematic evaluation of their general performance has been pursued. Current agentic benchmarks assume domain-specific integration, encoding task information in ways that preclude fair evaluation of general agents. This paper frames general-agent evaluation as a first-class research objective. We propose conceptual principles for such evaluation, a Unified Protocol enabling agent-benchmark integration, and Exgentic - a practical framework for general agent evaluation. We benchmark five prominent agent implementations across six environments as the first Open General Agent Leaderboard. Our experiments show that general agents generalize across diverse environments, achieving performance comparable to domain-specific agents without any environment-specific tuning. We release our evaluation protocol, framework, and leaderboard to establish a foundation for systematic research on general-purpose agents.
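To make the Unified Protocol idea concrete, here is a minimal sketch of what a single agent-benchmark contract could look like. This is an illustration only, not the paper's actual interface: the names `Environment`, `Agent`, `StepResult`, and `run_episode` are hypothetical, and the only assumption is that every benchmark exposes the same generic surface (task description, observations, actions), so the agent receives no domain-specific hints or special integration.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical sketch of a unified agent-benchmark protocol. Every environment
# exposes the same minimal surface, so a general agent needs no
# environment-specific wiring. These names are illustrative, not the paper's API.

@dataclass
class StepResult:
    observation: str   # what the agent sees after acting
    done: bool         # True once the task/episode is finished
    score: float       # task score in [0, 1], meaningful only when done


class Environment(Protocol):
    def task_description(self) -> str: ...   # natural-language task statement
    def reset(self) -> str: ...              # initial observation
    def step(self, action: str) -> StepResult: ...


class Agent(Protocol):
    def act(self, task: str, observation: str) -> str: ...


def run_episode(agent: Agent, env: Environment, max_steps: int = 50) -> float:
    """Drive any agent against any environment through the shared contract."""
    task = env.task_description()
    observation = env.reset()
    for _ in range(max_steps):
        action = agent.act(task, observation)
        result = env.step(action)
        if result.done:
            return result.score
        observation = result.observation
    return 0.0  # step budget exhausted without completing the task
```

Under a contract like this, "general" means the same `act` implementation is reused unchanged for every environment; only the environment side is benchmark-specific.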

Community

Paper submitter

🚀 Welcome to the Open General Agent Era! 🤖🌍

What if we could finally measure how general our "general-purpose agents" really are?

This paper takes on that challenge head-on 💥 Instead of evaluating agents in carefully engineered, domain-specific setups (where they often get hidden hints 👀), the authors treat general-agent evaluation as a first-class research problem.

Here's what's inside:

✨ Conceptual principles for fairly evaluating general-purpose agents
🔌 A Unified Protocol that lets agents plug into benchmarks without special treatment
🧪 Exgentic, a practical framework for running these evaluations
🏆 The first Open General Agent Leaderboard, benchmarking 5 major agent implementations across 6 diverse environments (a toy scoring sketch follows this list)

And the big takeaway? 🎉
General agents can perform competitively across very different environments, without environment-specific tuning. That's a huge step toward truly adaptable AI systems.

By releasing the protocol, framework, and leaderboard, this work lays the groundwork for a more transparent and systematic future of agent research 🌱📊

If you care about building agents that actually generalize (not just specialize), this is an exciting milestone. Let the leaderboard battles begin! 🥊📈

