
Agent Harness for Large Language Model Agents: A Survey


H=(E,T,C,S,L,V) Six-Component Architecture

This survey is actively maintained. If you find it useful, please star the repo to stay updated and help others find it.


The agent execution harness — not the model — is the primary determinant of agent reliability at scale.
This survey formalizes the harness as a first-class architectural object H = (E, T, C, S, L, V), surveys 110+ papers, blogs, and reports across 23 systems, and maps 9 open technical challenges.
📄 Read the Paper (coming soon)
✉️ Corrections & suggestions: gloriamenng@gmail.com (Qianyu Meng); wangyanan@mail.dlut.edu.cn (Yanan Wang); chenliyi@xiaohongshu.com (Liyi Chen)

If you find this survey useful, please cite:

@misc{meng2026agentharness,
  title   = {Agent Harness for Large Language Model Agents: A Survey},
  author  = {Meng, Qianyu* and Wang, Yanan* and Chen, Liyi and Wang, Qimeng and
             Lu, Chengqiang and Wu, Wei and Gao, Yan and Wu, Yi and Hu, Yao},
  year    = {2026},
  url     = {https://github.com/Gloriaameng/LLM-Agent-Harness-Survey},
  note    = {*Equal contribution. Work in progress}
}

🆕 News & Updates

  • [2026-04-03] Initial release
  • [2026-04-07] Repo updated


Overview

LLM agents are increasingly deployed in agentic settings where they autonomously plan, use tools, and act in multi-step environments. The dominant narrative attributes agent performance to the underlying model. This survey challenges that assumption.

We introduce a formal definition of the agent execution harness as a six-component tuple:

| Component | Symbol | Role |
|---|---|---|
| Execution Loop | E | Observe-think-act cycle, termination conditions, error recovery |
| Tool Registry | T | Typed tool catalog, routing, monitoring, schema validation |
| Context Manager | C | What enters the context window, compaction, retrieval |
| State Store | S | Persistence across turns/sessions, crash recovery |
| Lifecycle Hooks | L | Auth, logging, policy enforcement, instrumentation |
| Evaluation Interface | V | Action trajectories, intermediate states, success signals |
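To make the tuple concrete, here is a minimal illustrative sketch in Python. The survey defines the six components abstractly; the class name `Harness`, the field types, and the `components()` helper are our own assumptions, not an API from any surveyed system.

```python
# Illustrative sketch of the six-component harness tuple H = (E, T, C, S, L, V).
# All names and types here are hypothetical; the survey defines the components
# abstractly rather than prescribing an implementation.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Harness:
    execution_loop: Callable[..., Any]             # E: observe-think-act cycle
    tool_registry: dict[str, Callable[..., Any]]   # T: typed tool catalog
    context_manager: Callable[[list], list]        # C: what enters the context window
    state_store: dict[str, Any]                    # S: persistence across turns/sessions
    lifecycle_hooks: list[Callable[..., Any]] = field(default_factory=list)  # L: hooks
    evaluation_interface: Callable[..., Any] = print  # V: trajectories, success signals

    def components(self) -> dict[str, bool]:
        """Report which of E, T, C, S, L, V this harness instance populates."""
        return {
            "E": self.execution_loop is not None,
            "T": bool(self.tool_registry),
            "C": self.context_manager is not None,
            "S": self.state_store is not None,
            "L": bool(self.lifecycle_hooks),
            "V": self.evaluation_interface is not None,
        }
```

A harness built without lifecycle hooks, for instance, would report `L: False` — the kind of gap the Harness Completeness Matrix below is designed to surface.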

Key empirical evidence that harnesses matter:

  • 🔥 Pi Research: Grok Code Fast 1 jumped from 6.7% → 68.3% on SWE-bench by changing only the harness edit-tool format — model unchanged
  • 💀 OpenAI Codex: 1M lines of code, 0 hand-written over 5 months — failures attributed not to model capability but to "underspecified environments"
  • ⚡ Stripe Minions: 1,300 PRs/week, 0 human-written code — harness-first engineering
  • 📉 METR: benchmark-passing PRs have a 24.2pp lower human merge rate, gap widening at 9.6pp/year — evaluation harness validity crisis
  • 💬 "The harness is the chassis; the model is the engine." — practitioner consensus, 2026

Root Cause Analysis

What This Survey Accomplishes

Conceptual contribution: We formalize the agent harness as an architectural object with six governable components (E, T, C, S, L, V), elevating it from implicit infrastructure to an explicit research target.

Empirical scope: We systematically review 110+ papers spanning academic research (evaluation benchmarks, security frameworks, memory architectures) and production deployments (Stripe, OpenAI, Cursor, METR), establishing that harness design is a binding constraint on deployed agent reliability.

Methodological advance: We introduce the Harness Completeness Matrix — a structured assessment framework mapping which of the six components each system implements — enabling direct comparison across heterogeneous agent systems that prior surveys could not evaluate on common terms.

Open challenges identified: We document nine technical challenges where current research provides partial solutions but no production-grade infrastructure: formal security models, cross-harness portability, protocol interoperability (MCP/A2A), context economics at 1M+ tokens/task, Byzantine fault tolerance in multi-agent systems, and compositional verification.

Practitioner-academic bridge: Unlike prior surveys focused exclusively on model capabilities or isolated components (memory, planning, tool use), we synthesize peer-reviewed research with production deployment reports to show where theory meets practice — and where critical gaps remain.

Intended audience: Researchers designing agent infrastructure, practitioners building production systems, and evaluators seeking to understand why benchmark performance often fails to predict deployment outcomes.


Historical Timeline

Historical Evolution of Agent Harnesses

| Year | Milestone | Significance |
|---|---|---|
| 1997–2005 | JUnit, TestNG, xUnit family | Software test harness paradigm; standardized observe-assert lifecycle |
| 2016 | OpenAI Gym (Brockman et al.) | RL environment harness; step/reset API becomes canonical interface |
| 2022 Nov | ChatGPT public release; LangChain emerges | LLM-native agent frameworks begin; tool use as first-class citizen |
| 2023 | ReAct, Toolformer, MemGPT, Reflexion, Voyager, AutoGPT | Core agent patterns: reasoning-acting, memory, reflection, skill accumulation |
| 2023 | CAMEL, ChatDev, Generative Agents | Multi-agent coordination; social simulation harnesses |
| 2023 | AgentBench, SWE-bench | Agent evaluation infrastructure emerges |
| 2024 | MetaGPT, WebArena, ToolLLM, SWE-agent, OSWorld | Full-stack harnesses; real-world environment benchmarks |
| 2024 | CodeAct, LATS, Tree of Thoughts | Structured action spaces; search-augmented planning |
| 2024 Nov | Anthropic releases MCP protocol | First major tool↔harness standardization (2–15ms latency) |
| 2025 | HAL, AIOS, LangGraph | Benchmark unification (21,730 rollouts); OS-level scheduling (2.1× speedup) |
| 2025 | Google releases A2A protocol | Agent↔agent standardization (50–200ms) |
| 2025 | MemoryOS, SkillsBench†, AgentBound† | Memory OS abstraction; skills-as-context (+16.2pp); safety certification |
| 2026 Jan–Mar | AgencyBench†, SandboxEscapeBench†, PRISM†, AEGIS†, SkillFortify†, Schema First† | Compute economics; 15–35% escape rates; runtime security; schema discipline |

† preprint


Harness Completeness Matrix

Legend: ✓ full support · ≈ partial · ✗ absent

  • Full-Stack Harnesses: Claude Code · OpenClaw / PRISM · AIOS · OpenHands
  • Multi-Agent Harnesses: MetaGPT · AutoGen · ChatDev · CAMEL · DeerFlow · DeepAgents
  • General Frameworks: LangChain · LangGraph · LlamaIndex
  • Specialized Harnesses: SWE-agent
  • Capability Modules: MemGPT · Voyager · Reflexion · Generative Agents · Concordia
  • Evaluation Infrastructure: HAL · AgentBench · OSWorld · BrowserGym

Paper List

Historical Lineages

Software Test Harnesses (1990s–2000s)

  • JUnit: "JUnit: A Cook's Tour". Beck & Gamma. Java Report, 4(5), May 1999. [Article]

RL Environment Harnesses (2016–2022)

  • OpenAI Gym: "OpenAI Gym". Brockman et al. arXiv 2016. [Paper] [Code]
  • Gymnasium: "Gymnasium: A Standard Interface for Reinforcement Learning Environments". Towers et al. NeurIPS 2025. [Paper] [Code]

Early LLM Agent Frameworks (2023–2024)

  • ReAct: "ReAct: Synergizing Reasoning and Acting in Language Models". Yao et al. ICLR 2023. [Paper] [Code]
  • Toolformer: "Toolformer: Language Models Can Teach Themselves to Use Tools". Schick et al. NeurIPS 2023. [Paper]
  • AutoGPT: "Auto-GPT: An Autonomous GPT-4 Experiment". Gravitas et al. GitHub 2023. [Code]
  • LangChain: "LangChain: Building Applications with LLMs through Composability". Chase et al. GitHub 2022. [Code]

Harness Taxonomy

What we classify: We categorize agent systems by harness completeness — which of the six components (E, T, C, S, L, V) each system implements — distinguishing full-stack harnesses (all six components) from specialized frameworks (partial implementations).

Why it matters: Prior taxonomies classified agents by application domain (coding, web navigation, embodied AI) or model architecture (single-agent, multi-agent). These categorizations cannot explain why systems with similar models achieve different reliability outcomes. Our harness-centric taxonomy reveals that production-grade systems converge on full ETCSLV implementations, while research prototypes often implement only 2–3 components.

Key finding: No agent framework can achieve production reliability without implementing all six governance components. Systems missing L-component (lifecycle hooks) cannot enforce safety policies. Systems missing V-component (evaluation interfaces) cannot debug failures. Systems missing S-component (state persistence) cannot recover from crashes.
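The classification rule behind the matrix can be sketched in a few lines. This is a hypothetical helper of our own, not part of the survey's tooling: a system counts as full-stack only when every one of the six components is present.

```python
# Hypothetical sketch of the completeness classification described above:
# a system is a full-stack harness only if it implements all six components.
COMPONENTS = ("E", "T", "C", "S", "L", "V")

def classify(implemented: set[str]) -> str:
    """Classify a system by which of E, T, C, S, L, V it implements."""
    missing = [c for c in COMPONENTS if c not in implemented]
    if not missing:
        return "full-stack harness"
    return f"specialized framework (missing {', '.join(missing)})"

# A typical research prototype with only a loop and a tool catalog:
print(classify({"E", "T"}))  # specialized framework (missing C, S, L, V)
```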

Full-Stack Harnesses

  • PRISM/OpenClaw: "OpenClaw PRISM: A Zero-Fork, Defense-in-Depth Runtime Security Layer for Tool-Augmented LLM Agents". Li. arXiv 2026. [Paper]
  • AIOS: "AIOS: LLM Agent Operating System". Mei et al. COLM 2025. [Paper] [Code]
  • OpenHands: "OpenHands: An Open Platform for AI Software Developers as Generalist Agents". Wang et al. ICLR 2025. [Paper] [Code]
  • SWE-agent: "SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering". Yang et al. NeurIPS 2024. [Paper] [Code]
  • HAL: "Holistic Agent Leaderboard: The Missing Infrastructure for AI Agent Evaluation". Kapoor et al. ICLR 2026. [Paper]

Multi-Agent Harnesses

  • MetaGPT: "MetaGPT: Meta Programming for a Multi-Agent Collaborative Framework". Hong et al. ICLR 2024. [Paper] [Code]
  • AutoGen: "AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation". Wu et al. arXiv 2023. [Paper] [Code]
  • ChatDev: "ChatDev: Communicative Agents for Software Development". Qian et al. ACL 2024. [Paper] [Code]
  • CAMEL: "CAMEL: Communicative Agents for 'Mind' Exploration of Large Language Model Society". Li et al. NeurIPS 2023. [Paper] [Code]

Frameworks & Modules

  • LangGraph: "LangGraph: Build Resilient Language Agents as Graphs". LangChain team. GitHub 2024. [Code]
  • MemGPT: "MemGPT: Towards LLMs as Operating Systems". Packer et al. NeurIPS 2023. [Paper] [Code]
  • Voyager: "Voyager: An Open-Ended Embodied Agent with Large Language Models". Wang et al. arXiv 2023. [Paper] [Code]
  • Reflexion: "Reflexion: Language Agents with Verbal Reinforcement Learning". Shinn et al. NeurIPS 2023. [Paper] [Code]
  • Generative Agents: "Generative Agents: Interactive Simulacra of Human Behavior". Park et al. UIST 2023. [Paper] [Code]
  • LangChain: "LangChain: Building Applications with LLMs through Composability". Chase et al. GitHub 2022. [Code]
  • LlamaIndex: "LlamaIndex: A Data Framework for LLM Applications". Liu et al. GitHub 2022. [Code]
  • DeerFlow: "DeerFlow: Distributed Workflow Engine for LLM Agents". GitHub 2024. [Code]
  • DeepAgents: "DeepAgents: Multi-Agent Framework for Deep Learning". GitHub 2024. [Code]

Evaluation Infrastructure

  • AgentBench: "AgentBench: Evaluating LLMs as Agents". Liu et al. ICLR 2024. [Paper] [Code]
  • SWE-bench: "SWE-bench: Can Language Models Resolve Real-World GitHub Issues?". Jimenez et al. ICLR 2024. [Paper] [Code]
  • OSWorld: "OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments". Xie et al. NeurIPS 2024. [Paper] [Code]
  • WebArena: "WebArena: A Realistic Web Environment for Building Autonomous Agents". Zhou et al. ICLR 2024. [Paper] [Code]
  • GAIA: "GAIA: A Benchmark for General AI Assistants". Mialon et al. ICLR 2024. [Paper]
  • Mind2Web: "Mind2Web: Towards a Generalist Agent for the Web". Deng et al. NeurIPS 2023. [Paper]
  • AgentBoard: "AgentBoard: An Analytical Evaluation Board of Multi-Turn LLM Agents". Ma et al. NeurIPS 2024. [Paper]
  • InterCode: "InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback". Yang et al. NeurIPS 2023. [Paper]

Technical Challenges

Scope: We identify nine open research problems where current harness infrastructure provides partial solutions but no production-grade, generalizable frameworks. Each challenge represents a gap between what academic prototypes demonstrate and what deployed systems require.

Evaluation criteria: For each challenge, we assess: (1) Academic coverage — peer-reviewed solutions, if any; (2) Production maturity — whether any deployed system has solved it at scale; (3) Blocking severity — whether the unsolved problem prevents deployment or merely limits performance.

Cross-cutting theme: Most challenges stem from the composition problem — individual components (security sandboxes, memory stores, evaluation metrics) work in isolation, but combining them into a coherent harness surfaces emergent failures. Formal verification, portability testing, and protocol interoperability all require compositional reasoning that current research has not addressed.

Security & Sandboxing

Challenge: Agent sandboxes must prevent malicious tool use while permitting legitimate system access — a harder problem than traditional application sandboxing because agents intentionally interact with sensitive resources (file systems, network APIs, shell commands).

State of the art: SandboxEscapeBench shows frontier models can escape containers; PRISM demonstrates zero-fork defense-in-depth is possible but requires OS-level instrumentation. No generalizable sandbox exists that works across all agent use cases without manual per-task configuration.

Open problems: Compositional security properties (proving agent + harness + environment satisfy a safety specification) remain unsolved. Current defenses are reactive (detecting escapes) rather than preventive (proving impossibility of escape).

Key numbers: SandboxEscapeBench — frontier LLMs achieve 15–35% container escape rates; PRISM — 10-hook zero-fork runtime reduces escape to near-zero with <5ms overhead.
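A pre-execution policy check of the reactive kind described above can be sketched as a lifecycle hook (L component) that vets tool calls before dispatch. This is a toy illustration under our own assumptions — a substring denylist, not the PRISM runtime or any surveyed system's defense — and, as the text notes, such reactive checks do not prove escape is impossible.

```python
# Toy sketch of a pre-execution lifecycle hook that vetoes dangerous tool calls
# before they reach the sandbox. The denylist and function names are hypothetical;
# real defenses (e.g. PRISM's hooks) operate at the OS level.
BLOCKED_PATTERNS = {"rm -rf /", "curl | sh"}

def pre_execution_hook(tool_name: str, args: dict) -> None:
    """Raise PermissionError before a denylisted shell command executes."""
    if tool_name == "shell":
        cmd = args.get("command", "")
        if any(bad in cmd for bad in BLOCKED_PATTERNS):
            raise PermissionError(f"blocked by policy: {cmd!r}")

def dispatch(tool_name, args, tools, hooks):
    """Run every registered hook (any may veto by raising), then call the tool."""
    for hook in hooks:
        hook(tool_name, args)
    return tools[tool_name](**args)
```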

  • SandboxEscapeBench†: "Quantifying Frontier LLM Capabilities for Container Sandbox Escape". Marchand et al. arXiv 2026. [Paper]
  • InjecAgent: "InjecAgent: Benchmarking Indirect Prompt Injections in Tool-Integrated Large Language Model Agents". Zhan et al. arXiv 2024. [Paper]
  • ToolHijacker†: "Prompt Injection Attack to Tool Selection in LLM Agents". Shi et al. NDSS 2026. [Paper]
  • Securing MCP†: "Securing the Model Context Protocol (MCP): Risks, Controls, and Governance". Errico et al. arXiv 2025. [Paper]
  • SHIELDA†: "SHIELDA: Structured Handling of Exceptions in LLM-Driven Agentic Workflows". Zhou et al. arXiv 2025. [Paper]
  • PALADIN†: "PALADIN: Self-Correcting Language Model Agents to Cure Tool-Failure Cases". Vuddanti et al. ICLR 2026. [Paper]
  • AgentBound†: "Securing AI Agent Execution". Bühler et al. arXiv 2025. [Paper]
  • AgentSys†: "AgentSys: Secure and Dynamic LLM Agents Through Explicit Hierarchical Memory Management". Wen et al. arXiv 2026. [Paper]
  • Indirect Prompt Injection: "Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection". Greshake et al. AISec 2023. [Paper]
  • AgentHarm†: "AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents". Andriushchenko et al. arXiv 2024. [Paper]
  • TrustAgent: "TrustAgent: Towards Safe and Trustworthy LLM-Based Agents". Hua et al. EMNLP 2024. [Paper]
  • ToolEmu†: "Identifying the Risks of LM Agents with an LM-Emulated Sandbox". Ruan et al. arXiv 2023. [Paper]
  • Ignore Previous Prompt: "Ignore Previous Prompt: Attack Techniques For Language Models". Perez & Ribeiro. NeurIPS ML Safety Workshop 2022. [Paper]

Evaluation & Benchmarking

Key numbers: HAL unified 21,730 rollouts, compressing weeks of evaluation to hours; OSWorld reports 28% false negative rate in automated evaluation; METR finds benchmark-passing PRs have 24.2pp lower human merge rate, widening at 9.6pp/year.

  • AgencyBench†: "AgencyBench: Benchmarking the Frontiers of Autonomous Agents in 1M-Token Real-World Contexts". Li et al. arXiv 2026. [Paper]
  • AEGIS†: "AEGIS: No Tool Call Left Unchecked -- A Pre-Execution Firewall and Audit Layer for AI Agents". Yuan et al. arXiv 2026. [Paper]
  • Hell or High Water†: "Hell or High Water: Evaluating Agentic Recovery from External Failures". Wang et al. COLM 2025. [Paper]
  • SearchLLM†: "Aligning Large Language Models with Searcher Preferences". Wu et al. arXiv 2026. [Paper]
  • Meta-Harness†: "Meta-Harness: End-to-End Optimization of Model Harnesses". Lee et al. arXiv 2026. [Paper]
  • TheAgentCompany†: "TheAgentCompany: Benchmarking LLM Agents on Consequential Real-World Tasks". Xu et al. arXiv 2024. [Paper]
  • BrowserGym†: "The BrowserGym Ecosystem for Web Agent Research". Le Sellier De Chezelles et al. arXiv 2024. [Paper]
  • WorkArena†: "WorkArena: How Capable are Web Agents at Solving Common Knowledge Work Tasks?". Drouin et al. arXiv 2024. [Paper]
  • R-Judge: "R-Judge: Benchmarking Safety Risk Awareness for LLM Agents". Yuan et al. EMNLP 2024. [Paper]
  • R2E: "R2E: Turning any GitHub Repository into a Programming Agent Environment". Jain et al. ICML 2024. [Paper]
  • Evaluation Survey: "Evaluation and Benchmarking of LLM Agents: A Survey". Mohammadi et al. KDD 2025. [Paper]
  • PentestJudge†: "PentestJudge: Judging Agent Behavior Against Operational Requirements". Caldwell et al. arXiv 2025. [Paper]

Protocol Standardization

Key numbers: MCP (tool↔harness): 2–15ms latency; A2A (agent↔agent): 50–200ms; ACP (intent-level, IBM) — three protocols serve complementary roles.

  • MCP: "Model Context Protocol". Anthropic. Technical Report 2024. [Spec]
  • A2A: "Agent-to-Agent Protocol". Google. Technical Report 2025. [Spec]
  • Protocol Comparison†: "A Survey of Agent Interoperability Protocols: Model Context Protocol (MCP), Agent Communication Protocol (ACP), Agent-to-Agent Protocol (A2A), and Agent Network Protocol (ANP)". Ehtesham et al. arXiv 2025. [Paper]
  • Gorilla: "Gorilla: Large Language Model Connected with Massive APIs". Patil et al. NeurIPS 2023. [Paper] [Code]

Runtime Context Management

Key numbers: SkillsBench — curated skill injection yields +16.2pp improvement; "Lost in the Middle" effect documented; long-context models shift the problem from retention to salience.
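The core C-component problem — deciding what fits in the window — can be illustrated with a budgeted compaction sketch. This is a toy of our own devising: it approximates token counts by whitespace words and keeps only the newest turns, whereas production harnesses use real tokenizers and salience-aware retrieval.

```python
# Toy sketch of budgeted context compaction (C component): always retain the
# first (system) message, then keep the newest turns that fit the budget.
# Word counts stand in for token counts; real harnesses use tokenizers.
def compact(messages: list[str], budget: int) -> list[str]:
    system, rest = messages[0], messages[1:]
    kept, used = [], len(system.split())
    for msg in reversed(rest):          # walk newest-first
        cost = len(msg.split())
        if used + cost > budget:
            break                       # budget exhausted: drop older turns
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))
```

Recency-only policies like this are exactly what "Lost in the Middle" complicates: the dropped middle turns may carry the salient fact.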

  • SkillsBench†: "SkillsBench: Benchmarking How Well Agent Skills Work Across Diverse Tasks". Li et al. arXiv 2026. [Paper]
  • ReadAgent: "A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts". Lee et al. ICML 2024. [Paper]
  • MemoryOS: "Memory OS of AI Agent". Kang et al. arXiv 2025. [Paper]
  • CoALA: "Cognitive Architectures for Language Agents". Sumers et al. TMLR 2024. [Paper]
  • SkillFortify†: "Formal Analysis and Supply Chain Security for Agentic AI Skills". Bhardwaj. arXiv 2026. [Paper]
  • Lost in the Middle: "Lost in the Middle: How Language Models Use Long Contexts". Liu et al. TACL 2024. [Paper]
  • Context Engineering Survey†: "Context Engineering: A Survey of 1,400 Papers on Effective Context Management for LLM Agents". Mei et al. arXiv 2025. [Paper]

Tool Use & Registry

Key numbers: Vercel found that removing 80% of tools helped more than any model upgrade. Schema First (Sigdel & Baral, 2026), a controlled pilot, shows that schema-based tool contracts reduce interface misuse but not semantic misuse; end-task success was zero across all conditions, suggesting that interface design alone is insufficient for tool reliability. CodeAct outperforms on 17/17 MINT benchmarks with 20% fewer turns.
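The interface-misuse half of that finding is the part a registry can mechanize: validating a call's argument names and types against a declared schema before execution. The sketch below is a hypothetical contract format of our own, not the Schema First study's API, and by construction it cannot catch semantic misuse (a well-typed call to the wrong tool).

```python
# Minimal sketch of schema-first tool contracts (T component): check argument
# names and types against a declared schema before invoking the tool.
# The schema format and tool names are hypothetical.
TOOL_SCHEMAS = {
    "read_file": {"path": str},
    "search":    {"query": str, "limit": int},
}

def validate_call(tool: str, args: dict) -> list[str]:
    """Return interface errors for a proposed call; empty list means well-formed."""
    schema = TOOL_SCHEMAS.get(tool)
    if schema is None:
        return [f"unknown tool: {tool}"]
    errors = [f"missing arg: {k}" for k in schema if k not in args]
    errors += [f"unexpected arg: {k}" for k in args if k not in schema]
    errors += [f"bad type for {k}: expected {schema[k].__name__}"
               for k, v in args.items()
               if k in schema and not isinstance(v, schema[k])]
    return errors
```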

  • CodeAct: "Executable Code Actions Elicit Better LLM Agents". Wang et al. ICML 2024. [Paper] [Code]
  • Schema First†: "Schema First Tool APIs for LLM Agents: A Controlled Study of Tool Misuse, Recovery, and Budgeted Performance". Sigdel & Baral. arXiv 2026. [Paper]
  • ToolLLM: "ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs". Qin et al. ICLR 2024. [Paper] [Code]
  • ToolSandbox†: "ToolSandbox: A Stateful, Conversational, Interactive Evaluation Benchmark for LLM Tool Use Capabilities". Lu et al. arXiv 2024. [Paper]
  • AutoTool†: "AutoTool: Efficient Tool Selection for Large Language Model Agents". Jia & Li. AAAI 2026. [Paper]
  • Tool Learning Survey: "Tool Learning with Large Language Models: A Survey". Qu et al. Frontiers of Computer Science 2024. [Paper]
  • GoEX†: "GoEX: Perspectives and Designs Towards a Runtime for Autonomous LLM Applications". Patil et al. arXiv 2024. [Paper]
  • AgentTuning: "AgentTuning: Enabling Generalized Agent Abilities for LLMs". Zeng et al. ACL 2024. [Paper]

Memory Architecture

Key numbers: Mem0 achieves 90% token reduction vs full-context; Zep temporal knowledge: +18.5% QA accuracy; Agent Workflow Memory: +14.9% on Mind2Web. Six architectural patterns: flat buffer → hierarchical → episodic → semantic → procedural → graph.
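The step from a flat buffer to retrieval — the first transition in the progression above — can be shown with a toy scorer. This is our own illustration, not any surveyed system's memory: it ranks stored episodes by word overlap with the query, where real systems use embeddings, temporal graphs, or learned relevance.

```python
# Toy sketch of episodic retrieval over a flat memory buffer: score stored
# episodes by word overlap with the query and return the top k. Hypothetical
# helper; surveyed systems (Mem0, Zep, AWM) use far richer scoring.
def retrieve(memory: list[str], query: str, k: int = 2) -> list[str]:
    """Return the k stored episodes sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(memory,
                    key=lambda m: len(q & set(m.lower().split())),
                    reverse=True)
    return scored[:k]

memory = [
    "user prefers tabs over spaces",
    "build failed on missing dependency",
    "user timezone is UTC+8",
]
```

Even this crude overlap scorer shows why retrieval beats replaying the full buffer: only the relevant episode enters the context window.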

  • Agent Workflow Memory (AWM)†: "Agent Workflow Memory". Wang et al. arXiv 2024. [Paper]
  • Mem0†: "Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory". Khant et al. arXiv 2025. [Paper]
  • A-MEM†: "A-MEM: Agentic Memory for LLM Agents". Xu et al. NeurIPS 2025. [Paper]
  • MemAct†: "Memory as Action: Autonomous Context Curation for Long-Horizon Agentic Tasks". Zhang et al. arXiv 2025. [Paper]
  • Memory Survey†: "Memory for Autonomous LLM Agents: Mechanisms, Evaluation, and Emerging Frontiers". Du. arXiv 2026. [Paper]
  • MemoryBank: "MemoryBank: Enhancing Large Language Models with Long-Term Memory". Zhong et al. AAAI 2024. [Paper]
  • LoCoMo†: "Evaluating Very Long-Term Conversational Memory of LLM Agents". Maharana et al. arXiv 2024. [Paper]
  • Memory Mechanisms Survey†: "A Survey on the Memory Mechanism of Large Language Model Based Agents". Zhang et al. arXiv 2024. [Paper]
  • Evo-Memory†: "Evo-Memory: Benchmarking LLM Agent Test-time Learning with Self-Evolving Memory". Wei et al. arXiv 2025. [Paper]

Planning & Reasoning

Key numbers: SWE-agent ACI study shows interface design outweighs model capability as the primary performance determinant. LATS integrates MCTS with language model feedback for state-space search. Plan-on-Graph enables adaptive self-correcting planning on knowledge graphs through guidance, memory, and reflection mechanisms.

  • Tree of Thoughts: "Tree of Thoughts: Deliberate Problem Solving with Large Language Models". Yao et al. NeurIPS 2023. [Paper] [Code]
  • LATS: "Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models". Zhou et al. arXiv 2023. [Paper] [Code]
  • Plan-on-Graph: "Plan-on-Graph: Self-Correcting Adaptive Planning of Large Language Model on Knowledge Graphs". Chen et al. NeurIPS 2024. [Paper]
  • AFlow†: "AFlow: Automating Agentic Workflow Generation". Zhang et al. arXiv 2024. [Paper]
  • Agent Q†: "Agent Q: Advanced Reasoning and Learning for Autonomous AI Agents". Putta et al. arXiv 2024. [Paper]
  • OPENDEV†: "Building Effective AI Coding Agents for the Terminal: Scaffolding, Harness, Context Engineering, and Lessons Learned". Bui. arXiv 2026. [Paper]
  • AOrchestra†: "AOrchestra: Automating Sub-Agent Creation for Agentic Orchestration". Ruan et al. arXiv 2026. [Paper]
  • RAP: "Reasoning with Language Model is Planning with World Model". Hao et al. EMNLP 2023. [Paper]
  • Inner Monologue: "Inner Monologue: Embodied Reasoning Through Planning with Language Models". Huang et al. CoRL 2022. [Paper]
  • Agent-Oriented Planning: "Agent-Oriented Planning in Multi-Agent Systems". Li et al. ICLR 2025. [Paper]
  • ExACT†: "ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning". Yu et al. arXiv 2024. [Paper]

Multi-Agent Coordination

Key numbers: AgencyBench — agents achieve 48.4% success on native SDK harness vs substantially lower on independent harnesses, demonstrating tight harness-agent coupling. Byzantine fault tolerance remains an open problem for adversarial multi-agent settings.

Multi-Agent Coordination Topologies

  • SAGA†: "SAGA: A Security Architecture for Governing AI Agentic Systems". Syros et al. NDSS 2026. [Paper]
  • MAS-FIRE†: "MAS-FIRE: Fault Injection and Reliability Evaluation for LLM-Based Multi-Agent Systems". Jia et al. arXiv 2026. [Paper]
  • Byzantine fault tolerance†: "Rethinking the Reliability of Multi-agent System: A Perspective from Byzantine Fault Tolerance". Zheng et al. arXiv 2025. [Paper]
  • Multi-agent baseline study†: "Rethinking the Value of Multi-Agent Workflow: A Strong Single Agent Baseline". Xu et al. arXiv 2026. [Paper]
  • AgentVerse†: "AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors". Chen et al. arXiv 2023. [Paper]
  • Mixture-of-Agents†: "Mixture-of-Agents Enhances Large Language Model Capabilities". Wang et al. arXiv 2024. [Paper]
  • Multi-Agent Survey: "Large Language Model Based Multi-Agents: A Survey of Progress and Challenges". Guo et al. IJCAI 2024. [Paper]
  • Concordia†: "Generative Agent-Based Modeling with Actions Grounded in Physical, Social, or Digital Space Using Concordia". Vezhnevets et al. arXiv 2023. [Paper]

Compute Economics

Key numbers: OpenRouter reports 13T tokens/week (Feb 2026), doubling every 4 weeks; AgencyBench measures 1M tokens/task average; 1000× agent compute growth projected by 2027; AIOS achieves 2.1× throughput speedup via proper agent scheduling.

  • Repo2Run†: "Repo2Run: Automated Building Executable Environment for Code Repository at Scale". Hu et al. arXiv 2025. [Paper]
  • Policy-First†: "Guardrails as Infrastructure: Policy-First Control for Tool-Orchestrated Workflows". Sigdel & Baral. arXiv 2026. [Paper]

Emerging Topics

  • Self-Evolving Agents Survey†: "A Survey of Self-Evolving Agents: What, When, How, and Where to Evolve on the Path to Artificial Super Intelligence". Gao et al. TMLR 2026. [Paper]
  • Self-RAG: "Self-RAG: Learning to Retrieve, Generate, and Critique Through Self-Reflection". Asai et al. ICLR 2024. [Paper]
  • Constitutional AI: "Constitutional AI: Harmlessness from AI Feedback". Bai et al. arXiv 2022. [Paper]
  • AppAgent†: "AppAgent: Multimodal Agents as Smartphone Users". Zhang et al. arXiv 2023. [Paper]

Related Surveys

  • LLM Agents Survey: "A Survey on Large Language Model Based Autonomous Agents". Wang et al. arXiv 2023. [Paper]
  • Rise of LLM Agents: "The Rise and Potential of Large Language Model Based Agents: A Survey". Xi et al. arXiv 2023. [Paper]
  • LLM Survey: "A Survey of Large Language Models". Zhao et al. arXiv 2023. [Paper]
  • AI Agent Systems†: "AI Agent Systems: Architectures, Applications, and Evaluation". Xu. arXiv 2025. [Paper]

Practitioner Reports & Industry Insights

Production deployment experiences from Stripe, OpenAI, Cursor, METR, and other frontier practitioners.

  • Stripe Minions: "Minions: Stripe's one-shot, end-to-end coding agents". Gray. Stripe Dev Blog, Feb 2026. [Blog]
  • Harness Engineering (OpenAI): "Harness engineering: leveraging Codex in an agent-first world". Lopopolo. OpenAI Blog, Feb 2026. [Blog]
  • Self-Driving Codebases: "Towards self-driving codebases". Lin. Cursor Blog, Feb 2026. [Blog]
  • METR SWE-bench Analysis: "Many SWE-bench-Passing PRs Would Not Be Merged into Main". Whitfill et al. METR Notes, Mar 2026. [Report]

Future Directions

Eight open research directions identified in the survey (no curated paper list — these are forward-looking):

  • Formal Harness Specification Language — DSL for specifying and verifying H=(E,T,C,S,L,V) components
  • Cross-Harness Benchmark Suite — portability testing across incompatible harness ecosystems
  • Security Taxonomy & Threat Model — extension of OWASP Top 10 to agent harness attack surfaces
  • Protocol Interoperability (MCP/A2A) — bridging tool-level and agent-level protocols
  • Long-Horizon Evaluation Methodology — metrics that don't degrade under multi-session, multi-day tasks
  • Harness-Aware Fine-Tuning — training models that are aware of their execution environment
  • Memory Interface Standardization — portable memory APIs across flat, episodic, and graph stores
  • Harness Transparency Specification — auditability and explainability for runtime decisions

Citation

See BibTeX at the top of this README.


Update Log

| Version | Date | Changes |
|---|---|---|
| v1 | 2026-04-03 | Initial preprint |
| v2 | 2026-04-07 | Repo updated |

† denotes preprint, not yet peer-reviewed.
This survey is under active development; the full manuscript will be released soon.
Maintained by Qianyu Meng & Liyi Chen. PRs welcome for missing papers or updated links.
