arxiv:2512.15374

SCOPE: Prompt Evolution for Enhancing Agent Effectiveness

Published on Dec 17 · Submitted by JarvisPei on Dec 18
Abstract

SCOPE enhances LLM agents' context management through prompt evolution, improving task success rates in dynamic environments without human intervention.

AI-generated summary

Large Language Model (LLM) agents are increasingly deployed in environments that generate massive, dynamic contexts. However, a critical bottleneck remains: while agents have access to this context, their static prompts lack the mechanisms to manage it effectively, leading to recurring Corrective and Enhancement failures. To address this capability gap, we introduce SCOPE (Self-evolving Context Optimization via Prompt Evolution). SCOPE frames context management as an online optimization problem, synthesizing guidelines from execution traces to automatically evolve the agent's prompt. We propose a Dual-Stream mechanism that balances tactical specificity (resolving immediate errors) with strategic generality (evolving long-term principles). Furthermore, we introduce Perspective-Driven Exploration to maximize strategy coverage, increasing the likelihood that the agent has the correct strategy for any given task. Experiments on the HLE benchmark show that SCOPE improves task success rates from 14.23% to 38.64% without human intervention. We make our code publicly available at https://github.com/JarvisPei/SCOPE.
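To make the idea concrete, the loop described above can be sketched in a few lines: an evolver ingests execution traces and maintains two guideline streams, a short-lived tactical stream for immediate error fixes and a long-lived strategic stream for general principles, then assembles an evolved prompt for the next episode. This is a minimal conceptual sketch, not the authors' implementation; all class names, trace fields, and the eviction policy are illustrative assumptions.

```python
# Conceptual sketch of online prompt evolution from execution traces
# (NOT the SCOPE codebase's API; names and trace schema are assumptions).

class PromptEvolver:
    def __init__(self, base_prompt):
        self.base_prompt = base_prompt
        self.tactical = []     # short-lived, error-specific guidelines
        self.strategic = []    # long-lived, general principles
        self.max_tactical = 5  # cap so stale fixes are evicted

    def observe(self, trace):
        """Update both streams from one execution trace (a dict here)."""
        if trace.get("error"):
            # Tactical stream: synthesize a rule that resolves the immediate failure.
            rule = f"Avoid: {trace['error']}"
            if rule not in self.tactical:
                self.tactical.append(rule)
            self.tactical = self.tactical[-self.max_tactical:]
        if trace.get("lesson"):
            # Strategic stream: keep a durable, general principle.
            if trace["lesson"] not in self.strategic:
                self.strategic.append(trace["lesson"])

    def prompt(self):
        """Assemble the evolved prompt for the next episode."""
        parts = [self.base_prompt]
        if self.strategic:
            parts.append("Principles:\n" + "\n".join(self.strategic))
        if self.tactical:
            parts.append("Recent fixes:\n" + "\n".join(self.tactical))
        return "\n\n".join(parts)


evolver = PromptEvolver("You are a research agent.")
evolver.observe({"error": "cited a retracted source"})
evolver.observe({"lesson": "Verify sources before citing them."})
print(evolver.prompt())
```

In the paper, guideline synthesis is done by an LLM reading the trace rather than by string templates; the sketch only shows the control flow of the dual-stream update.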

Community

Paper author and submitter:

We introduce SCOPE (Self-evolving Context Optimization via Prompt Evolution), a framework that automatically evolves agent prompts by learning from execution traces.

Try it now:

pip install scope-optimizer

📄 Paper: https://arxiv.org/abs/2512.15374
💻 Code: https://github.com/JarvisPei/SCOPE

arXiv lens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/scope-prompt-evolution-for-enhancing-agent-effectiveness-4920-fbe057d9

  • Executive Summary
  • Detailed Breakdown
  • Practical Applications

