arxiv:2605.13880

PREPING: Building Agent Memory without Tasks

Published on May 11 · Submitted by Yumin Choi on May 15

Abstract

Preping is a framework for pre-task memory construction that uses proposer-guided synthetic practice to improve agent performance in new environments with reduced deployment costs.

AI-generated summary

Agent memory is typically constructed either offline from curated demonstrations or online from post-deployment interactions. However, regardless of how it is built, an agent faces a cold-start gap when first introduced to a new environment without any task-specific experience available. In this paper, we study pre-task memory construction: whether an agent can build procedural memory before observing any target-environment tasks, using only self-generated synthetic practice. Yet, synthetic interaction alone is insufficient: without controlling what to practice and what to store, synthetic tasks become redundant, infeasible, and ultimately uninformative, and memory further degrades quickly due to unfiltered trajectories. To overcome this, we present Preping, a proposer-guided memory construction framework. At its core is proposer memory, a structured control state that shapes future practice. A Proposer generates synthetic tasks conditioned on this state, a Solver executes them, and a Validator determines which trajectories are eligible for memory insertion while also providing feedback to guide future proposals. Experiments on AppWorld, BFCL v3, and MCP-Universe show that Preping substantially improves over a no-memory baseline and achieves performance competitive with strong playbook-based methods built from offline or online experience, with deployment cost 2.99× lower on AppWorld and 2.23× lower on BFCL v3 than online memory construction. Further analyses reveal that the main benefit does not come from synthetic volume alone, but from proposer-side control over feasibility, redundancy, and coverage, combined with selective memory updates.
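The Proposer–Solver–Validator loop above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: all function names, the toy feasibility rule (even-numbered tasks succeed), and the memory data structures are assumptions introduced here for clarity.

```python
# Illustrative sketch of a proposer-guided practice loop.
# Every name and rule below is hypothetical, not from the Preping paper.

def propose_task(proposer_memory):
    """Proposer: pick a new synthetic task, skipping already-covered ones
    to avoid redundant practice (steered by the proposer memory state)."""
    candidates = [t for t in range(10) if t not in proposer_memory["covered"]]
    return min(candidates) if candidates else None

def solve(task):
    """Solver: execute the task in the environment and return a trajectory.
    Toy rule: pretend even-numbered tasks are feasible."""
    return {"task": task, "steps": [f"call_tool({task})"], "success": task % 2 == 0}

def validate(trajectory):
    """Validator: decide memory eligibility and emit feedback
    that shapes future proposals."""
    ok = trajectory["success"]
    return ok, "feasible" if ok else "infeasible"

proposer_memory = {"covered": set(), "feedback": []}  # control state
procedural_memory = []                                # distilled experience

while True:
    task = propose_task(proposer_memory)
    if task is None:          # coverage exhausted, stop practicing
        break
    traj = solve(task)
    ok, feedback = validate(traj)
    proposer_memory["covered"].add(task)              # redundancy control
    proposer_memory["feedback"].append((task, feedback))
    if ok:                                            # selective memory update
        procedural_memory.append(traj)

print(f"{len(procedural_memory)} validated trajectories stored")
```

The key design point mirrored here is that the Validator gates what enters procedural memory, while its feedback flows back into the proposer memory so that future proposals avoid redundant or infeasible practice.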

Community

Paper submitter

LLM agents often need memory to solve tasks in new tool environments, but memory is usually built only after seeing human tasks or deployment-time user interactions. This creates a cold-start problem: the agent needs memory most when it has the least experience.

We introduce Preping, a framework for building agent memory before the first user task. Preping lets an agent generate its own synthetic practice tasks, execute them in the target environment, validate which trajectories are reliable, and distill them into reusable procedural memory.

The key idea is simple: instead of passively waiting for user tasks, agents can actively prepare through controlled, environment-grounded practice. Across AppWorld, BFCL v3, and MCP-Universe, Preping improves over no-memory baselines and remains competitive with memory methods (such as ACE) that rely on offline or online target-task experience, while reducing deployment-time memory construction cost.


Get this paper in your agent:

hf papers read 2605.13880
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
