Papers
arxiv:2603.13875

GradMem: Learning to Write Context into Memory with Test-Time Gradient Descent

Published on Mar 14
· Submitted by
Mikhail Burtsev
on Mar 18

Abstract

GradMem enables efficient context storage and retrieval in language models through gradient-based memory writing that outperforms traditional forward-only approaches.

AI-generated summary

Many large language model applications require conditioning on long contexts. Transformers typically support this by storing a large per-layer KV-cache of past activations, which incurs substantial memory overhead. A desirable alternative is compressive memory: read a context once, store it in a compact state, and answer many queries from that state. We study this in a context removal setting, where the model must generate an answer without access to the original context at inference time. We introduce GradMem, which writes context into memory via per-sample test-time optimization. Given a context, GradMem performs a few steps of gradient descent on a small set of prefix memory tokens while keeping model weights frozen. GradMem explicitly optimizes a model-level self-supervised context reconstruction loss, resulting in a loss-driven write operation with iterative error correction, unlike forward-only methods. On associative key-value retrieval, GradMem outperforms forward-only memory writers with the same memory size, and additional gradient steps scale capacity much more effectively than repeated forward writes. We further show that GradMem transfers beyond synthetic benchmarks: with pretrained language models, it attains competitive results on natural language tasks including bAbI and SQuAD variants, relying only on information encoded in memory.
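The core write operation described above — gradient descent on memory tokens with frozen model weights, driven by a context reconstruction loss — can be sketched at toy scale. This is a minimal illustration, not the paper's implementation: a fixed linear "decoder" `W` stands in for the frozen language model, the context is a single vector, and the reconstruction loss is squared error; all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def write_context_to_memory(context, W, steps=1000, lr=0.1, seed=0):
    """GradMem-style write (toy sketch): optimize a memory vector M so that
    a frozen linear decoder W reconstructs the context from M.

    Loss:     L(M) = ||M @ W - context||^2   (self-supervised reconstruction)
    Gradient: dL/dM = 2 * (M @ W - context) @ W.T

    W stays fixed throughout, mirroring the frozen model weights; only the
    memory M is updated, mirroring the prefix memory tokens.
    """
    rng = np.random.default_rng(seed)
    d_mem, d_ctx = W.shape
    M = rng.normal(scale=0.01, size=d_mem)  # small random init for memory
    losses = []
    for _ in range(steps):
        residual = M @ W - context          # current reconstruction error
        losses.append(float(residual @ residual))
        M = M - lr * 2.0 * residual @ W.T   # one gradient-descent write step
    losses.append(float((M @ W - context) @ (M @ W - context)))
    return M, losses

# Toy usage: an 8-dim memory compresses (here: reconstructs) a 4-dim context.
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 4)) / np.sqrt(8)    # frozen "decoder" weights
context = rng.normal(size=4)                # the context to be written
M, losses = write_context_to_memory(context, W)
```

Because the write is loss-driven, each step corrects the residual reconstruction error — the iterative error correction that forward-only writers (a single pass producing the memory state) lack; more steps directly buy a lower loss.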

