SE-Search: Self-Evolving Search Agent via Memory and Dense Reward
Abstract
SE-Search, a self-evolving search agent, improves retrieval-augmented generation by enhancing search behavior through memory purification, atomic query training, and dense rewards, yielding better factual accuracy.
Retrieval-augmented generation (RAG) reduces hallucinations and factual errors in large language models (LLMs) by conditioning generation on retrieved external knowledge. Recent search agents further cast RAG as an autonomous, multi-turn information-seeking process. However, existing methods often accumulate irrelevant or noisy documents and rely on sparse reinforcement learning signals. We propose SE-Search, a self-evolving search agent that improves online search behavior through three components: memory purification, atomic query training, and dense rewards. SE-Search follows a Think-Search-Memorize strategy that retains salient evidence while filtering out irrelevant content. Atomic query training promotes shorter, more diverse queries, improving evidence acquisition. Dense rewards provide fine-grained feedback that speeds up training. Experiments on single-hop and multi-hop question answering benchmarks show that SE-Search-3B outperforms strong baselines, yielding a 10.8-point absolute improvement and a 33.8% relative gain over Search-R1. We will make the code and model weights publicly available upon acceptance.
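The Think-Search-Memorize loop described above can be sketched as follows. This is a minimal illustration only: the function names (`llm_think`, `retrieve`, `is_salient`) and the threshold-based purification rule are hypothetical stand-ins, not the authors' actual implementation.

```python
# Hedged sketch of a Think-Search-Memorize loop: the agent alternates
# between reasoning, issuing short atomic queries, and storing only
# salient evidence in memory (memory purification).

def llm_think(question, memory):
    # Hypothetical stand-in for an LLM reasoning step that either emits
    # a short atomic query or decides it has enough evidence to answer.
    if len(memory) >= 2:
        return ("answer", "final answer based on memory")
    return ("search", f"atomic query {len(memory) + 1} for: {question}")

def retrieve(query):
    # Hypothetical retriever returning (document, relevance score) pairs.
    return [(f"doc relevant to '{query}'", 0.9), ("noisy unrelated doc", 0.2)]

def is_salient(doc, score, threshold=0.5):
    # Memory purification: keep only evidence above a relevance threshold.
    return score >= threshold

def se_search(question, max_turns=5):
    memory = []  # purified evidence store
    for _ in range(max_turns):
        action, content = llm_think(question, memory)
        if action == "answer":
            return content, memory
        for doc, score in retrieve(content):
            if is_salient(doc, score):
                memory.append(doc)  # Memorize: retain salient evidence only
    return "no answer within turn budget", memory
```

In a trained system, the dense-reward component would score each turn of this loop (e.g. each retrieved piece of evidence) rather than only the final answer, which is what distinguishes it from sparse outcome-only reinforcement signals.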