# LLDS-Search
On GRPO Collapse in Search-R1: The Lazy Likelihood-Displacement Death Spiral
📄 [Paper](https://arxiv.org/abs/2512.04220) | 🤗 LLDS-Huggingface | 💻 GitHub
LLDS is a lightweight likelihood-preserving regularizer designed to stabilize tool-integrated reinforcement learning (e.g., GRPO / Search-R1 style training). It prevents training collapse by regularizing only when the likelihood of a good (positively rewarded) action decreases, and only on the tokens responsible for that decrease.
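Conceptually, the penalty can be sketched as follows. This is a minimal illustration under assumed tensor shapes, not the repository's actual implementation; the function and argument names are hypothetical:

```python
import numpy as np

def llds_penalty(logp_new: np.ndarray,
                 logp_old: np.ndarray,
                 advantages: np.ndarray,
                 token_mask: np.ndarray) -> float:
    """Sketch: penalize likelihood drops only on good actions' responsible tokens.

    logp_new / logp_old: per-token log-probs under the current / reference
    policy, shape (batch, seq_len). advantages: per-response advantage,
    shape (batch,). token_mask: 1 for valid (non-padding) tokens.
    """
    # Per-token likelihood displacement; negative means likelihood decreased.
    delta = logp_new - logp_old
    # Regularize only responses with positive advantage ("good" actions)...
    good = (advantages > 0)[:, None]
    # ...and within them, only the tokens whose likelihood actually dropped.
    responsible = good & (delta < 0) & token_mask.astype(bool)
    # Quadratic penalty on the drop, averaged over the flagged tokens.
    drop = np.minimum(delta, 0.0)
    denom = max(responsible.sum(), 1)
    return float((drop ** 2 * responsible).sum() / denom)
```

Tokens of bad responses and tokens whose likelihood did not decrease contribute nothing, so the regularizer stays inactive in the healthy regime and only activates to counter likelihood displacement.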
We support tool-integrated inference using the same workflow as Search-R1, where the LLM interacts with a local retrieval server for multi-step reasoning.
The pipeline consists of two parts: a local retrieval server and the inference script. Search-R1 recommends running the retriever in a separate environment.

1. Launch the retrieval server:

```bash
conda activate retriever
bash retrieval_launch.sh
```

2. Run inference:

```bash
conda activate searchr1
python infer.py
```
In `infer.py`, set the model checkpoint and your question:

```python
MODEL_NAME = "<YOUR_ORG>/<YOUR_MODEL_NAME>"  # e.g. my-org/LLDS-R-GRPO-Qwen2.5-3B-Base
question = "Your question here"
```
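The multi-step interaction between the LLM and the retrieval server can be sketched as below. This is an illustrative Search-R1-style tool loop, not the repository's actual code; `generate`, `retrieve`, and the tag names are assumptions for the sketch:

```python
import re

# The model is assumed to emit <search>query</search> when it wants to
# retrieve, and a final answer otherwise; retrieved text is fed back in an
# <information>...</information> block, as in Search-R1-style training.
SEARCH_RE = re.compile(r"<search>(.*?)</search>", re.DOTALL)

def extract_search_query(model_output: str):
    """Return the last <search> query in the output, or None if absent."""
    matches = SEARCH_RE.findall(model_output)
    return matches[-1].strip() if matches else None

def run_tool_loop(generate, retrieve, prompt: str, max_turns: int = 4) -> str:
    """Alternate generation and retrieval until no search tag is emitted.

    `generate(context) -> str` and `retrieve(query) -> str` are stand-ins
    for the LLM and the local retrieval server.
    """
    context = prompt
    for _ in range(max_turns):
        output = generate(context)
        context += output
        query = extract_search_query(output)
        if query is None:  # model produced a final answer; stop the loop
            break
        context += f"\n<information>{retrieve(query)}</information>\n"
    return context
```

Capping the loop at `max_turns` keeps a model that never stops searching from looping forever.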
If you find this work useful, please cite:

```bibtex
@article{deng2025grpo,
  title={On GRPO Collapse in Search-R1: The Lazy Likelihood-Displacement Death Spiral},
  author={Deng, Wenlong and Li, Yushu and Gong, Boying and Ren, Yi and Thrampoulidis, Christos and Li, Xiaoxiao},
  journal={arXiv preprint arXiv:2512.04220},
  year={2025}
}
```