---
license: apache-2.0
tags:
- rlhf
- llama
- GRIP
pipeline_tag: text-generation
base_model:
- meta-llama/Meta-Llama-3-8B
language:
- en
- zh
---

# Retrieval as Generation: A Unified Framework with Self-Triggered Information Planning

English | 简体中文

[ACL'26 Main Conference]

Bo Li, Mingda Wang, Gexiang Fang, Shikun Zhang, Wei Ye
Traditional RAG (Retrieval-Augmented Generation) systems treat retrieval as an external, one-shot intervention: documents are rigidly fetched before generation begins, which often fails when information needs emerge gradually during complex reasoning. Even dynamic search methods rely heavily on disconnected external controllers or heuristic rules. We believe that, much like human cognition, retrieval should be an intrinsic, generative capability: an LLM should autonomously assess its own knowledge, trigger searches, and formulate contextual follow-up queries tightly coupled with its evolving reasoning state. GRIP (Generation-guided Retrieval with Information Planning) embodies this paradigm. Under the framework of Retrieval as Generation, the model internalizes retrieval decisions directly into token-level decoding via dedicated control tokens, shifting from auxiliary multi-stage search modules to end-to-end, self-triggered information planning within a single autoregressive trajectory.

## 🌟 Key Features

- 🎯 **Token-Driven Control**: Embeds retrieval behavior directly into the model's generative policy via explicit control tokens (e.g., [RETRIEVE], [ANSWER], [INTERMEDIARY]), without external classifiers (see the sketch after this list).
- 🔄 **Self-Triggered Planning**: Autonomously decides when to fall back on internal knowledge, how to reformulate targeted queries from partial reasoning, and when to terminate the search.
- ⚖️ **Adaptive Retrieval Depth**: Dynamically adjusts the number of retrieval rounds to question complexity, avoiding redundant searches while extrapolating beyond the strict training budget.
- 🚀 **State-of-the-Art Performance**: Surpasses strong open-source RAG baselines (e.g., GainRAG, R1-Searcher) and is competitive with GPT-4o across five QA benchmarks using a much smaller backbone (LLaMA3-8B).
- 🧩 **Unified Decoding Trajectory**: Tightly couples multi-step reasoning and on-the-fly evidence integration into a single, continuous generation flow.
- 🛠️ **Optimized Training Recipe**: Structured supervised fine-tuning (SFT) over four distinct behavioral patterns, further refined with rule-based reinforcement learning (DAPO) for accurate and balanced retrieval control.
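To make the paradigm concrete, the snippet below sketches how a control-token decoding loop of this kind can be driven from Python. It is an illustration under stated assumptions, not the project's inference code (that lives in `inference/inference.sh`): the `generate_until` helper, the query-extraction heuristic, and the retrieval budget are hypothetical, and `search` is any function that returns passages for a query (one possible implementation appears in the Preparation section below).

```python
# Illustrative sketch of self-triggered retrieval during decoding.
# Assumes a recent `transformers` with `stop_strings` support and that the
# control tokens are ordinary strings in the model's vocabulary.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "WisdomShell/GRIP-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

def generate_until(prompt, stops):
    """Decode until one of `stops` is emitted; return the new text, stop included."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256,
                         stop_strings=stops, tokenizer=tokenizer)
    return tokenizer.decode(out[0][inputs.input_ids.shape[1]:],
                            skip_special_tokens=False)

def grip_answer(question, search, max_rounds=8):
    """One autoregressive trajectory with self-triggered retrieval."""
    context = question
    for _ in range(max_rounds):  # adaptive retrieval depth, capped by a budget
        segment = generate_until(context, ["[RETRIEVE]", "[ANSWER]"])
        context += segment
        if "[RETRIEVE]" in segment:
            # The model triggered a search; we assume (hypothetically) that the
            # line preceding the control token is its self-formulated query.
            pre = segment.split("[RETRIEVE]")[0].strip()
            query = pre.splitlines()[-1] if pre else question
            context += "\n" + "\n".join(search(query)) + "\n"
        else:
            break  # [ANSWER] reached (or decoding stopped): terminate the search
    return context
```

With a `search` callable such as the Elasticsearch helper shown later, the loop interleaves reasoning, retrieval triggers, and evidence injection in one continuous generation flow.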
## 🚀 Quick Start

### Installation

```bash
git clone https://github.com/WisdomShell/GRIP
cd GRIP
conda create -n GRIP python=3.9
conda activate GRIP
cd GRIP/model/Train
pip install -e .
cd ../
pip install -r requirements.txt
```

## Preparation

### Build the Wikipedia index

Download the Wikipedia dump:

```bash
mkdir wiki_data
cd wiki_data
wget https://dl.fbaipublicfiles.com/dpr/wikipedia_split/psgs_w100.tsv.gz
gzip -d psgs_w100.tsv.gz
```

Use Elasticsearch to index the Wikipedia dump:

```bash
mkdir ret
cd ret
wget -O elasticsearch-7.17.9.tar.gz https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.9-linux-x86_64.tar.gz
tar zxvf elasticsearch-7.17.9.tar.gz
rm elasticsearch-7.17.9.tar.gz
cd elasticsearch-7.17.9
nohup bin/elasticsearch &
python data_generation/index.py --data_path path/to/your/psgs_w100.tsv --index_name wiki
```
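With the index running, retrieval is a plain search call. The helper below is a minimal sketch using the official `elasticsearch` 7.x Python client; the index name `wiki` matches the `--index_name` passed to `data_generation/index.py`, while the `text` field name and host address are assumptions about the mapping that script creates. It can serve as the `search` callable in the decoding sketch above.

```python
# Minimal retrieval helper against the `wiki` index built above. The `text`
# field name is an assumption; adjust it to the mapping your index.py creates.
from elasticsearch import Elasticsearch  # pip install "elasticsearch>=7,<8"

es = Elasticsearch("http://localhost:9200")

def search(query, k=5):
    """Return the top-k passage texts matching `query` via BM25."""
    resp = es.search(
        index="wiki",
        body={"query": {"match": {"text": query}}, "size": k},
    )
    return [hit["_source"]["text"] for hit in resp["hits"]["hits"]]

print(search("Who wrote The Brothers Karamazov?"))
```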
## Checkpoints and Datasets

Below are the datasets used for SFT and RL training in our work, together with the weights of the trained GRIP model.

| Dataset | HF Dataset Repo |
|---------|-----------------|
| GRIP_SFT_Train_Data | [WisdomShell/GRIP_SFT_Data](https://huggingface.co/datasets/WisdomShell/GRIP_SFT_Data) |
| GRIP_RL_Train_Data | [WisdomShell/GRIP_RL_Data](https://huggingface.co/datasets/WisdomShell/GRIP_RL_Data) |

| Model | HF Model Repo |
|-------|---------------|
| Meta-LLaMa-3-8b-GRIP | [WisdomShell/LLaMa-3-8b-GRIP](https://huggingface.co/WisdomShell/GRIP-Llama-3-8B) |

## Generating SFT and RL Training Data

First, download the `NaturalQuestions-open`, `WebQuestions`, and `TriviaQA` training sets, extract their questions and answers, and merge them into a single JSONL file in the following format:

```json
{
    "question": "",
    "answer": ["Answer", ...]
}
```

Use the `Meta-Llama-3-8B-Instruct` model to run:

```bash
bash data_generation/first.sh
```

Write your *OpenAI token* into `use_gpt_for_data.py` and configure the path to the `C.jsonl` file. Once generation completes, the script automatically overwrites the original file.

```bash
python generation_train_data/use_gpt_for_data.py
```

Write the directory containing A, B, C, and D into `merge_dataset.py`. The script saves `SFT_Train_data` and `RL_Train_data` to the output path.

```bash
python generation_train_data/merge_dataset.py
```

## Train

### SFT

1. Data Process
   - Script: `Train/examples/data_preprocess/grip/sft.py`
   - Specify the `data_path` parameter, pointing to the data synthesized by GRIP:

     ```python
     parser.add_argument('--data_path', default='/SFT_data.jsonl')
     ```
   - Specify the name of the dataset for use during subsequent training:

     ```python
     # The processed data is stored in the "datasets" folder by default.
     parser.add_argument('--save_dir', default='datasets/GRIPSFT')
     ```
2. Train Script
   - Script: `Train/examples/sft/run_sft_llama.sh`
   - Train from the **Base version** of the model.

     ```bash
     set -x

     # NAME: the processed training data from the previous step.
     # model.partial_pretrain: train from the Base model (not Instruct).
     # trainer.default_local_dir: fine-tuned model save path.
     # trainer.logger: report to `console` or `wandb`.
     # trainer.total_epochs: number of training epochs.
     NAME=GRIPSFT

     torchrun --standalone --nnodes=1 --nproc_per_node=8 \
         -m verl.trainer.fsdp_sft_trainer \
         data.train_files=datasets/$NAME/train.parquet \
         data.val_files=datasets/$NAME/test.parquet \
         data.prompt_key=extra_info \
         data.response_key=extra_info \
         optim.lr=1e-6 \
         data.prompt_dict_keys=['question'] \
         +data.response_dict_keys=['answer'] \
         data.micro_batch_size=4 \
         model.partial_pretrain=meta-llama/Meta-Llama-3-8B \
         trainer.default_local_dir=/path/to/your/SFT_model \
         trainer.project_name=GRIPSFT \
         trainer.experiment_name=$NAME \
         trainer.logger=['console'] \
         trainer.total_epochs=8 \
         trainer.default_hdfs_dir=null $@ \
         ulysses_sequence_parallel_size=2 \
         use_remove_padding=true
     ```

### RL

1. Data Process
   - Script: `Train/examples/data_preprocess/grip/rl.py`
   - Specify the `data_path` parameter, pointing to the data synthesized by GRIP:

     ```python
     parser.add_argument('--data_path', default='/RL_data.jsonl')
     ```
   - Specify the name of the dataset for use during subsequent training:

     ```python
     # The processed data is stored in the "datasets" folder by default.
     parser.add_argument('--save_dir', default='datasets/GRIPRL')
     ```
   - Specify the `data_source` name, used during training to select the reward model:

     ```python
     parser.add_argument('--data_source', default='GRIPRL')  # Necessary
     ```
2. Train Script using `DAPO`
   - Script: `Train/recipe/dapo/dapo_4w_continue_rl_ep3_llama.sh`
   - Modify these parameters to suit RL training:

     ```bash
     ...
     # Paths
     MODEL_PATH=/GRIPSFT_LLaMa/global_step_xxx   # SFT checkpoint
     CKPTS_DIR=/RL_model                         # RL model save path
     TRAIN_FILE=datasets/GRIPRL/train.parquet    # RL dataset
     TEST_FILE=datasets/GRIPRL/test.parquet      # RL dataset
     ...
     ```
3. The reward model is implemented in `Train/verl/utils/reward_score/grip.py`.
4. After training, merge the saved model shards into Hugging Face format with the script `Train/scripts/merge.sh`.

### Local Inference using GRIP

#### Test data format

```json
{
    "question": "Test Query",
    "answer": ["Answer List", ...]
}
```

#### Multi-Turn GRIP Inference

- Main Script: `inference/inference.sh`

  ```python
  # Model save path
  parser.add_argument('--model_path', type=str, default="/path/to/your/RL_model/step_xxx")
  # Prediction output path
  parser.add_argument('--output_file', type=str, default="output/rl_step_xxx_hotpot.jsonl")
  # File to be predicted
  parser.add_argument('--input_file', type=str, default="test_data/hotpotQA.jsonl")
  ```
- The script writes predictions in the following format:

  ```json
  {
      "Question": "String",
      "prediction": ["String", ...]
  }
  ```

## Eval

```bash
python eval/eval.py \
    --references_path test_dataset.jsonl \
    --predictions_path prediction.jsonl
```

## 🤝 Contributing

We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.

## 📄 Citation

```bibtex
@article{li2026retrieval,
  title={Retrieval as Generation: A Unified Framework with Self-Triggered Information Planning},
  author={Li, Bo and Wang, Mingda and Fang, Gexiang and Zhang, Shikun and Ye, Wei},
  journal={arXiv preprint arXiv:2604.11407},
  year={2026}
}
```

## 📝 License

This project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

Special thanks to the open-source community and all contributors who made this project possible.