Update pipeline tag to text-generation
#1
by nielsr HF Staff - opened

README.md
---
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---

# Paper title and link

The model was presented in the paper [ConvSearch-R1: Enhancing Query Reformulation for Conversational Search with Reasoning via Reinforcement Learning](https://huggingface.co/papers/2505.15776).

# Paper abstract

The abstract of the paper is the following:

Conversational search systems require effective handling of context-dependent queries that often contain ambiguity, omission, and coreference. Conversational Query Reformulation (CQR) addresses this challenge by transforming these queries into self-contained forms suitable for off-the-shelf retrievers. However, existing CQR approaches suffer from two critical constraints: high dependency on costly external supervision from human annotations or large language models, and insufficient alignment between the rewriting model and downstream retrievers. We present ConvSearch-R1, the first self-driven framework that completely eliminates dependency on external rewrite supervision by leveraging reinforcement learning to optimize reformulation directly through retrieval signals. Our novel two-stage approach combines Self-Driven Policy Warm-Up to address the cold-start problem through retrieval-guided self-distillation, followed by Retrieval-Guided Reinforcement Learning with a specially designed rank-incentive reward shaping mechanism that addresses the sparsity issue in conventional retrieval metrics. Extensive experiments on TopiOCQA and QReCC datasets demonstrate that ConvSearch-R1 significantly outperforms previous state-of-the-art methods, achieving over 10% improvement on the challenging TopiOCQA dataset while using smaller 3B parameter models without any external supervision.

## Content

The base of this model is [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). It was trained on QReCC data with the ConvSearch-R1 method.

The code is available [here](https://github.com/BeastyZ/ConvSearch-R1). Please refer to the paper [here](https://arxiv.org/abs/2505.15776).

# 💻 Usage
```python
from vllm import LLM, SamplingParams

example = """Given a query and its context, you must first think about the reasoning process in the mind to decontextualize the query by resolving \
coreference and omission issues. Then, provide the user with a rewrite that retains its original meaning and is as informative as possible to help \
search engines retrieve relevant documents effectively. The reasoning process and rewrite should be enclosed within <think> </think> and <rewrite> </rewrite> tags, respectively, i.e., \
<think> reasoning process here </think>
<rewrite> rewrite here </rewrite>.

### Context Begin ###
Q1: what can you tell me about Gaelic Ireland Dress?
A1: The common clothing amongst the Gaelic Irish consisted of a woollen semi circular cloak worn over a loose-fitting, long-sleeved tunic made of linen.
Q2: did they wear any other clothing distinction?
A2: For men the léine reached to their ankles but was hitched up by means of a woven belt. The léine was hitched up to knee level.
Q3: Did they have any other distinction in their clothing?
A3: The cloak was simply thrown over both shoulders or sometimes over only one.
Q4: Any other distinction for the women?
A4: Women wore the léine at full length, rather than knee length for men.
Q5: did they wear their hair up or down?
A5: Women invariably grew their hair long and, as in other European cultures, this custom was also common among the men.
Q6: What other things did they wear?
A6: A short, tight-fitting jacket became popular later on and the Irish commonly wore hoods at that time
### Context End ###

Query: What is a leine?
Rewrite:"""

model_name_or_path = 'BeastyZ/Qwen2.5-3B-ConvSearch-R1-TopiOCQA'

sampling_params = SamplingParams(
    temperature=0.7,
    max_tokens=4096
)
llm = LLM(
    model=model_name_or_path,
    tensor_parallel_size=1,
    enforce_eager=False,
    gpu_memory_utilization=0.8,
    dtype='bfloat16',
)
conv = [
    {
        'role': 'user',
        'content': example,
    }
]
outputs = llm.chat(conv, sampling_params, add_generation_prompt=True)
for output in outputs:
    generated_text = output.outputs[0].text
    print(generated_text)

""" Reference Answer:
<think> The user is asking for a definition or explanation of the term "léine," which is mentioned in the previous context. \
The term "léine" refers to a specific type of garment worn by the Gaelic Irish. Based on the previous context, \
the léine was described as a linen tunic worn by both men and women. The query seeks a definition or explanation of this term. </think>
<rewrite> What is a léine? The léine is a traditional Irish garment, a loose-fitting, long-sleeved tunic made of linen. \
It was a common piece of clothing among the Gaelic Irish. The term "léine" is derived from the Old Irish word "lín," meaning "garment." \
It typically reached to the ankles for men and at full length for women. The léine was a significant part of the Gaelic Irish's clothing, \
often worn with a cloak and other accessories. </rewrite>
"""
```
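Only the text inside the `<rewrite>` tags needs to be passed on to the retriever. A minimal stdlib sketch for pulling it out of the generated text (the tag format follows the prompt above; `extract_rewrite` is an illustrative helper, not part of the released code):

```python
import re
from typing import Optional


def extract_rewrite(generated_text: str) -> Optional[str]:
    """Return the content of the last <rewrite>...</rewrite> block, or None if absent."""
    matches = re.findall(r"<rewrite>(.*?)</rewrite>", generated_text, flags=re.DOTALL)
    return matches[-1].strip() if matches else None


sample = (
    "<think> resolve 'leine' from the Gaelic Ireland dress context </think>\n"
    "<rewrite> What is a léine in Gaelic Irish dress? </rewrite>"
)
print(extract_rewrite(sample))  # What is a léine in Gaelic Irish dress?
```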

# 🛠️ Installation
Since the training process involves both retrieval and RL, retrieval and RL each use their own conda environment to avoid dependency conflicts.

Setup for retrieval:
```bash
git clone https://github.com/BeastyZ/ConvSearch-R1.git
cd ConvSearch-R1

conda create -n retriever python=3.10
conda activate retriever

pip3 install -r requirements_retriever.txt
```

Setup for RL using verl:
```bash
conda create -n verl python=3.9
conda activate verl

pip3 install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu124
pip3 install flash-attn --no-build-isolation

cd verl
pip3 install -e .
```

# 🌐 Server
Before training, deploy the retriever first, then paste its access address into the training script. For deployment methods, please refer to [serve.sh](./src/retrieval/serve.sh). Then test its connectivity with the following code.
```python
import requests

# URL for your local FastAPI server
url = "http://127.0.0.1:YOUR_PORT/retrieve"

test_rewrite = """Did Harry Hopkins work with other people to publicize and defend New Deal relief programs? Hopkins began important roles in New Deal programs such as Federal Emergency Relief Administration (FERA) in 1933. This period saw Hopkins as federal relief administrator. As part of his work, Hopkins supervised programs like FERA and Civil Works Administration (CWA). In his position, Hopkins was involved from 1933 to 1935 with these efforts. After joining Roosevelt's administration, Hopkins was extensively involved in relief projects. By the late 1930s, as he was fighting stomach cancer, FDR began to train Hopkins as a possible successor. Hopkins also supervised the Works Progress Administration (WPA). Throughout Hopkins' career, was he involved with publicizing and defending New Deal programs? In an effort to publicize and defend these programs, did Hopkins work with any particular groups or individuals? Some reports suggest Hopkins himself was a key public figure in promoting New Deal programs and sometimes worked with respected experts. Historically, there is evidence that Hopkins worked with and supported other government and public figures in advocating for FDR's New Deal programs. Additionally, came Hopkins' work with publicizing these programs include working with others in his capacity as federal relief administrator or in running various programs like FERA, FDR training him as potential successor, and his involvement with relief programs during the Great Depression. Did Hopkins collaborate with others to promote and protect New Deal programs, and when did these efforts start? Did Hopkins work with individuals like Edwin Campell, who helped to develop the New Deal and attempted to publicize government programs to the public as part of this role? How else did Hopkins work with these programs to ensure public support?"""
test_qid = "QReCC-Train_938_7"

# Example payload
payload = {
    # for topiocqa
    # "queries": ["what was australia's contribution to the battle of normandy?", "was the battle fought in australia?"],
    # "query_ids": ['topiocqa-train_1_1', 'topiocqa-train_1_2'],  # gold id: 5498209, 5498207

    # for qrecc
    "queries": [test_rewrite, "How did Van Halen reunite with Roth?", "Where was he born?", 'Who is the new chairman of National Scheduled Tribes Commision', 'For how long was the country devided before it became united'],
    "query_ids": [test_qid, 'QReCC-Train_2_1', 'QReCC-Train_5_1', 'QReCC-Train_10822_1', 'QReCC-Train_10810_3'],  # gold id: 54537936, 54572406, 54302883

    "topk": 100,
    "return_scores": False
}

# Send POST request
response = requests.post(url, json=payload)

# Raise an exception if the request failed
response.raise_for_status()

# Get the JSON response
retrieved_data = response.json()

print("Response from server:")
print(retrieved_data)
```
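The gold ids noted in the payload comments can be used for a quick sanity check that the server's index is wired up correctly: the gold passage should appear near the top of each ranked list. A small sketch of that check (it assumes you have already extracted a ranked list of document ids for one query from the server's response; the exact response schema depends on your server):

```python
from typing import List, Optional


def gold_rank(ranked_doc_ids: List[str], gold_id: str) -> Optional[int]:
    """1-based rank of gold_id in the ranked list, or None if it was not retrieved."""
    for rank, doc_id in enumerate(ranked_doc_ids, start=1):
        if doc_id == gold_id:
            return rank
    return None


# Toy ranked list standing in for one query's retrieval result.
ranked = ["54302883", "54537936", "54572406"]
print(gold_rank(ranked, "54537936"))  # 2
print(gold_rank(ranked, "99999999"))  # None (gold passage missing from top-k)
```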

# 🔥 Training
verl only supports data in **Parquet** format, and both the SFT and GRPO data must follow this format.

For the code to preprocess the SFT and GRPO data, please refer to the directory [data_preprocess](./verl/examples/data_preprocess).
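As a rough illustration of the expected row layout: the SFT trainer reads the prompt and answer out of an `extra_info` dict column (matching `data.prompt_key=extra_info`, `+data.prompt_dict_keys=['prompt']`, and `+data.response_dict_keys=['answer']` in the SFT script). The field contents below are placeholders, and the real preprocessing lives in the `data_preprocess` scripts; this is only a hedged sketch of the shape:

```python
import pandas as pd

# Illustrative records only: each row carries an `extra_info` dict holding
# the full prompt (instruction + conversation context) and the target answer.
records = [
    {
        "extra_info": {
            "prompt": "### Context Begin ###\n...\n### Context End ###\n\nQuery: What is a leine?\nRewrite:",
            "answer": "<think> reasoning here </think>\n<rewrite> rewrite here </rewrite>",
        }
    },
]
df = pd.DataFrame(records)
# df.to_parquet("train.parquet")  # writing Parquet requires pyarrow (or fastparquet)
print(df.shape)  # (1, 1)
```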

## SFT
For more examples about SFT, see [SFT](./verl/examples/sft).
```bash
conda activate verl

set -x

nproc_per_node=8
save_path=ckpt/sft/qrecc/llama3.2-3b-it_self
export WANDB_API_KEY="your_wandb_key"
HOME=path/to/your/work/home

torchrun --standalone --nnodes=1 --nproc_per_node=$nproc_per_node \
    -m verl.trainer.fsdp_sft_trainer \
    data.train_files=$HOME/data/qrecc/sft/train_llama3.2-3b-it_self.parquet \
    data.val_files=$HOME/data/qrecc/sft/test_llama3.2-3b-it_self.parquet \
    data.prompt_key=extra_info \
    data.response_key=extra_info \
    +data.prompt_dict_keys=['prompt'] \
    +data.response_dict_keys=['answer'] \
    data.train_batch_size=64 \
    data.micro_batch_size_per_gpu=8 \
    data.max_length=3072 \
    data.truncation=right \
    model.partial_pretrain=path/to/your/model \
    model.enable_gradient_checkpointing=True \
    trainer.default_local_dir=$save_path \
    trainer.project_name=llama3.2-3b-it_qrecc-sft \
    trainer.experiment_name=llama3.2-3b-it_self_epoch2 \
    trainer.total_epochs=2 \
    trainer.logger=['wandb'] \
    trainer.default_hdfs_dir=null
```
## GRPO
For more examples about GRPO, see [GRPO](./verl/examples/grpo_trainer).
```bash
conda activate verl

set -x

export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
export VLLM_ATTENTION_BACKEND=XFORMERS
export WANDB_API_KEY="your_wandb_key"
DATE=$(date "+%y%m%d%H%M")
HOME=path/to/your/work/home

PROJECT_NAME=verl_grpo_rewrite_qrecc
EXPERIMENT_NAME=llama3.2_3b_it_self_bs128_maxlen1024_lr1e-6_warmup100_n8_temp0.7_epoch9_r9_v3
export RETRIEVER_URL="your_retrieval_server_url"

python3 -m verl.trainer.main_ppo \
    algorithm.adv_estimator=grpo \
    data.train_files=$HOME/data/qrecc/train_v3.parquet \
    data.val_files=$HOME/data/qrecc/test_v3.parquet \
    data.train_batch_size=128 \
    data.max_prompt_length=1536 \
    data.max_response_length=1024 \
    data.filter_overlong_prompts=True \
    data.truncation='error' \
    actor_rollout_ref.model.path=path/to/your/sft/model \
    actor_rollout_ref.actor.optim.lr=1e-6 \
    actor_rollout_ref.actor.optim.lr_warmup_steps=100 \
    actor_rollout_ref.model.use_remove_padding=True \
    actor_rollout_ref.actor.ppo_mini_batch_size=128 \
    actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=8 \
    actor_rollout_ref.actor.use_kl_loss=True \
    actor_rollout_ref.actor.kl_loss_coef=0.001 \
    actor_rollout_ref.actor.kl_loss_type=low_var_kl \
    actor_rollout_ref.model.enable_gradient_checkpointing=True \
    actor_rollout_ref.actor.fsdp_config.param_offload=False \
    actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \
    actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=32 \
    actor_rollout_ref.rollout.tensor_model_parallel_size=1 \
    actor_rollout_ref.rollout.name=vllm \
    actor_rollout_ref.rollout.gpu_memory_utilization=0.6 \
    actor_rollout_ref.rollout.n=8 \
    actor_rollout_ref.rollout.temperature=0.7 \
    actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=32 \
    actor_rollout_ref.ref.fsdp_config.param_offload=True \
    algorithm.kl_ctrl.kl_coef=0.001 \
    trainer.critic_warmup=0 \
    trainer.logger=['wandb'] \
    trainer.project_name=$PROJECT_NAME \
    trainer.experiment_name=$EXPERIMENT_NAME \
    trainer.n_gpus_per_node=8 \
    trainer.nnodes=1 \
    trainer.save_freq=100 \
    trainer.test_freq=-1 \
    trainer.default_local_dir=ckpt/qrecc/dense/$EXPERIMENT_NAME \
    trainer.total_epochs=9 \
    reward_model.reward_manager=rewrite_r1 \
    custom_reward_function.path=verl/verl/utils/reward_score/rewrite_r1.py \
    retriever.topk=100 $@ 2>&1 | tee logs/${DATE}_${EXPERIMENT_NAME}.log
```
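The custom reward (`reward_model.reward_manager=rewrite_r1`) scores each rollout by where the gold passage lands in the retriever's ranking for the generated rewrite. The exact shaping is defined in `verl/verl/utils/reward_score/rewrite_r1.py`; the sketch below only illustrates the rank-incentive idea from the paper, a dense reward that decays smoothly with the gold passage's rank instead of the sparse hit/miss signal of recall@k. It is not the paper's exact formula:

```python
from typing import Optional


def rank_incentive_reward(gold_rank: Optional[int], topk: int = 100) -> float:
    """Illustrative rank-based shaping: 1.0 at rank 1, decaying toward 0.

    gold_rank is the 1-based rank of the gold passage in the retrieved list,
    or None if the passage was not retrieved at all. NOT the paper's exact
    formula, just the shape of the idea: any improvement in rank is rewarded,
    not only crossing a fixed recall@k cutoff.
    """
    if gold_rank is None or gold_rank > topk:
        return 0.0
    return 1.0 / gold_rank  # reciprocal-rank style decay


print(rank_incentive_reward(1))     # 1.0
print(rank_incentive_reward(4))     # 0.25
print(rank_incentive_reward(None))  # 0.0
```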

# 🤖 Inference
We need to collect data for evaluation. For more examples about inference, see [infer](./src/infer).
```bash
conda activate verl

python3 src/infer/infer.py \
    --model_name_or_path path/to/your/model \
    --model_name ConvSearch-R1 \
    --dp_size 8 \
    --gpus_per_dp_rank 1 \
    --temperature 0.7 \
    --input_path data/topiocqa/dev.json \
    --output_path path/to/your/output
```

# 👨‍⚖️ Evaluation
For more details about evaluation, see [eval](./src/eval).
```bash
conda activate retriever

python3 src/eval/get_metrics_using_ance.py \
    --pretrained_encoder_path path/to/your/dense/retriever \
    --test_file_path path/to/rewrite/file/generated/by/rewriter \
    --passage_embeddings_dir_path "embedding/ance_topiocqa" \
    --qrel_output_dir data/topiocqa/qrel/dense \
    --output_trec_file filename/to/save/trec/file \
    --trec_gold_qrel_file_path data/topiocqa/dev.trec \
    --n_gpu 2 \
    --test_type rewrite
```

# 🙏 Acknowledgement
We use verl for SFT and GRPO: https://github.com/volcengine/verl

# 📝 Citation
```
@misc{zhu2025convsearchr1enhancingqueryreformulation,
      title={ConvSearch-R1: Enhancing Query Reformulation for Conversational Search with Reasoning via Reinforcement Learning},
      author={Changtai Zhu and Siyin Wang and Ruijun Feng and Kai Song and Xipeng Qiu},
      year={2025},
      eprint={2505.15776},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.15776},
}
```
|