Add Batch c33e6f5c-0eae-4003-871c-34b2c891a7d9 data
This view is limited to 50 files because it contains too many changes. See raw diff.
- .gitattributes +12 -0
- 2023/Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models/8f00a7ab-26e2-4c9b-b0d4-2c286eeac786_content_list.json +0 -0
- 2023/Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models/8f00a7ab-26e2-4c9b-b0d4-2c286eeac786_model.json +0 -0
- 2023/Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models/8f00a7ab-26e2-4c9b-b0d4-2c286eeac786_origin.pdf +3 -0
- 2023/Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models/full.md +0 -0
- 2023/Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models/images.zip +3 -0
- 2023/Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models/layout.json +0 -0
- 2023/clembench_ Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents/5f941bbe-ab9b-4256-b111-052fc6d3110e_content_list.json +0 -0
- 2023/clembench_ Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents/5f941bbe-ab9b-4256-b111-052fc6d3110e_model.json +0 -0
- 2023/clembench_ Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents/5f941bbe-ab9b-4256-b111-052fc6d3110e_origin.pdf +3 -0
- 2023/clembench_ Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents/full.md +0 -0
- 2023/clembench_ Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents/images.zip +3 -0
- 2023/clembench_ Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents/layout.json +0 -0
- 2023/e-THERAPIST_ I suggest you to cultivate a mindset of positivity and nurture uplifting thoughts/6366231d-83c0-4a2b-88c4-4a612e573c2a_content_list.json +0 -0
- 2023/e-THERAPIST_ I suggest you to cultivate a mindset of positivity and nurture uplifting thoughts/6366231d-83c0-4a2b-88c4-4a612e573c2a_model.json +0 -0
- 2023/e-THERAPIST_ I suggest you to cultivate a mindset of positivity and nurture uplifting thoughts/6366231d-83c0-4a2b-88c4-4a612e573c2a_origin.pdf +3 -0
- 2023/e-THERAPIST_ I suggest you to cultivate a mindset of positivity and nurture uplifting thoughts/full.md +504 -0
- 2023/e-THERAPIST_ I suggest you to cultivate a mindset of positivity and nurture uplifting thoughts/images.zip +3 -0
- 2023/e-THERAPIST_ I suggest you to cultivate a mindset of positivity and nurture uplifting thoughts/layout.json +0 -0
- 2023/kNN-LM Does Not Improve Open-ended Text Generation/11b9a100-d4b3-4630-a237-79b0f1ed9076_content_list.json +1868 -0
- 2023/kNN-LM Does Not Improve Open-ended Text Generation/11b9a100-d4b3-4630-a237-79b0f1ed9076_model.json +0 -0
- 2023/kNN-LM Does Not Improve Open-ended Text Generation/11b9a100-d4b3-4630-a237-79b0f1ed9076_origin.pdf +3 -0
- 2023/kNN-LM Does Not Improve Open-ended Text Generation/full.md +315 -0
- 2023/kNN-LM Does Not Improve Open-ended Text Generation/images.zip +3 -0
- 2023/kNN-LM Does Not Improve Open-ended Text Generation/layout.json +0 -0
- 2023/mAggretriever_ A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval/58a42c3f-381b-4c90-a86a-76877cc92cd5_content_list.json +1149 -0
- 2023/mAggretriever_ A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval/58a42c3f-381b-4c90-a86a-76877cc92cd5_model.json +1461 -0
- 2023/mAggretriever_ A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval/58a42c3f-381b-4c90-a86a-76877cc92cd5_origin.pdf +3 -0
- 2023/mAggretriever_ A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval/full.md +208 -0
- 2023/mAggretriever_ A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval/images.zip +3 -0
- 2023/mAggretriever_ A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval/layout.json +0 -0
- 2023/mRedditSum_ A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images/63a599e4-8e55-4cc9-901f-a82194b5f82b_content_list.json +0 -0
- 2023/mRedditSum_ A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images/63a599e4-8e55-4cc9-901f-a82194b5f82b_model.json +0 -0
- 2023/mRedditSum_ A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images/63a599e4-8e55-4cc9-901f-a82194b5f82b_origin.pdf +3 -0
- 2023/mRedditSum_ A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images/full.md +439 -0
- 2023/mRedditSum_ A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images/images.zip +3 -0
- 2023/mRedditSum_ A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images/layout.json +0 -0
- 2023/q2d_ Turning Questions into Dialogs to Teach Models How to Search/0b42a6d9-afd2-4e94-b5d2-ab4bdab641ff_content_list.json +0 -0
- 2023/q2d_ Turning Questions into Dialogs to Teach Models How to Search/0b42a6d9-afd2-4e94-b5d2-ab4bdab641ff_model.json +0 -0
- 2023/q2d_ Turning Questions into Dialogs to Teach Models How to Search/0b42a6d9-afd2-4e94-b5d2-ab4bdab641ff_origin.pdf +3 -0
- 2023/q2d_ Turning Questions into Dialogs to Teach Models How to Search/full.md +656 -0
- 2023/q2d_ Turning Questions into Dialogs to Teach Models How to Search/images.zip +3 -0
- 2023/q2d_ Turning Questions into Dialogs to Teach Models How to Search/layout.json +0 -0
- 2023/trlX_ A Framework for Large Scale Reinforcement Learning from Human Feedback/d0cd9fc5-d30b-4d39-a5f4-81d0ab79e251_content_list.json +2012 -0
- 2023/trlX_ A Framework for Large Scale Reinforcement Learning from Human Feedback/d0cd9fc5-d30b-4d39-a5f4-81d0ab79e251_model.json +0 -0
- 2023/trlX_ A Framework for Large Scale Reinforcement Learning from Human Feedback/d0cd9fc5-d30b-4d39-a5f4-81d0ab79e251_origin.pdf +3 -0
- 2023/trlX_ A Framework for Large Scale Reinforcement Learning from Human Feedback/full.md +360 -0
- 2023/trlX_ A Framework for Large Scale Reinforcement Learning from Human Feedback/images.zip +3 -0
- 2023/trlX_ A Framework for Large Scale Reinforcement Learning from Human Feedback/layout.json +0 -0
- 2023/‘Don’t Get Too Technical with Me’_ A Discourse Structure-Based Framework for Automatic Science Journalism/07696dca-e5fa-4303-a9d3-d8a2fe310a97_content_list.json +0 -0
.gitattributes
CHANGED
@@ -7641,3 +7641,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2023/ZGUL_[[:space:]]Zero-shot[[:space:]]Generalization[[:space:]]to[[:space:]]Unseen[[:space:]]Languages[[:space:]]using[[:space:]]Multi-source[[:space:]]Ensembling[[:space:]]of[[:space:]]Language[[:space:]]Adapters/c66551b2-4004-4142-b72d-43d7854dd6e0_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2023/Zero-Shot[[:space:]]Multi-Label[[:space:]]Topic[[:space:]]Inference[[:space:]]with[[:space:]]Sentence[[:space:]]Encoders[[:space:]]and[[:space:]]LLMs/43d215c8-5416-46dd-b0e9-0510e863efc2_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2023/Zero-shot[[:space:]]Faithfulness[[:space:]]Evaluation[[:space:]]for[[:space:]]Text[[:space:]]Summarization[[:space:]]with[[:space:]]Foundation[[:space:]]Language[[:space:]]Model/62311cab-dc77-402b-aed6-2cde4ac63dcd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/Zero-shot[[:space:]]Sharpness-Aware[[:space:]]Quantization[[:space:]]for[[:space:]]Pre-trained[[:space:]]Language[[:space:]]Models/8f00a7ab-26e2-4c9b-b0d4-2c286eeac786_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/clembench_[[:space:]]Using[[:space:]]Game[[:space:]]Play[[:space:]]to[[:space:]]Evaluate[[:space:]]Chat-Optimized[[:space:]]Language[[:space:]]Models[[:space:]]as[[:space:]]Conversational[[:space:]]Agents/5f941bbe-ab9b-4256-b111-052fc6d3110e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/e-THERAPIST_[[:space:]]I[[:space:]]suggest[[:space:]]you[[:space:]]to[[:space:]]cultivate[[:space:]]a[[:space:]]mindset[[:space:]]of[[:space:]]positivity[[:space:]]and[[:space:]]nurture[[:space:]]uplifting[[:space:]]thoughts/6366231d-83c0-4a2b-88c4-4a612e573c2a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/kNN-LM[[:space:]]Does[[:space:]]Not[[:space:]]Improve[[:space:]]Open-ended[[:space:]]Text[[:space:]]Generation/11b9a100-d4b3-4630-a237-79b0f1ed9076_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/mAggretriever_[[:space:]]A[[:space:]]Simple[[:space:]]yet[[:space:]]Effective[[:space:]]Approach[[:space:]]to[[:space:]]Zero-Shot[[:space:]]Multilingual[[:space:]]Dense[[:space:]]Retrieval/58a42c3f-381b-4c90-a86a-76877cc92cd5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/mRedditSum_[[:space:]]A[[:space:]]Multimodal[[:space:]]Abstractive[[:space:]]Summarization[[:space:]]Dataset[[:space:]]of[[:space:]]Reddit[[:space:]]Threads[[:space:]]with[[:space:]]Images/63a599e4-8e55-4cc9-901f-a82194b5f82b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/q2d_[[:space:]]Turning[[:space:]]Questions[[:space:]]into[[:space:]]Dialogs[[:space:]]to[[:space:]]Teach[[:space:]]Models[[:space:]]How[[:space:]]to[[:space:]]Search/0b42a6d9-afd2-4e94-b5d2-ab4bdab641ff_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/trlX_[[:space:]]A[[:space:]]Framework[[:space:]]for[[:space:]]Large[[:space:]]Scale[[:space:]]Reinforcement[[:space:]]Learning[[:space:]]from[[:space:]]Human[[:space:]]Feedback/d0cd9fc5-d30b-4d39-a5f4-81d0ab79e251_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/‘Don’t[[:space:]]Get[[:space:]]Too[[:space:]]Technical[[:space:]]with[[:space:]]Me’_[[:space:]]A[[:space:]]Discourse[[:space:]]Structure-Based[[:space:]]Framework[[:space:]]for[[:space:]]Automatic[[:space:]]Science[[:space:]]Journalism/07696dca-e5fa-4303-a9d3-d8a2fe310a97_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/“Are[[:space:]]Your[[:space:]]Explanations[[:space:]]Reliable_”[[:space:]]Investigating[[:space:]]the[[:space:]]Stability[[:space:]]of[[:space:]]LIME[[:space:]]in[[:space:]]Explaining[[:space:]]Text[[:space:]]Classifiers[[:space:]]by[[:space:]]Marrying[[:space:]]XAI[[:space:]]and[[:space:]]Adversarial[[:space:]]Attack/67df4307-3460-43ff-b514-e5985bd45e76_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/“Fifty[[:space:]]Shades[[:space:]]of[[:space:]]Bias”_[[:space:]]Normative[[:space:]]Ratings[[:space:]]of[[:space:]]Gender[[:space:]]Bias[[:space:]]in[[:space:]]GPT[[:space:]]Generated[[:space:]]English[[:space:]]Text/ae5b34cd-b6cb-4035-b7e2-64177a77d360_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2023/“Mistakes[[:space:]]Help[[:space:]]Us[[:space:]]Grow”_[[:space:]]Facilitating[[:space:]]and[[:space:]]Evaluating[[:space:]]Growth[[:space:]]Mindset[[:space:]]Supportive[[:space:]]Language[[:space:]]in[[:space:]]Classrooms/aac9480c-6ce0-449d-bbcd-33b9d456c3be_origin.pdf filter=lfs diff=lfs merge=lfs -text
2023/Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models/8f00a7ab-26e2-4c9b-b0d4-2c286eeac786_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models/8f00a7ab-26e2-4c9b-b0d4-2c286eeac786_model.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models/8f00a7ab-26e2-4c9b-b0d4-2c286eeac786_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1021ff8e594440b5538ea3a44708979766d91fcc67954ad698ec5475f118c5d4
+size 3774639
2023/Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models/full.md
ADDED
The diff for this file is too large to render. See raw diff.
2023/Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e3c3d9d29cc8d183b2c7abb2911106812771b8ce25c47a6162435e3ea8e3d0d0
+size 2567210
2023/Zero-shot Sharpness-Aware Quantization for Pre-trained Language Models/layout.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/clembench_ Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents/5f941bbe-ab9b-4256-b111-052fc6d3110e_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/clembench_ Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents/5f941bbe-ab9b-4256-b111-052fc6d3110e_model.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/clembench_ Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents/5f941bbe-ab9b-4256-b111-052fc6d3110e_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0638e045d3b9bb94c20d8bb01a8ea730d1237dfee009bcc82d6f0ab0bffda4e1
+size 1090062
2023/clembench_ Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents/full.md
ADDED
The diff for this file is too large to render. See raw diff.
2023/clembench_ Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d187c7229908ce22401814fd0159d406352e187c8f95bf3a8b9c9eff500f7c8
+size 3324995
2023/clembench_ Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents/layout.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/e-THERAPIST_ I suggest you to cultivate a mindset of positivity and nurture uplifting thoughts/6366231d-83c0-4a2b-88c4-4a612e573c2a_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/e-THERAPIST_ I suggest you to cultivate a mindset of positivity and nurture uplifting thoughts/6366231d-83c0-4a2b-88c4-4a612e573c2a_model.json
ADDED
The diff for this file is too large to render. See raw diff.
2023/e-THERAPIST_ I suggest you to cultivate a mindset of positivity and nurture uplifting thoughts/6366231d-83c0-4a2b-88c4-4a612e573c2a_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e9db9328e7942708a589c809ca80301fa9eb65dde1e28531de02b78ee4aba04d
+size 4693027
2023/e-THERAPIST_ I suggest you to cultivate a mindset of positivity and nurture uplifting thoughts/full.md
ADDED
@@ -0,0 +1,504 @@
# e-THERAPIST: I suggest you to cultivate a mindset of positivity and nurture uplifting thoughts

Kshitij Mishra*, Priyanshu Priya*, Manisha Burja and Asif Ekbal

Department of Computer Science and Engineering
Indian Institute of Technology Patna, India
{kshitij_1921cs23,priyanshu_2021cs26,burja_2211cs05,asif}@iitp.ac.in

# Abstract

The shortage of therapists for mental health patients emphasizes the importance of globally accessible dialogue systems that can alleviate their issues. To deliver effective interpersonal psychotherapy, these systems must exhibit politeness and empathy when needed. However, these factors may vary with the user's gender, age, persona, and sentiment. Hence, to establish trust and provide a personalized, cordial experience, generated responses should be tailored to individual profiles and attributes. With this objective, we propose e-THERAPIST, a novel polite interpersonal psychotherapy dialogue system to address issues like depression, anxiety, schizophrenia, etc. We begin by curating a unique conversational dataset for psychotherapy, called PSYCON. It is annotated at two levels: (i) dialogue-level - including the user's profile information (gender, age, persona) and the therapist's psychotherapeutic approach; and (ii) utterance-level - encompassing the user's sentiment and the therapist's politeness and interpersonal behaviour. Then, we devise a novel reward model to adopt correct polite interpersonal behaviour and use it to train e-THERAPIST on PSYCON employing the NLPO loss. Our extensive empirical analysis validates the effectiveness of each component of the proposed e-THERAPIST, demonstrating its potential impact in psychotherapy settings<sup>1</sup>.
# 1 Introduction

The prevalence of psychological and mental disorders, such as depression, anxiety, stress and others, is increasing globally. Approximately $5\%$ of the adult population worldwide is estimated to experience depression (WHO, 2023). Consequently, the demand for counseling services continues to rise, and the existing mental health workforce is struggling to meet these needs adequately. Hence, dialogue systems possessing social influence skills like psychotherapy are crucial for expanding the application of technology to a wide range of realistic situations.

Politeness has been shown to be effective in driving a smooth and engaging conversation during psychotherapy (Budiarta et al., 2021). However, politeness, in itself, encompasses various aspects (Kitamura, 2000; Laplante and Ambady, 2003; Stephan et al., 2010), making it challenging to instil in a dialogue system. For instance, the degree of politeness in communication varies based on the gender and age of the person involved (Danescu-Niculescu-Mizil et al., 2013; Mahmud, 2013; Firdaus et al., 2022b); conversations with females/elders often include more polite expressions than those with males/youths. Politeness can also be influenced by various aspects of an individual's personality traits (Goldberg, 1993; Hirsh et al., 2010; Xafizovna and Boboqulovna, 2022).

Interpersonal behaviour advances the understanding of the interpersonal dispositions associated with psychological problems. The interpersonal circumplex (IPC) model assesses these dispositions and facilitates the comprehension of the user's social cognition, motivation, and behaviour (Locke et al., 2017). It indicates that adults and youth facing psychological issues are more likely to seek negative feedback and criticism in their interactions compared to older individuals. Further, females tend to initiate more interpersonal stressors than males, and their responses to these stressors often involve rumination and negative thinking patterns. The IPC model can also help identify patterns and relationships between personality traits, thereby fostering an understanding of how individuals interact with others (Gurtman, 2009).
Figure 1: Example demonstrating the polite and interpersonal behaviour of the therapist $(T)$ based on the user's $(U)$ gender, age, persona and sentiment. The text in green depicts the user's personal profile (gender, age and persona). The text in pink, blue and grey depict the user's sentiment, the therapist's polite and interpersonal behaviour, respectively.

**Elderly Female Openness**
T: A very good morning madam! How are you feeling today? (helpful)
U: Good morning, I'm feeling too down today. Can you help me? (negative)
T: I'm really sorry to hear that. Kindly tell me what's been going on? (understanding)
U: I just feel like nothing is going right in my life. I don't have any motivation to do anything. (negative)
T: I see, but don't worry, together we will explore the solutions that would be better for you. Could you please tell me more about what's been going on? (empathetic)

**Elderly Male Openness**
T: Good morning sir, what brings you here? (helpful)
U: I'm not feeling good these days. I seek your help. (negative)
T: Sorry to learn this. I want to know what's been going on to help you? (directing)
U: I don't feel like anything right in my life, no motivation, nothing. (negative)
T: Okay, let's explore that. Can you please explain a bit about what you are exactly experiencing these days? (helpful)

**Young Female Openness**
T: Hello dear! I'm ready to provide you with empathetic support. How are you doing today? (helpful)
U: Not so good, not so energetic. (negative)
T: That's really disappointing to hear, my dear friend. I presume you are facing some issues these days. I would love to help you if you could share your problem with me. (empathetic)
U: I have this overwhelming sense that everything is going wrong in my life. I lack the drive and enthusiasm to engage in any activities. (negative)
T: Hey girl, I understand your situation a bit. Let's dive deeper to find the strategies that would be helpful. If you agree, I would like to ask a few questions? (empathetic)

**Elderly Male Agreeableness**
T: Welcome to our session. What brings you here today? (helpful)
U: I've been feeling lethargic these days. Please help me. (negative)
T: I apologize for the odd situation you're facing. Can you please share more details about what has been happening? (helpful)
U: Yes, of course. I can share with you. (positive)
T: I appreciate your willingness to share. Would you mind elaborating on your current experiences? (understanding)
U: I've been struggling to find any positivity in my life. It's disheartening to lack motivation and feeling empty. (negative)
T: We truly understand your situation. We will do our best to help you. Let's explore the techniques to cope with your circumstances. (empathetic)

The incorporation of polite and interpersonal conduct into the dialogue agents establishes a friendly and congenial environment, thereby providing a better personalized experience to users. The user's sentiment further facilitates the generation of contextually correct polite (Firdaus et al., 2022a) and appropriate interpersonal behavioural responses. For instance, in Figure 1, for the "Young Female Openness" sample (fourth utterance), the user expresses a negative sentiment, so imposing or confrontational behaviour by the agent might make the user angrier or more dissatisfied. Hence, it should adopt helping, understanding or empathetic behaviour. Besides, a therapeutic dialogue system needs to utilize various psychotherapeutic approaches (Smith, 1982; Snyder, 1945; Thorne, 1948; Howard et al., 1986) during the ongoing conversation based on the user's personal profile (gender, age and persona) to produce effective outcomes during psychotherapy, as depicted in Figure 1. Driven by these considerations, in this work, we propose e-THERAPIST, a novel gender, age, persona and sentiment-aware polite and interpersonal dialogue system for psychotherapy.
To develop e-THERAPIST, we exploit seven newly designed rewards in a reinforcement learning (RL) setting (Casanueva et al., 2018; Mesgar et al., 2020; Lambert and von Werra, 2023). This allows e-THERAPIST to learn and improve its performance based on the rewards received through interactions with the environment. To build e-THERAPIST, we first curate a novel conversational dataset, named PSYCON, by employing well-designed prompts with manual interventions. Then, we annotate the user's utterances with sentiment, and the agent's utterances with politeness and interpersonal behavioural information. First, PSYCON is used to fine-tune a large language model (LLM) in a supervised setting. Then, we fine-tune this trained LLM in an RL framework incorporating a novel reward function. This reward function ensures an appropriate psychotherapeutic approach, politeness, and interpersonal behaviour based on the gender, age, persona and sentiment of the user, along with context adequacy and fluency in the generated responses. This reward is used in a Natural Language Policy Optimization (NLPO) loss (Ramamurthy et al., 2022) to optimize the model fine-tuned in the supervised setting. Finally, the performance of the proposed system is evaluated through both automatic metrics and human assessment.

In summary, the key contributions of our current work are as follows: (i) Introduced e-THERAPIST, a novel gender, age, persona and sentiment-aware polite and interpersonal dialogue system for psychotherapy, fine-tuned in an RL environment; (ii) Created a novel conversational dataset for psychotherapy, PSYCON, annotated at two distinct levels - (a) dialogue-level with gender, age, persona, and psychotherapeutic approach, and (b) utterance-level with sentiment, politeness, and interpersonal behaviour information; (iii) Devised a novel reward function incorporating five attribute-specific rewards and two response-quality rewards to generate engaging, fluent, and interactive responses tailored to the user's sentiment, gender, age, and persona; (iv) Conducted extensive empirical evaluation to test the efficacy of e-THERAPIST in terms of novel metrics, attribute-success and response-quality, with respect to strong baselines.
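The reward shaping described above can be sketched as a weighted combination of attribute-success and response-quality scores that yields the scalar used by the RL objective. This is a minimal illustration only: the reward names and the uniform weights below are assumptions for exposition, not the paper's actual reward terms or weighting.

```python
# Illustrative sketch: combine five attribute-specific rewards and two
# response-quality rewards into one scalar for RL fine-tuning.
# All names and weights are hypothetical, not the authors' implementation.
from typing import Dict

def combined_reward(rewards: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted sum of per-attribute rewards, each assumed to lie in [0, 1]."""
    assert set(rewards) == set(weights), "every reward needs a weight"
    return sum(weights[k] * rewards[k] for k in rewards)

# Example scores: five attribute-success rewards + two response-quality rewards.
r = {"psychotherapy": 0.9, "politeness": 0.8, "interpersonal": 0.7,
     "sentiment_fit": 0.6, "profile_fit": 0.8, "adequacy": 0.9, "fluency": 0.95}
w = {k: 1.0 / len(r) for k in r}  # uniform placeholder weights
print(round(combined_reward(r, w), 3))  # → 0.807
```

In practice each component score would come from a trained classifier or a language-model-based scorer rather than being set by hand, and the weights would be tuned on validation conversations.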
+
# 2 Related Work
|
| 115 |
+
|
| 116 |
+
The issue of mental health disorders, which is a significant concern for public health (Jacob, 2012), has been the focus of previous research, including
|
| 117 |
+
|
| 118 |
+
computational studies. While depression has received the most attention, other mental illnesses like anxiety, schizophrenia, post-traumatic stress disorder, suicide risk, and self-harm have also been examined (Uban et al., 2021). A few studies have examined the posts and blogs of users on social sites to detect depression (Yates et al., 2017; Tadesse et al., 2019), suicidal thoughts (Zirikly et al., 2019; Cao et al., 2019), and other mental health issues (Xu et al., 2020) using natural language processing (NLP) techniques. Some researchers have also worked on developing "therapybots" and creating dialogue agents to provide therapeutic support (Fitzpatrick et al., 2017).
|
| 119 |
+
|
| 120 |
+
In recent times, generating empathetic responses in psychotherapeutic conversations has grown in popularity (Morris et al., 2018; Sharma et al., 2020). In order to help mental health supporters, (Sharma et al., 2021) investigated empathy rewriting as a text generation task. The authors in (Saha et al., 2022) focused on generating sentiment-driven motivational responses in mental health support dialogues. A few studies have explored the role of politeness in improving the sense of empathy and compassion during conversation (Lucas et al., 2014; Kim et al., 2018). The agent's courteous attitude conveys a sense of concern and emotional involvement like a human companion. Further, comprehending and demonstrating proper interpersonal behaviour has proved its effectiveness in psychology to study interpersonal processes, personality traits, and relationship functioning (Kiesler and Auerbach, 2003; Pincus and Gurtman, 2006; Locke et al., 2017). Studies have suggested that good quality interpersonal relationships and behaviour are important for peoples' social functioning and mental health (Cremers et al., 2021). Lately, the authors in (Firdaus et al., 2022b) have demonstrated that inculcating politeness in the agent based on the user's personal profile, such as gender and age group makes the dialogue agent capable of identifying subtle language changes while conversing with different users. Likewise, taking the users' persona into account while generating responses will further enhance the personalization quotient in the dialogue systems (Firdaus et al., 2020; Nargund et al., 2022; Ahmad et al., 2023; Zhao et al., 2023).
Inspired by the significance of politeness and interpersonal relationship in psychological support, together with the subtle change in these aspects with users' profiles (gender, age and persona) and
sentiment information, we propose a polite and interpersonal dialogue system for psychotherapy that generates responses in accordance with the user's age, gender, persona and sentiment. To the best of our knowledge, ours is the first attempt that exploits politeness and interpersonal relationship to generate precise responses in dialogue systems for psychotherapy. Furthermore, our research pioneers the exploration of how politeness and interpersonal relationships differ across individuals of varying gender, age groups, and personas within psychotherapeutic conversations.
# 3 Dataset
To develop e-THERAPIST, we create PSYCON, a novel high-quality conversational dataset for psychotherapy. We focus on conversations considering the gender, age and persona of the user with the ultimate purpose of enhancing mental health support in a personalized way and improving the overall outlook of people facing psychological issues.
# 3.1 Dataset Creation
PSYCON comprises interactions between the therapist and the user suffering from one of the seven most common psychological issues, viz. depression, anxiety, stress, bipolar disorder, disruptive behaviour and dissocial disorders, post-traumatic stress disorder (PTSD), and schizophrenia (WHO, 2022). To minimize the requirement of expensive and scarce human resources, we create the dataset by exploiting the knowledge present in LLMs: we prompt the GPT-J model (Wang and Komatsuzaki, 2021) and follow up with manual intervention to ensure quality control. The dataset creation process involves two steps: (i) therapist-user dialogue creation, and (ii) data cleaning and quality control.
# 3.1.1 Therapist-user Dialogue Creation
We create therapist-user dialogues utilizing the following steps:
Attaining the Seed Utterance. We require seed user utterances accompanied by a specific gender, age and persona, followed by the seed utterance of the therapist adhering to a designated psychotherapeutic approach, to begin the few-shot dialogue generation using the GPT-J model. We refer to several authentic websites, such as the World Health Organization (WHO, 2022), the National Mental Health Foundation (MHF, 2023), and the National Alliance on Mental Illness (NAMI, 2023), to understand the specific characteristics of different psychological issues. We also utilize real user interactions from threads posted on different mental health-focused subreddits (e.g. $r$ -depression) to gather the real experiences of users facing psychological issues.
Further, to enhance user interaction and satisfaction, it is important for the therapist to respond politely and display appropriate interpersonal behaviour. Thus, we utilize the information gathered from the mentioned sources to create the seed utterances considering the variation in politeness and interpersonal behaviour across different genders, age groups and personas while complying with a particular psychotherapeutic approach. This is done with the help of six human experts having postgraduate qualifications in English Linguistics and proficiency in politeness concepts, interpersonal behaviour theory and psychotherapeutic approaches under the guidance of a leading psychotherapist from a government-run institution. The human experts were given proper instructions for designing the seed utterances: (a) create a sequence of seed utterances for each possible combination of gender, age, persona and psychotherapeutic approach for a particular psychological problem; (b) attempt to seek information about the problem and make advances towards solution following a designated psychotherapeutic approach according to the user profile; (c) formulate responses concerning the variation of politeness and interpersonal behaviour quotient in the responses based on the user profile; (d) ensure diversity of the seed utterances to increase user engagement and facilitate better communication; (e) frame the responses displaying positiveness and affirmativeness aiming to boost the user's morale. A few examples of seed utterances are given in Table 4 in Section A.1 of the Appendix.
Dialogue Generation. Once the seed utterances are finalized, the GPT-J model is utilized to generate new utterances. The overall dialogue generation completes in two stages. In the first stage, we manually design three prompts, each consisting of an instruction and a designed seed utterance. For each of the three prompts, 40 dialogues are created by feeding them to GPT-J. During generation, top- $k$ sampling (Fan et al., 2019; Radford et al., 2019) is employed, resulting in three candidate responses for each input utterance. To check the alignment of the candidates with the ongoing conversation's context, the contextual similarity is computed between them using BERTScore-F1 ( $BS_{F1}$ ) (Zhang et al., 2019). The candidate with $\max(BS_{F1})$ is selected as the final response. These generated dialogues are then manually rated by the same six human experts for quality on a Likert scale of 1-low, 2-moderate, and 3-high. An inter-evaluator Kappa (McHugh, 2012) agreement ratio of $76.3\%$ is observed among the experts. The prompt generating the maximum number of dialogues with a score of 3 is selected as the final prompt. A prompt example is shown in Section A.1 of the Appendix. The remaining dialogues generated using the selected prompt with a score of either 1 or 2 are manually corrected by the experts to ensure quality dialogues. In the second stage, the selected prompt along with the generated seed dialogues is given as input to the GPT-J model, which generates dialogues with $n$ utterances in an incremental way. The dialogues with utterances having $BS_{F1} < 0.4$ are filtered out for manual cross-verification by the human experts, who are requested to correct the dialogues as per the guidelines.
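The candidate-selection and filtering steps above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `bert_score_f1` is a token-overlap stand-in for the real BERTScore-F1 computation (in practice the `bert-score` library would supply it), and the function names are assumptions.

```python
# Sketch of selecting the best of several generated candidates by contextual
# similarity, and flagging low-similarity utterances for manual review.
def bert_score_f1(candidate: str, context: str) -> float:
    """Placeholder similarity: token-overlap F1 instead of real BERTScore-F1."""
    cand, ctx = set(candidate.lower().split()), set(context.lower().split())
    if not cand or not ctx:
        return 0.0
    overlap = len(cand & ctx)
    precision, recall = overlap / len(cand), overlap / len(ctx)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def select_response(candidates: list[str], context: str) -> tuple[str, float]:
    """Pick the candidate with the maximum similarity to the dialogue context."""
    scored = [(bert_score_f1(c, context), c) for c in candidates]
    best_score, best = max(scored)
    return best, best_score

def needs_manual_check(score: float, threshold: float = 0.4) -> bool:
    """Utterances scoring below the 0.4 threshold are routed to human experts."""
    return score < threshold
```

The 0.4 threshold mirrors the filtering rule stated above; the overlap-based scorer only stands in for the contextual embedding similarity.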
# 3.1.2 Data Cleaning and Quality Control
After obtaining the entire conversational dataset, each dialogue is assessed in terms of humanness, user profile consistency, and psychotherapeutic approach consistency by the same set of experts. A score of 1-low, 2-moderate, or 3-high is assigned to each utterance. After obtaining the ratings, we observe an agreement ratio (McHugh, 2012) of $82.7\%$ , $85.3\%$ and $89.6\%$ for humanness, user profile consistency and psychotherapeutic approach consistency, respectively among these experts. All dialogues having utterances with a score of 0 for any of these three aspects are discarded from the dataset. The final dataset statistics are given in Table 1.
<table><tr><td>Metrics</td><td>Train</td><td>Validation</td><td>Test</td></tr><tr><td># of Dialogues</td><td>816</td><td>102</td><td>102</td></tr><tr><td># of Utterances</td><td>19,568</td><td>2,692</td><td>2,811</td></tr><tr><td>Avg. Utterances per Dialogue</td><td>23.98</td><td>26.39</td><td>27.56</td></tr></table>
Table 1: Dataset statistics of PSYCON.
# 3.2 Dataset Annotation
To construct e-THERAPIST, we obtain the annotation for the PSYCON dataset at two distinct levels: (i) dialogue-level annotation aspects, viz. user profile information - gender: male $(m)$ and female $(f)$ ; age: young $(y)$ , adult $(a)$ , and elderly $(e)$ ; persona: openness (O), conscientiousness (C), extraversion (E), agreeableness (A) and neuroticism (N) - and psychotherapeutic approach: directive (d), non-directive (nd) and eclectic $(ec)^2$ ; (ii) utterance-level annotation aspects, viz. sentiment, politeness, and interpersonal behaviour. The annotation process involves the collaboration of three annotators, consisting of two Ph.D. holders in Linguistics and one individual with a Master's degree<sup>3</sup>. All three annotators possess excellent proficiency in English, substantial experience in labeling tasks, and a thorough understanding of sentiment, politeness, and interpersonal behavioural aspects.
# 3.2.1 Sentiment, Politeness, and Interpersonal Behaviour Annotation
The user's and therapist's utterances in PSYCON are annotated with one of the ternary sentiment labels, viz. positive, negative and neutral, and politeness labels viz. polite, moderately_polite and impolite, respectively.
The different interpersonal behaviour labels following the two-dimensional IPC model (Cremers et al., 2021) that we use in our work are: directing $(Dg)$ , helpful $(Hl)$ , understanding $(Ug)$ , compliant $(Ct)$ , imposing $(Ig)$ , confrontational $(Cl)$ , dissatisfied $(Dd)$ and uncertain $(Un)$ . This interpersonal behaviour label list has been extended with one more label, namely empathetic $(Em)$ , considering the significance of empathy in therapy (Sharma et al., 2020, 2021; Saha et al., 2022). Due to space constraints, the description of all nine interpersonal behaviour labels and the dataset annotation procedures are provided in Section A.1 of the Appendix.
# 4 e-THERAPIST
We warm-start with a pre-trained language model, GPT-2 medium (Radford et al., 2019), fine-tuned on PSYCON, denoted by $TC_{N}$ . Each conversation in $TC_{N}$ can be represented by $TC_{k} = \{t_{0},u_{0},\dots,t_{i},u_{i},\dots,t_{T - 1},u_{T - 1}\}$ ; $t_i$ and $u_{i}$ give the therapist's and user's $i^{th}$ utterances in the conversation, respectively, where $0 \leq k < N$ for $N$ conversations. For each $TC_{k}$ , the user's corresponding gender, age, and persona can be represented by $g_{k} = \{m,f\}$ , $a_{k} = \{y,a,e\}$
and $p_k = \{O, C, E, A, N\}$ , respectively. Further, for each user utterance $u_i$ in $TC_k$ , the corresponding sentiment $s_i = \{negative, neutral, positive\}$ is predicted by a sentiment classifier $SC$ . The concatenation of $u_i$ with $s_i$ is given as $u_i^s = [u_i + s_i]$ . Now, $lm_\theta$ is trained to predict the output $y_i \approx t_i$ given the input $x_i = [u_i^s + u_{i-1}^s + t_{i-1}] + [g_k + a_k + p_k]$ in a supervised learning setting. It can be given as:
$$
lm_{\theta}(TC_{N}) = \prod_{k=0}^{N} \prod_{i=0}^{T} \rho(y_{i} \mid x_{i}) \tag{1}
$$

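The assembly of the training input $x_i$ described above can be sketched as below. The separator tokens (`<sent>`, `<sep>`, `<profile>`, and so on) and the serialization order within the profile block are illustrative assumptions; the paper does not specify the exact format.

```python
# Sketch of building the SLLM input x_i = [u_i^s + u_{i-1}^s + t_{i-1}]
# + [g_k + a_k + p_k]. Separator tokens are illustrative, not from the paper.
def build_input(u_i: str, s_i: str, u_prev: str, s_prev: str,
                t_prev: str, gender: str, age: str, persona: str) -> str:
    u_i_s = f"{u_i} <sent> {s_i}"            # u_i^s = [u_i + s_i]
    u_prev_s = f"{u_prev} <sent> {s_prev}"   # previous user turn with sentiment
    context = f"{u_i_s} <sep> {u_prev_s} <sep> {t_prev}"
    profile = f"<gender> {gender} <age> {age} <persona> {persona}"
    return f"{context} <profile> {profile}"
```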
We call the trained $lm_{\theta}$ the supervised learning fine-tuned language model (SLLM). Next, to arrive at the proposed e-THERAPIST, we fine-tune SLLM using our novel reward model in an RL setting to generate user-profile- and sentiment-aware polite responses with correct interpersonal behaviour in therapeutic conversations. The architecture of the proposed system is shown in Figure 2.
# 4.1 Reward Model
For a given input $x_{i}$ , we generate a set of $n$ candidate responses $y_{i}$ using $lm_{\theta}$ and score them using a reward model. Our reward model consists of seven distinct rewards to instil correct polite interpersonal behaviour in the responses generated by $lm_{\theta}$ . The first reward, $R_{1}$ , focuses on awareness of the user's gender and age. Second, $R_{2}$ emphasizes awareness of the user's persona. $R_{3}$ drives the model to follow the correct psychotherapeutic approach. $R_{4}$ ensures appropriate politeness as per the user's sentiment. $R_{5}$ aims to adapt the interpersonal behaviour in accordance with the user's sentiment. $R_{6}$ focuses on maintaining the plausibility of the context and individual utterances. Lastly, $R_{7}$ steers the generated responses to be fluent and diverse.
All seven rewards can be categorized into two types of rewards, viz. Attribute-specific Rewards $(R_{1}, R_{2}, R_{3}, R_{4}, R_{5})$ - to reinforce different attributes of the user or therapist in generated responses, and Response-quality Rewards $(R_{6}, R_{7})$ - to enforce contextually fluent and adequate responses in a conversation.
# 4.1.1 Attribute-specific Rewards
To design each of $R_{1}, R_{2}, R_{3}, R_{4}$ and $R_{5}$ , five different RoBERTa-large (Liu et al., 2019) based classifiers are fine-tuned. For $R_{1}$ , a Gender-Age Classifier ( $GAC$ ) predicts one of six gender-age classes $GAC(t_{i}) = \{fy, fa, fe, my, ma, me\}$ ,

Figure 2: Overall architecture of the proposed system

where $fy, fa, fe, my, ma, me$ denote female-young, female-adult, female-elder, male-young, male-adult, and male-elder, respectively. For $R_{2}$ , a Persona Classifier ( $PC$ ) predicts one of the five personality traits $PC(t_{i}) = \{O, C, E, A, N\}$ . For $R_{3}$ , a psychotherapeutic approach classifier predicts the approach $CTC([t_{i} + u_{i}]) = \{d, nd, ec\}$ . For $R_{4}$ , a sentiment classifier $SC(u_{i}) = \{positive, neutral, negative\}$ predicts the sentiment of the user's utterance, and a politeness classifier $PoC(t_{i}) = \{impolite, moderately\_polite, polite\}$ predicts the politeness of the therapist's utterance. Lastly, for $R_{5}$ , an interpersonal behaviour classifier $IBC(t_{i}) = \{Dg, Hl, Ug, Ct, Ig, Cl, Dd, Un, Em\}$ predicts one of the nine interpersonal behaviour labels<sup>4</sup>. To design each reward, we track the true-class probability score from each of the classifiers. $R_{1}$ and $R_{2}$ penalize those responses which deviate from the true user profile, viz. gender, age and persona, and are computed as:
$$
R_{1} = GAC(t_{i}) - \alpha \times GAC(y_{i}) \tag{2}
$$

$$
R_{2} = PC(t_{i}) - \alpha \times PC(y_{i}) \tag{3}
$$
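An attribute-specific reward such as $R_1$ in Eq. (2) can be computed directly from the classifier's true-class probabilities. The sketch below mirrors the formula as written; the function name and the default value of $\alpha$ are illustrative assumptions, not from the paper.

```python
# Sketch of an attribute-specific reward (Eq. 2): the true-class probability
# assigned to the gold therapist utterance t_i, minus alpha times the
# true-class probability assigned to the generated candidate y_i.
def attribute_reward(prob_true_class_gold: float,
                     prob_true_class_generated: float,
                     alpha: float = 1.0) -> float:
    """R = P(true class | t_i) - alpha * P(true class | y_i)."""
    return prob_true_class_gold - alpha * prob_true_class_generated
```

The same shape serves for $R_2$ through $R_5$, with the probabilities coming from the respective classifier ($PC$, $CTC$, $PoC$, $IBC$).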
$R_{3}$ penalizes the responses deviating from the correct psychotherapeutic approach in the ongoing dialogue:
$$
R_{3} = CTC([t_{i} + u_{i}]) - \alpha \times CTC([y_{i} + u_{i}]) \tag{4}
$$
To formulate $R_4$ and $R_5$ , we penalize the responses that do not adopt the true politeness and interpersonal behaviour as per the sentiment of the user.
$$
R_{4} = PoC(t_{i} + SC(u_{i})) - \alpha \times PoC(y_{i} + SC(u_{i})) \tag{5}
$$

$$
R_{5} = IBC(t_{i} + SC(u_{i})) - \alpha \times IBC(y_{i} + SC(u_{i})) \tag{6}
$$
$\alpha$ acts as a penalization factor in all of the rewards $R_{1}, R_{2}, R_{3}, R_{4}$ , and $R_{5}^{5}$ . The final attribute-specific reward can be written as: $R_{A} = \beta_{1}R_{1} + \beta_{2}R_{2} + \beta_{3}R_{3} + \beta_{4}R_{4} + \beta_{5}R_{5}^{6}$ .
# 4.1.2 Response-quality Rewards
In $R_6$ , the candidates deviating from context and user's utterance are penalized. It is computed using BERTScore-F1 ( $BS_{F1}$ ) (Zhang et al., 2020) between (i) the true context input $x_i$ and generated $y_i$ (ii) user's utterance $u_i$ and generated $y_i$ . To avoid rewarding high similarities steering, a threshold of 1 is taken into account.
$$
R_{6} = \frac{\min\left(BS_{F1}(x_{i}, y_{i}) + BS_{F1}(u_{i}, y_{i}),\, 1\right)}{2} \tag{7}
$$
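Given precomputed similarity scores, Eq. (7) reduces to a one-liner. The sketch below assumes the two $BS_{F1}$ values are already available; the function name is illustrative.

```python
# Sketch of the response-quality reward R6 (Eq. 7): the two BERTScore-F1
# similarities are summed, capped at 1, and halved, bounding R6 in [0, 0.5].
def r6(bs_context: float, bs_user: float) -> float:
    """min(BS_F1(x_i, y_i) + BS_F1(u_i, y_i), 1) / 2"""
    return min(bs_context + bs_user, 1.0) / 2
```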
$R_{7}$ ensures fluency and non-repetitiveness in the generated responses and is computed as the sum of the reciprocal of perplexity and $BS_{F1}$ between the generated $y_{i}$ and $y_{i - 1}$ .
$$
R_{7} = \frac{1}{PPL} + BS_{F1}(y_{i}, y_{i - 1}) \tag{8}
$$
The response-quality reward can be written as: $R_{R} = \gamma_{1}R_{6} + \gamma_{2}R_{7}^{7}$ . Lastly, by combining $R_{A}$ and $R_{R}$ , we obtain our final normalized reward function $\hat{R}$ as follows:
$$
\hat {R} = \frac {\delta_ {1} R _ {A} + \delta_ {2} R _ {R}}{7} \tag {9}
$$
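Putting the pieces together, the combination of Eq. (9) with the weight values reported in the Implementation details of Section 5 can be sketched as follows; the individual reward values passed in are illustrative.

```python
# Sketch of the final normalized reward R_hat (Eq. 9), combining the
# attribute-specific rewards R1..R5 and response-quality rewards R6, R7.
# Weight values follow those reported in the Implementation paragraph.
BETAS = [0.1, 0.2, 0.2, 0.2, 0.3]    # beta_1..beta_5
GAMMAS = [0.5, 0.5]                   # gamma_1, gamma_2
DELTA_A, DELTA_R = 0.75, 0.25         # delta_1, delta_2

def final_reward(attr_rewards: list[float], qual_rewards: list[float]) -> float:
    """R_hat = (delta_1 * R_A + delta_2 * R_R) / 7"""
    r_a = sum(b * r for b, r in zip(BETAS, attr_rewards))    # R_A
    r_r = sum(g * r for g, r in zip(GAMMAS, qual_rewards))   # R_R
    return (DELTA_A * r_a + DELTA_R * r_r) / 7
```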
The final score $\hat{R}$ is utilized in the RL policy loss NLPO (Ramamurthy et al., 2022) to provide feedback, compelling the agent to generate high-quality candidates that align with the preferred outcomes. During RL fine-tuning, the RL policy is initialized with $\pi_{\theta} = lm_{\theta}$ . NLPO implements a masking policy $\pi_{\phi}$ , a copy of the current policy $\pi_{\theta}$ updated after every $m$ steps. The top- $p$ tokens in the vocabulary $V$ form the valid mask; the remaining tokens are invalid-masked, i.e., their selection probability is set to zero. Hence, this strikes a balance by incorporating more task-relevant information than the KL penalty derived from $\pi_{\theta}$ alone. Due to space restrictions, details of NLPO are given in Section A.2 of the Appendix.
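The top- $p$ valid/invalid masking idea can be illustrated on raw token probabilities. This is a deliberate simplification of NLPO's actual policy masking (which operates on the masking policy's logits during training); the function name and threshold are illustrative.

```python
# Sketch of top-p (nucleus) masking: tokens outside the top-p nucleus get
# selection probability zero, and the surviving mass is renormalized.
def top_p_mask(probs: dict[str, float], p: float = 0.9) -> dict[str, float]:
    """Zero out tokens outside the top-p nucleus and renormalize the rest."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, prob in ranked:
        if cumulative >= p:
            break  # invalid mask: remaining tokens are dropped
        kept[token] = prob
        cumulative += prob
    total = sum(kept.values())
    return {tok: kept.get(tok, 0.0) / total for tok in probs}
```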
# 5 Experiments
Baselines. We compare the performance of the proposed e-THERAPIST with eight strong baselines - LM (Radford et al., 2019); ARDM (Wu et al., 2021b): LMs trained alternately for both user and therapist; GPT-Critic (Jang et al., 2022): improving the LM through cloning of critic-guided self-generated sentences during fine-tuning; SLLM: the LM fine-tuned in a supervised setting with user profiles and sentiment; SLLM+PPO: supervised plus PPO-loss-based fine-tuning; e-THERAPIST-R: e-THERAPIST with $\hat{R} = 0$ ; e-THERAPIST-ASR: e-THERAPIST with $R_{A} = 0$ ; and e-THERAPIST-RQR: e-THERAPIST with $R_{R} = 0$ .
Implementation. We experiment with different values of $n = \{2, 3, 4, 5, 8\}$ and find that $n = 3$ yields the best performance. We use GPT-2 as the language model (LM) and a context window size of 4. We employ top-k sampling with $k = 20$ as the decoding method for all the models in our work. For RL-based models, we determine the end values of the coefficients $\beta_{1} = 0.1$ , $\beta_{2} = 0.2$ , $\beta_{3} = 0.2$ , $\beta_{4} = 0.2$ , $\beta_{5} = 0.3$ , $\gamma_{1} = 0.5$ , $\gamma_{2} = 0.5$ , $\delta_{1} = 0.75$ , and $\delta_{2} = 0.25$ empirically. Due to space restrictions, we have included detailed implementations and weight optimization in Sections A.3 and A.5 of the Appendix, respectively.
Evaluation Metrics. To evaluate the performance of the proposed system, e-THERAPIST, both automatic and human evaluations are conducted. All five classifiers, viz. GAC, PC, CTC, PoC, and IBC, are evaluated in terms of Weighted Accuracy (W-ACC) and Macro-F1. In automatic evaluation, e-THERAPIST is evaluated w.r.t. (i) attribute-success - Gender-Age consistency $(GA_{c})$ , Persona consistency $(P_{c})$ , Psychotherapeutic approach correctness $(CT_{c})$ , Politeness correctness $(Po_{c})$ and Interpersonal behaviour correctness $(IB_{c})^{8}$ , and (ii) language-quality - Perplexity (PPL), BERTScore-F1 $(BS_{F1})$ , and Response-length $(R\_LEN)$ .
For human evaluation, we ask the same six experts to evaluate the e-THERAPIST. Initially, each evaluator engages with the system 5 times, with a different set of responses each time. These 5 human-evaluated interactions are then cross-verified by psychotherapists from a government-run institution to ensure evaluation quality. Upon passing verification, the additional 90 interactions (15 per evaluator) are evaluated, resulting in a total of 120 human-evaluated dialogues. Human evaluation metrics also include $GA_{c}$ , $P_{c}$ , $CT_{c}$ , $Po_{c}$ , $IB_{c}$ as attribute-success and Fluency ( $F$ ), Consistency ( $C$ ), and Non-Repetitiveness ( $N_{R}$ ). All dialogue interactions are evaluated on an integer Likert scale of $1 - 5^{9}$ .
# 6 Results and Analysis
Automatic Evaluation. Table 2 shows that the proposed e-THERAPIST achieves better results than all eight baselines, viz. LM, ARDM, GPT-Critic, SLLM, SLLM+PPO, e-THERAPIST-R, e-THERAPIST-ASR, and e-THERAPIST-RQR, in terms of all eight metrics, viz. $GA_{c}$ , $P_{c}$ , $CT_{c}$ , $Po_{c}$ , $IB_{c}$ , $PPL$ , $BS_{F1}$ , and $R\_LEN$ . The better results of SLLM compared to LM, ARDM, and the RL-based GPT-Critic highlight the importance of the user's gender, age and persona profiles. Incorporating these attributes inherently steers the SLLM towards more interactive responses. e-THERAPIST-R performs comparably to SLLM since, in the absence of any reward, the RL policy remains $\pi_{\theta} \approx lm_{\theta}$ .
<table><tr><td>Model</td><td>GAc</td><td>Pc</td><td>CTc</td><td>PoC</td><td>IBc</td><td>PPL</td><td>BSF1</td><td>RLEN</td></tr><tr><td>LM (Radford et al., 2019)</td><td>78.4%</td><td>72.1%</td><td>79.5%</td><td>80.2%</td><td>73.6%</td><td>4.26</td><td>0.68</td><td>15.61</td></tr><tr><td>ARDM (Wu et al., 2021b)</td><td>80.4%</td><td>73.3%</td><td>80.0%</td><td>81.5%</td><td>74.2%</td><td>3.57</td><td>0.74</td><td>16.82</td></tr><tr><td>GPT-Critic (Jang et al., 2022)</td><td>80.7%</td><td>73.8%</td><td>80.6%</td><td>82.7%</td><td>73.1%</td><td>3.86</td><td>0.69</td><td>15.94</td></tr><tr><td>SLLM</td><td>85.4%</td><td>80.1%</td><td>86.3%</td><td>84.6%</td><td>77.8%</td><td>3.26</td><td>0.81</td><td>19.79</td></tr><tr><td>e-THERAPIST-R</td><td>85.1%</td><td>79.7%</td><td>86.8%</td><td>84.5%</td><td>77.5%</td><td>3.09</td><td>0.84</td><td>19.26</td></tr><tr><td>e-THERAPIST-ASR</td><td>86.1%</td><td>80.8%</td><td>87.2%</td><td>86.2%</td><td>79.8%</td><td>3.06</td><td>0.87</td><td>20.12</td></tr><tr><td>e-THERAPIST-RQR</td><td>87.5%</td><td>82.3%</td><td>88.7%</td><td>87.9%</td><td>80.5%</td><td>2.97</td><td>0.88</td><td>22.79</td></tr><tr><td>SLLM+PPO</td><td>89%</td><td>83.9%</td><td>91.5%</td><td>91.3%</td><td>82.3%</td><td>2.67</td><td>0.89</td><td>23.01</td></tr><tr><td>e-THERAPIST</td><td>90.1%</td><td>84.1%</td><td>92.6%</td><td>92.5%</td><td>83.4%</td><td>2.52</td><td>0.89</td><td>23.89</td></tr></table>

Table 2: Automatic evaluation results

<table><tr><td>Model</td><td>\(GA_c\)</td><td>\(P_c\)</td><td>\(CT_c\)</td><td>\(Po_c\)</td><td>\(IB_c\)</td><td>\(F\)</td><td>\(C\)</td><td>\(N_R\)</td></tr><tr><td>LM (Radford et al., 2019)</td><td>2.02</td><td>2.21</td><td>2.07</td><td>2.10</td><td>2.39</td><td>2.17</td><td>2.39</td><td>2.01</td></tr><tr><td>ARDM (Wu et al., 2021b)</td><td>2.88</td><td>2.74</td><td>2.77</td><td>2.81</td><td>2.80</td><td>2.79</td><td>2.83</td><td>2.29</td></tr><tr><td>GPT-Critic (Jang et al., 2022)</td><td>2.98</td><td>2.83</td><td>2.81</td><td>2.90</td><td>2.91</td><td>2.86</td><td>2.91</td><td>2.34</td></tr><tr><td>SLLM</td><td>3.50</td><td>3.67</td><td>3.80</td><td>3.75</td><td>3.41</td><td>3.89</td><td>3.44</td><td>3.21</td></tr><tr><td>e-THERAPIST-R</td><td>3.53</td><td>3.45</td><td>3.86</td><td>3.84</td><td>3.50</td><td>4.11</td><td>4.05</td><td>3.72</td></tr><tr><td>e-THERAPIST-ASR</td><td>3.75</td><td>3.70</td><td>4.01</td><td>3.91</td><td>3.72</td><td>4.32</td><td>4.27</td><td>3.82</td></tr><tr><td>e-THERAPIST-RQR</td><td>3.97</td><td>3.91</td><td>4.12</td><td>4.09</td><td>3.89</td><td>4.45</td><td>4.33</td><td>3.97</td></tr><tr><td>SLLM+PPO</td><td>4.10</td><td>4.06</td><td>4.38</td><td>4.30</td><td>4.01</td><td>4.55</td><td>4.50</td><td>4.05</td></tr><tr><td>e-THERAPIST</td><td>4.21</td><td>4.10</td><td>4.42</td><td>4.35</td><td>4.02</td><td>4.62</td><td>4.60</td><td>4.08</td></tr></table>

Table 3: Human evaluation results.

It can also be observed that in the absence of only the attribute-specific rewards, e-THERAPIST-ASR sees a significant improvement in $PPL$ , $BS_{F1}$ , and $R\_LEN$ scores, but minimal change in the attribute-specific metrics. This supports the use of the response-quality rewards, due to which the model tries to engage the user with longer and contextually adequate responses. Similarly, the increase in the attribute-specific metrics in the case of e-THERAPIST-RQR supports the requirement of the attribute-specific rewards as well. It can also be inferred from both the e-THERAPIST-ASR and e-THERAPIST-RQR results that the presence of each type of reward affects the other metrics positively, though by a minimal margin. This means that all rewards interact with each other and help in achieving user-aware polite interpersonal therapy. SLLM+PPO is the same as e-THERAPIST except for the loss. The margin between the two models is small, but it still reflects that NLPO drives the model to generate longer responses with the correct incorporation of politeness, interpersonal behaviour and psychotherapeutic approach. The automatic evaluations support our hypothesis that knowledge of the user's profile and reward modelling can play a significant role in building a better psychotherapy dialogue system.
Human Evaluation. The results of human evaluation for e-THERAPIST are in sync with the automatic evaluation metrics, as shown in Table 3. For all the metrics, viz. $GA_{c}$ , $P_{c}$ , $CT_{c}$ , $Po_{c}$ , $IB_{c}$ , $F$ , $C$ , and $N_{R}$ , e-THERAPIST achieves better scores than all eight baselines. Here as well, SLLM and e-THERAPIST-R beat LM, ARDM, and GPT-Critic. Further, e-THERAPIST-ASR shows only a marginal increase over SLLM and e-THERAPIST-R. This implies that the response-quality rewards alone are not enough to ensure user-profile- and sentiment-aware therapeutic responses. Notably, the difference between the performance of e-THERAPIST-RQR and that of SLLM+PPO is small. Similarly, a minimal performance difference can be seen between SLLM+PPO and e-THERAPIST as well. This leads to the argument that the attribute-specific rewards play a crucial role in adopting the correct psychotherapeutic approach and interpersonal behaviour in the generated responses while imbuing politeness in them. It should also be pointed out that e-THERAPIST achieves much better scores w.r.t. all metrics compared to e-THERAPIST-RQR. This highlights the importance of the response-quality rewards as well. Thus, it can be concluded that a robust psychotherapeutic dialogue system can be built by striking a balance between attribute-specific and response-quality rewards. Employing both, the user can be engaged in interpersonal therapeutic communication while also maintaining a cordial environment.
# 7 Conclusion
This work introduces e-THERAPIST, a novel polite interpersonal dialogue system for psychotherapy. First, considering the user profile information (gender, age, persona) and the therapist's psychotherapeutic approach, a new conversational dataset, PSYCON, is created by prompting GPT-J with manual interventions. Further, it is annotated with the user's sentiment and the therapist's politeness and interpersonal behaviour labels. Then, a psychotherapeutic dialogue system, e-THERAPIST, is developed in an RL framework. To ensure gender-, age-, persona- and sentiment-aware polite interpersonal psychotherapeutic responses, a novel reward function consisting of seven rewards is designed. Our results establish the need for each of the rewards to ensure polite and interpersonal psychotherapeutic responses tailored to the user's profile and attributes, eventually contributing to improved therapy experiences. In future, we would like to incorporate external knowledge to facilitate more factual conversations.
# Limitations
e-THERAPIST comes with some limitations. Firstly, the training process requires a substantial GPU memory capacity, i.e., 40 GB. Another challenge arises from the optimization of reward weights, which can significantly extend the training and validation time. To address this, heuristic approaches are used to select specific combinations of reward weights. It is also observed that, in the case of continuous short or direct responses (e.g., 'Yes', 'I don't know', 'No', '2', 'Yeah'), the model initially attempts to provide therapy by inquiring about the user's issue; however, after a few turns, the model may generate repetitive or inconsistent responses. This is because the training data primarily consists of interactive dialogues with longer utterances, leading to confusion in handling concise inputs. It is also observed that continuous out-of-context responses may drive the model towards the generation of inadequate responses, as large language models inherently hold vast amounts of information in their memories.
To overcome these limitations, future research could focus on refining and enhancing psychotherapeutic dialogue systems as well as the underlying language models. This could involve addressing memory requirements, reducing training time, improving the handling of short, direct and out-of-context responses, and ensuring the generation of relevant inquiries to enhance user satisfaction.
# Ethics Statement
Developing a psychotherapeutic dialogue system does take a step towards Responsible AI. However, the same methodology can also be used to develop toxic or irresponsible natural language generation models. Hence, we acknowledge the significant importance of addressing ethical concerns when developing computational models for psychotherapeutic applications. Given the sensitive nature of the subject, we prioritize safeguarding the privacy of users' personal data. It is important to clarify that this paper does not make any clinical diagnostic assertions. We mainly focus on enriching the interactivity of such dialogue systems for better user engagement and therapy dialogues. The dataset will be made available for research purposes with proper permissions.
# Acknowledgements
Kshitij Mishra acknowledges the support received from the Indian government's "Prime Minister's Research Fellowship (PMRF) Program". Priyanshu Priya acknowledges the financial support provided by the Department of Science and Technology, Ministry of Science and Technology, Government of India, through the Innovation in Science Pursuit for Inspired Research (INSPIRE) Fellowship.
# References
Zishan Ahmad, Kshitij Mishra, Asif Ekbal, and Pushpak Bhattacharyya. 2023. Rptcs: A reinforced persona-aware topic-guiding conversational system. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 3464-3476.
Tri Budiarta, Joko Nurkamto, Sumarlam Sumarlam, and Dwi Purnanto. 2021. Expressive speech acts of politeness in the counselling process. Langkawi: Journal of The Association for Arabic and English, 7(2):212-226.
Lei Cao, Huijun Zhang, Ling Feng, Zihan Wei, Xin Wang, Ningyun Li, and Xiaohao He. 2019. Latent suicide risk detection on microblog via suicide-oriented word embeddings and layered attention. arXiv preprint arXiv:1910.12038.
Inigo Casanueva, Paweł Budzianowski, Pei-Hao Su, Stefan Ultes, Lina Rojas-Barahona, Bo-Hsiang Tseng, and Milica Gašić. 2018. Feudal reinforcement learning for dialogue management in large domains. arXiv preprint arXiv:1803.03232.
Jolien Cremers, Helena JM Pennings, Tim Mainhard, and Irene Klugkist. 2021. Circular modelling of circumplex measurements for interpersonal behavior. Assessment, 28(2):585-600.
Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. A computational approach to politeness with application to social factors. arXiv preprint arXiv:1306.6078.
Angela Fan, Mike Lewis, and Yann Dauphin. 2019. Strategies for structuring story generation. arXiv preprint arXiv:1902.01109.
Mauajama Firdaus, Asif Ekbal, and Pushpak Bhattacharyya. 2022a. PoliSe: Reinforcing politeness using user sentiment for customer care response generation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 6165-6175, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Mauajama Firdaus, Arunav Shandilya, Asif Ekbal, and Pushpak Bhattacharyya. 2022b. Being polite: Modeling politeness variation in a personalized dialog agent. IEEE Transactions on Computational Social Systems.
Mauajama Firdaus, Naveen Thangavelu, Asif Ekbal, and Pushpak Bhattacharyya. 2020. Persona aware response generation with emotions. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.
Kathleen Kara Fitzpatrick, Alison Darcy, and Molly Vierhile. 2017. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (woebot): a randomized controlled trial. JMIR mental health, 4(2):e7785.
Lewis R Goldberg. 1993. The structure of phenotypic personality traits. American psychologist, 48(1):26.
Michael B Gurtman. 2009. Exploring personality with the interpersonal circumplex. Social and personality psychology compass, 3(4):601-619.
Jacob B Hirsh, Colin G DeYoung, Xiaowen Xu, and Jordan B Peterson. 2010. Compassionate liberals and polite conservatives: Associations of agreeableness with political ideology and moral values. Personality and Social Psychology Bulletin, 36(5):655-664.
George S Howard, Don W Nance, and Pennie Myers. 1986. Adaptive counseling and therapy: An integrative, eclectic model. The Counseling Psychologist, 14(3):363-442.
KS Jacob. 2012. Depression: a major public health problem in need of a multi-sectoral response. The Indian journal of medical research, 136(4):537.
Youngsoo Jang, Jongmin Lee, and Kee-Eung Kim. 2022. GPT-critic: Offline reinforcement learning for end-to-end task-oriented dialogue systems. In International Conference on Learning Representations.
Donald J Kiesler and Stephen M Auerbach. 2003. Integrating measurement of control and affiliation in studies of physician-patient interaction: the interpersonal circumplex. Social Science & Medicine, 57(9):1707-1722.
Junhan Kim, Yoojung Kim, Byungjoon Kim, Sukyung Yun, Minjoon Kim, and Joongseok Lee. 2018. Can a machine tend to teenagers' emotional needs? a study with conversational agents. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, pages 1-6.
Noriko Kitamura. 2000. Adapting brown and levinson's 'politeness' theory to the analysis of casual conversation. In Proceedings of ALS2k, the 2000 Conference of the Australian Linguistic Society, pages 1-8.
Nathan Lambert and L von Werra. 2023. Illustrating reinforcement learning from human feedback (rlhf).
Debi Laplante and Nalini Ambady. 2003. On how things are said: Voice tone, voice intensity, verbal content, and perceptions of politeness. Journal of language and social psychology, 22(4):434-441.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
Kenneth D Locke, Liliane Sayegh, J Kim Penberthy, Charlotte Weber, Katherine Haentjens, and Gustavo Turecki. 2017. Interpersonal circumplex profiles of persistent depression: Goals, self-efficacy, problems, and effects of group therapy. Journal of Clinical Psychology, 73(6):595-611.
Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In International Conference on Learning Representations.
Gale M Lucas, Jonathan Gratch, Aisha King, and Louis Philippe Morency. 2014. It's only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior, 37:94-100.
Murni Mahmud. 2013. The roles of social status, age, gender, familiarity, and situation in being polite for bugis society. Asian Social Science, 9(5):58-72.
Mary L McHugh. 2012. Interrater reliability: the kappa statistic. Biochemia medica, 22(3):276-282.
Mohsen Mesgar, Edwin Simpson, and Iryna Gurevych. 2020. Improving factual consistency between a response and persona facts. arXiv preprint arXiv:2005.00036.
Mental Health Foundation MHF. 2023. Good mental health for all.
Robert R Morris, Kareem Kouddous, Rohan Kshirsagar, and Stephen M Schueller. 2018. Towards an artificially empathic conversational agent for mental health applications: system design and user perceptions. Journal of medical Internet research, 20(6):e10148.
National Alliance on Mental Illness NAMI. 2023. Mental health conditions.
Abhijit Nargund, Sandeep Pandey, and Jina Ham. 2022. Par: Persona aware response in conversational systems. In Proceedings of the 19th International Conference on Natural Language Processing (ICON), pages 50-54.
Aaron L Pincus and Michael B Gurtman. 2006. Interpersonal theory and the interpersonal circumplex: Evolving perspectives on normal and abnormal personality.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack Hessel, Rafet Sifa, Christian Bauckhage, Hannaneh Hajishirzi, and Yejin Choi. 2022. Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization.
Tulika Saha, Saichethan Reddy, Anindya Das, Sriparna Saha, and Pushpak Bhattacharyya. 2022. A shoulder to cry on: Towards a motivational virtual assistant for assuaging mental agony. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2436-2449.
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. 2015. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438.
Ashish Sharma, Inna W Lin, Adam S Miner, David C Atkins, and Tim Althoff. 2021. Towards facilitating empathic conversations in online mental health support: A reinforcement learning approach. In Proceedings of the Web Conference 2021, pages 194-205.
Ashish Sharma, Adam Miner, David Atkins, and Tim Althoff. 2020. A computational approach to understanding empathy expressed in text-based mental health support. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5263-5276.
Darrell Smith. 1982. Trends in counseling and psychotherapy. American Psychologist, 37(7):802.
William U Snyder. 1945. An investigation of the nature of non-directive psychotherapy. The Journal of General Psychology, 33(2):193-223.
Elena Stephan, Nira Liberman, and Yaacov Trope. 2010. Politeness and psychological distance: a construal level perspective. Journal of personality and social psychology, 98(2):268.
Michael M Tadesse, Hongfei Lin, Bo Xu, and Liang Yang. 2019. Detection of depression-related posts in reddit social media forum. IEEE Access, 7:44883-44893.
Frederick C Thorne. 1948. Principles of directive counseling and psychotherapy. American Psychologist, 3(5):160.
Ana Sabina Uban, Berta Chulvi, and Paolo Rosso. 2021. Understanding patterns of anorexia manifestations in social media data with deep learning. In Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access, pages 224-236.
Ben Wang and Aran Komatsuzaki. 2021. Gpt-j-6b: A 6 billion parameter autoregressive language model.
World Health Organization WHO. 2022. Mental disorders.
World Health Organization WHO. 2023. Depressive disorder (depression).
Jeff Wu, Long Ouyang, Daniel M Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, and Paul Christiano. 2021a. Recursively summarizing books with human feedback. arXiv preprint arXiv:2109.10862.
Qingyang Wu, Yichi Zhang, Yu Li, and Zhou Yu. 2021b. Alternating recurrent dialog model with large-scale pre-trained language models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1292-1301.
Ruziyeva Nilufar Xafizovna and Xolova Madina Boboqulovna. 2022. Politeness in literary works: An overview. Eurasian Research Bulletin, 7:200-206.
Zhentao Xu, Verónica Pérez-Rosas, and Rada Mihalcea. 2020. Inferring social media users' mental health status from multimodal information. In Proceedings of the 12th language resources and evaluation conference, pages 6292-6299.
Andrew Yates, Arman Cohan, and Nazli Goharian. 2017. Depression and self-harm risk assessment in online forums. arXiv preprint arXiv:1709.01848.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations.
Sirui Zhao, Hongyu Jiang, Hanqing Tao, Rui Zha, Kun Zhang, Tong Xu, and Enhong Chen. 2023. Pedm: A multi-task learning model for persona-aware emoji-embedded dialogue generation. ACM Transactions on Multimedia Computing, Communications and Applications, 19(3s):1-21.
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593.
Ayah Zirikly, Philip Resnik, Ozlem Uzuner, and Kristy Hollingshead. 2019. Clpsych 2019 shared task: Predicting the degree of suicide risk in reddit posts. NAACL HLT 2019, page 24.
# A Appendix
# A.1 Dataset
We provide the details of the dataset in this section.
# A.1.1 Prompt Design

Figure 3: Example of one-shot version of prompt.
The proposed PSYCON dataset comprises conversations focusing on seven psychological issues and different genders, ages, personas, and psychotherapeutic approaches. To create synthetic data focusing on these aspects, we prompt the GPT-J model using a four-shot prompt. A sample of the one-shot version of the prompt is provided in Figure 3. We provide the psychological issue, gender, age, persona, psychotherapeutic approach, dialogue flow, and the seed utterances for which the next utterance needs to be generated. Table 4 depicts a few examples of seed utterances. The four-shot prompt adheres to a similar pattern with four examples in the input sequence. A few turns of a dialogue generated using this method are given in Table 6.
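The assembly of such a few-shot prompt can be sketched as follows. This is a minimal illustration: the field names, the `###` separator, and the template wording are assumptions, not the paper's verbatim prompt.

```python
# Sketch: build a k-shot prompt for GPT-J by concatenating solved examples
# with the target context. Field names and the "###" separator are assumed.
def build_prompt(examples, target):
    blocks = []
    for ex in examples + [target]:
        header = (
            f"Psychological issue: {ex['issue']}\n"
            f"Gender: {ex['gender']} | Age: {ex['age']}\n"
            f"Persona: {ex['persona']}\n"
            f"Psychotherapeutic approach: {ex['approach']}\n"
        )
        blocks.append(header + "\n".join(ex["seed_utterances"]))
    # The model is asked to continue the last (target) dialogue.
    return "\n\n###\n\n".join(blocks) + "\nTherapist:"

example = {
    "issue": "PTSD", "gender": "Male", "age": "Young",
    "persona": "Openness", "approach": "Eclectic",
    "seed_utterances": ["User: I've been struggling with PTSD lately."],
}
prompt = build_prompt([example], example)
```

In the four-shot setting, `examples` would simply hold four solved instances instead of one.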
# A.1.2 Dataset Annotation Details
The entire annotation process for sentiment, politeness, and interpersonal behaviour labels proceeds in two stages to reduce manual effort. First, we randomly sample 340 dialogues from the dataset and ask all three annotators to manually annotate the user's utterances with the required sentiment label and the therapist's utterances with the relevant politeness and interpersonal behaviour labels. Second, three pre-trained RoBERTa-large (Liu et al., 2019) models are fine-tuned on the manually annotated samples to build sentiment, politeness, and interpersonal behaviour classifiers. The remaining utterances are then passed through the respective classifier to predict the corresponding label. Lastly, the same annotators are asked to cross-verify the predicted labels and correct them if needed. Reliable multi-rater Kappa (McHugh, 2012) agreement ratios of $85.2\%$, $77.1\%$, and $73.6\%$ are observed in the first stage, and $89.6\%$, $84.6\%$, and $80.3\%$ in the second stage, for sentiment, politeness, and interpersonal behaviour labels, respectively.
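The multi-rater agreement above can be computed with a kappa statistic; the following is a minimal Fleiss' kappa sketch (the paper cites McHugh (2012) for kappa but does not name the exact multi-rater variant, so the Fleiss formulation is an assumption).

```python
# Fleiss' kappa for n raters, N items, k categories.
def fleiss_kappa(counts):
    """counts[i][j] = number of raters assigning item i to category j."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Per-item agreement P_i.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_i) / n_items
    # Chance agreement from the marginal category proportions p_j.
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_j = [t / (n_items * n_raters) for t in totals]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Three raters, two categories, unanimous (but category-balanced) labels.
print(fleiss_kappa([[3, 0], [0, 3]]))  # → 1.0
```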
A sample conversation with the annotation for sentiment, politeness, and interpersonal behaviour is shown in Figure 4. The description of the different interpersonal behaviour labels, along with examples, is provided in Table 5.
# A.2 NLPO
The RL-based parameterized control policy for e-THERAPIST can be defined as:
$$
\pi_{\theta}: S \rightarrow \Delta(A) \tag{10}
$$
Here, $\pi_{\theta}$ is a stochastic policy that selects an action $a \in A$ in a given state $s \in S$, with the goal of maximizing the long-term discounted reward $R$ over a trajectory:
$$
E_{\pi}\left[\sum_{t=0}^{T} \gamma^{t} R(s_{t}, a_{t})\right] \tag{11}
$$
During RL fine-tuning, we initialize $\pi_{\theta} = lm_{\theta}$. The value function $V^{\pi}$ and Q-value function $Q^{\pi}$ for policy $\pi_{\theta}$ and estimated reward $R$ are computed as:
$$
V_{t}^{\pi} = E_{a_{t} \sim \pi}\left[\sum_{\tau = t}^{T} \gamma^{\tau - t} R(s_{\tau}, a_{\tau}, y)\right] \tag{12}
$$
$$
Q_{t}^{\pi}(s_{t}, a_{t}) = R(s_{t}, a_{t}, y) + \gamma E_{s_{t+1} \sim P}\left[V_{t+1}^{\pi}(s_{t+1})\right] \tag{13}
$$
Considering $V^{\pi}$ and Q-value function $Q^{\pi}$ , we compute the advantage estimate $A^{\pi}$ as:
$$
A_{t}^{\pi}(s, a) = Q_{t}^{\pi}(s, a) - V_{t}^{\pi}(s) \tag{14}
$$
To stabilize the training, we approximate the advantage using Generalized Advantage Estimation (Schulman et al., 2015). To address the sparsity of sequence-level rewards in the environment, a regularization (Wu et al., 2021a) is applied, which incorporates a token-level KL penalty into the reward function to discourage the model from deviating too far from the original language model $lm_{\theta}$. It can be formalized as:
$$
\hat{R}(s_{t}, a_{t}, y) = R(s_{t}, a_{t}, y) - \delta \, KL\left(\pi_{\theta}(a_{t} \mid s_{t}) \,\|\, lm_{\theta}(a_{t} \mid s_{t})\right) \tag{15}
$$
Here, $\hat{R}$ represents the KL-regularized reward, $y$ denotes the ground-truth predictions, and $KL(\pi_{\theta}(a_t|s_t)\,\|\,lm_{\theta}(a_t|s_t))$ is the KL divergence between $\pi_{\theta}(a_t|s_t)$ and $lm_{\theta}(a_t|s_t)$. The KL coefficient $\delta$ is dynamically adjusted following Ziegler et al. (2019). We then update the RL policy $\pi_{\theta}$ by maximizing the PPO-Clip objective:
$$
\pi_{\theta_{m+1}} = \mathrm{argmax}_{\theta} \frac{1}{|D_{m}| T} \sum_{\tau \in D_{m}} \sum_{t=0}^{T} \min\left(r_{t}(\theta) A^{\pi_{\theta_{m}}}(\tau_{t}),\ \mathrm{clip}\left(r_{t}(\theta), 1-\epsilon, 1+\epsilon\right) A^{\pi_{\theta_{m}}}(\tau_{t})\right)
$$
where $r_t(\theta) = \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_m}(a_t|s_t)}$. Next, the value function is updated as follows:
$$
V_{\phi_{m+1}} = \operatorname{argmin}_{\phi} \frac{1}{|D_{m}|} \sum_{\tau \in D_{m}} \sum_{t=0}^{T} \left(V_{\phi}(s_{t}) - \hat{R}_{t}\right)^{2} \tag{16}
$$
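The clipped policy objective and the value regression above can be sketched per token as follows; this is a didactic stand-in using the paper's clip_ratio of 0.2, not the actual training code.

```python
# min(r*A, clip(r, 1-eps, 1+eps)*A): the per-token PPO-Clip surrogate.
def ppo_clip_term(ratio, advantage, eps=0.2):
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

# Mean squared error between V_phi(s_t) and the regularized returns R_hat_t.
def value_loss(values, returns):
    return sum((v - r) ** 2 for v, r in zip(values, returns)) / len(values)

# A large ratio with positive advantage is capped at (1 + eps) * advantage,
# so a single update cannot move the policy too far from the old one.
print(ppo_clip_term(1.5, 1.0))
print(ppo_clip_term(0.5, -1.0))
```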
For every $m$ iterations, the parameterized masked policy is also updated as follows:
$$
\pi_{\psi}^{n+1}\left(\cdot \mid \cdot,\ \pi_{\theta_{m+1}}\right) \tag{17}
$$
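The token-level KL penalty of Eq. (15) can be sketched as follows. This is a minimal illustration over full categorical distributions (practical implementations often use the sampled-token log-ratio instead), and $\delta = 0.1$ is an assumed value, not the paper's.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two categorical distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def shaped_reward(r, policy_probs, lm_probs, delta=0.1):
    """R_hat = R - delta * KL(pi_theta || lm_theta) at one timestep (Eq. 15)."""
    return r - delta * kl_divergence(policy_probs, lm_probs)

# When the fine-tuned policy matches the pretrained LM, no penalty is applied.
print(shaped_reward(1.0, [0.5, 0.5], [0.5, 0.5]))  # → 1.0
```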
# A.3 Implementation Details
All classifiers are fine-tuned using RoBERTa-large (Liu et al., 2019). The language models GPT-2-medium (Radford et al., 2019), ARDM (Wu et al., 2021b), and SLLM are trained with a cross-entropy loss. In the supervised learning setting, the AdamW optimizer (Loshchilov and Hutter, 2018) is employed with a learning rate of $\alpha = 2e^{-05}$ and seed_value $= 10$.
For SLLM+PPO and e-THERAPIST, training is conducted with batch_size = 8, seed_value = 10, human_reward = 10, max_candidate_length = 50, clip_ratio = 0.2, discount_factor = 0.95, number_of_steps = 32000, steps_per_update = 640, and the AdamW optimizer (Loshchilov and Hutter, 2018) with a learning rate of $\alpha = 2e^{-05}$, $\varepsilon = 0.2$, and epochs = 20.
<table><tr><td></td><td>Seed Utterances</td></tr><tr><td>Psychological Issue: PTSD</td><td>User: I've been struggling with PTSD lately. It's been really tough to cope with.</td></tr><tr><td>Gender: Male</td><td>Therapist: As a therapist, I understand that PTSD can have a significant impact on your life.</td></tr><tr><td>Age: Young</td><td>I'm here to help you navigate through it.</td></tr><tr><td>Persona: Openness</td><td rowspan="2">User: I'm a young male dealing with PTSD, and I'm looking for effective techniques to manage it. Can you suggest any approaches?</td></tr><tr><td>Psychotherapeutic Approach: Eclectic</td></tr><tr><td>Psychological Issue: Depression</td><td>User: I've been feeling really down lately. I think I might be experiencing depression.</td></tr><tr><td>Gender: Female</td><td>Therapist: I understand how challenging depression can be.</td></tr><tr><td>Age: Adult</td><td>I'm here to provide support and guidance.</td></tr><tr><td>Persona: Conscientiousness</td><td rowspan="2">User: I'm an adult female struggling with depression, and I've heard about the directive technique. Can you tell me more about it and how it could help me?</td></tr><tr><td>Psychotherapeutic Approach: Directive</td></tr><tr><td>Psychological Issue: Stress</td><td>User: I think a lot of my stress comes from feeling overwhelmed by all my responsibilities.</td></tr><tr><td>Gender: Female</td><td>Therapist: I understand. Stress can affect us in various ways.</td></tr><tr><td>Age: Elder</td><td>Can you tell me more about what's been going on?</td></tr><tr><td>Persona: Extraversion</td><td rowspan="2">User: I've always been more on the extraverted side, so being isolated during the pandemic has been particularly challenging for me. I'm used to being active and engaged with others.</td></tr><tr><td>Psychotherapeutic Approach: Directive</td></tr><tr><td>Psychological Issue: PTSD</td><td>User: It's been going on for several months now. 
The intensity varies, but there are moments when I feel completely overwhelmed by these memories.</td></tr><tr><td>Gender: Male</td><td>Therapist: I can understand how distressing and disruptive these symptoms can be.</td></tr><tr><td>Age: Adult</td><td>Have you sought any professional help or support for your PTSD symptoms?</td></tr><tr><td>Persona: Agreeableness</td><td rowspan="2">User: No, I haven't. I've been hesitant to reach out because I've always tried to handle things on my own.</td></tr><tr><td>Psychotherapeutic Approach: Non-directive</td></tr><tr><td>Psychological Issue: Anxiety</td><td>User: Lately, I've been feeling overwhelmed by anxiety.</td></tr><tr><td>Gender: Male</td><td>Therapist: Thank you for sharing that with me. Anxiety can be quite overwhelming.</td></tr><tr><td>Age: Young</td><td>Can you describe what you're specifically experiencing?</td></tr><tr><td>Persona: Neuroticism</td><td rowspan="2">User: I've always been someone who tends to be more neurotic, so this anxiety is really overwhelming.</td></tr><tr><td>Psychotherapeutic Approach: Non-directive</td></tr></table>
Table 4: A few examples of seed utterances.
<table><tr><td>Interpersonal behaviour</td><td>Definition</td><td>Example</td></tr><tr><td>Directing</td><td>Provides clear instructions, assistance and guidance to the users.</td><td>Okay, well let's start with something simple. Maybe you can try going for a walk outside every day.</td></tr><tr><td>Helpful</td><td>Active listening, providing reassurance and emotional support to the users based on their circumstances in order to motivate them.</td><td>One thing you can try is practising self-compassion to fight depression. That means treating yourself with the same kind-ness and understanding that you would offer to a friend.</td></tr><tr><td>Empathetic</td><td>Conveys a sense of being heard, valued and validated to create a non-judgmental and caring environment so as to understand and solve the user's problem.</td><td>That sounds really tough. Have you been experiencing these feelings for a long time?</td></tr><tr><td>Understanding</td><td>Reflects the ability to acknowledge and accept others' perspectives.</td><td>It's completely understandable to have reservations. Take your time and consider joining support groups at your own pace.</td></tr><tr><td>Compliant</td><td>Reflects the sense of accommodation, cooperation and confirmation with the expectations and demands of users.</td><td>I understand that you're feeling upset about the way your friend treated you. Let's explore strategies to improve your communication and set boundaries in a healthy way.</td></tr><tr><td>Imposing</td><td>Reflects an attempt to assert control over the user's emotions, thoughts, and experiences for their betterment.</td><td>It can be challenging, but it's a skill that can be developed over time. 
I can recommend some exercises you can try to help you get started.</td></tr><tr><td>Confrontational</td><td>Reflects a direct or assertive approach that confronts or challenges the user's thoughts and emotions to make users understand their self-defeating patterns or manipulations.</td><td>That sounds really challenging. It's important to remember that your worth isn't tied to your job performance. Have you considered talking to your boss about how you're feeling?</td></tr><tr><td>Dissatisfied</td><td>Reflects a usual feeling of unhappiness or discontentment with the user's behaviour or actions. Identifying the source of dissatisfaction facilitates making quicker progress towards the user's goals.</td><td>Perhaps, if you feel comfortable, we can explore that topic further during our next session.</td></tr><tr><td>Uncertain</td><td>Reflects the lack of clarity about the user's perspectives and experiences. Recognizing the source of uncertainty fosters healthy and supportive interactions.</td><td>I'm not quite sure, but would it be helpful if we explored that topic further together?</td></tr></table>
Table 5: The definition and example of different interpersonal behaviour in our annotated dataset.
# A.3.1 Hardware Configuration
The experimental setup includes the following device configuration: (a) GPU: A100-PCIE-40GB, (b) CUDA support: CUDA 11.x (or later), (c) GPU clocks: base 765 MHz, boost 1410 MHz, (d) memory size: 40 GB, (e) memory type: HBM2, (f) memory clock: 1215 MHz, (g) bus width: 5120 bits, (h) total board power: 250 W.
<table><tr><td>Psychological Issue</td><td>Post-traumatic stress disorder</td></tr><tr><td>Gender</td><td>Male</td></tr><tr><td>Age</td><td>Young</td></tr><tr><td>Persona</td><td>Openness</td></tr><tr><td>Psychotherapeutic Approach</td><td>Eclectic</td></tr><tr><td rowspan="3">Seed Utterances</td><td>User: Hi, I’ve been dealing with PTSD recently and it has been quite challenging.</td></tr><tr><td>Therapist: Hi, I’m sorry to hear that you’ve been dealing with PTSD. Could you provide me with more details about your experience?</td></tr><tr><td>User: Of course. I’ve been grappling with PTSD, and I’m uncertain about effective coping strategies.</td></tr><tr><td>Speaker</td><td>Generated Utterances</td></tr><tr><td>Therapist</td><td>Hello, it’s nice to meet you. How can I help you today?</td></tr><tr><td>User</td><td>Hi, I’ve been struggling with PTSD and I’m not sure how to cope with it.</td></tr><tr><td>Therapist</td><td>I’m sorry to hear that. Would you mind telling me more about your experiences with PTSD?</td></tr><tr><td>User</td><td>It started after I witnessed a traumatic event a few years ago. Ever since then, I’ve had trouble sleeping and have intense flashbacks.</td></tr><tr><td>Therapist</td><td>That sounds really difficult. Have you tried any coping mechanisms in the past?</td></tr><tr><td>User</td><td>I’ve tried some breathing exercises and meditation, but they don’t seem to help much.</td></tr></table>
# A.4 Classifier Results
Results of all six classifiers are shown in Table 7. From this table, it can be inferred that the RoBERTa-large based classifiers achieve better W-ACC and Macro-F1 scores than the BERT-large based classifiers.
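Macro-F1 in Table 7 is the unweighted mean of per-class F1 scores; a minimal sketch follows (the exact weighting behind W-ACC is not spelled out here, so only Macro-F1 is shown).

```python
# Macro-F1: compute F1 per class, then average with equal class weights.
def macro_f1(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        # F1 = 2TP / (2TP + FP + FN); zero when the class is never hit.
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)

print(macro_f1(["a", "a", "b", "b"], ["a", "a", "b", "a"]))
```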
Table 6: A sample dialogue from PSYCON generated using our method.
<table><tr><td></td><td colspan="2">BERT-large</td><td colspan="2">RoBERTa-large</td></tr><tr><td>Classifier</td><td>W-ACC</td><td>Macro-F1</td><td>W-ACC</td><td>Macro-F1</td></tr><tr><td>GAC</td><td>0.882</td><td>0.871</td><td>0.894</td><td>0.873</td></tr><tr><td>PC</td><td>0.873</td><td>0.860</td><td>0.897</td><td>0.875</td></tr><tr><td>SC</td><td>0.893</td><td>0.881</td><td>0.912</td><td>0.904</td></tr><tr><td>CTC</td><td>0.904</td><td>0.893</td><td>0.923</td><td>0.921</td></tr><tr><td>PoC</td><td>0.947</td><td>0.942</td><td>0.956</td><td>0.943</td></tr><tr><td>IBC</td><td>0.872</td><td>0.752</td><td>0.898</td><td>0.872</td></tr></table>
Table 7: Evaluation results of the classifiers.
# A.5 Weight Optimization
To determine the optimal combination of weights for the reward function, we conducted experiments with different combinations of $\beta_{1},\beta_{2},\beta_{3},\beta_{4},\beta_{5},\gamma_{1},\gamma_{2},\delta_{1}$, and $\delta_{2}$. These weights were validated on a $20\%$ hold-out set of PSYCON, and the combination that resulted in the lowest perplexity score was selected for training e-THERAPIST. Table 8 presents the weight combinations considered for optimization on the PSYCON dataset. The table indicates that considering all the rewards leads to a better (lower) perplexity score. Additionally, removing any of the rewards results in an increase in perplexity, highlighting the importance of each reward in the model.
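The weighted combination itself can be sketched as a simple weighted sum of reward components; the weighted-sum form and the component names below are assumptions for illustration, not the paper's exact formulation.

```python
# Combine named reward components with the weights explored in Table 8.
def combined_reward(components, weights):
    """components/weights: dicts keyed by reward name (e.g. beta1..beta5)."""
    assert set(components) == set(weights), "every component needs a weight"
    return sum(weights[k] * components[k] for k in components)

r = combined_reward(
    {"beta1": 0.5, "beta2": 1.0, "beta3": 0.0},   # hypothetical reward values
    {"beta1": 0.1, "beta2": 0.2, "beta3": 0.2},   # weights from Table 8's grid
)
print(r)
```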
<table><tr><td colspan="10">WEIGHT OPTIMIZATION</td></tr><tr><td>β1</td><td>β2</td><td>β3</td><td>β4</td><td>β5</td><td>γ1</td><td>γ2</td><td>δ1</td><td>δ2</td><td>PPL</td></tr><tr><td>0.1</td><td>0.2</td><td>0.2</td><td>0.3</td><td>0.2</td><td>0.5</td><td>0.5</td><td>0.5</td><td>0.5</td><td>2.5261</td></tr><tr><td>0</td><td>0</td><td>0</td><td>0.8</td><td>0.2</td><td>0.55</td><td>0.45</td><td>0.4</td><td>0.6</td><td>2.556</td></tr><tr><td>0.1</td><td>0.1</td><td>0.1</td><td>0.7</td><td>0</td><td>0.1</td><td>0.9</td><td>0.9</td><td>0.1</td><td>2.6154</td></tr><tr><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>3.0963</td></tr><tr><td>0.1</td><td>0.2</td><td>0.2</td><td>0.2</td><td>0.3</td><td>0.35</td><td>0.65</td><td>0.45</td><td>0.55</td><td>2.5290</td></tr><tr><td>0.1</td><td>0.2</td><td>0.2</td><td>0.3</td><td>0.2</td><td>0.7</td><td>0.3</td><td>0.3</td><td>0.7</td><td>2.5361</td></tr><tr><td>0.2</td><td>0.1</td><td>0.1</td><td>0.4</td><td>0.2</td><td>0.8</td><td>0.2</td><td>0.2</td><td>0.8</td><td>2.5541</td></tr><tr><td>0.15</td><td>0.15</td><td>0.3</td><td>0.3</td><td>0.1</td><td>0</td><td>1</td><td>0.25</td><td>0.75</td><td>2.5961</td></tr><tr><td>0.3</td><td>0.3</td><td>0.3</td><td>0.1</td><td>0</td><td>0</td><td>1</td><td>1</td><td>0</td><td>2.6182</td></tr><tr><td>0.2</td><td>0.2</td><td>0.2</td><td>0.2</td><td>0.2</td><td>0.5</td><td>0.5</td><td>0.5</td><td>0.5</td><td>2.58932</td></tr><tr><td>0.1</td><td>0.2</td><td>0.2</td><td>0.2</td><td>0.3</td><td>0.5</td><td>0.5</td><td>0.75</td><td>0.25</td><td>2.52131</td></tr></table>
Table 8: Weight optimization using different reward weight combinations.
# User's Profile: Female Adult Extraversion
Therapist's Psychotherapeutic Approach: Non-directive
T: Good afternoon. How are you feeling today? (directing, polite)
U: I've been feeling really down lately. My depression has been affecting my energy levels and my ability to engage with others. (neutral)
T: I'm sorry to hear that you're feeling down and experiencing a lack of energy due to depression. Could you tell me more about how your depression is impacting your interactions with others? (empathetic, polite)
U: Well, I've always been an extroverted person, but lately, it's been difficult to find the motivation to socialize. I feel exhausted and withdrawn, and it's affecting my relationships. (negative)
T: It sounds like your depression is influencing your usual extraverted nature and making it challenging to engage with others. How does this change in behavior and lack of social interaction make you feel? (understanding, moderately polite)
U: It makes me feel isolated and lonely. I miss the connections I used to have, but it's hard to find the energy to reach out and participate in social activities. (negative)
T: I can understand how the lack of social interaction and feeling isolated can contribute to your feelings of loneliness. Let's explore ways to gradually increase your engagement with others while considering your current energy levels. How do you envision addressing this? (understanding, polite)
U: Maybe I can start by reaching out to a close friend or family member and expressing how I've been feeling. It might also help to participate in low-energy social activities, such as having a coffee or going for a walk with someone I trust. (positive)
T: Those are great suggestions. Reaching out to someone you trust and engaging in low-energy social activities can help you reconnect with others at a pace that feels comfortable for you. Remember to be gentle with yourself and take it one step at a time. How does that resonate with you? (understanding, moderately polite)
U: It feels like a positive approach. I appreciate your guidance and understanding as I navigate this. (positive)
|
| 495 |
+
T: I'm here to support you every step of the way. Is there anything else you'd like to discuss today? (helpful, polite)
|
| 496 |
+
U: Actually, I've been struggling with negative thoughts and self-criticism. It's like a constant stream of self-doubt and negativity that I can't seem to shake off. (negative)
|
| 497 |
+
T: Negative thoughts and self-criticism are common challenges for individuals with depression. Let's explore ways to address these patterns and cultivate self-compassion. How do you envision addressing this? (understanding, moderately polite)
|
| 498 |
+
T: Those are excellent strategies. Practicing mindfulness, challenging negative thoughts, and journaling can help you gain insight into your thinking patterns and develop a more compassionate perspective toward yourself. Remember, it takes time and effort, so be patient with yourself. How does that resonate with you? (understanding, moderately polite)
|
| 499 |
+
U: It feels like a step in the right direction. I appreciate your guidance and support as I work through this. ( positive )
|
| 500 |
+
T: I'm here to support you every step of the way. Is there anything else you'd like to discuss today? (helpful, polite)
|
| 501 |
+
U: I think that covers everything for now. Thank you for your understanding and kindness. (positive)
|
| 502 |
+
T: You're most welcome. Remember, I'm here to provide support whenever you need it. Take care, and we'll continue our work together in the next session. (helpful, polite)
|
| 503 |
+
|
| 504 |
+
Figure 4: An example dialogue between the therapist $(T)$ and the user $(U)$ from PSYCON. The text highlighted in purple denotes the sentiment label of the user's utterances. The text highlighted in pink denotes the interpersonal behaviour and politeness labels of the therapist's utterances.
2023/e-THERAPIST_ I suggest you to cultivate a mindset of positivity and nurture uplifting thoughts/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:42c0a8d96c7a7f59d4cb17c2c3e51272435bc1ff5bb562b0e6420adf1455247b
size 1086586

2023/e-THERAPIST_ I suggest you to cultivate a mindset of positivity and nurture uplifting thoughts/layout.json
ADDED
The diff for this file is too large to render.
2023/kNN-LM Does Not Improve Open-ended Text Generation/11b9a100-d4b3-4630-a237-79b0f1ed9076_content_list.json
ADDED
@@ -0,0 +1,1868 @@
[
    {
        "type": "text",
        "text": "kNN-LM Does Not Improve Open-ended Text Generation",
        "text_level": 1,
        "bbox": [200, 90, 800, 112],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Shufan Wang<sup>1</sup> Yixiao Song<sup>1</sup> Andrew Drozdov<sup>1</sup> \nAparna Garimella<sup>2</sup> Varun Manjunatha<sup>2</sup> Mohit Iyyer<sup>1</sup>",
        "bbox": [253, 129, 746, 164],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "University of Massachusetts Amherst $^{1}$ Adobe Research $^{2}$ {shufanwang, yixiaosong, adrozdov, miyyer}@umass.edu {garimell,vmanjuna}@adobe.com",
        "bbox": [238, 166, 764, 216],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Abstract",
        "text_level": 1,
        "bbox": [260, 252, 339, 268],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "In this paper, we study the generation quality of interpolation-based retrieval-augmented language models (LMs). These methods, best exemplified by the $k$ -NN-LM (Khandelwal et al., 2020), interpolate the LM's predicted distribution of the next word with a distribution formed from the most relevant retrievals for a given prefix. While the $k$ -NN-LM and related methods yield impressive decreases in perplexity, we discover that they do not exhibit corresponding improvements in open-ended generation quality, as measured by both automatic evaluation metrics (e.g., MAUVE) and human evaluations. Digging deeper, we find that interpolating with a retrieval distribution actually increases perplexity compared to the baseline LM for the majority of tokens in the WikiText-103 test set, even though the overall perplexity is lower due to a smaller number of tokens for which perplexity dramatically decreases after interpolation. However, when decoding a long sequence at inference time, significant improvements on this smaller subset of tokens are washed out by slightly worse predictions on most tokens. Furthermore, we discover that the entropy of the retrieval distribution increases faster than that of the base LM as the generated sequence becomes longer, which indicates that retrieval is less reliable when using model-generated text as queries (i.e., is subject to exposure bias). We hope that our analysis spurs future work on improved decoding algorithms and interpolation strategies for retrieval-augmented language models.",
        "bbox": [144, 281, 460, 765],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "1 Introduction",
        "text_level": 1,
        "bbox": [114, 778, 260, 795],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Retrieval-augmented language models, which integrate non-parametric dense retrieval with autoregressive next-token prediction, have been validated with strong empirical performance across a variety of tasks (Metzler et al., 2022; Basu et al., 2022; Mialon et al., 2023) in addition to achieving low held-out perplexities on LM benchmarks. In this",
        "bbox": [112, 806, 490, 917],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "paper, we study interpolation-based LMs, a subtype of retrieval-augmented LMs that compute the probability of the next token by interpolating between the softmax distribution of the original LM and a token distribution formed by retrieving over an external datastore. These methods, perhaps best exemplified by the kNN-LM (Khandelwal et al., 2020), are particularly attractive because they allow any pretrained LM to be retrofitted with a retrieval module without further training.",
        "bbox": [507, 253, 884, 413],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Despite these advantages, there is limited understanding about the text generation quality of interpolation-based LMs. In this study, we evaluate the quality of generated text from two such methods, kNN-LM and TRIME (Zhong et al., 2022), against the output of baseline LMs that do not use retrieval. Our evaluations involve open-ended text completions generated using different decoding algorithms on both the WikiText-103 and PG-19 datasets. We discover that interpolation-based LMs do not improve the quality of generated text, as measured by both automatic text generation metrics such as MAUVE (Pillutla et al., 2021) and human evaluation.",
        "bbox": [507, 414, 884, 637],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "This result begs the question of why the text generation quality does not improve, as the perplexity of interpolation-based LMs is substantially lower than that of the baselines. Our analysis of the $k$ NN-LM model suggests two potential reasons for this lack of improvement:",
        "bbox": [507, 639, 884, 736],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "1. $k$ -NN-LM actually worsens the predictions of the majority of tokens in the WikiText-103 test set. On aggregate, perplexity improves because of significantly improved predictions on a smaller subset of tokens. However, when generating a long sequence of tokens, these improvements are washed out by the worsened predictions on other tokens.",
        "bbox": [524, 747, 884, 876],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "2. The quality of the retrieval distribution deteriorates faster than that of the LM's predicted",
        "bbox": [522, 887, 884, 917],
        "page_idx": 0
    },
    {
        "type": "page_number",
        "text": "15023",
        "bbox": [475, 927, 524, 941],
        "page_idx": 0
    },
    {
        "type": "footer",
        "text": "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 15023-15037 December 6-10, 2023 ©2023 Association for Computational Linguistics",
        "bbox": [152, 945, 843, 972],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "distribution as the length of the generation increases; in other words, the retrieval distribution is more vulnerable to exposure bias and can be easily thrown off by artifacts presented in model-generated text.",
        "bbox": [149, 84, 487, 164],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Unlike previous works that rely on perplexity to evaluate language modeling or BLEU to evaluate machine translation quality of kNN-LM-based models (Khandelwal et al., 2021), our work specifically studies the open-ended text generation capability of kNN-LMs with a range of automatic evaluation metrics as well as human evaluation. We demonstrate that, though they significantly lower perplexity, retrievers might also impair text generation performance of kNN-LMs. This finding suggests potential future directions for using retrieval during text generation, such as developing more robust retrieval components or employing retriever mechanisms more selectively during decoding.",
        "bbox": [112, 177, 489, 401],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "2 Related Work",
        "text_level": 1,
        "bbox": [112, 413, 270, 428],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "We present the most extensive study of open-ended text generation<sup>1</sup> from interpolation-based LMs such as kNN-LM (Khandelwal et al., 2020). Our results reveal that although these methods are effective at reducing perplexity, they can also be detrimental to text generation. Previous work finds that retrieval LMs are improved by selectively incorporating retrieval when conditions are favorable (He et al., 2021a; Alon et al., 2022; Drozdov et al., 2022; Mallen et al., 2023), although they only examine the teacher-forced setting or other tasks, e.g. question answering. The kNN-MT (Khandelwal et al., 2021) explores machine translation, which is a constrained task with short inputs, and thus not a good test of open-ended long-form generation.",
        "bbox": [112, 439, 489, 678],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "The kNN-LM effectively scales retrieval to billions of tokens using a token-level non-parametric interpolation technique first introduced by Grave et al. (2017). Alternative retrieval-augmented models experiment with training the retriever (Zhong et al., 2022; Ram et al., 2023; Shi et al., 2023), interpolating vectors instead of token probabilities (Yogatama et al., 2021), scaling to trillions of tokens (Borgeaud et al., 2021), exploiting retrieval for strong few-shot learning (Izacard et al., 2022), and so on (Chen et al., 2017; Guu et al., 2020; Lewis et al., 2020; Izacard and Grave, 2021; Rae",
        "bbox": [112, 680, 489, 872],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "et al., 2021; Wu et al., 2022; Trivedi et al., 2022; He et al., 2022). Among these, kNN-LM stands out as a relatively simple and fundamental work. Our findings indicate important weaknesses of retrieval for text generation.",
        "bbox": [507, 84, 882, 162],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Reference-based metrics are not well suited to evaluate open-ended text generation (Novikova et al., 2017). Instead, effective automated approaches compare the machine generated and human language text distributions using samples (McCoy et al., 2021; Pillutla et al., 2021; Pimentel et al., 2023). Human evaluation remains the golden standard for natural language generation (Hashimoto et al., 2019; Celikyilmaz et al., 2020; Krishna et al., 2023).",
        "bbox": [507, 165, 884, 325],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "3 Experimental setup",
        "text_level": 1,
        "bbox": [507, 338, 714, 356],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Using a variety of commonly used text generation evaluation metrics, we evaluate the text generation capability of interpolation-based LMs and compare them to baseline LMs (i.e., without $k$ -nearest-neighbor retrieval from an external datastore). In this section, we describe our experimental setup, including models, automatic evaluation metrics, data selection, and hyperparameters.",
        "bbox": [507, 365, 882, 492],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "3.1 Models",
        "text_level": 1,
        "bbox": [507, 505, 613, 519],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "We experiment with two interpolation-based LMs: the $k$ -NN-LM of Khandelwal et al. (2020), which augments an existing pre-trained LM with a retrieval module without any additional training, and TRIME (Zhong et al., 2022), a recent improvement over the $k$ -NN-LM that trains the retriever and LM jointly to further decrease perplexity.",
        "bbox": [507, 525, 882, 639],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "kNN-LM: The kNN-LM is a pre-trained language model that uses retrieval to improve word prediction. We follow the procedure from Khandelwal et al. (2020) and Alon et al. (2022) $^2$ , and use the LM to encode token-level representations from a document collection (e.g., WikiText-103 training data) into a datastore where each token in document is converted into a key-value pair: a context vector $k_{i}$ representing the first $n - 1$ words and a value $v_{i}$ which is the $n$ -th word. During evaluation, the model calculates Euclidean distances $d(k,q_j)$ between the query vector $q_{j}$ and all the keys $k_{1},k_{2},\\ldots k_{|V|}$ in the datastore. The values",
        "bbox": [507, 646, 882, 858],
        "page_idx": 1
    },
    {
        "type": "page_footnote",
        "text": "$^{2}$ Alternative architecture options for kNN-LM are explored in Xu et al. (2023). We don't expect those settings to impact the trends we observe, but as we mention in §6, tuning for text generation could be beneficial.",
        "bbox": [507, 869, 882, 917],
        "page_idx": 1
    },
    {
        "type": "page_footnote",
        "text": "1The $k\\mathrm{NN}$ -LM is also evaluated using MAUVE in Lan et al. (2023); however, our work has much more extensive analysis in the open-ended text generation setting.",
        "bbox": [112, 879, 487, 917],
        "page_idx": 1
    },
    {
        "type": "page_number",
        "text": "15024 2",
        "bbox": [477, 927, 524, 952],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "from the retrieved documents define a new distribution of the next word:",
        "bbox": [112, 84, 489, 116],
        "page_idx": 2
    },
    {
        "type": "equation",
        "text": "\n$$\nP _ {K N N} (w _ {t} | q _ {t}) \\propto \\sum_ {(k _ {i}, v _ {i})} \\mathbb {1} _ {w _ {t} = v _ {i}} \\exp (- d (k _ {i}, q _ {t}))\n$$\n",
        "text_format": "latex",
        "bbox": [126, 142, 475, 180],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "The model interpolates the LM's predicted distribution over the next token $P(w_{t}|q_{t})$ with the retrieval distribution with a tunable hyperparameter $\\lambda$ :",
        "bbox": [112, 189, 489, 239],
        "page_idx": 2
    },
    {
        "type": "equation",
        "text": "\n$$\nP ^ {\\prime} \\left(w _ {t} \\mid q _ {t}\\right) = \\lambda P _ {K N N} \\left(w _ {t} \\mid q _ {t}\\right) + (1 - \\lambda) P _ {L M} \\left(w _ {t} \\mid q _ {t}\\right) \\tag {1}\n$$\n",
        "text_format": "latex",
        "bbox": [114, 265, 487, 299],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "To generate text from the $k\\mathrm{NN}$ -LM, we apply a decoding strategy (e.g., greedy decoding or truncated sampling algorithms) using the final interpolated probability distribution $P'(w_t|q_t)$ .",
        "bbox": [112, 300, 489, 365],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "TRIME: Note that in kNN-LM, the LM is trained without retrieval; the retrieval component is bolted on after training. Zhong et al. (2022) note that this approach is suboptimal, as the LM does not understand how to best use the retrieval. Thus, they propose TRIME, which uses an efficient in-batch strategy to incorporate retrievals during training. While kNN-LM relies on just one type of retrieval (from an external datastore), TRIME can retrieve from local, long-range, as well as external context. We use the $\\mathrm{TRIME}_{\\mathrm{EXT}}$ configuration in all of our experiments, which also uses a linear interpolation between LM and retrieval distributions (as in Equation 1) to produce the final probability distribution. The baseline LM (no external retrieval) retrieves from example-level local and long context, but has no access to a huge-scale external datastore.",
        "bbox": [112, 373, 489, 646],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "3.2 Constructing an evaluation dataset",
        "text_level": 1,
        "bbox": [112, 658, 435, 673],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "We sample from WikiText-103 (Merit et al., 2016) to construct our main evaluation dataset; in Section 4, we also perform an analysis experiment on the PG-19 dataset (fictional books) to test whether our findings hold across domains. We choose WikiText-103 because it is the most commonly used dataset for evaluating interpolation-based LMs; indeed, the main experiments from both kNN-LM and TRIME demonstrate that the retrieval component decreases held-out perplexity on this dataset compared to the baseline LM. Specifically, we randomly sample 5K examples<sup>3</sup> from the",
        "bbox": [112, 678, 489, 872],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "validation set of WikiText-103. 4",
        "bbox": [507, 84, 756, 98],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "3.3 Automatic evaluation metrics",
        "text_level": 1,
        "bbox": [507, 111, 786, 124],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "For all models tested, we compare the quality of text generated by the baseline LM with and without the $k$ -NN retrieval component over the external datastore. We measure quality via the following automatic metrics:",
        "bbox": [507, 131, 882, 211],
        "page_idx": 2
    },
    {
        "type": "text",
        "text": "MAUVE: MAUVE is an evaluation metric for open-ended text generation (Pillutla et al., 2021) that achieves high correlation with human judgments of text quality. It measures the distribution similarity between the generated text and the reference text. Higher MAUVE scores indicate closer distance between the distribution of the generated text and that of reference text.
|
| 455 |
+
"bbox": [
|
| 456 |
+
507,
|
| 457 |
+
219,
|
| 458 |
+
884,
|
| 459 |
+
349
|
| 460 |
+
],
|
| 461 |
+
"page_idx": 2
|
| 462 |
+
},
|
| 463 |
+
{
|
| 464 |
+
"type": "text",
|
| 465 |
+
"text": "RankGen: Given a prefix and several possible continuations (suffixes), RankGen (Krishna et al., 2022) outputs a score for each suffix, measuring the relevance between the prefix and suffix. Higher RankGen scores indicate stronger relevance between generated suffix with the given prefix. We thus measure the RankGen score between prefix and generated suffix for each of the two models.",
|
| 466 |
+
"bbox": [
|
| 467 |
+
507,
|
| 468 |
+
357,
|
| 469 |
+
882,
|
| 470 |
+
487
|
| 471 |
+
],
|
| 472 |
+
"page_idx": 2
|
| 473 |
+
},
|
| 474 |
+
{
|
| 475 |
+
"type": "text",
|
| 476 |
+
"text": "GPT-3 perplexity: We use GPT-3 (Brown et al., 2020), a large-scale pre-trained language model, to compute the perplexity of text generated with and without interpolation conditioned on the same prefix. Lower GPT-3 perplexity indicates stronger relevance between prefix and generated suffix and the better fluency of the generated suffix.",
|
| 477 |
+
"bbox": [
|
| 478 |
+
507,
|
| 479 |
+
495,
|
| 480 |
+
882,
|
| 481 |
+
608
|
| 482 |
+
],
|
| 483 |
+
"page_idx": 2
|
| 484 |
+
},
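GPT-3 perplexity here is the exponential of the mean token-level negative log-likelihood; a minimal sketch with toy token probabilities (no API call is made, and `perplexity` is an illustrative helper):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-likelihood of the tokens."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# A continuation whose tokens the scoring model finds likely gets low
# perplexity; a continuation it finds unlikely gets high perplexity.
ppl_confident = perplexity([0.9, 0.8, 0.9])
ppl_uncertain = perplexity([0.1, 0.2, 0.1])
```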
|
| 485 |
+
{
|
| 486 |
+
"type": "text",
|
| 487 |
+
"text": "Entity-F1: Previous works (Nan et al., 2021; Lee et al., 2022) use the percentage of hallucinated named entities (entities that appear in the generated text but not in the reference text) or the ratio of named entity overlaps between the generated text and reference text to estimate the factuality of the generated text. In our work, we compute the F1 scores between the named entities from the generated text and reference text as a proxy for entity hallucination. Higher F1 scores may correlate to fewer instances of hallucinated entities.",
|
| 488 |
+
"bbox": [
|
| 489 |
+
507,
|
| 490 |
+
617,
|
| 491 |
+
882,
|
| 492 |
+
793
|
| 493 |
+
],
|
| 494 |
+
"page_idx": 2
|
| 495 |
+
},
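The Entity-F1 proxy can be sketched as a set-level F1 between extracted entity sets; the entity extraction step itself (e.g., an NER tagger) is omitted, and the entity names below are illustrative:

```python
def entity_f1(generated: set, reference: set) -> float:
    """Set-level F1 between named entities of generated and reference text."""
    if not generated or not reference:
        return 0.0
    overlap = len(generated & reference)
    if overlap == 0:
        return 0.0
    precision = overlap / len(generated)
    recall = overlap / len(reference)
    return 2 * precision * recall / (precision + recall)

# Two of three entities overlap: precision = recall = F1 = 2/3.
score = entity_f1({"ZAPU", "Rhodesia", "1979"}, {"ZAPU", "Rhodesia", "Mugabe"})
```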
|
| 496 |
+
{
|
| 497 |
+
"type": "text",
|
| 498 |
+
"text": "Seq-Rep-1: We follow Welleck et al. (2020) and use the percentage of unique unigrams (Seq-Rep-1)",
|
| 499 |
+
"bbox": [
|
| 500 |
+
507,
|
| 501 |
+
803,
|
| 502 |
+
882,
|
| 503 |
+
835
|
| 504 |
+
],
|
| 505 |
+
"page_idx": 2
|
| 506 |
+
},
|
| 507 |
+
{
|
| 508 |
+
"type": "page_footnote",
|
| 509 |
+
"text": "3We choose 5K examples because this is the minimum recommended number of generations to obtain meaningful comparisons as per Pillutla et al. (2021).",
|
| 510 |
+
"bbox": [
|
| 511 |
+
112,
|
| 512 |
+
879,
|
| 513 |
+
487,
|
| 514 |
+
917
|
| 515 |
+
],
|
| 516 |
+
"page_idx": 2
|
| 517 |
+
},
|
| 518 |
+
{
|
| 519 |
+
"type": "page_footnote",
|
| 520 |
+
"text": "<sup>4</sup>We use the first 128 tokens of each example as a prefix that the model must condition on to generate a 256-token-long continuation. As some of our metrics requires reference text, we also store the ground-truth 256 tokens (gold suffix) that follow the prefix in each example.",
|
| 521 |
+
"bbox": [
|
| 522 |
+
507,
|
| 523 |
+
844,
|
| 524 |
+
882,
|
| 525 |
+
904
|
| 526 |
+
],
|
| 527 |
+
"page_idx": 2
|
| 528 |
+
},
|
| 529 |
+
{
|
| 530 |
+
"type": "page_footnote",
|
| 531 |
+
"text": "<sup>5</sup>We use the 6.7B gpt3-curie model via OpenAI's API",
|
| 532 |
+
"bbox": [
|
| 533 |
+
532,
|
| 534 |
+
904,
|
| 535 |
+
872,
|
| 536 |
+
917
|
| 537 |
+
],
|
| 538 |
+
"page_idx": 2
|
| 539 |
+
},
|
| 540 |
+
{
|
| 541 |
+
"type": "page_number",
|
| 542 |
+
"text": "15025 3",
|
| 543 |
+
"bbox": [
|
| 544 |
+
477,
|
| 545 |
+
927,
|
| 546 |
+
524,
|
| 547 |
+
952
|
| 548 |
+
],
|
| 549 |
+
"page_idx": 2
|
| 550 |
+
},
|
| 551 |
+
{
|
| 552 |
+
"type": "text",
|
| 553 |
+
"text": "in the text as a metric for lexical diversity in the text. Higher Seq-Rep-1 scores indicate lower diversity (more repetition) in the generated text. $^6$",
|
| 554 |
+
"bbox": [
|
| 555 |
+
112,
|
| 556 |
+
84,
|
| 557 |
+
487,
|
| 558 |
+
131
|
| 559 |
+
],
|
| 560 |
+
"page_idx": 3
|
| 561 |
+
},
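Seq-Rep-1 can be sketched as the fraction of unigrams in a generation that are repeats; `seq_rep_n` below is an assumed helper name, written so it generalizes to the Seq-Rep-N variants mentioned in the footnote:

```python
def seq_rep_n(tokens, n=1):
    """Fraction of n-grams that are duplicates (1 - unique/total)."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return 1.0 - len(set(ngrams)) / len(ngrams)
```

A fully diverse token sequence scores 0.0, while a maximally repetitive one approaches 1.0.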
|
| 562 |
+
{
|
| 563 |
+
"type": "text",
|
| 564 |
+
"text": "3.4 Model configurations and hyperparameters",
|
| 565 |
+
"text_level": 1,
|
| 566 |
+
"bbox": [
|
| 567 |
+
112,
|
| 568 |
+
143,
|
| 569 |
+
363,
|
| 570 |
+
177
|
| 571 |
+
],
|
| 572 |
+
"page_idx": 3
|
| 573 |
+
},
|
| 574 |
+
{
|
| 575 |
+
"type": "text",
|
| 576 |
+
"text": "In this work, we leverage pretrained model and datastore checkpoints released by prior work, and also train our own interpolation-based LMs.",
|
| 577 |
+
"bbox": [
|
| 578 |
+
112,
|
| 579 |
+
181,
|
| 580 |
+
487,
|
| 581 |
+
231
|
| 582 |
+
],
|
| 583 |
+
"page_idx": 3
|
| 584 |
+
},
|
| 585 |
+
{
|
| 586 |
+
"type": "text",
|
| 587 |
+
"text": "Baseline LM details: For kNN-LM, we use the implementations from Alon et al. (2022) and Khandelwal et al. (2020). The model in Alon et al. (2022) relies on a backbone 117M-parameter GPT-2 small model (Radford et al., 2019) fine-tuned on the WikiText-103 training data. The external datastore is constructed by the same backbone model, and both the pretrained LM and datastore are publicly released by Alon et al. (2022). We also test the model in Khandelwal et al. (2020), which proposes the first kNN-LM. Khandelwal et al. (2020) uses a 247M-parameter Transformer LM trained from scratch on WikiText-103 and the datastore is computed using the trained Transformer LM. For TRIME, we adopt the 247M-parameter $\\mathrm{TRIME}_{\\mathrm{ext}}$ model trained from scratch on WikiText-103 and publicly released by Zhong et al. (2022). Our \"non-retrieval\" baseline is the same model without external retrieval; in other words, it has access to only the local memory (recent tokens) and long-range memory (in-batch tokens). In all three set-ups, the external datastore is constructed using the training dataset of WikiText-103; the datastores from Zhong et al. (2022) and Khandelwal et al. (2020) both have 103M entries, while the datastore from Alon et al. (2022) has 117M entries (the discrepancy is due to tokenization differences between the models).",
|
| 588 |
+
"bbox": [
|
| 589 |
+
112,
|
| 590 |
+
240,
|
| 591 |
+
489,
|
| 592 |
+
690
|
| 593 |
+
],
|
| 594 |
+
"page_idx": 3
|
| 595 |
+
},
|
| 596 |
+
{
|
| 597 |
+
"type": "text",
|
| 598 |
+
"text": "Perplexity improvements from retrieval: All models studied in this paper substantially decrease perplexity on WikiText-103's validation set when interpolation is enabled. For the model in Alon et al. (2022), the base GPT-2 perplexity is 14.8, and it decreases to 12.6 (-2.2) after interpolation. The kNN-LM in (Khandelwal et al., 2020) decreases perplexity from 17.96 (no retrieval) to 16.06 (-1.9) after interpolation. Meanwhile, TRIME decreases",
|
| 599 |
+
"bbox": [
|
| 600 |
+
112,
|
| 601 |
+
701,
|
| 602 |
+
489,
|
| 603 |
+
847
|
| 604 |
+
],
|
| 605 |
+
"page_idx": 3
|
| 606 |
+
},
|
| 607 |
+
{
|
| 608 |
+
"type": "table",
|
| 609 |
+
"img_path": "images/6f9b6c155ac12027733b53bd6d838d4d05c76604e7c883c3219a8dd45a025f5a.jpg",
|
| 610 |
+
"table_caption": [],
|
| 611 |
+
"table_footnote": [],
|
| 612 |
+
"table_body": "<table><tr><td>Model</td><td>MAUVE↑</td><td>PPLGPT-3↓</td><td>RankGen↑</td><td>EntityF1↑</td><td>SeqRep1↓</td></tr><tr><td colspan=\"6\">kNN-LM with and without retrieval from Alon et al. (2022)</td></tr><tr><td>GPT-2 small(no retrieval)</td><td>77.7</td><td>13.1</td><td>11.7</td><td>14.2</td><td>56.7</td></tr><tr><td>GPT-2 small(+ retrieval)</td><td>79.2</td><td>14.8</td><td>11.7</td><td>13.1</td><td>53.3</td></tr><tr><td colspan=\"6\">kNN-LM (Khandelwal et al., 2020) with and without external retrieval</td></tr><tr><td>Transformer(no retrieval)</td><td>89.5</td><td>20.4</td><td>12.9</td><td>12.1</td><td>41.8</td></tr><tr><td>Transformer(+ retrieval)</td><td>90.7</td><td>28.9</td><td>12.5</td><td>9.77</td><td>37.9</td></tr><tr><td colspan=\"6\">TRIMEXT with and without external retrieval from Zhong et al. (2022)</td></tr><tr><td>TRIME(no retrieval)</td><td>90.6</td><td>22.2</td><td>13.1</td><td>11.3</td><td>40.1</td></tr><tr><td>TRIME(+ retrieval)</td><td>87.3</td><td>23.8</td><td>12.5</td><td>9.80</td><td>38.5</td></tr></table>",
|
| 613 |
+
"bbox": [
|
| 614 |
+
514,
|
| 615 |
+
80,
|
| 616 |
+
878,
|
| 617 |
+
319
|
| 618 |
+
],
|
| 619 |
+
"page_idx": 3
|
| 620 |
+
},
|
| 621 |
+
{
|
| 622 |
+
"type": "text",
|
| 623 |
+
"text": "Table 1: Automatic evaluation metrics do not show consistent improvement in generation quality for interpolation-based LMs compared to their non-retrieval baseline LMs. We evaluate three set-ups: 1) kNN-LM with GPT2 as the baseline (top), 2) the original kNN-LM proposed in (Khandelwal et al., 2020) which trains a Transformer LM from scratch on the WikiText-103 training data (middle), and 3) TRIME which trains both the LM and the retrieval mechanism (bottom).",
|
| 624 |
+
"bbox": [
|
| 625 |
+
507,
|
| 626 |
+
328,
|
| 627 |
+
884,
|
| 628 |
+
457
|
| 629 |
+
],
|
| 630 |
+
"page_idx": 3
|
| 631 |
+
},
|
| 632 |
+
{
|
| 633 |
+
"type": "text",
|
| 634 |
+
"text": "perplexity from 17.0 (no retrieval) to 15.5 (-1.5) after interpolation.",
|
| 635 |
+
"bbox": [
|
| 636 |
+
507,
|
| 637 |
+
483,
|
| 638 |
+
882,
|
| 639 |
+
516
|
| 640 |
+
],
|
| 641 |
+
"page_idx": 3
|
| 642 |
+
},
|
| 643 |
+
{
|
| 644 |
+
"type": "text",
|
| 645 |
+
"text": "Hyperparameters: To generate text, we use the hyperparameters recommended by the authors that yield low perplexities on the WikiText-103 validation set. For the model in Alon et al. (2022) and Khandelwal et al. (2020), the softmax temperature is set to 1.0 and the interpolation coefficient between the LM distribution and the retrieval distribution $\\lambda$ is set to 0.25. For TRIME(Zhong et al., 2022), the softmax temperature is set to 1.25 and the $\\lambda$ is 0.3. For most of our experiments (e.g., those in Table 1), unless otherwise specified, we use nucleus sampling (Holtzman et al., 2020) with $p = 0.8$ for text generation.",
|
| 646 |
+
"bbox": [
|
| 647 |
+
505,
|
| 648 |
+
526,
|
| 649 |
+
882,
|
| 650 |
+
734
|
| 651 |
+
],
|
| 652 |
+
"page_idx": 3
|
| 653 |
+
},
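Nucleus (top-p) sampling keeps the smallest set of highest-probability tokens whose cumulative mass reaches p and renormalizes before sampling; a minimal sketch under that standard definition (the `top_p_filter` name and toy distribution are illustrative):

```python
import numpy as np

def top_p_filter(probs: np.ndarray, p: float = 0.8) -> np.ndarray:
    """Zero out tokens outside the smallest nucleus with cumulative mass >= p."""
    order = np.argsort(probs)[::-1]          # tokens from most to least likely
    cum = np.cumsum(probs[order])
    keep = cum - probs[order] < p            # keep tokens until mass p is reached
    filtered = np.zeros_like(probs)
    filtered[order[keep]] = probs[order[keep]]
    return filtered / filtered.sum()         # renormalize over the nucleus

probs = np.array([0.5, 0.3, 0.15, 0.05])
nucleus = top_p_filter(probs, p=0.8)         # only the top two tokens survive
```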
|
| 654 |
+
{
|
| 655 |
+
"type": "text",
|
| 656 |
+
"text": "4 Results",
|
| 657 |
+
"text_level": 1,
|
| 658 |
+
"bbox": [
|
| 659 |
+
507,
|
| 660 |
+
747,
|
| 661 |
+
608,
|
| 662 |
+
764
|
| 663 |
+
],
|
| 664 |
+
"page_idx": 3
|
| 665 |
+
},
|
| 666 |
+
{
|
| 667 |
+
"type": "text",
|
| 668 |
+
"text": "We find that despite incorporating the retrieval component and interpolating information from the baseline LM and retrieval, these methods do not yield any significant improvement to text generation performance, and even worsen it by some metrics (Table 1). In this section, we provide an overview of our main results, perform more fine-grained analyses, and describe a human evaluation that supports the conclusions drawn from automatic metrics.",
|
| 669 |
+
"bbox": [
|
| 670 |
+
505,
|
| 671 |
+
774,
|
| 672 |
+
884,
|
| 673 |
+
917
|
| 674 |
+
],
|
| 675 |
+
"page_idx": 3
|
| 676 |
+
},
|
| 677 |
+
{
|
| 678 |
+
"type": "page_footnote",
|
| 679 |
+
"text": "We also compute Seq-Rep- $N$ for $N = 2,3,4$ , and observe consistent results with using Seq-Rep-1 (in Appendix A.4).",
|
| 680 |
+
"bbox": [
|
| 681 |
+
112,
|
| 682 |
+
854,
|
| 683 |
+
487,
|
| 684 |
+
891
|
| 685 |
+
],
|
| 686 |
+
"page_idx": 3
|
| 687 |
+
},
|
| 688 |
+
{
|
| 689 |
+
"type": "page_footnote",
|
| 690 |
+
"text": "7See the gpt2-finetuned-wikitext103 model available here: https://github.com/neulab/knn-transformers.",
|
| 691 |
+
"bbox": [
|
| 692 |
+
112,
|
| 693 |
+
891,
|
| 694 |
+
485,
|
| 695 |
+
917
|
| 696 |
+
],
|
| 697 |
+
"page_idx": 3
|
| 698 |
+
},
|
| 699 |
+
{
|
| 700 |
+
"type": "footer",
|
| 701 |
+
"text": "15026",
|
| 702 |
+
"bbox": [
|
| 703 |
+
477,
|
| 704 |
+
927,
|
| 705 |
+
524,
|
| 706 |
+
939
|
| 707 |
+
],
|
| 708 |
+
"page_idx": 3
|
| 709 |
+
},
|
| 710 |
+
{
|
| 711 |
+
"type": "page_number",
|
| 712 |
+
"text": "4",
|
| 713 |
+
"bbox": [
|
| 714 |
+
492,
|
| 715 |
+
940,
|
| 716 |
+
504,
|
| 717 |
+
951
|
| 718 |
+
],
|
| 719 |
+
"page_idx": 3
|
| 720 |
+
},
|
| 721 |
+
{
|
| 722 |
+
"type": "text",
|
| 723 |
+
"text": "Interpolation-based LMs do not improve automatic text generation evaluation metrics: We find that none of the three models significantly improve generation quality compared to the baseline LM, as shown by various metrics (Table 1). For the model in Alon et al. (2022) (top row in Table 1), while the MAUVE score improves by 1.5 points with retrieval, the perplexity of GPT-3 increases on retrieval-augmented generations, and the RankGen score is identical. For the model in Khandelwal et al. (2020) (middle row in Table 1), retrievals improves the MAUVE score even less significantly (1.2 points) but worsens perplexity of GPT-3, RankGen and Entity-F1. For TRIME (bottom row in Table 1), the non-retrieval baseline is actually slightly better across MAUVE, GPT-3 perplexity, RankGen and Entity-F1. In other words, there is no convincing winner; furthermore, contrary to the expectation that kNN-LMs reduce hallucination by retrieving (and potentially copying) from the datastore, we do not observe any improvement in the Entity F1 scores with the gold suffix. We observe a marginal improvement in lexical diversity of the generations (shown by the lower Seq-Rep-1 score $^{8}$ ).",
|
| 724 |
+
"bbox": [
|
| 725 |
+
112,
|
| 726 |
+
84,
|
| 727 |
+
490,
|
| 728 |
+
487
|
| 729 |
+
],
|
| 730 |
+
"page_idx": 4
|
| 731 |
+
},
|
| 732 |
+
{
|
| 733 |
+
"type": "text",
|
| 734 |
+
"text": "These results hold across different decoding algorithms: The results in Table 1 are all from nucleus sampling. What if we change the decoding algorithm? To investigate the impact of decoding algorithm on generation quality, we evaluate the kNN-LM on three different decoding algorithms: greedy decoding, top- $k$ sampling, and beam search. We observe in Table 2 that none of these decoding algorithms changes the result: there is no clear winner between models with and without retrieval.",
|
| 735 |
+
"bbox": [
|
| 736 |
+
112,
|
| 737 |
+
497,
|
| 738 |
+
489,
|
| 739 |
+
657
|
| 740 |
+
],
|
| 741 |
+
"page_idx": 4
|
| 742 |
+
},
|
| 743 |
+
{
|
| 744 |
+
"type": "text",
|
| 745 |
+
"text": "These results hold across different datasets: In addition to WikiText-103, we also evaluate the text generation performance of the kNN-LM on the PG-19 dataset (Rae et al., 2020), which predominantly comprises fictional books and presents a distinct thematic variation to Wikipedia. We construct an evaluation dataset from PG-19 similarly to our constructed evaluation dataset from WikiText-103 in Section 3.2. The baseline LM is GPT2-",
|
| 746 |
+
"bbox": [
|
| 747 |
+
112,
|
| 748 |
+
668,
|
| 749 |
+
489,
|
| 750 |
+
813
|
| 751 |
+
],
|
| 752 |
+
"page_idx": 4
|
| 753 |
+
},
|
| 754 |
+
{
|
| 755 |
+
"type": "table",
|
| 756 |
+
"img_path": "images/d464c150c99d72e9674c4738df7381cb5e3bb2ef3b7989704ab9b153b4e62d73.jpg",
|
| 757 |
+
"table_caption": [],
|
| 758 |
+
"table_footnote": [],
|
| 759 |
+
"table_body": "<table><tr><td>Model</td><td>Nucleus Sampling</td><td>Top-k Sampling</td><td>Greedy Decoding</td></tr><tr><td colspan=\"4\">kNN-LM with and without retrieval from Alon et al. (2022)</td></tr><tr><td>GPT-2 small (no retrieval)</td><td>77.7</td><td>87.1</td><td>2.32</td></tr><tr><td>GPT-2 small (+ retrieval)</td><td>79.2</td><td>87.5</td><td>2.44</td></tr></table>",
|
| 760 |
+
"bbox": [
|
| 761 |
+
517,
|
| 762 |
+
80,
|
| 763 |
+
875,
|
| 764 |
+
187
|
| 765 |
+
],
|
| 766 |
+
"page_idx": 4
|
| 767 |
+
},
|
| 768 |
+
{
|
| 769 |
+
"type": "table",
|
| 770 |
+
"img_path": "images/b0321681263e33425628ebb6da270068557b2648350ff6f56c08597b00d47f42.jpg",
|
| 771 |
+
"table_caption": [
|
| 772 |
+
"Table 2: The observation that $k$ -NN-LM does not significantly improve text generation performance (measured here via MAUVE) is consistent across a variety of decoding algorithms: nucleus sampling, top-k sampling ( $k = 40$ ) and greedy decoding. We note that beam search decoding often generates repetitive text and therefore scores poorly with MAUVE."
|
| 773 |
+
],
|
| 774 |
+
"table_footnote": [],
|
| 775 |
+
"table_body": "<table><tr><td>Model</td><td>MAUVE↑</td><td>PPLGPT-3↓</td><td>RankGen↑</td><td>EntityF1↑</td><td>SeqRep1↓</td></tr><tr><td colspan=\"6\">kNN-LM with and without retrieval from PG-19 (Rae et al., 2019)</td></tr><tr><td>GPT-2 small(no retrieval)</td><td>8.00</td><td>17.3</td><td>4.13</td><td>5.63</td><td>47.6</td></tr><tr><td>GPT-2 small(+ retrieval)</td><td>9.26</td><td>18.8</td><td>3.62</td><td>4.87</td><td>44.5</td></tr></table>",
|
| 776 |
+
"bbox": [
|
| 777 |
+
514,
|
| 778 |
+
311,
|
| 779 |
+
878,
|
| 780 |
+
406
|
| 781 |
+
],
|
| 782 |
+
"page_idx": 4
|
| 783 |
+
},
|
| 784 |
+
{
|
| 785 |
+
"type": "text",
|
| 786 |
+
"text": "Table 3: Consistent with our findings in WikiText-103 dataset, we find in PG-19 (fictional books) that $k$ NN-LM does not yield consistent improvement in text generation quality compared to no-retrieval baseline LMs.",
|
| 787 |
+
"bbox": [
|
| 788 |
+
507,
|
| 789 |
+
414,
|
| 790 |
+
885,
|
| 791 |
+
474
|
| 792 |
+
],
|
| 793 |
+
"page_idx": 4
|
| 794 |
+
},
|
| 795 |
+
{
|
| 796 |
+
"type": "text",
|
| 797 |
+
"text": "small model fine-tuned on the PG-19 dataset for three epochs (with 28.9 perplexity on the validation dataset).<sup>10</sup> Table 3 shows that on the PG-19 dataset, kNN-LM also does not improve text generation quality. While (marginally) improving perplexity, the kNN-LM often returns unhelpful artifacts from the PG19 dataset (see examples in Appendix A.3).",
|
| 798 |
+
"bbox": [
|
| 799 |
+
507,
|
| 800 |
+
499,
|
| 801 |
+
882,
|
| 802 |
+
612
|
| 803 |
+
],
|
| 804 |
+
"page_idx": 4
|
| 805 |
+
},
|
| 806 |
+
{
|
| 807 |
+
"type": "text",
|
| 808 |
+
"text": "4.1 Human evaluation",
|
| 809 |
+
"text_level": 1,
|
| 810 |
+
"bbox": [
|
| 811 |
+
507,
|
| 812 |
+
624,
|
| 813 |
+
702,
|
| 814 |
+
639
|
| 815 |
+
],
|
| 816 |
+
"page_idx": 4
|
| 817 |
+
},
|
| 818 |
+
{
|
| 819 |
+
"type": "text",
|
| 820 |
+
"text": "Having found that interpolation-based LMs do not notably improve text generation quality according to automatic evaluation metrics, we turn next to human evaluation, which is known to be more reliable for generation tasks (Celikyilmaz et al., 2020; Krishna et al., 2021), to compare the text generated by the kNN-LM vs. the baseline GPT-2 model from Alon et al. (2022). We hired three English teachers/editors on the freelance marketplace Upwork. The evaluation was conducted on the platform Label Studio (Tkachenko et al., 2020-2022).<sup>11</sup> The",
|
| 821 |
+
"bbox": [
|
| 822 |
+
505,
|
| 823 |
+
646,
|
| 824 |
+
884,
|
| 825 |
+
822
|
| 826 |
+
],
|
| 827 |
+
"page_idx": 4
|
| 828 |
+
},
|
| 829 |
+
{
|
| 830 |
+
"type": "page_footnote",
|
| 831 |
+
"text": "8We also report the Seq-Rep-N scores for $N = 2$ 3,4 in Appendix A.4",
|
| 832 |
+
"bbox": [
|
| 833 |
+
112,
|
| 834 |
+
819,
|
| 835 |
+
487,
|
| 836 |
+
845
|
| 837 |
+
],
|
| 838 |
+
"page_idx": 4
|
| 839 |
+
},
|
| 840 |
+
{
|
| 841 |
+
"type": "page_footnote",
|
| 842 |
+
"text": "From the validation dataset of PG-19, we randomly sample 5K samples, where in each sample, the first 128 tokens is used as the prefix. For datastore construction, we sample 1536 books from the training dataset only (filtering out the first $10\\%$ and last $10\\%$ tokens of each books for irrelevant content such as copyright statements). Our training dataset",
|
| 843 |
+
"bbox": [
|
| 844 |
+
112,
|
| 845 |
+
846,
|
| 846 |
+
487,
|
| 847 |
+
917
|
| 848 |
+
],
|
| 849 |
+
"page_idx": 4
|
| 850 |
+
},
|
| 851 |
+
{
|
| 852 |
+
"type": "page_footnote",
|
| 853 |
+
"text": "and datastore consist of 98M tokens, similar in size to those in the WikiText-103 dataset.",
|
| 854 |
+
"bbox": [
|
| 855 |
+
507,
|
| 856 |
+
831,
|
| 857 |
+
882,
|
| 858 |
+
854
|
| 859 |
+
],
|
| 860 |
+
"page_idx": 4
|
| 861 |
+
},
|
| 862 |
+
{
|
| 863 |
+
"type": "page_footnote",
|
| 864 |
+
"text": "10Consistent with Drozdov et al. (2022), the model trained on PG-19 gives both worse MAUVE score and perplexity compared to the model trained on WikiText-103 since the PG-19 is a more diverse and challenging dataset.",
|
| 865 |
+
"bbox": [
|
| 866 |
+
507,
|
| 867 |
+
856,
|
| 868 |
+
880,
|
| 869 |
+
904
|
| 870 |
+
],
|
| 871 |
+
"page_idx": 4
|
| 872 |
+
},
|
| 873 |
+
{
|
| 874 |
+
"type": "page_footnote",
|
| 875 |
+
"text": "11https://www.upwork.com,https://labelstud.io/",
|
| 876 |
+
"bbox": [
|
| 877 |
+
527,
|
| 878 |
+
904,
|
| 879 |
+
873,
|
| 880 |
+
917
|
| 881 |
+
],
|
| 882 |
+
"page_idx": 4
|
| 883 |
+
},
|
| 884 |
+
{
|
| 885 |
+
"type": "page_number",
|
| 886 |
+
"text": "15027 5",
|
| 887 |
+
"bbox": [
|
| 888 |
+
477,
|
| 889 |
+
927,
|
| 890 |
+
524,
|
| 891 |
+
952
|
| 892 |
+
],
|
| 893 |
+
"page_idx": 4
|
| 894 |
+
},
|
| 895 |
+
{
|
| 896 |
+
"type": "image",
|
| 897 |
+
"img_path": "images/fa9c3432d9131db7d667d992877a71834fdfc4e7d8fe16b8c1815e957a8c46c2.jpg",
|
| 898 |
+
"image_caption": [
|
| 899 |
+
"Figure 1: The plot presents how many times each type of generations ( $k\\mathrm{NN}$ -LM or GPT-2) is chosen by the evaluators. The dark area in each bar shows that the choices were made confidently. The light area represents the choices between $k\\mathrm{NN}$ -LM and GPT-2 that were hard but the evaluator still chose the corresponding type. Overall, annotators preferred GPT-2 baseline texts $51\\%$ of the time compared to $49\\%$ for $k\\mathrm{NN}$ -LM."
|
| 900 |
+
],
|
| 901 |
+
"image_footnote": [],
|
| 902 |
+
"bbox": [
|
| 903 |
+
139,
|
| 904 |
+
84,
|
| 905 |
+
463,
|
| 906 |
+
221
|
| 907 |
+
],
|
| 908 |
+
"page_idx": 5
|
| 909 |
+
},
|
| 910 |
+
{
|
| 911 |
+
"type": "text",
|
| 912 |
+
"text": "annotators were experienced in text generation evaluation and hired after careful selection.",
|
| 913 |
+
"bbox": [
|
| 914 |
+
112,
|
| 915 |
+
375,
|
| 916 |
+
487,
|
| 917 |
+
405
|
| 918 |
+
],
|
| 919 |
+
"page_idx": 5
|
| 920 |
+
},
|
| 921 |
+
{
|
| 922 |
+
"type": "text",
|
| 923 |
+
"text": "The annotators were given a prefix and two continuations of the context (one generated by the baseline LM and one generated with retrieval, with randomized presentation order). The evaluators' task was to decide which continuation is better, indicate whether it was hard to choose between the two following Thai et al. (2022), and justify their choice in 3 to 4 sentences.[12] The evaluation focused on whether the generated text is grammatical, fluent, consistent, and logical.[13]",
|
| 924 |
+
"bbox": [
|
| 925 |
+
112,
|
| 926 |
+
407,
|
| 927 |
+
489,
|
| 928 |
+
568
|
| 929 |
+
],
|
| 930 |
+
"page_idx": 5
|
| 931 |
+
},
|
| 932 |
+
{
|
| 933 |
+
"type": "text",
|
| 934 |
+
"text": "Human evaluation shows no definitive winner between kNN-LM and GPT-2 either: On aggregate, baseline GPT-2 generations were preferred $51\\%$ of the time, vs. $49\\%$ for kNN-LM. Additionally, the three annotators report that the decision was difficult for $37\\%$ of all cases. For Rater1 and Rater3, the rates of difficult to choose are as high as $42\\%$ and $47\\%$ while for Rater2 it is $22\\%$ . Out of the 45 comparison pairs, the three annotators only agree on their choices in 17 instances $(37.78\\%)$ , resulting in a Fleiss Kappa score 0.17 (slight agreement). Figure 1 presents the evaluator preference when comparing the kNN-LM to GPT-2 generations.",
|
| 935 |
+
"bbox": [
|
| 936 |
+
112,
|
| 937 |
+
579,
|
| 938 |
+
489,
|
| 939 |
+
802
|
| 940 |
+
],
|
| 941 |
+
"page_idx": 5
|
| 942 |
+
},
|
| 943 |
+
{
|
| 944 |
+
"type": "text",
|
| 945 |
+
"text": "Both models make catastrophic errors at similar rates: A qualitative analysis of the the evaluators'",
|
| 946 |
+
"text_level": 1,
|
| 947 |
+
"bbox": [
|
| 948 |
+
112,
|
| 949 |
+
814,
|
| 950 |
+
487,
|
| 951 |
+
845
|
| 952 |
+
],
|
| 953 |
+
"page_idx": 5
|
| 954 |
+
},
|
| 955 |
+
{
|
| 956 |
+
"type": "text",
|
| 957 |
+
"text": "justifications reveals that both kNN-LM and GPT-2 make catastrophic mistakes. Table 5 gives four examples of bad continuations, along with the evaluators' comments and our categorization of the errors. In the first row of the table, Continuation A generated by the kNN-LM contains repetitive content (i.e., $==ZAPU$ retreat $==$ ), and confuses ZAPA and ZIPRA at multiple places. The GPT-2 continuation in the second row states that a person was born in 1584 but was still alive in 1742; the generation in the third row by the kNN-LM claims that U.S. Route 75 curves both northeast and northwest in the northbound direction. Furthermore, both the GPT-2 and kNN-LM's generations change topics abruptly as shown in the lower half of Table 5. Overall, the quantitative and qualitative analyses of the human evaluation results show that the kNN-LM does not clearly improve over its base GPT-2 model despite its significant improvement in perplexity.",
|
| 958 |
+
"bbox": [
|
| 959 |
+
507,
|
| 960 |
+
84,
|
| 961 |
+
884,
|
| 962 |
+
391
|
| 963 |
+
],
|
| 964 |
+
"page_idx": 5
|
| 965 |
+
},
|
| 966 |
+
{
|
| 967 |
+
"type": "text",
|
| 968 |
+
"text": "5 Why do kNN-LMs fail to improve text generation quality?",
|
| 969 |
+
"text_level": 1,
|
| 970 |
+
"bbox": [
|
| 971 |
+
507,
|
| 972 |
+
404,
|
| 973 |
+
873,
|
| 974 |
+
439
|
| 975 |
+
],
|
| 976 |
+
"page_idx": 5
|
| 977 |
+
},
|
| 978 |
+
{
|
| 979 |
+
"type": "text",
|
| 980 |
+
"text": "Our evaluations (both human and automatic) do not show a significant quality increase when interpolating an LM's predicted probability distribution with one formed via retrieval over large external datastores. In this section, we try to understand why we do not observe an improvement by empirically analyzing the $k$ NN-LM and find two potential reasons: (1) despite lowering the aggregate perplexity, $k$ NN-LMs only improve the perplexity of $42\\%$ of all test tokens, which suggests that the improved quality of a subset of tokens could be counter-balanced by worsened predictions on other tokens that do not benefit from the $k$ NN-LM. Moreover, we find the entropy of the retrieval distribution to increase at a faster rate than that of the baseline LM as the model generates longer sequences. This difference implies that the retriever distribution is getting noisier as more tokens are sampled, potentially due to the exposure bias stemming from the retriever having to rely on the sampled text as the query.",
|
| 981 |
+
"bbox": [
|
| 982 |
+
507,
|
| 983 |
+
449,
|
| 984 |
+
884,
|
| 985 |
+
771
|
| 986 |
+
],
|
| 987 |
+
"page_idx": 5
|
| 988 |
+
},
|
| 989 |
+
{
|
| 990 |
+
"type": "text",
|
| 991 |
+
"text": "5.1 KNN-LMs only benefits a subset of tokens",
|
| 992 |
+
"text_level": 1,
|
| 993 |
+
"bbox": [
|
| 994 |
+
507,
|
| 995 |
+
784,
|
| 996 |
+
882,
|
| 997 |
+
799
|
| 998 |
+
],
|
| 999 |
+
"page_idx": 5
|
| 1000 |
+
},
|
| 1001 |
+
{
|
| 1002 |
+
"type": "text",
|
| 1003 |
+
"text": "Many studies have shown that kNN-LMs decrease perplexity via retrieval interpolation (Khandelwal et al., 2020; Alon et al., 2022; Drozdov et al., 2022). Previous work (Drozdov et al., 2022; Zhong et al., 2022) has also suggested that kNN-LMs benefit the inference of tokens of various part-of-speech (POS) tags to different degrees (by lowering the perplexity",
|
| 1004 |
+
"bbox": [
|
| 1005 |
+
507,
|
| 1006 |
+
806,
|
| 1007 |
+
882,
|
| 1008 |
+
919
|
| 1009 |
+
],
|
| 1010 |
+
"page_idx": 5
|
| 1011 |
+
},
|
| 1012 |
+
{
|
| 1013 |
+
"type": "page_footnote",
|
| 1014 |
+
"text": "$^{12}$ A screenshot of our evaluation platform can be found in Appendix A.",
|
| 1015 |
+
"bbox": [
|
| 1016 |
+
112,
|
| 1017 |
+
854,
|
| 1018 |
+
487,
|
| 1019 |
+
879
|
| 1020 |
+
],
|
| 1021 |
+
"page_idx": 5
|
| 1022 |
+
},
|
| 1023 |
+
{
|
| 1024 |
+
"type": "page_footnote",
|
| 1025 |
+
"text": "13 Each evaluator evaluated 45 pairs of continuations generated by kNN-LM and GPT-2. Each evaluator was paid $50 for their work.",
|
| 1026 |
+
"bbox": [
|
| 1027 |
+
112,
|
| 1028 |
+
879,
|
| 1029 |
+
487,
|
| 1030 |
+
917
|
| 1031 |
+
],
|
| 1032 |
+
"page_idx": 5
|
| 1033 |
+
},
|
| 1034 |
+
{
|
| 1035 |
+
"type": "page_number",
|
| 1036 |
+
"text": "15028 6",
|
| 1037 |
+
"bbox": [
|
| 1038 |
+
477,
|
| 1039 |
+
927,
|
| 1040 |
+
524,
|
| 1041 |
+
952
|
| 1042 |
+
],
|
| 1043 |
+
"page_idx": 5
|
| 1044 |
+
},
|
| 1045 |
+{
+    "type": "image",
+    "img_path": "images/460688bbf0165ed0c72ce508a0c5c2fe32d5e7b864b1ff83ad1f6e8803ce63ad.jpg",
+    "image_caption": [
+        "Figure 2: Across all POS tags, we observe that $k$ NN-LM does not increase the probability of the majority of gold next token predictions. For verbs, pronouns, and adjectives, it only helps $< 40\\%$ of the time (i.e., it hurts the predictions of the majority of these tokens)."
+    ],
+    "image_footnote": [],
+    "bbox": [
+        184,
+        84,
+        415,
+        253
+    ],
+    "page_idx": 6
+},
+{
+    "type": "text",
+    "text": "of the gold token). However, these works focus on aggregate perplexity averaged across tokens in the test data but do not look at individual tokens and the percentage that actually benefit from retrieval.",
+    "bbox": [
+        112,
+        399,
+        487,
+        464
+    ],
+    "page_idx": 6
+},
+{
+    "type": "text",
+    "text": "Using the dataset we selected from WikiText-103, we compute the percentage of gold tokens from our test examples that are assigned lower perplexity (higher probability) by the $k$ NN-LM compared to the base LM. We find that only $42\\%$ of the tokens benefit from $k$ NN-LMs, while the remaining $58\\%$ of the tokens are adversely affected by the $k$ NN-LM (i.e., the $k$ NN-LM assigns a lower probability to the gold token compared to the base-LM). Moreover, we calculate the percentage of gold tokens that benefit from $k$ NN-LM in each POS category (Figure 2) and consistently find a similar result: $k$ NN-LM only helps reduce the perplexity for a smaller subset of tokens. We show examples of $k$ NN-LM negatively impacting the next-token prediction (assigning the gold token a lower probability than the base-LM) in Table 4.",
+    "bbox": [
+        112,
+        482,
+        489,
+        755
+    ],
+    "page_idx": 6
+},
+{
+    "type": "text",
+    "text": "This means that despite lowering the aggregate perplexity across the test sets, the $k\\mathrm{NN}$ -LM is more likely to hurt, instead of help, the inference of each individual token. Therefore, we hypothesize that during text generation, as the model samples a sequence of tokens, the advantages brought by $k\\mathrm{NN}$ -LM to a smaller subset of tokens are offset by other tokens, for which $k\\mathrm{NN}$ -LM may even have a detrimental impact on the inference.",
+    "bbox": [
+        112,
+        774,
+        489,
+        917
+    ],
+    "page_idx": 6
+},
+{
+    "type": "image",
+    "img_path": "images/805718a5c0a1964b8ee8190681dbe6f769fda71ce553cf988e1fbe1ea973b498.jpg",
+    "image_caption": [
+        "Figure 3: We plot the ratio between the Shannon entropy of the retriever's next-token distribution and that of the baseline LM softmax distribution, as the number of generated tokens increases. The ratio increases for longer model-generated sequences, indicating that the retriever becomes less confident than the baseline LM as decoding progresses."
+    ],
+    "image_footnote": [],
+    "bbox": [
+        551,
+        92,
+        823,
+        244
+    ],
+    "page_idx": 6
+},
+{
+    "type": "text",
+    "text": "5.2 The retriever becomes less reliable with longer generated sequences",
+    "text_level": 1,
+    "bbox": [
+        507,
+        386,
+        865,
+        418
+    ],
+    "page_idx": 6
+},
+{
+    "type": "text",
+    "text": "Additionally, we observe that as the model generates longer sequences of text, the retriever component from $k$ NN-LM becomes less confident and reliable in returning a high-quality next-token distribution. Since the $k$ NN-LM relies on interpolating the next-token distribution from the baseline LM and that from the retriever, a lower-quality retriever distribution can compromise the resulting next-token distribution and adversely affect the text generation performance.",
+    "bbox": [
+        505,
+        425,
+        884,
+        583
+    ],
+    "page_idx": 6
+},
+{
+    "type": "text",
+    "text": "We plot the ratio of the Shannon entropy (Shannon, 2001) of the retriever distribution to that of the baseline LM distribution over the next token (with respect to the index of the token generated) and find that the retriever's entropy increases at a faster rate than that of the base-LM (Figure 3). A higher entropy indicates a lower level of confidence (closer to a uniform distribution over all tokens) and suggests that the retriever, when sampling long sequences, may be less reliable in identifying high-quality tokens.",
+    "bbox": [
+        507,
+        587,
+        882,
+        763
+    ],
+    "page_idx": 6
+},
+{
+    "type": "text",
+    "text": "We hypothesize that the worsened reliability of the retriever over longer sampled sequences is likely a result of exposure bias during text generation (i.e., at test time, the retriever has to rely on model-generated queries that may contain artifacts or other distributional differences from human-written text). The retriever in kNN-LM",
+    "bbox": [
+        507,
+        765,
+        880,
+        876
+    ],
+    "page_idx": 6
+},
+{
+    "type": "page_footnote",
+    "text": "Given a $|V|$ -dimensional probability distribution $p$ , the entropy is computed as: $H(p) = -\\sum_{i=1}^{|V|} p_i \\log(p_i)$",
+    "bbox": [
+        507,
+        889,
+        880,
+        919
+    ],
+    "page_idx": 6
+},
+{
+    "type": "page_number",
+    "text": "15029 7",
+    "bbox": [
+        477,
+        927,
+        524,
+        951
+    ],
+    "page_idx": 6
+},
+{
+    "type": "image",
+    "img_path": "images/1834e830204d09df44e31d325a350aea213c99cedbcb8a303f02964701be86f7.jpg",
+    "image_caption": [
+        "Figure 4: The interpolation coefficient $\\lambda$ optimized for validation perplexity does not necessarily lead to the best text generation quality (measured by MAUVE)."
+    ],
+    "image_footnote": [],
+    "bbox": [
+        146,
+        87,
+        455,
+        252
+    ],
+    "page_idx": 7
+},
+{
+    "type": "text",
+    "text": "is non-parametric since both the input prefix and the context from the datastore are encoded by the LM (without any additional retrieval parameters), which has been adapted to the training corpus of WikiText-103. However, during text generation, as the model iteratively samples tokens and appends them to the input prefix, the input context is more likely to deviate from those in the training corpus and hence becomes more out-of-distribution and challenging for the retriever to process accurately.",
+    "bbox": [
+        112,
+        336,
+        489,
+        498
+    ],
+    "page_idx": 7
+},
+{
+    "type": "text",
+    "text": "6 Discussion",
+    "text_level": 1,
+    "bbox": [
+        112,
+        510,
+        240,
+        526
+    ],
+    "page_idx": 7
+},
+{
+    "type": "text",
+    "text": "In addition to the limitations of interpolation-based LMs described in Section 5, we hypothesize that there are other potential factors that contribute to the shortcomings of the kNN-LM for text generation. Specifically, it is possible that the interpolation may impede the language models' ability for self-recovery, and also that integrating the retrieval distribution can potentially introduce additional burdens related to hyperparameter tuning, which may not be optimized for text generation. We discuss these potential issues here as they are interesting avenues to explore for future work.",
+    "bbox": [
+        112,
+        537,
+        489,
+        731
+    ],
+    "page_idx": 7
+},
+{
+    "type": "text",
+    "text": "Retrieval interpolation may damage the self-recovery ability of LMs: Language models exhibit some degree of self-recovery abilities (He et al., 2021b), i.e., they can regain fluency and coherence even after previously generating poor-quality tokens. This self-recovery capability is attributed to the LM's ability to pay close attention to recent context and ignore the long-range past context. However, we hypothesize that when interpolation-based LMs encounter artifacts (e.g., non-factual or disfluent text) in a distorted pre",
+    "bbox": [
+        112,
+        741,
+        489,
+        919
+    ],
+    "page_idx": 7
+},
+{
+    "type": "text",
+    "text": "fix $\\tilde{q}_t$ , they may be less likely to recover, as the retrievals may further increase the probability of completions that resemble those artifacts. Furthermore, as we continuously sample and append tokens to the prefix, which the retriever uses as the query to construct $P_{KNN}(w_t|\\tilde{q}_t)$ , the retriever may encounter additional exposure bias as shown in Section 5.2, negatively impacting the quality of $P_{KNN}(w_t|\\tilde{q}_t)$ . Thus, even when the baseline LMs \"recover\" from distorted past context by producing a high-quality distribution over the next-token prediction $P_{LM}(w_t|\\tilde{q}_t)$ , the retriever may re-introduce the distortion by interpolating $P_{LM}(w_t|\\tilde{q}_t)$ with $P_{KNN}(w_t|\\tilde{q}_t)$ .",
+    "bbox": [
+        505,
+        84,
+        884,
+        311
+    ],
+    "page_idx": 7
+},
+{
+    "type": "text",
+    "text": "Hyperparameters introduced by $k$ NN-LM are not optimized for text generation: The $k$ NN-LM introduces two important hyperparameters, namely the relative weight between the two distributions $\\lambda$ , as well as the softmax temperature for the $k$ NN distribution $\\tau_{KNN}$ . Recent work (Xu et al., 2023) highlights the significance of tuning $\\tau_{KNN}$ for achieving optimal $k$ NN-LM performance, as measured by perplexity. Similarly, we investigate the coefficient parameter $\\lambda$ , which plays a vital role as it controls the relative importance assigned to the $k$ NN retriever and the baseline LM. Existing works tune $\\lambda$ by the perplexity on the validation set. However, from Figure 4, we observe that the $\\lambda$ values that produce the lowest perplexity may not translate to the optimal value for text generation quality (measured by MAUVE). Instead of tuning $\\lambda$ to optimize perplexity, we may want to consider a context-dependent $\\lambda$ as in Drozdov et al. (2022) for generation (e.g., only use the retrieval distribution when it is very confident). Finally, interpolation may warrant the design of new decoding algorithms specialized for retrieval-augmented generation.",
+    "bbox": [
+        507,
+        318,
+        884,
+        689
+    ],
+    "page_idx": 7
+},
+{
+    "type": "text",
+    "text": "7 Conclusion",
+    "text_level": 1,
+    "bbox": [
+        509,
+        700,
+        640,
+        715
+    ],
+    "page_idx": 7
+},
+{
+    "type": "text",
+    "text": "In this work, we show that despite the significant perplexity improvement brought by interpolation-based retrieval-augmented LMs such as $k$ NN-LMs, such methods fail to improve the LMs' text generation performance. The text generation quality between $k$ NN-LMs and baseline LMs without retrieval shows no significant difference according to both automatic text generation evaluation metrics and human evaluation. Upon closer analysis, we identify flaws in using $k$ NN-LMs to perform autoregressive text generation: the method only benefits a minority of token predictions, and the retriever's",
+    "bbox": [
+        507,
+        726,
+        884,
+        917
+    ],
+    "page_idx": 7
+},
+{
+    "type": "page_number",
+    "text": "15030 8",
+    "bbox": [
+        477,
+        927,
+        526,
+        952
+    ],
+    "page_idx": 7
+},
+{
+    "type": "text",
+    "text": "quality deteriorates when generating long-form text. We hope our findings can inspire future research to design better training and inference methods so that the impressive improvement of kNN-LMs in perplexity can better be translated into gains in text generation quality.",
+    "bbox": [
+        112,
+        84,
+        487,
+        181
+    ],
+    "page_idx": 8
+},
+{
+    "type": "text",
+    "text": "Ethics Statement",
+    "text_level": 1,
+    "bbox": [
+        114,
+        192,
+        265,
+        206
+    ],
+    "page_idx": 8
+},
+{
+    "type": "text",
+    "text": "In this work, we investigate the text generation quality of language models. Language models can generate text that is harmful, offensive, or unfaithful. We advise using caution when relying on language models to generate text and adopting post-processing strategies on the language-model-generated text to remove undesirable content. Additionally, training large language models can bring significant energy costs. We hope that our analysis of the kNN-LM and future works on this topic may lead to more efficient methods of using language models without the need to re-train such models.",
+    "bbox": [
+        112,
+        218,
+        489,
+        411
+    ],
+    "page_idx": 8
+},
+{
+    "type": "text",
+    "text": "Limitations",
+    "text_level": 1,
+    "bbox": [
+        114,
+        422,
+        218,
+        437
+    ],
+    "page_idx": 8
+},
+{
+    "type": "text",
+    "text": "Our work does not study all data, model, and evaluation configurations of interpolation-based LMs. Additionally, we focus on the 100M token datastore size, although kNN-LM can scale effectively to datastores of 3B words. Using a larger datastore may lead to further perplexity decreases, but we do not think this contradicts our finding that text generation degrades as retrieval quality does. We focus exclusively on interpolation-based LMs in this work, but similar issues for other retrieval-augmented LMs such as RETRO (Borgeaud et al., 2021) may also exist and be worth investigating further. Finally, our human evaluation does not specifically account for diversity, although some dimensions of this are captured by our automated metrics. Due to the overall low quality of text generated by LMs with and without retrieval, reading their outputs results in a high cognitive burden on annotators, which might be ameliorated by using stronger LMs than GPT-2.",
+    "bbox": [
+        112,
+        447,
+        489,
+        770
+    ],
+    "page_idx": 8
+},
+{
+    "type": "text",
+    "text": "Acknowledgements",
+    "text_level": 1,
+    "bbox": [
+        114,
+        781,
+        285,
+        797
+    ],
+    "page_idx": 8
+},
+{
+    "type": "text",
+    "text": "We thank Zexuan Zhong and Danqi Chen for helpful discussion on TRIME and kNN-LM, and the UMass NLP group for feedback and discussion. We also thank the anonymous reviewers for their helpful comments.",
+    "bbox": [
+        112,
+        806,
+        487,
+        885
+    ],
+    "page_idx": 8
+},
+{
+    "type": "text",
+    "text": "This project was partially supported by awards IIS-2202506 and IIS-2046248 from the National",
+    "bbox": [
+        112,
+        887,
+        487,
+        917
+    ],
+    "page_idx": 8
+},
+{
+    "type": "text",
+    "text": "Science Foundation (NSF). This research was also supported in part by a research gift from Adobe.",
+    "bbox": [
+        507,
+        84,
+        880,
+        116
+    ],
+    "page_idx": 8
+},
+{
+    "type": "text",
+    "text": "References",
+    "text_level": 1,
+    "bbox": [
+        510,
+        143,
+        608,
+        159
+    ],
+    "page_idx": 8
+},
+{
+    "type": "list",
+    "sub_type": "ref_text",
+    "list_items": [
+        "Uri Alon, Frank Xu, Junxian He, Sudipta Sengupta, Dan Roth, and Graham Neubig. 2022. Neuro-symbolic language modeling with automaton-augmented retrieval. In International Conference on Machine Learning, pages 468-485. PMLR.",
+        "Soumya Sankar Basu, Ankit Singh Rawat, and Manzil Zaheer. 2022. Generalization properties of retrieval-based models. ArXiv, abs/2210.02617.",
+        "Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, T. W. Hennigan, Saffron Huang, Lorenzo Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and L. Sifre. 2021. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning.",
+        "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.",
+        "Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799.",
+        "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879, Vancouver, Canada. Association for Computational Linguistics.",
+        "Andrew Drozdov, Shufan Wang, Razieh Rahimi, Andrew McCallum, Hamed Zamani, and Mohit Iyyer. 2022. You can't pick your neighbors, or can you? when and how to rely on retrieval in the kNN-LM. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2997-3007, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics."
+    ],
+    "bbox": [
+        510,
+        167,
+        885,
+        919
+    ],
+    "page_idx": 8
+},
+{
+    "type": "page_number",
+    "text": "15031 9",
+    "bbox": [
+        477,
+        927,
+        522,
+        952
+    ],
+    "page_idx": 8
+},
+{
+    "type": "list",
+    "sub_type": "ref_text",
+    "list_items": [
+        "Edouard Grave, Armand Joulin, and Nicolas Usunier. 2017. Improving neural language models with a continuous cache. In International Conference on Learning Representations.",
+        "Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. In International Conference on Machine Learning.",
+        "Tatsunori B. Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natural language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1689-1701, Minneapolis, Minnesota. Association for Computational Linguistics.",
+        "Hangfeng He, Hongming Zhang, and Dan Roth. 2022. Rethinking with retrieval: Faithful large language model inference. ArXiv, abs/2301.00303.",
+        "Junxian He, Graham Neubig, and Taylor Berg-Kirkpatrick. 2021a. Efficient nearest neighbor language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5703-5714, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
+        "Tianxing He, Jingzhao Zhang, Zhiming Zhou, and James Glass. 2021b. Exposure bias versus self-recovery: Are distortions really incremental for autoregressive text generation? In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5087-5102, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
+        "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations.",
+        "Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874-880, Online. Association for Computational Linguistics.",
+        "Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models.",
+        "Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neighbor machine translation. In International Conference on Learning Representations (ICLR)."
+    ],
+    "bbox": [
+        115,
+        85,
+        485,
+        917
+    ],
+    "page_idx": 9
+},
+{
+    "type": "list",
+    "sub_type": "ref_text",
+    "list_items": [
+        "Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through Memorization: Nearest Neighbor Language Models. In International Conference on Learning Representations (ICLR).",
+        "Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Mohit Iyyer, Pradeep Dasigi, Arman Cohan, and Kyle Lo. 2023. LongEval: Guidelines for human evaluation of faithfulness in long-form summarization. In Conference of the European Chapter of the Association for Computational Linguistics.",
+        "Kalpesh Krishna, Yapei Chang, John Wieting, and Mohit Iyyer. 2022. RankGen: Improving text generation with large ranking models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 199-232, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.",
+        "Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. 2021. Hurdles to progress in long-form question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4940-4957, Online. Association for Computational Linguistics.",
+        "Tian Lan, Deng Cai, Yan Wang, Heyan Huang, and Xian-Ling Mao. 2023. Copy is all you need. In The Eleventh International Conference on Learning Representations.",
+        "Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Pascale Fung, Mohammad Shoeybi, and Bryan Catanzaro. 2022. Factuality enhanced language models for open-ended text generation. In Advances in Neural Information Processing Systems.",
+        "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459-9474. Curran Associates, Inc.",
+        "Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. 2023. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. In ACL.",
+        "R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, and Asli Celikyilmaz. 2021. How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN. ArXiv, abs/2111.09509.",
+        "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models."
+    ],
+    "bbox": [
+        510,
+        85,
+        880,
+        916
+    ],
+    "page_idx": 9
+},
+{
+    "type": "page_number",
+    "text": "15032 10",
+    "bbox": [
+        477,
+        928,
+        524,
+        951
+    ],
+    "page_idx": 9
+},
+{
+    "type": "list",
+    "sub_type": "ref_text",
+    "list_items": [
+        "Donald Metzler, Fernando Diaz, Hamed Zamani, Mike Bendersky, and Mostafa Dehghani. 2022. Retrieval enhanced machine learning. In SIGIR 2022: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Perspectives Track).",
+        "Grégoire Mialon, Roberto Dessi, Maria Lomeli, Christoforos Nalmpantis, Ramakanth Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. 2023. Augmented language models: a survey. ArXiv, abs/2302.07842.",
+        "Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, and Bing Xiang. 2021. Entity-level factual consistency of abstractive text summarization. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2727-2733, Online. Association for Computational Linguistics.",
+        "Jekaterina Novikova, Ondrej Dusek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241-2252, Copenhagen, Denmark. Association for Computational Linguistics.",
+        "Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. MAUVE: Measuring the gap between neural text and human text using divergence frontiers. In Neural Information Processing Systems.",
+        "Tiago Pimentel, Clara Isabel Meister, and Ryan Cotterell. 2023. On the usefulness of embeddings, clusters and strings for text generation evaluation. In The Eleventh International Conference on Learning Representations.",
+        "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
+        "Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John F. J. Mellor, Irina Higgins, Antonia Creswell, Nathan McAleese, Amy Wu, Erich Elsen, Siddhant M. Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, L. Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, N. K. Grigorev, Doug Fritz, Thibault Sottiaux, Mantas Pajarskas, Tobias Pohlen, Zhitao Gong"
+    ],
+    "bbox": [
+        115,
+        85,
+        489,
+        917
+    ],
+    "page_idx": 10
+},
+{
+    "type": "list",
+    "sub_type": "ref_text",
+    "list_items": [
+        "Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew G. Johnson, Blake A. Hechtman, Laura Weidinger, Jason Gabriel, William S. Isaac, Edward Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem W. Ayoub, Jeff Stanway, L. L. Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. 2021. Scaling language models: Methods, analysis & insights from training gopher. ArXiv, abs/2112.11446.",
+        "Jack W Rae, Anna Potapenko, Siddhant M Jayakumar, Chloe Hillier, and Timothy P Lillicrap. 2019. Compressive transformers for long-range sequence modelling. arXiv preprint.",
+        "Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. 2020. Compressive transformers for long-range sequence modelling. In International Conference on Learning Representations.",
+        "Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models.",
+        "Claude Elwood Shannon. 2001. A mathematical theory of communication. ACM SIGMOBILE mobile computing and communications review, 5(1):3-55.",
+        "Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. REPLUG: Retrieval-augmented black-box language models. ArXiv, abs/2301.12652.",
+        "Katherine Thai, Marzena Karpinska, Kalpesh Krishna, Bill Ray, Moira Inghilleri, John Wieting, and Mohit Iyyer. 2022. Exploring document-level literary machine translation with parallel paragraphs from world literature. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9882-9902, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.",
+        "Maxim Tkachenko, Mikhail Malyuk, Andrey Holmanyuk, and Nikolai Liubimov. 2020-2022. Label Studio: Data labeling software. Open source software available from https://github.com/heartexlabs/label-studio.",
+        "H. Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. ArXiv, abs/2212.10509.",
+        "Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2020. Neural text generation with unlikelihood training. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020."
+    ],
+    "bbox": [
+        510,
+        85,
+        882,
+        917
+    ],
+    "page_idx": 10
+},
+{
+    "type": "page_number",
+    "text": "15033 11",
+    "bbox": [
+        477,
+        928,
+        524,
+        951
+    ],
+    "page_idx": 10
+},
+{
+    "type": "text",
+    "text": "Yuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing transformers. In International Conference on Learning Representations.",
+    "bbox": [
+        115,
+        85,
+        489,
+        137
+    ],
+    "page_idx": 11
+},
+{
+    "type": "text",
+    "text": "Frank F. Xu, Uri Alon, and Graham Neubig. 2023. Why do nearest neighbor language models work? ArXiv, abs/2301.02828.",
+    "bbox": [
+        114,
+        148,
+        487,
+        187
+    ],
+    "page_idx": 11
+},
+{
+    "type": "text",
+    "text": "Dani Yogatama, Cyprien de Masson d'Autume, and Lingpeng Kong. 2021. Adaptive semiparametric language models. Transactions of the Association for Computational Linguistics, 9:362-373.",
+    "bbox": [
+        114,
+        197,
+        487,
+        250
+    ],
+    "page_idx": 11
+},
+{
+    "type": "text",
+    "text": "Zexuan Zhong, Tao Lei, and Danqi Chen. 2022. Training language models with memory augmentation. In Conference on Empirical Methods in Natural Language Processing.",
+    "bbox": [
+        114,
+        260,
+        487,
+        313
+    ],
+    "page_idx": 11
+},
+
{
|
| 1593 |
+
"type": "text",
|
| 1594 |
+
"text": "A Appendix",
|
| 1595 |
+
"text_level": 1,
|
| 1596 |
+
"bbox": [
|
| 1597 |
+
114,
|
| 1598 |
+
326,
|
| 1599 |
+
236,
|
| 1600 |
+
343
|
| 1601 |
+
],
|
| 1602 |
+
"page_idx": 11
|
| 1603 |
+
},
|
| 1604 |
+
{
|
| 1605 |
+
"type": "text",
|
| 1606 |
+
"text": "A.1 Examples of kNN-LM hurting the inference of the next-token",
|
| 1607 |
+
"text_level": 1,
|
| 1608 |
+
"bbox": [
|
| 1609 |
+
114,
|
| 1610 |
+
351,
|
| 1611 |
+
430,
|
| 1612 |
+
381
|
| 1613 |
+
],
|
| 1614 |
+
"page_idx": 11
|
| 1615 |
+
},
|
| 1616 |
+
{
|
| 1617 |
+
"type": "text",
|
| 1618 |
+
"text": "We show examples where $k$ NN-LM hurts the inference of the next-token in Table 4",
|
| 1619 |
+
"bbox": [
|
| 1620 |
+
112,
|
| 1621 |
+
388,
|
| 1622 |
+
487,
|
| 1623 |
+
419
|
| 1624 |
+
],
|
| 1625 |
+
"page_idx": 11
|
| 1626 |
+
},
{
"type": "text",
"text": "A.2 Human evaluation interface and examples",
"text_level": 1,
"bbox": [
114,
431,
416,
462
],
"page_idx": 11
},
{
"type": "text",
"text": "From our human evaluation, we show the interface for our evaluators in Figure 5 and also selected representative examples of evaluators' comments in Table 5.",
"bbox": [
112,
468,
487,
531
],
"page_idx": 11
},
{
"type": "text",
"text": "A.3 Models trained on PG-19 produce unhelpful artifacts",
"text_level": 1,
"bbox": [
114,
543,
430,
574
],
"page_idx": 11
},
{
"type": "text",
"text": "With retrieval from the datastore, the $k\mathrm{NN}$ -LM improves the perplexity on the validation dataset of the PG-19 marginally from 28.9 to 28.2 but does not improve the text generation quality. Both the baseline LM and the $k\mathrm{NN}$ -LM fine-tuned on the PG-19 dataset return artifacts from the dataset (e.g. missing white-spaces and unnecessary line breaks), as shown in Table 6.",
"bbox": [
112,
580,
487,
707
],
"page_idx": 11
},
{
"type": "text",
"text": "A.4 Seq-Rep- $N$ of generated text from the baseline-LM and kNN-LM",
"text_level": 1,
"bbox": [
114,
719,
460,
749
],
"page_idx": 11
},
{
"type": "text",
"text": "Even though kNN-LM does not improve the text generation quality overall, we observe an improvement in lexical diversity (lower Seq-Rep- $N$ ) from kNN-LM on the WikiText-103 dataset in Table 7. However, this improvement in text diversity is obtained at the cost of Entity-F1 (a proxy for factuality).",
"bbox": [
112,
756,
487,
868
],
"page_idx": 11
},
{
"type": "page_number",
"text": "15034 12",
"bbox": [
477,
928,
524,
952
],
"page_idx": 11
},
{
"type": "table",
"img_path": "images/5ec841c479ee2c53687512275d13300913f56c11ddcc4c411aad4b45563e9edd.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Context</td><td colspan=\"2\">Ground-truth</td><td>Most Probable Tokens from base-LM vs kNN-LM</td><td>Analysis</td></tr><tr><td>The lyrics were inspired by a story ...... To me, that's the way a great rock 'n ' roll concert should be : a place where everyone comes together ... Maybe that 's the dream of all art : to break down the barriers and the divisions between</td><td>"people" \nbase-LM \n0.26 \nkNN-LM \n0.23</td><td>probability:</td><td>base-LM: \n"the"(0.20), "us"(0.09), "art"(0.03), "rock"(0.02) \nkNN-LM: \n"the"(0.23), "us"(0.07), "good"(0.02), "art"(0.02)</td><td>In this example the base-LM predicts the ground-truth noun token "people" with the highest probability of all tokens (0.26). However, after interpolating with the retrieval distribution, the kNN-LM decreases the probability of the ground-truth token.</td></tr><tr><td>Richmond finished the 1984 season 12th in points , with 11 .... In the Busch Series , he qualified at the pole position in the two races he entered , and won the Charlotte race . Richmond joined Hendrick Motorsports in 1986 , where he teamed up with veteran crew chief Harry Hyde . It took the team until the middle of the season'</td><td>"to" \nbase-LM \n0.78 \nkNN-LM \n0.64</td><td>probability:</td><td>base-LM: \n","(0.07), "for"(0.03), "when"(0.02), "that"(0.02) \nkNN-LM: \n","(0.10), "for"(0.06), ". ."(0.04), "and"(0.02)</td><td>The ground-truth token to be predicted is the preposition "the", which the base-LM correctly predicts with very high probability. However, the kNN-LM decreases the probability of the ground-truth token significantly compared to the base-LM.</td></tr></table>",
"bbox": [
152,
141,
843,
334
],
"page_idx": 12
},
{
"type": "text",
"text": "Table 4: Examples where kNN-LM hurts the inference of the next token (with different parts of speech such as noun and preposition) by predicting the gold token with a lower probability compared to the base-LM.",
"bbox": [
112,
344,
880,
373
],
"page_idx": 12
},
{
"type": "image",
"img_path": "images/198334f7cfe81891208cab852c6c2026bb10a9667d5e2b282ac696b139bbf3eb.jpg",
"image_caption": [
"Figure 5: The interface of the human evaluation. Each task consists of a context text, two continuations, two choices, and a free-form justification text box."
],
"image_footnote": [],
"bbox": [
127,
500,
872,
816
],
"page_idx": 12
},
{
"type": "footer",
"text": "15035",
"bbox": [
477,
928,
524,
939
],
"page_idx": 12
},
{
"type": "page_number",
"text": "13",
"bbox": [
489,
940,
507,
952
],
"page_idx": 12
},
{
"type": "table",
"img_path": "images/b9866ae9928f7d8f6d8e63be71dbf863cd39941a7fa0732b9f31df440b14ebd9.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Context</td><td>Continuation</td><td>Comment</td><td>Error</td></tr><tr><td>[...] = = = Shootdown = = = A group of ZIPRA guerrillas, armed with a Strela-2 launcher, waited in the bush beneath Flight 825's flightpath, and</td><td>Continuation A-kNN-LM two soldiers opened fire on the guerrillas. [...] The ZIPRA and ZIPRA units exchanged fire, and several ZIPRA members were shot. After the fire was over, the ZIPRA and ZIPRA guerrillas returned fire and began shooting back, and one of the ZIPRA unsuccessfully attacked a ZAPU base at the head of Flight 825, forcing the major to retreat. == = = ZAPU retreat == [...] The ZIPRA moved up the hill and captured several ZIPRA members, forcing them to withdraw. [...]</td><td>Both of these continuations have problems. I chose A because its problems seem to influence the meaning less than the problems in B. [...] Continuation A has some repeated words and in a few spots ZIPRA and ZAPU seem to get mixed up, leading to sentences like this: "The ZIPRA moved up the hill and captured several ZIPRA members, forcing them to withdraw." It was easier to recover from those confusions than from the ones in Continuation B.</td><td>Repetitive content; contradiction</td></tr><tr><td>Hu was born in <unk> County, Anhui Province in 1584 or early 1585. [...] By 1619, Hu had moved to Nanjing where he lived with his wife Wu.</td><td>Continuation B-GPT-2 During this time he also became a leading scholar of the court, a position he held until 1624, [...] His last year at the academy was 1715. In 1725, Hu was appointed secretary of the government [...] In 1727, he was appointed as vice-president of the Academy, and he remained the academy's secretary until 1742, [...]</td><td>I selected that this was a difficult decision [...] However, Continuation B is an impossible choice as it has Hu being born in 1584 and still being alive in 1725 which of course is impossible. 
I was left with process of elimination in my opinion.</td><td>Illogical; wrong chronological details</td></tr><tr><td>The lanes of US 75, which are on the bottom level, are 20 feet (6m) below ground level, giving the structure a total height of 140 feet from bottom to top As part of the project, I-635 was widened to include four dedicated high-occupancy vehicle lanes (HOV lanes) that are barrier-separated and reversible. The roads on the five levels are: Level I: US 75, an eight lane highway, four lanes going each way; Level II: The junction of the two six-lane frontage roads, each having three through</td><td>Continuation A-kNN-LM traffic lanes with one lane in each direction. Level III: In the northbound direction, US 75 curves to the northeast and US 75 curves to the northwest to follow a more southerly routing. The northbound lanes are two lanes in each direction, while the southbound lanes are two lanes. [...] == Major intersections == [...] = Battle of the Kalka River = The Battle of the Kalka River (Polish : Kalka, pronounced ['kaʊ], or ['kaʊ'maʊ]) was a decisive battle of the 15th and 16th centuries of the 15th century in Austrian occupied Poland. The battle was fought in the middle of the Russo</td><td>Continuation A gives more information and is more specific than B. It has some lack of clarity - Level III has the road curving both northeast and northwest - but overall it's more informative than just "the intersection of two frontage roads." I would have ruled out A because of the Battle of the Kalka River section at the bottom. It doesn't really have a place in this text. But B also has the Battle of the Falkland Islands, so that's a wash. A it is.</td><td>Abrupt topic change; illogical</td></tr><tr><td>(Same as above)</td><td>Continuation B-GPT-2 traffic lanes. Level III: The intersection of the two six-lane frontage roads, each having four through traffic lanes. 
Level IV: The intersection of the two six-lane frontage roads, each having two through traffic lanes = = = Construction = = = The project was approved in July 1997 and completed on December 17, 1997, with the groundbreaking ceremony taking place at the State University of New York (SUNY) campus in New York City. [...] = = = Major intersections = = = [...] = Battle of the Falkland Islands =</td><td>(Same as above)</td><td>Abrupt topic change; repetitive content</td></tr></table>",
"bbox": [
152,
85,
848,
664
],
"page_idx": 13
},
{
"type": "table",
"img_path": "images/7e44c2472d8875d14be0bbd87e1ae1736a1a50ea49afaab525eeaf917b258027.jpg",
"table_caption": [
"Table 5: Selected representative examples with evaluators' comments. The error categories in the last column are ours. Contents are shortened for the sake of space."
],
"table_footnote": [],
"table_body": "<table><tr><td>Context</td><td>baseline-LM-generated text</td><td>kNN-LM-generated text</td></tr><tr><td>For my part,something within me answered to Kandinsky's art the first time I met with it. There was no question of looking for representation; harmony had been</td><td>in my brain, and I had been working at it ever since.That is why I was able to tell Kandinsky what was going on within me. Idid not need to be told about his melody. I knew that he was doing itwith the greatest energy and energy and energy and energy.</td><td>in my brain, and I had been working, like theothers, in that direction for some years. It was this which prompted me totry for a musical conception. I wanted to be able to make music to suit the moods and the contrasting circumstances of the moment,</td></tr><tr><td>Before going toMaria, they insist on doing the honors, and showing the objectsinterest the vicinity. So they take us to their barrack, a large-house, and thence to the front.To the latter spot our coachmandriving</td><td>the family out of the house, and we go through the sameprocess with them. Our next station is in the heart of San Mateo, where theyhave a housewith a room on the outside, and ... and afloor and a ceiling.</td><td>the family out of the house, and, with a chuckle, heexplains the advantages of boarding-schools in the United States, aswell as of boarding-schools in France and Spain. I am reminded of thisexception, and feel that there is a difference in the methods of boardingschools in the two countries.</td></tr></table>",
"bbox": [
152,
717,
845,
873
],
"page_idx": 13
},
{
"type": "text",
"text": "Table 6: Both the baseline LM and the kNN-LM generate text that contains artifacts from the dataset, e.g. missing white-spaces between tokens.",
"bbox": [
112,
881,
882,
910
],
"page_idx": 13
},
{
"type": "footer",
"text": "15036",
"bbox": [
477,
927,
524,
939
],
"page_idx": 13
},
{
"type": "page_number",
"text": "14",
"bbox": [
489,
939,
509,
952
],
"page_idx": 13
},
{
"type": "table",
"img_path": "images/ecd8d1f4bbd4569860b84a8d500e91192569e3f8277ebbb396318011ce840ec4.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>Seq-Rep-1</td><td>Seq-Rep-2</td><td>Seq-Rep-3</td><td>Seq-Rep-4</td></tr><tr><td colspan=\"5\">kNN-LM with and without retrieval from Alon et al. (2022)</td></tr><tr><td>GPT-2 small (no retrieval)</td><td>56.7</td><td>26.6</td><td>15.1</td><td>9.65</td></tr><tr><td>GPT-2 small (+ retrieval)</td><td>53.3</td><td>22.5</td><td>11.6</td><td>6.73</td></tr></table>",
"bbox": [
305,
432,
690,
510
],
"page_idx": 14
},
{
"type": "text",
"text": "Table 7: Even though the kNN-LM does not improve the overall text generation quality, we observe higher lexical diversity (lower Seq-Rep- $N$ ) in the kNN-LM-generated text on the WikiText-103 dataset, using the GPT-2 small model as the baseline LM.",
"bbox": [
112,
519,
882,
563
],
"page_idx": 14
},
{
"type": "page_number",
"text": "15037 \n15",
"bbox": [
477,
927,
526,
952
],
"page_idx": 14
}
]
2023/kNN-LM Does Not Improve Open-ended Text Generation/11b9a100-d4b3-4630-a237-79b0f1ed9076_model.json
ADDED
The diff for this file is too large to render.
See raw diff
2023/kNN-LM Does Not Improve Open-ended Text Generation/11b9a100-d4b3-4630-a237-79b0f1ed9076_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7797811be0344661b213749087a4b3b491958365ba6dd83528551344b4bd61d2
size 743028
2023/kNN-LM Does Not Improve Open-ended Text Generation/full.md
ADDED
@@ -0,0 +1,315 @@
# kNN-LM Does Not Improve Open-ended Text Generation

Shufan Wang<sup>1</sup> Yixiao Song<sup>1</sup> Andrew Drozdov<sup>1</sup>
Aparna Garimella<sup>2</sup> Varun Manjunatha<sup>2</sup> Mohit Iyyer<sup>1</sup>

University of Massachusetts Amherst<sup>1</sup> Adobe Research<sup>2</sup>
{shufanwang, yixiaosong, adrozdov, miyyer}@umass.edu {garimell,vmanjuna}@adobe.com

# Abstract

In this paper, we study the generation quality of interpolation-based retrieval-augmented language models (LMs). These methods, best exemplified by the $k$NN-LM (Khandelwal et al., 2020), interpolate the LM's predicted distribution of the next word with a distribution formed from the most relevant retrievals for a given prefix. While the $k$NN-LM and related methods yield impressive decreases in perplexity, we discover that they do not exhibit corresponding improvements in open-ended generation quality, as measured by both automatic evaluation metrics (e.g., MAUVE) and human evaluations. Digging deeper, we find that interpolating with a retrieval distribution actually increases perplexity compared to the baseline LM for the majority of tokens in the WikiText-103 test set, even though the overall perplexity is lower due to a smaller number of tokens for which perplexity dramatically decreases after interpolation. However, when decoding a long sequence at inference time, significant improvements on this smaller subset of tokens are washed out by slightly worse predictions on most tokens. Furthermore, we discover that the entropy of the retrieval distribution increases faster than that of the base LM as the generated sequence becomes longer, which indicates that retrieval is less reliable when using model-generated text as queries (i.e., is subject to exposure bias). We hope that our analysis spurs future work on improved decoding algorithms and interpolation strategies for retrieval-augmented language models.

# 1 Introduction

Retrieval-augmented language models, which integrate non-parametric dense retrieval with autoregressive next-token prediction, have been validated with strong empirical performance across a variety of tasks (Metzler et al., 2022; Basu et al., 2022; Mialon et al., 2023) in addition to achieving low held-out perplexities on LM benchmarks. In this paper, we study interpolation-based LMs, a subtype of retrieval-augmented LMs that compute the probability of the next token by interpolating between the softmax distribution of the original LM and a token distribution formed by retrieving over an external datastore. These methods, perhaps best exemplified by the kNN-LM (Khandelwal et al., 2020), are particularly attractive because they allow any pretrained LM to be retrofitted with a retrieval module without further training.

Despite these advantages, there is limited understanding about the text generation quality of interpolation-based LMs. In this study, we evaluate the quality of generated text from two such methods, kNN-LM and TRIME (Zhong et al., 2022), against the output of baseline LMs that do not use retrieval. Our evaluations involve open-ended text completions generated using different decoding algorithms on both the WikiText-103 and PG-19 datasets. We discover that interpolation-based LMs do not improve the quality of generated text, as measured by both automatic text generation metrics such as MAUVE (Pillutla et al., 2021) and human evaluation.

This result begs the question of why the text generation quality does not improve, as the perplexity of interpolation-based LMs is substantially lower than that of the baselines. Our analysis of the $k$NN-LM suggests two potential reasons for this lack of improvement:

1. $k$NN-LM actually worsens the predictions of the majority of tokens in the WikiText-103 test set. On aggregate, perplexity improves because of significantly improved predictions on a smaller subset of tokens. However, when generating a long sequence of tokens, these improvements are washed out by the worsened predictions on other tokens.

2. The quality of the retrieval distribution deteriorates faster than that of the LM's predicted distribution as the length of the generation increases; in other words, the retrieval distribution is more vulnerable to exposure bias and can be easily thrown off by artifacts present in model-generated text.

Unlike previous works that rely on perplexity to evaluate language modeling or BLEU to evaluate machine translation quality of kNN-LM-based models (Khandelwal et al., 2021), our work specifically studies the open-ended text generation capability of kNN-LMs with a range of automatic evaluation metrics as well as human evaluation. We demonstrate that, though they significantly lower perplexity, retrievers might also impair text generation performance of kNN-LMs. This finding suggests potential future directions for using retrieval during text generation, such as developing more robust retrieval components or employing retriever mechanisms more selectively during decoding.

# 2 Related Work

We present the most extensive study of open-ended text generation<sup>1</sup> from interpolation-based LMs such as kNN-LM (Khandelwal et al., 2020). Our results reveal that although these methods are effective at reducing perplexity, they can also be detrimental to text generation. Previous work finds that retrieval LMs are improved by selectively incorporating retrieval when conditions are favorable (He et al., 2021a; Alon et al., 2022; Drozdov et al., 2022; Mallen et al., 2023), although they only examine the teacher-forced setting or other tasks, e.g. question answering. The kNN-MT (Khandelwal et al., 2021) explores machine translation, which is a constrained task with short inputs, and thus not a good test of open-ended long-form generation.

The kNN-LM effectively scales retrieval to billions of tokens using a token-level non-parametric interpolation technique first introduced by Grave et al. (2017). Alternative retrieval-augmented models experiment with training the retriever (Zhong et al., 2022; Ram et al., 2023; Shi et al., 2023), interpolating vectors instead of token probabilities (Yogatama et al., 2021), scaling to trillions of tokens (Borgeaud et al., 2021), exploiting retrieval for strong few-shot learning (Izacard et al., 2022), and so on (Chen et al., 2017; Guu et al., 2020; Lewis et al., 2020; Izacard and Grave, 2021; Rae et al., 2021; Wu et al., 2022; Trivedi et al., 2022; He et al., 2022). Among these, kNN-LM stands out as a relatively simple and fundamental work. Our findings indicate important weaknesses of retrieval for text generation.

Reference-based metrics are not well suited to evaluate open-ended text generation (Novikova et al., 2017). Instead, effective automated approaches compare the machine generated and human language text distributions using samples (McCoy et al., 2021; Pillutla et al., 2021; Pimentel et al., 2023). Human evaluation remains the golden standard for natural language generation (Hashimoto et al., 2019; Celikyilmaz et al., 2020; Krishna et al., 2023).

# 3 Experimental setup

Using a variety of commonly used text generation evaluation metrics, we evaluate the text generation capability of interpolation-based LMs and compare them to baseline LMs (i.e., without $k$ -nearest-neighbor retrieval from an external datastore). In this section, we describe our experimental setup, including models, automatic evaluation metrics, data selection, and hyperparameters.

# 3.1 Models

We experiment with two interpolation-based LMs: the $k$NN-LM of Khandelwal et al. (2020), which augments an existing pre-trained LM with a retrieval module without any additional training, and TRIME (Zhong et al., 2022), a recent improvement over the $k$NN-LM that trains the retriever and LM jointly to further decrease perplexity.

kNN-LM: The kNN-LM is a pre-trained language model that uses retrieval to improve word prediction. We follow the procedure from Khandelwal et al. (2020) and Alon et al. (2022)$^2$, and use the LM to encode token-level representations from a document collection (e.g., the WikiText-103 training data) into a datastore in which each token in a document is converted into a key-value pair: a context vector $k_i$ representing the first $n-1$ words and a value $v_i$, which is the $n$-th word. During evaluation, the model calculates Euclidean distances $d(k_i, q_t)$ between the query vector $q_t$ and all the keys $k_1, k_2, \ldots, k_{|V|}$ in the datastore. The values of the retrieved neighbors define a new distribution over the next word:

$$
P_{kNN}(w_t \mid q_t) \propto \sum_{(k_i, v_i)} \mathbb{1}_{w_t = v_i} \exp(-d(k_i, q_t))
$$

The model interpolates the LM's predicted distribution over the next token $P_{LM}(w_t \mid q_t)$ with the retrieval distribution using a tunable hyperparameter $\lambda$:

$$
P'(w_t \mid q_t) = \lambda P_{kNN}(w_t \mid q_t) + (1 - \lambda) P_{LM}(w_t \mid q_t) \tag{1}
$$

To generate text from the $k$NN-LM, we apply a decoding strategy (e.g., greedy decoding or truncated sampling algorithms) to the final interpolated probability distribution $P'(w_t \mid q_t)$.
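
The retrieval distribution and the interpolation of Equation 1 can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: brute-force distance computation stands in for the approximate nearest-neighbor index used in practice, and the function and parameter names are ours.

```python
import numpy as np

def knn_lm_next_token_probs(p_lm, query, keys, values, vocab_size, lam=0.25, k=1024):
    """Interpolate the base LM distribution with a kNN retrieval distribution.

    p_lm:   base LM distribution over the vocabulary, shape (vocab_size,)
    query:  context vector for the current prefix, shape (d,)
    keys:   datastore context vectors, shape (N, d)
    values: next-token id stored with each key, shape (N,)
    """
    # Squared Euclidean distance from the query to every key in the datastore.
    d2 = ((keys - query) ** 2).sum(axis=1)
    nn = np.argsort(d2)[:k]  # indices of the k nearest neighbors
    # Softmax over negative distances -> retrieval weights (numerically stable).
    scores = -d2[nn]
    w = np.exp(scores - scores.max())
    # Scatter-add each neighbor's weight onto its stored next token: P_kNN.
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, values[nn], w)
    p_knn /= p_knn.sum()
    # Equation 1: lambda * P_kNN + (1 - lambda) * P_LM.
    return lam * p_knn + (1.0 - lam) * p_lm
```

Decoding then proceeds as usual (greedy, top-$p$, etc.) over the returned distribution.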
TRIME: Note that in kNN-LM, the LM is trained without retrieval; the retrieval component is bolted on after training. Zhong et al. (2022) note that this approach is suboptimal, as the LM does not understand how to best use the retrieval. Thus, they propose TRIME, which uses an efficient in-batch strategy to incorporate retrievals during training. While kNN-LM relies on just one type of retrieval (from an external datastore), TRIME can retrieve from local, long-range, as well as external context. We use the $\mathrm{TRIME}_{\mathrm{EXT}}$ configuration in all of our experiments, which also uses a linear interpolation between LM and retrieval distributions (as in Equation 1) to produce the final probability distribution. The baseline LM (no external retrieval) retrieves from example-level local and long context, but has no access to a huge-scale external datastore.

# 3.2 Constructing an evaluation dataset

We sample from WikiText-103 (Merity et al., 2016) to construct our main evaluation dataset; in Section 4, we also perform an analysis experiment on the PG-19 dataset (fictional books) to test whether our findings hold across domains. We choose WikiText-103 because it is the most commonly used dataset for evaluating interpolation-based LMs; indeed, the main experiments of both kNN-LM and TRIME demonstrate that the retrieval component decreases held-out perplexity on this dataset compared to the baseline LM. Specifically, we randomly sample 5K examples<sup>3</sup> from the validation set of WikiText-103.<sup>4</sup>

# 3.3 Automatic evaluation metrics
For all models tested, we compare the quality of text generated by the baseline LM with and without the $k$ -NN retrieval component over the external datastore. We measure quality via the following automatic metrics:
|
| 75 |
+
|
| 76 |
+
MAUVE: MAUVE is an evaluation metric for open-ended text generation (Pillutla et al., 2021) that achieves high correlation with human judgments of text quality. It measures the distribution similarity between the generated text and the reference text. Higher MAUVE scores indicate closer distance between the distribution of the generated text and that of reference text.
|
| 77 |
+
|
| 78 |
+
RankGen: Given a prefix and several possible continuations (suffixes), RankGen (Krishna et al., 2022) outputs a score for each suffix, measuring the relevance between the prefix and suffix. Higher RankGen scores indicate stronger relevance between generated suffix with the given prefix. We thus measure the RankGen score between prefix and generated suffix for each of the two models.
GPT-3 perplexity: We use GPT-3 (Brown et al., 2020), a large-scale pre-trained language model, to compute the perplexity of text generated with and without interpolation, conditioned on the same prefix. Lower GPT-3 perplexity indicates stronger relevance between the prefix and the generated suffix, as well as better fluency of the generated suffix.
Entity-F1: Previous works (Nan et al., 2021; Lee et al., 2022) use the percentage of hallucinated named entities (entities that appear in the generated text but not in the reference text) or the ratio of named-entity overlap between the generated and reference text to estimate the factuality of the generated text. In our work, we compute the F1 score between the named entities of the generated text and those of the reference text as a proxy for entity hallucination. Higher F1 scores may correlate with fewer instances of hallucinated entities.
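A sketch of this Entity-F1 computation (entity extraction itself, e.g., with an off-the-shelf NER tagger, is assumed to happen upstream; the entity sets below are invented):

```python
def entity_f1(generated_entities, reference_entities):
    """F1 overlap between the named-entity sets of generated and reference
    text; entities appearing only in the generation are potential
    hallucinations."""
    gen, ref = set(generated_entities), set(reference_entities)
    if not gen or not ref:
        return 0.0
    overlap = len(gen & ref)
    if overlap == 0:
        return 0.0
    precision = overlap / len(gen)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# All three generated entities appear in the reference: P = 1.0, R = 0.75.
score = entity_f1({"Paris", "Napoleon", "1805"},
                  {"Paris", "Napoleon", "Austerlitz", "1805"})
```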
Seq-Rep-1: We follow Welleck et al. (2020) and use Seq-Rep-1, which is based on the percentage of unique unigrams in the text, as a metric for lexical diversity. Higher Seq-Rep-1 scores indicate lower diversity (more repetition) in the generated text.$^6$
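Following Welleck et al. (2020), Seq-Rep-1 can be computed as one minus the ratio of unique to total unigrams, so that a higher score means more repetition; a minimal sketch:

```python
def seq_rep_1(tokens):
    """Fraction of repeated unigrams: 1 - unique/total.
    Higher values indicate more repetition (lower lexical diversity)."""
    if not tokens:
        return 0.0
    return 1.0 - len(set(tokens)) / len(tokens)

rep = seq_rep_1("the cat sat on the mat".split())  # 5 unique / 6 total -> 1/6
```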
# 3.4 Model configurations and hyperparameters
In this work, we leverage pretrained model and datastore checkpoints released by prior work, and also train our own interpolation-based LMs.
Baseline LM details: For kNN-LM, we use the implementations from Alon et al. (2022) and Khandelwal et al. (2020). The model in Alon et al. (2022) relies on a backbone 117M-parameter GPT-2 small model (Radford et al., 2019) fine-tuned on the WikiText-103 training data. The external datastore is constructed by the same backbone model, and both the pretrained LM and datastore are publicly released by Alon et al. (2022). We also test the model from Khandelwal et al. (2020), the work that originally proposed the kNN-LM; it uses a 247M-parameter Transformer LM trained from scratch on WikiText-103, and the datastore is computed using that trained Transformer LM. For TRIME, we adopt the 247M-parameter $\mathrm{TRIME}_{\mathrm{EXT}}$ model trained from scratch on WikiText-103 and publicly released by Zhong et al. (2022). Our "non-retrieval" baseline is the same model without external retrieval; in other words, it has access to only the local memory (recent tokens) and long-range memory (in-batch tokens). In all three setups, the external datastore is constructed using the training dataset of WikiText-103; the datastores from Zhong et al. (2022) and Khandelwal et al. (2020) both have 103M entries, while the datastore from Alon et al. (2022) has 117M entries (the discrepancy is due to tokenization differences between the models).
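In all of these setups, the datastore maps a context's hidden representation (key) to the token that followed it (value), and the retrieval distribution is a softmax over the negative distances of the $k$ nearest keys. A toy sketch of this mechanism, with hand-made 2-D vectors standing in for the LM's hidden states (not the released implementations, which use approximate-nearest-neighbor indexes over millions of entries):

```python
import math

def knn_distribution(query, datastore, vocab_size, k=2, temperature=1.0):
    """Retrieval distribution of a kNN-LM: the k nearest keys vote for
    their stored next tokens with weight softmax(-distance / temperature)."""
    scored = [(math.dist(query, key), token) for key, token in datastore]
    nearest = sorted(scored)[:k]
    weights = [math.exp(-d / temperature) for d, _ in nearest]
    total = sum(weights)
    p = [0.0] * vocab_size
    for (_, token), w in zip(nearest, weights):
        p[token] += w / total
    return p

# Toy datastore of (context vector, next-token id) pairs. Both keys near
# the query stored token 1, so the retrieval distribution concentrates on it.
datastore = [((0.0, 0.0), 1), ((0.1, 0.0), 1), ((5.0, 5.0), 2)]
p_knn = knn_distribution((0.05, 0.0), datastore, vocab_size=3, k=2)
```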
Perplexity improvements from retrieval: All models studied in this paper substantially decrease perplexity on WikiText-103's validation set when interpolation is enabled. For the model in Alon et al. (2022), the base GPT-2 perplexity is 14.8, which decreases to 12.6 (-2.2) with interpolation. The kNN-LM of Khandelwal et al. (2020) decreases perplexity from 17.96 (no retrieval) to 16.06 (-1.9) with interpolation. Meanwhile, TRIME decreases perplexity from 17.0 (no retrieval) to 15.5 (-1.5) with interpolation.

<table><tr><td>Model</td><td>MAUVE↑</td><td>PPL (GPT-3)↓</td><td>RankGen↑</td><td>Entity-F1↑</td><td>Seq-Rep-1↓</td></tr><tr><td colspan="6">kNN-LM with and without retrieval from Alon et al. (2022)</td></tr><tr><td>GPT-2 small (no retrieval)</td><td>77.7</td><td>13.1</td><td>11.7</td><td>14.2</td><td>56.7</td></tr><tr><td>GPT-2 small (+ retrieval)</td><td>79.2</td><td>14.8</td><td>11.7</td><td>13.1</td><td>53.3</td></tr><tr><td colspan="6">kNN-LM (Khandelwal et al., 2020) with and without external retrieval</td></tr><tr><td>Transformer (no retrieval)</td><td>89.5</td><td>20.4</td><td>12.9</td><td>12.1</td><td>41.8</td></tr><tr><td>Transformer (+ retrieval)</td><td>90.7</td><td>28.9</td><td>12.5</td><td>9.77</td><td>37.9</td></tr><tr><td colspan="6">TRIME<sub>EXT</sub> with and without external retrieval from Zhong et al. (2022)</td></tr><tr><td>TRIME (no retrieval)</td><td>90.6</td><td>22.2</td><td>13.1</td><td>11.3</td><td>40.1</td></tr><tr><td>TRIME (+ retrieval)</td><td>87.3</td><td>23.8</td><td>12.5</td><td>9.80</td><td>38.5</td></tr></table>

Table 1: Automatic evaluation metrics do not show consistent improvements in generation quality for interpolation-based LMs over their non-retrieval baseline LMs. We evaluate three setups: 1) kNN-LM with GPT-2 small as the baseline (top), 2) the original kNN-LM proposed by Khandelwal et al. (2020), which trains a Transformer LM from scratch on the WikiText-103 training data (middle), and 3) TRIME, which trains both the LM and the retrieval mechanism (bottom).
Hyperparameters: To generate text, we use the hyperparameters recommended by the authors, which yield low perplexities on the WikiText-103 validation set. For the models in Alon et al. (2022) and Khandelwal et al. (2020), the softmax temperature is set to 1.0 and the interpolation coefficient $\lambda$ between the LM and retrieval distributions is set to 0.25. For TRIME (Zhong et al., 2022), the softmax temperature is set to 1.25 and $\lambda$ to 0.3. For most of our experiments (e.g., those in Table 1), unless otherwise specified, we use nucleus sampling (Holtzman et al., 2020) with $p = 0.8$ for text generation.
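For reference, nucleus (top-$p$) sampling keeps the smallest set of tokens whose cumulative probability reaches $p$, renormalizes, and samples from that set; a minimal sketch (not the generation code used in our experiments):

```python
import random

def nucleus_sample(probs, p=0.8, rng=random):
    """Nucleus sampling (Holtzman et al., 2020) over a full next-token
    distribution given as a list of probabilities."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, total = [], 0.0
    for i in order:  # grow the nucleus until it covers mass >= p
        kept.append(i)
        total += probs[i]
        if total >= p:
            break
    r = rng.random() * total  # sample within the renormalized nucleus
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]

# With p = 0.8, only the top two tokens {0, 1} form the nucleus here.
token = nucleus_sample([0.7, 0.2, 0.05, 0.05], p=0.8, rng=random.Random(0))
```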
# 4 Results
We find that, despite incorporating the retrieval component and interpolating the baseline LM's distribution with the retrieval distribution, these methods do not yield any significant improvement in text generation performance, and even worsen it on some metrics (Table 1). In this section, we provide an overview of our main results, perform more fine-grained analyses, and describe a human evaluation that supports the conclusions drawn from automatic metrics.
Interpolation-based LMs do not improve automatic text generation evaluation metrics: We find that none of the three models significantly improves generation quality compared to its baseline LM, as shown by various metrics (Table 1). For the model in Alon et al. (2022) (top block of Table 1), while the MAUVE score improves by 1.5 points with retrieval, GPT-3 perplexity increases on retrieval-augmented generations, and the RankGen score is identical. For the model in Khandelwal et al. (2020) (middle block of Table 1), retrieval improves the MAUVE score by an even smaller margin (1.2 points) but worsens GPT-3 perplexity, RankGen, and Entity-F1. For TRIME (bottom block of Table 1), the non-retrieval baseline is actually slightly better across MAUVE, GPT-3 perplexity, RankGen, and Entity-F1. In other words, there is no convincing winner; furthermore, contrary to the expectation that kNN-LMs reduce hallucination by retrieving (and potentially copying) from the datastore, we do not observe any improvement in Entity-F1 scores against the gold suffix. We do observe a marginal improvement in the lexical diversity of the generations (shown by the lower Seq-Rep-1 score$^{8}$).
These results hold across different decoding algorithms: The results in Table 1 all use nucleus sampling. What if we change the decoding algorithm? To investigate the impact of the decoding algorithm on generation quality, we evaluate the kNN-LM with three other decoding algorithms: greedy decoding, top-$k$ sampling, and beam search. We observe in Table 2 that none of these decoding algorithms changes the result: there is no clear winner between models with and without retrieval.
These results hold across different datasets: In addition to WikiText-103, we also evaluate the text generation performance of the kNN-LM on the PG-19 dataset (Rae et al., 2020), which predominantly comprises fictional books and thus presents a distinct thematic variation from Wikipedia. We construct an evaluation dataset from PG-19 in the same way as our WikiText-103 evaluation dataset in Section 3.2. The baseline LM is a GPT-2 small model fine-tuned on the PG-19 dataset for three epochs (with 28.9 perplexity on the validation dataset).<sup>10</sup> Table 3 shows that the kNN-LM also does not improve text generation quality on PG-19. While (marginally) improving perplexity, the kNN-LM often returns unhelpful artifacts from the PG-19 dataset (see examples in Appendix A.3).

<table><tr><td>Model</td><td>Nucleus Sampling</td><td>Top-k Sampling</td><td>Greedy Decoding</td></tr><tr><td colspan="4">kNN-LM with and without retrieval from Alon et al. (2022)</td></tr><tr><td>GPT-2 small (no retrieval)</td><td>77.7</td><td>87.1</td><td>2.32</td></tr><tr><td>GPT-2 small (+ retrieval)</td><td>79.2</td><td>87.5</td><td>2.44</td></tr></table>

Table 2: The observation that $k$NN-LM does not significantly improve text generation performance (measured here via MAUVE) is consistent across a variety of decoding algorithms: nucleus sampling, top-$k$ sampling ($k = 40$), and greedy decoding. We note that beam search decoding often generates repetitive text and therefore scores poorly on MAUVE.

<table><tr><td>Model</td><td>MAUVE↑</td><td>PPL (GPT-3)↓</td><td>RankGen↑</td><td>Entity-F1↑</td><td>Seq-Rep-1↓</td></tr><tr><td colspan="6">kNN-LM with and without retrieval on PG-19 (Rae et al., 2020)</td></tr><tr><td>GPT-2 small (no retrieval)</td><td>8.00</td><td>17.3</td><td>4.13</td><td>5.63</td><td>47.6</td></tr><tr><td>GPT-2 small (+ retrieval)</td><td>9.26</td><td>18.8</td><td>3.62</td><td>4.87</td><td>44.5</td></tr></table>

Table 3: Consistent with our findings on the WikiText-103 dataset, we find on PG-19 (fictional books) that the $k$NN-LM does not yield consistent improvements in text generation quality over the no-retrieval baseline LM.
# 4.1 Human evaluation
Having found that interpolation-based LMs do not notably improve text generation quality according to automatic evaluation metrics, we turn next to human evaluation, which is known to be more reliable for generation tasks (Celikyilmaz et al., 2020; Krishna et al., 2021), to compare the text generated by the kNN-LM vs. the baseline GPT-2 model from Alon et al. (2022). We hired three English teachers/editors on the freelance marketplace Upwork; the evaluation was conducted on the platform Label Studio (Tkachenko et al., 2020-2022).<sup>11</sup> The annotators were experienced in text generation evaluation and were hired after a careful selection process.

Figure 1: The plot presents how many times each type of generation ($k$NN-LM or GPT-2) was chosen by the evaluators. The dark area of each bar represents choices made confidently; the light area represents cases where choosing between $k$NN-LM and GPT-2 was hard but the evaluator still chose the corresponding type. Overall, annotators preferred GPT-2 baseline texts $51\%$ of the time compared to $49\%$ for $k$NN-LM.
The annotators were given a prefix and two continuations of the context (one generated by the baseline LM and one generated with retrieval, presented in randomized order). The evaluators' task was to decide which continuation is better, indicate whether it was hard to choose between the two (following Thai et al., 2022), and justify their choice in 3 to 4 sentences.<sup>12</sup> The evaluation focused on whether the generated text is grammatical, fluent, consistent, and logical.<sup>13</sup>
Human evaluation shows no definitive winner between kNN-LM and GPT-2 either: On aggregate, baseline GPT-2 generations were preferred $51\%$ of the time vs. $49\%$ for kNN-LM. Additionally, the three annotators reported that the decision was difficult in $37\%$ of all cases. For Rater1 and Rater3, the rates of difficult choices are as high as $42\%$ and $47\%$, respectively, while for Rater2 the rate is $22\%$. Out of the 45 comparison pairs, the three annotators agree on their choices in only 17 instances ($37.78\%$), resulting in a Fleiss' kappa of 0.17 (slight agreement). Figure 1 presents the evaluator preferences when comparing kNN-LM to GPT-2 generations.
Both models make catastrophic errors at similar rates: A qualitative analysis of the evaluators' justifications reveals that both the kNN-LM and GPT-2 make catastrophic mistakes. Table 5 gives four examples of bad continuations, along with the evaluators' comments and our categorization of the errors. In the first row of the table, Continuation A, generated by the kNN-LM, contains repetitive content (i.e., $==ZAPU$ retreat $==$ ) and confuses ZAPU and ZIPRA in multiple places. The GPT-2 continuation in the second row states that a person was born in 1584 but was still alive in 1742; the generation in the third row by the kNN-LM claims that U.S. Route 75 curves both northeast and northwest in the northbound direction. Furthermore, both the GPT-2 and kNN-LM generations change topics abruptly, as shown in the lower half of Table 5. Overall, the quantitative and qualitative analyses of the human evaluation results show that the kNN-LM does not clearly improve over its base GPT-2 model despite its significant improvement in perplexity.
# 5 Why do kNN-LMs fail to improve text generation quality?
Our evaluations (both human and automatic) do not show a significant quality increase when interpolating an LM's predicted probability distribution with one formed via retrieval over a large external datastore. In this section, we try to understand why by empirically analyzing the $k$NN-LM, and we find two potential reasons: (1) despite lowering aggregate perplexity, $k$NN-LMs improve the perplexity of only $42\%$ of all test tokens, which suggests that improved predictions on a subset of tokens can be counterbalanced by worsened predictions on the tokens that do not benefit; and (2) the entropy of the retrieval distribution increases at a faster rate than that of the baseline LM as the model generates longer sequences. This difference implies that the retrieval distribution becomes noisier as more tokens are sampled, potentially due to the exposure bias stemming from the retriever having to rely on the sampled text as its query.
# 5.1 kNN-LMs only benefit a subset of tokens
Many studies have shown that kNN-LMs decrease perplexity via retrieval interpolation (Khandelwal et al., 2020; Alon et al., 2022; Drozdov et al., 2022). Previous work (Drozdov et al., 2022; Zhong et al., 2022) has also suggested that kNN-LMs benefit the inference of tokens with different part-of-speech (POS) tags to different degrees (by lowering the perplexity of the gold token). However, these works focus on aggregate perplexity averaged across tokens in the test data and do not look at individual tokens or the percentage that actually benefit from retrieval.

Figure 2: Across all POS tags, we observe that $k$NN-LM does not increase the probability of the majority of gold next-token predictions. For verbs, pronouns, and adjectives, it only helps $< 40\%$ of the time (i.e., it hurts the predictions of the majority of these tokens).
Using the dataset we sampled from WikiText-103, we compute the percentage of gold tokens from our test examples that are assigned lower perplexity (higher probability) by the $k$NN-LM compared to the base LM. We find that only $42\%$ of the tokens benefit from the $k$NN-LM, while the remaining $58\%$ are adversely affected (i.e., the $k$NN-LM assigns a lower probability to the gold token than the base LM does). Moreover, we calculate the percentage of gold tokens that benefit from the $k$NN-LM in each POS category (Figure 2) and consistently find the same pattern: the $k$NN-LM helps reduce perplexity for only a minority of tokens. We show examples of the $k$NN-LM negatively impacting next-token prediction (assigning the gold token a lower probability than the base LM) in Table 4.
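The gap between aggregate perplexity and per-token benefit can be made concrete: because perplexity averages log-probabilities, a few large gains on hard tokens can outweigh many small losses. A toy illustration with invented gold-token probabilities:

```python
import math

def perplexity(gold_probs):
    """Exponential of the average negative log-probability of gold tokens."""
    return math.exp(-sum(math.log(p) for p in gold_probs) / len(gold_probs))

def fraction_helped(base_probs, knn_probs):
    """Fraction of gold tokens assigned higher probability by the kNN-LM."""
    return sum(k > b for b, k in zip(base_probs, knn_probs)) / len(base_probs)

base = [0.50, 0.50, 0.50, 0.01]  # base LM's gold-token probabilities
knn  = [0.45, 0.45, 0.45, 0.20]  # interpolated model: 3 small losses, 1 big gain
helped = fraction_helped(base, knn)  # 0.25: only one of four tokens benefits
# yet aggregate perplexity still drops: perplexity(knn) < perplexity(base)
```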
This means that despite lowering aggregate perplexity across the test sets, the $k$NN-LM is more likely to hurt, rather than help, the prediction of any individual token. We therefore hypothesize that during text generation, as the model samples a sequence of tokens, the advantages the $k$NN-LM brings to a smaller subset of tokens are offset by the other tokens, for which the $k$NN-LM may even have a detrimental impact.

Figure 3: We plot the ratio between the Shannon entropy of the retriever's next-token distribution and that of the baseline LM softmax distribution, as the number of generated tokens increases. The ratio increases for longer model-generated sequences, indicating that the retriever becomes less confident than the baseline LM as decoding progresses.
# 5.2 The retriever becomes less reliable with longer generated sequences
Additionally, we observe that as the model generates longer sequences of text, the retriever component from $k$ NN-LM becomes less confident and reliable in returning a high-quality next-token distribution. Since the $k$ NN-LM relies on interpolating the next-token distribution from the baseline LM and that from the retriever, a lower quality retriever distribution can compromise the resulting next-token distribution and adversely affect the text generation performance.
We plot the ratio between the Shannon entropy (Shannon, 2001) of the retriever's next-token distribution and that of the baseline LM, as a function of the index of the generated token, and find that the retriever's entropy increases at a faster rate than the base LM's (Figure 3). Higher entropy indicates a lower level of confidence (closer to a uniform distribution over all tokens) and suggests that the retriever, when sampling long sequences, may be less reliable at identifying high-quality tokens.
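The entropy computation behind Figure 3 can be sketched as follows (toy distributions for illustration; an entropy ratio above 1 means the retriever is less confident than the baseline LM):

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in nats) of a next-token distribution; higher
    entropy means closer to uniform, i.e., less confident."""
    return -sum(x * math.log(x) for x in p if x > 0)

peaked = [0.97, 0.01, 0.01, 0.01]   # confident distribution
uniform = [0.25, 0.25, 0.25, 0.25]  # maximally uncertain over 4 tokens
ratio = shannon_entropy(uniform) / shannon_entropy(peaked)  # > 1
```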
We hypothesize that the worsened reliability of the retriever over longer sampled sequences is likely a result of exposure bias during text generation (i.e., at test time, the retriever has to rely on model-generated queries that may contain artifacts or other distributional differences from human-written text). The retriever in the kNN-LM is non-parametric since both the input prefix and the context from the datastore are encoded by the LM (without any additional retrieval parameters), which has been adapted to the training corpus of WikiText-103. However, during text generation, as the model iteratively samples tokens and appends them to the input prefix, the input context becomes increasingly likely to deviate from the training corpus; it thus becomes more out-of-distribution and more challenging for the retriever to process accurately.

Figure 4: The interpolation coefficient $\lambda$ optimized for validation perplexity does not necessarily lead to the best text generation quality (measured by MAUVE).
# 6 Discussion
In addition to the limitations of interpolation-based LMs described in Section 5, we hypothesize that other potential factors contribute to the shortcomings of the kNN-LM for text generation. Specifically, the interpolation may impede the language model's ability to self-recover, and integrating the retrieval distribution introduces additional hyperparameters whose tuning may not be optimized for text generation. We discuss these potential issues here as they are interesting avenues for future work.
Retrieval interpolation may damage the self-recovery ability of LMs: Language models exhibit some degree of self-recovery ability (He et al., 2021b), i.e., they can regain fluency and coherence even after previously generating poor-quality tokens. This self-recovery capability is attributed to the LM's ability to pay close attention to recent context and ignore the long-range past context. However, we hypothesize that when interpolation-based LMs encounter artifacts (e.g., non-factual or disfluent text) in a distorted prefix $\tilde{q}_t$, they may be less likely to recover, as the retrievals may further increase the probability of completions that resemble those artifacts. Furthermore, as we continuously sample and append tokens to the prefix, which the retriever uses as the query to construct $P_{KNN}(w_t|\tilde{q}_t)$, the retriever may suffer additional exposure bias, as shown in Section 5.2, negatively impacting the quality of $P_{KNN}(w_t|\tilde{q}_t)$. Thus, even when the baseline LM "recovers" from distorted past context by producing a high-quality next-token distribution $P_{LM}(w_t|\tilde{q}_t)$, the retriever may re-introduce the distortion when $P_{LM}(w_t|\tilde{q}_t)$ is interpolated with $P_{KNN}(w_t|\tilde{q}_t)$.
Hyperparameters introduced by the $k$NN-LM are not optimized for text generation: The $k$NN-LM introduces two important hyperparameters: the relative weight between the two distributions, $\lambda$, and the softmax temperature for the $k$NN distribution, $\tau_{KNN}$. Recent work (Xu et al., 2023) highlights the significance of tuning $\tau_{KNN}$ for achieving optimal $k$NN-LM performance, as measured by perplexity. Similarly, we investigate the coefficient $\lambda$, which plays a vital role as it controls the relative importance assigned to the $k$NN retriever and the baseline LM. Existing works tune $\lambda$ to minimize perplexity on the validation set. However, Figure 4 shows that the $\lambda$ value producing the lowest perplexity does not necessarily translate to the best text generation quality (measured by MAUVE). Instead of tuning $\lambda$ to optimize perplexity, we may want to consider a context-dependent $\lambda$ as in Drozdov et al. (2022) for generation (e.g., only use the retrieval distribution when it is very confident). Finally, interpolation may warrant the design of new decoding algorithms specialized for retrieval-augmented generation.
# 7 Conclusion
In this work, we show that despite the significant perplexity improvements brought by interpolation-based retrieval-augmented LMs such as $k$NN-LMs, such methods fail to improve the LMs' text generation performance. The text generation quality of $k$NN-LMs and of baseline LMs without retrieval shows no significant difference according to both automatic text generation evaluation metrics and human evaluation. Upon closer analysis, we identify flaws in using $k$NN-LMs for autoregressive text generation: the method only benefits a minority of token predictions, and the retriever's quality deteriorates when generating long-form text. We hope our findings inspire future research into better training and inference methods so that the impressive perplexity improvements of $k$NN-LMs can be translated into gains in text generation quality.
# Ethics Statement
In this work, we investigate the text generation quality of language models. Language models can generate text that is harmful, offensive, or unfaithful. We advise caution when relying on language models to generate text, and we recommend post-processing strategies to remove undesirable content from model-generated text. Additionally, training large language models carries significant energy costs. We hope that our analysis of the kNN-LM and future work on this topic will lead to more efficient methods of using language models without the need to re-train them.
# Limitations
Our work does not study all data, model, and evaluation configurations of interpolation-based LMs. Additionally, we focus on the 100M token datastore size, although kNN-LM can scale effectively to datastores of 3B words. Using a larger datastore may lead to further perplexity decreases, but we do not think this contradicts our finding that text generation degrades as retrieval quality does. We focus exclusively on interpolation-based LMs in this work, but similar issues for other retrieval-augmented LMs such as RETRO (Borgeaud et al., 2021) may also exist and be worth investigating further. Finally, our human evaluation does not specifically account for diversity, although some dimensions of this are captured by our automated metrics. Due to the overall low quality of text generated by LMs with and without retrieval, reading their outputs results in high cognitive burden on annotators, which might be ameliorated by using stronger LMs than GPT-2.
# Acknowledgements
We thank Zexuan Zhong and Danqi Chen for helpful discussion on TRIME and kNN-LM, and the UMass NLP group for feedback and discussion. We also thank the anonymous reviewers for their helpful comments.
This project was partially supported by awards IIS-2202506 and IIS-2046248 from the National Science Foundation (NSF). This research was also supported in part by a research gift from Adobe.
# References
Uri Alon, Frank Xu, Junxian He, Sudipta Sengupta, Dan Roth, and Graham Neubig. 2022. Neuro-symbolic language modeling with automaton-augmented retrieval. In International Conference on Machine Learning, pages 468-485. PMLR.
Soumya Sankar Basu, Ankit Singh Rawat, and Manzil Zaheer. 2022. Generalization properties of retrieval-based models. ArXiv, abs/2210.02617.
Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, T. W. Hennigan, Saffron Huang, Lorenzo Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and L. Sifre. 2021. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879, Vancouver, Canada. Association for Computational Linguistics.
Andrew Drozdov, Shufan Wang, Razieh Rahimi, Andrew McCallum, Hamed Zamani, and Mohit Iyyer. 2022. You can't pick your neighbors, or can you? when and how to rely on retrieval in the kNN-LM. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2997-3007, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Edouard Grave, Armand Joulin, and Nicolas Usunier. 2017. Improving neural language models with a continuous cache. In International Conference on Learning Representations.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrievalaugmented language model pre-training. In International Conference on Machine Learning.
Tatsunori B. Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natural language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1689-1701, Minneapolis, Minnesota. Association for Computational Linguistics.
Hangfeng He, Hongming Zhang, and Dan Roth. 2022. Rethinking with retrieval: Faithful large language model inference. ArXiv, abs/2301.00303.
Junxian He, Graham Neubig, and Taylor Berg-Kirkpatrick. 2021a. Efficient nearest neighbor language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5703-5714, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Tianxing He, Jingzhao Zhang, Zhiming Zhou, and James Glass. 2021b. Exposure bias versus self-recovery: Are distortions really incremental for autoregressive text generation? In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5087-5102, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations.
Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874-880, Online. Association for Computational Linguistics.
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot Learning with Retrieval Augmented Language Models.
Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neighbor machine translation. In International Conference on Learning Representations (ICLR).
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through Memorization: Nearest Neighbor Language Models. In International Conference on Learning Representations (ICLR).
Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Mohit Iyyer, Pradeep Dasigi, Arman Cohan, and Kyle Lo. 2023. LongEval: Guidelines for human evaluation of faithfulness in long-form summarization. In Conference of the European Chapter of the Association for Computational Linguistics.
Kalpesh Krishna, Yapei Chang, John Wieting, and Mohit Iyyer. 2022. RankGen: Improving text generation with large ranking models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 199-232, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
|
| 230 |
+
Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. 2021. *Hurdles to progress in long-form question answering* In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4940-4957, Online. Association for Computational Linguistics.
|
| 231 |
+
Tian Lan, Deng Cai, Yan Wang, Heyan Huang, and Xian-Ling Mao. 2023. Copy is all you need. In The Eleventh International Conference on Learning Representations.
|
| 232 |
+
Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Pascale Fung, Mohammad Shoeybi, and Bryan Catanzaro. 2022. Factuality enhanced language models for open-ended text generation. In Advances in Neural Information Processing Systems.
|
| 233 |
+
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459-9474. Curran Associates, Inc.
|
| 234 |
+
Alex Mallen, Akari Asai, Victor Zhong, Dajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. 2023. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. In ACL.
|
| 235 |
+
R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, and Asli Celikyilmaz. 2021. How much do language models copy from their training data? evaluating linguistic novelty in text generation using raven. ArXiv, abs/2111.09509.
|
| 236 |
+
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models.
|
| 237 |
+
|
| 238 |
+
Don Metzler, Fernando Diaz, Hamed Zamani, Mike Bendersky, and Mostafa Dehghani. 2022. Retrieval enhanced machine learning. In SIGIR 2022: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Perspectives Track).
|
| 239 |
+
Grégoire Mialon, Roberto Dessi, Maria Lomeli, Christoforos Nalmantis, Ramakanth Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. 2023. Augmented language models: a survey. ArXiv, abs/2302.07842.
|
| 240 |
+
Feng Nan, Ramesh Nallapati, Zhiguo Wang, Cicero Nogueira dos Santos, Henghui Zhu, Dejiao Zhang, Kathleen McKeown, and Bing Xiang. 2021. Entity-level factual consistency of obstructive text summarization. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2727-2733, Online. Association for Computational Linguistics.
|
| 241 |
+
Jekaterina Novikova, Ondrej Dusek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241-2252, Copenhagen, Denmark. Association for Computational Linguistics.
|
| 242 |
+
Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. Mauve: Measuring the gap between neural text and human text using divergence frontiers. In Neural Information Processing Systems.
|
| 243 |
+
Tiago Pimentel, Clara Isabel Meister, and Ryan Cotterell. 2023. On the usefulness of embeddings, clusters and strings for text generation evaluation. In The Eleventh International Conference on Learning Representations.
|
| 244 |
+
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
|
| 245 |
+
Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri, Saffron Huang, Jonathan Uesato, John F. J. Mellor, Irina Higgins, Antonia Creswell, Nathan McAleese, Amy Wu, Erich Elsen, Siddhant M. Jayakumar, Elena Buchatskaya, David Budden, Esme Sutherland, Karen Simonyan, Michela Paganini, L. Sifre, Lena Martens, Xiang Lorraine Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, N. K. Grigorev, Doug Fritz, Thibault Sotti aux, Mantas Pajarskas, Tobias Pohlen, Zhitao Gong
|
| 246 |
+
|
| 247 |
+
Daniel Toyama, Cyprien de Masson d'Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin, Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew G. Johnson, Blake A. Hechtman, Laura Weidinger, Jason Gabriel, William S. Isaac, Edward Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem W. Ayoub, Jeff Stanway, L. L. Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. 2021. Scaling language models: Methods, analysis & insights from training gopher. ArXiv, abs/2112.11446.
|
| 248 |
+
Jack W Rae, Anna Potapenko, Siddhant M Jayakumar, Chloe Hillier, and Timothy P Lillicrap. 2019. Compressive transformers for long-range sequence modelling. arXiv preprint.
|
| 249 |
+
Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. 2020. Compressive transformers for long-range sequence modelling. In International Conference on Learning Representations.
|
| 250 |
+
Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models.
|
| 251 |
+
Claude Elwood Shannon. 2001. A mathematical theory of communication. ACM SIGMOBILE mobile computing and communications review, 5(1):3-55.
|
| 252 |
+
Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen tau Yih. 2023. Replug: Retrieval-augmented black-box language models. ArXiv, abs/2301.12652.
|
| 253 |
+
Katherine Thai, Marzena Karpinska, Kalpesh Krishna, Bill Ray, Moira Inghilleri, John Wieting, and Mohit Iyyer. 2022. Exploring document-level literary machine translation with parallel paragraphs from world literature. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9882-9902, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
|
| 254 |
+
Maxim Tkachenko, Mikhail Malyuk, Andrey Holmanyuk, and Nikolai Liubimov. 2020-2022. Label Studio: Data labeling software. Open source software available from https://github.com/heartxlabs[label-studio.
|
| 255 |
+
H. Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. *ArXiv*, abs/2212.10509.
|
| 256 |
+
Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2020. Neural text generation with unlikelihood training. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020.
|
| 257 |
+
|
| 258 |
+
Yuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing transformers. In International Conference on Learning Representations.
|
| 259 |
+
|
| 260 |
+
Frank F. Xu, Uri Alon, and Graham Neubig. 2023. Why do nearest neighbor language models work? ArXiv, abs/2301.02828.
|
| 261 |
+
|
| 262 |
+
Dani Yogatama, Cyprien de Masson d'Autume, and Lingpeng Kong. 2021. Adaptive semiparametric language models. Transactions of the Association for Computational Linguistics, 9:362-373.
|
| 263 |
+
|
| 264 |
+
Zexuan Zhong, Tao Lei, and Danqi Chen. 2022. Training language models with memory augmentation. In Conference on Empirical Methods in Natural Language Processing.
|
| 265 |
+
|
| 266 |
+
# A Appendix
# A.1 Examples of kNN-LM hurting the inference of the next-token
We show examples where the $k$NN-LM hurts next-token inference in Table 4.
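The failure mode in Table 4 comes from the kNN-LM's interpolation rule, which mixes the base LM's next-token distribution with a retrieval distribution over nearest neighbors; when the neighbors put less mass on the gold token, interpolation lowers it. Below is a minimal sketch of that rule. The interpolation weight `lam` and all probabilities here are illustrative assumptions, not the paper's actual settings:

```python
# Sketch of the kNN-LM next-token rule: p = lam * p_knn + (1 - lam) * p_lm.
def interpolate(p_lm, p_knn, lam=0.25):
    """Mix base-LM and retrieval distributions over the vocabulary."""
    return [lam * q + (1.0 - lam) * p for p, q in zip(p_lm, p_knn)]

# Toy 4-token vocabulary: index 0 plays the role of the gold token
# ("people" in Table 4's first row). The base LM ranks it highest (0.26),
# but the hypothetical retrieval distribution puts less mass on it, so
# interpolation lowers its probability.
p_lm  = [0.26, 0.20, 0.09, 0.45]
p_knn = [0.14, 0.32, 0.01, 0.53]
p = interpolate(p_lm, p_knn)  # gold-token probability drops below 0.26
```

Since both inputs are valid distributions, the interpolated output remains one; the gold token only loses probability when the retrieval distribution assigns it less mass than the base LM does.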
# A.2 Human evaluation interface and examples
From our human evaluation, we show the interface presented to evaluators in Figure 5, along with selected representative examples of evaluators' comments in Table 5.
# A.3 Models trained on PG-19 produce unhelpful artifacts
With retrieval from the datastore, the $k\mathrm{NN}$ -LM improves perplexity on the PG-19 validation set marginally, from 28.9 to 28.2, but does not improve text generation quality. Both the baseline LM and the $k\mathrm{NN}$ -LM fine-tuned on the PG-19 dataset return artifacts from the dataset (e.g., missing white-spaces and unnecessary line breaks), as shown in Table 6.
# A.4 Seq-Rep-$N$ of generated text from the baseline LM and kNN-LM
Even though the kNN-LM does not improve overall text generation quality, we observe improved lexical diversity (lower Seq-Rep-$N$) in kNN-LM generations on the WikiText-103 dataset (Table 7). However, this gain in diversity comes at the cost of Entity-F1 (a proxy for factuality).
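Seq-Rep-$N$ measures the fraction of duplicated $n$-grams in a generated sequence, so lower values indicate higher lexical diversity. The sketch below shows the metric as commonly defined (following Welleck et al., 2020); the toy whitespace tokenization is an illustrative assumption, not the paper's tokenizer:

```python
def seq_rep_n(tokens, n):
    """Fraction of duplicated n-grams in a token sequence (lower = more diverse)."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return 1.0 - len(set(ngrams)) / len(ngrams)

# Toy example: repeated phrases inflate Seq-Rep-N.
tokens = "the cat sat on the mat and the cat sat".split()
rep1 = seq_rep_n(tokens, 1)  # 6 unique unigrams out of 10 -> 0.4
rep4 = seq_rep_n(tokens, 4)  # all 4-grams are unique -> 0.0
```

Larger $N$ penalizes only longer verbatim repeats, which is why Table 7 reports Seq-Rep-1 through Seq-Rep-4 separately.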
<table><tr><td>Context</td><td>Ground-truth (probability: base-LM vs. kNN-LM)</td><td>Most probable tokens from base-LM vs. kNN-LM</td><td>Analysis</td></tr><tr><td>The lyrics were inspired by a story ...... To me, that's the way a great rock 'n' roll concert should be: a place where everyone comes together ... Maybe that's the dream of all art: to break down the barriers and the divisions between</td><td>"people" (base-LM: 0.26; kNN-LM: 0.23)</td><td>base-LM: "the" (0.20), "us" (0.09), "art" (0.03), "rock" (0.02); kNN-LM: "the" (0.23), "us" (0.07), "good" (0.02), "art" (0.02)</td><td>In this example the base-LM predicts the ground-truth noun token "people" with the highest probability of all tokens (0.26). However, after interpolating with the retrieval distribution, the kNN-LM decreases the probability of the ground-truth token.</td></tr><tr><td>Richmond finished the 1984 season 12th in points, with 11 .... In the Busch Series, he qualified at the pole position in the two races he entered, and won the Charlotte race. Richmond joined Hendrick Motorsports in 1986, where he teamed up with veteran crew chief Harry Hyde. It took the team until the middle of the season</td><td>"to" (base-LM: 0.78; kNN-LM: 0.64)</td><td>base-LM: "," (0.07), "for" (0.03), "when" (0.02), "that" (0.02); kNN-LM: "," (0.10), "for" (0.06), "." (0.04), "and" (0.02)</td><td>The ground-truth token to be predicted is the preposition "to", which the base-LM correctly predicts with very high probability. However, the kNN-LM decreases the probability of the ground-truth token significantly compared to the base-LM.</td></tr></table>
Table 4: Examples where the kNN-LM hurts next-token inference (for different parts of speech, such as noun and preposition) by predicting the gold token with lower probability than the base-LM.

Figure 5: The interface of the human evaluation. Each task consists of a context text, two continuations, two choices, and a free-form justification text box.
<table><tr><td>Context</td><td>Continuation</td><td>Comment</td><td>Error</td></tr><tr><td>[...] = = = Shootdown = = = A group of ZIPRA guerrillas, armed with a Strela-2 launcher, waited in the bush beneath Flight 825's flightpath, and</td><td>Continuation A-kNN-LM two soldiers opened fire on the guerrillas. [...] The ZIPRA and ZIPRA units exchanged fire, and several ZIPRA members were shot. After the fire was over, the ZIPRA and ZIPRA guerrillas returned fire and began shooting back, and one of the ZIPRA unsuccessfully attacked a ZAPU base at the head of Flight 825, forcing the major to retreat. == = = ZAPU retreat == [...] The ZIPRA moved up the hill and captured several ZIPRA members, forcing them to withdraw. [...]</td><td>Both of these continuations have problems. I chose A because its problems seem to influence the meaning less than the problems in B. [...] Continuation A has some repeated words and in a few spots ZIPRA and ZAPU seem to get mixed up, leading to sentences like this: "The ZIPRA moved up the hill and captured several ZIPRA members, forcing them to withdraw." It was easier to recover from those confusions than from the ones in Continuation B.</td><td>Repetitive content; contradiction</td></tr><tr><td>Hu was born in <unk> County, Anhui Province in 1584 or early 1585. [...] By 1619, Hu had moved to Nanjing where he lived with his wife Wu.</td><td>Continuation B-GPT-2 During this time he also became a leading scholar of the court, a position he held until 1624, [...] His last year at the academy was 1715. In 1725, Hu was appointed secretary of the government [...] In 1727, he was appointed as vice-president of the Academy, and he remained the academy's secretary until 1742, [...]</td><td>I selected that this was a difficult decision [...] However, Continuation B is an impossible choice as it has Hu being born in 1584 and still being alive in 1725 which of course is impossible. 
I was left with process of elimination in my opinion.</td><td>Illogical; wrong chronological details</td></tr><tr><td>The lanes of US 75, which are on the bottom level, are 20 feet (6m) below ground level, giving the structure a total height of 140 feet from bottom to top As part of the project, I-635 was widened to include four dedicated high-occupancy vehicle lanes (HOV lanes) that are barrier-separated and reversible. The roads on the five levels are: Level I: US 75, an eight lane highway, four lanes going each way; Level II: The junction of the two six-lane frontage roads, each having three through</td><td>Continuation A-kNN-LM traffic lanes with one lane in each direction. Level III: In the northbound direction, US 75 curves to the northeast and US 75 curves to the northwest to follow a more southerly routing. The northbound lanes are two lanes in each direction, while the southbound lanes are two lanes. [...] == Major intersections == [...] = Battle of the Kalka River = The Battle of the Kalka River (Polish : Kalka, pronounced ['kaʊ], or ['kaʊ'maʊ]) was a decisive battle of the 15th and 16th centuries of the 15th century in Austrian occupied Poland. The battle was fought in the middle of the Russo</td><td>Continuation A gives more information and is more specific than B. It has some lack of clarity - Level III has the road curving both northeast and northwest - but overall it's more informative than just "the intersection of two frontage roads." I would have ruled out A because of the Battle of the Kalka River section at the bottom. It doesn't really have a place in this text. But B also has the Battle of the Falkland Islands, so that's a wash. A it is.</td><td>Abrupt topic change; illogical</td></tr><tr><td>(Same as above)</td><td>Continuation B-GPT-2 traffic lanes. Level III: The intersection of the two six-lane frontage roads, each having four through traffic lanes. 
Level IV: The intersection of the two six-lane frontage roads, each having two through traffic lanes = = = Construction = = = The project was approved in July 1997 and completed on December 17, 1997, with the groundbreaking ceremony taking place at the State University of New York (SUNY) campus in New York City. [...] = = = Major intersections = = = [...] = Battle of the Falkland Islands =</td><td>(Same as above)</td><td>Abrupt topic change; repetitive content</td></tr></table>
Table 5: Selected representative examples with evaluators' comments. The error categories in the last column are ours. Contents are shortened for the sake of space.
<table><tr><td>Context</td><td>baseline-LM-generated text</td><td>kNN-LM-generated text</td></tr><tr><td>For my part,something within me answered to Kandinsky's art the first time I met with it. There was no question of looking for representation; harmony had been</td><td>in my brain, and I had been working at it ever since.That is why I was able to tell Kandinsky what was going on within me. Idid not need to be told about his melody. I knew that he was doing itwith the greatest energy and energy and energy and energy.</td><td>in my brain, and I had been working, like theothers, in that direction for some years. It was this which prompted me totry for a musical conception. I wanted to be able to make music to suit the moods and the contrasting circumstances of the moment,</td></tr><tr><td>Before going toMaria, they insist on doing the honors, and showing the objectsinterest the vicinity. So they take us to their barrack, a large-house, and thence to the front.To the latter spot our coachmandriving</td><td>the family out of the house, and we go through the sameprocess with them. Our next station is in the heart of San Mateo, where theyhave a housewith a room on the outside, and ... and afloor and a ceiling.</td><td>the family out of the house, and, with a chuckle, heexplains the advantages of boarding-schools in the United States, aswell as of boarding-schools in France and Spain. I am reminded of thisexception, and feel that there is a difference in the methods of boardingschools in the two countries.</td></tr></table>
Table 6: Both the baseline LM and the kNN-LM generate text containing artifacts from the dataset, e.g., missing white-spaces between tokens.
<table><tr><td>Model</td><td>Seq-Rep-1</td><td>Seq-Rep-2</td><td>Seq-Rep-3</td><td>Seq-Rep-4</td></tr><tr><td colspan="5">kNN-LM with and without retrieval from Alon et al. (2022)</td></tr><tr><td>GPT-2 small (no retrieval)</td><td>56.7</td><td>26.6</td><td>15.1</td><td>9.65</td></tr><tr><td>GPT-2 small (+ retrieval)</td><td>53.3</td><td>22.5</td><td>11.6</td><td>6.73</td></tr></table>
Table 7: Even though the kNN-LM does not improve the overall text generation quality, we observe higher lexical diversity (lower Seq-Rep- $N$ ) in the kNN-LM-generated text, on the WikiText-103 dataset, using the GPT2-small model as the baseline LM.
2023/kNN-LM Does Not Improve Open-ended Text Generation/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11c5ea2c1f1adcca188c6a5541d90073c9b1da5c027082972ccb5e8fac05e136
size 767518
2023/kNN-LM Does Not Improve Open-ended Text Generation/layout.json
ADDED
The diff for this file is too large to render. See raw diff
2023/mAggretriever_ A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval/58a42c3f-381b-4c90-a86a-76877cc92cd5_content_list.json
ADDED
@@ -0,0 +1,1149 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "mAggretriever: A Simple Yet Effective Approach to Zero-Shot Multilingual Dense Retrieval",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
230,
|
| 8 |
+
90,
|
| 9 |
+
766,
|
| 10 |
+
130
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Sheng-Chieh Lin $^{1*}$ , Amin Ahmad $^{2}$ , Jimmy Lin $^{1}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
294,
|
| 19 |
+
149,
|
| 20 |
+
702,
|
| 21 |
+
168
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "<sup>1</sup> University of Waterloo <sup>2</sup> Vectora",
|
| 28 |
+
"bbox": [
|
| 29 |
+
337,
|
| 30 |
+
172,
|
| 31 |
+
658,
|
| 32 |
+
190
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "{s269lin,jimmylin}@uwaterloo.ca,amin@vectora.com",
|
| 39 |
+
"bbox": [
|
| 40 |
+
211,
|
| 41 |
+
197,
|
| 42 |
+
791,
|
| 43 |
+
212
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "Abstract",
|
| 50 |
+
"text_level": 1,
|
| 51 |
+
"bbox": [
|
| 52 |
+
260,
|
| 53 |
+
252,
|
| 54 |
+
339,
|
| 55 |
+
268
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "Multilingual information retrieval (MLIR) is a crucial yet challenging task due to the need for human annotations in multiple languages, making training data creation labor-intensive. In this paper, we introduce mAggretriever, which effectively leverages semantic and lexical features from pre-trained multilingual transformers (e.g., mBERT and XLM-R) for dense retrieval. To enhance training and inference efficiency, we employ approximate masked-language modeling prediction for computing lexical features, reducing $70 - 85\\%$ GPU memory requirement for mAggretriever fine-tuning. Empirical results demonstrate that mAggretriever, fine-tuned solely on English training data, surpasses existing state-of-the-art multilingual dense retrieval models that undergo further training on large-scale MLIR training data. Our code is available at https://github.com/castorini/dhr.",
|
| 62 |
+
"bbox": [
|
| 63 |
+
141,
|
| 64 |
+
281,
|
| 65 |
+
463,
|
| 66 |
+
565
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "1 Introduction",
|
| 73 |
+
"text_level": 1,
|
| 74 |
+
"bbox": [
|
| 75 |
+
114,
|
| 76 |
+
580,
|
| 77 |
+
260,
|
| 78 |
+
594
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "text",
|
| 84 |
+
"text": "Fine-tuning a pre-trained transformer has proven to be highly effective in many tasks of natural language processing, including information retrieval (IR). Despite its success, the recent state-of-the-art dense retrieval (DR) models (Ni et al., 2022; Lin et al., 2023a) predominantly focus on English. This bias arises from the fact that constructing an effective DR model requires a substantial amount of annotated training data, which is predominantly available in English datasets (Bajaj et al., 2016; Kwiatkowski et al., 2019). This makes it challenging for users of low-resource languages to benefit from the recent IR progress.",
|
| 85 |
+
"bbox": [
|
| 86 |
+
112,
|
| 87 |
+
606,
|
| 88 |
+
489,
|
| 89 |
+
815
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "To address this issue, researchers have explored leveraging pre-trained multilingual transformers, such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020), which exhibit promising language transferability even when fine-tuned on",
|
| 96 |
+
"bbox": [
|
| 97 |
+
112,
|
| 98 |
+
816,
|
| 99 |
+
489,
|
| 100 |
+
896
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "English datasets alone (Zhang et al., 2023b). However, Izacard et al. (2022) emphasize the importance of contrastive pre-training on multilingual corpora to achieve improved language transferability in IR. Other approaches utilize multilingual parallel corpora or translation pairs for contrastive pretraining (Feng et al., 2022) or fine-tuning (Reimers and Gurevych, 2020; Bonifacio et al., 2021). However, these solutions are viable only when significant computational resources or multilingual parallel data are available.",
|
| 107 |
+
"bbox": [
|
| 108 |
+
507,
|
| 109 |
+
252,
|
| 110 |
+
885,
|
| 111 |
+
428
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "text",
|
| 117 |
+
"text": "Recently, Lin et al. (2023b) demonstrate that the existing DR models solely using [CLS] (or averaged pooling) do not fully exploit the capability from pre-trained transformers for retrieval. Lin et al. (2023b) propose Aggretriever by combining the semantic and lexical features from the respective two components, [CLS] and masked language modeling (MLM) prediction, which shows superior effectiveness on diverse English retrieval tasks. This inspires us to ask the question: Can Aggre-triever benefit multilingual retrieval?",
|
| 118 |
+
"bbox": [
|
| 119 |
+
507,
|
| 120 |
+
430,
|
| 121 |
+
885,
|
| 122 |
+
606
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "text",
|
| 128 |
+
"text": "Extending Aggretriever to support multilingual retrieval poses challenges due to increased computation costs for extracting lexical features from the MLM component as the vocabulary size of the pre-trained model grows, making fine-tuning with limited resources challenging. In this work, we propose two simple approaches to approximate MLM prediction to extract lexical features from multilingual pre-trained transformers, making the training of mAggretriever possible in one GPU within 40 GBs of memory. Remarkably, mAggretriever exhibits strong retrieval capability across multiple languages despite being fine-tuned on English data.",
|
| 129 |
+
"bbox": [
|
| 130 |
+
507,
|
| 131 |
+
608,
|
| 132 |
+
885,
|
| 133 |
+
816
|
| 134 |
+
],
|
| 135 |
+
"page_idx": 0
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "text",
|
| 139 |
+
"text": "The paper is structured as follows: we begin by providing background information on standard DR and Aggretriever. We then present our proposed approaches to tackle the computational challenges",
|
| 140 |
+
"bbox": [
|
| 141 |
+
507,
|
| 142 |
+
818,
|
| 143 |
+
882,
|
| 144 |
+
882
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 0
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "page_footnote",
|
| 150 |
+
"text": "<sup>1</sup>In this paper, we refer multilingual retrieval to monolingual retrieval across multiple languages.",
|
| 151 |
+
"bbox": [
|
| 152 |
+
509,
|
| 153 |
+
892,
|
| 154 |
+
882,
|
| 155 |
+
917
|
| 156 |
+
],
|
| 157 |
+
"page_idx": 0
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "page_footnote",
|
| 161 |
+
"text": "*Work done during Sheng-Chieh's internship at Vectara.",
|
| 162 |
+
"bbox": [
|
| 163 |
+
114,
|
| 164 |
+
904,
|
| 165 |
+
460,
|
| 166 |
+
917
|
| 167 |
+
],
|
| 168 |
+
"page_idx": 0
|
| 169 |
+
},
|
| 170 |
+
{
"type": "page_number",
"text": "11688",
"bbox": [475, 927, 524, 940],
"page_idx": 0
},
{
"type": "footer",
"text": "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11688-11696, December 6-10, 2023. ©2023 Association for Computational Linguistics",
"bbox": [152, 945, 843, 972],
"page_idx": 0
},
{
"type": "text",
"text": "involved in extending Aggretriever to support multilingual retrieval. Subsequently, we compare mAggretriever with other state-of-the-art multilingual DR models and explore the potential for extending mAggretriever to enable cross-lingual retrieval.",
"bbox": [112, 84, 489, 165],
"page_idx": 1
},
{
"type": "text",
"text": "2 Background",
"text_level": 1,
"bbox": [114, 177, 257, 193],
"page_idx": 1
},
{
"type": "text",
"text": "Dense Retrieval. Given a query with sequential tokens $q = ([CLS], q_1, \\dots, q_n)$ , our task is to retrieve a list of passages that maximizes some ranking metric such as nDCG or MRR. Standard dense retrieval (DR) models (Reimers and Gurevych, 2019; Karpukhin et al., 2020) based on pre-trained language models encode queries and passages as low-dimensional [CLS] vectors with a bi-encoder architecture and use the dot product between the encoded vectors as the similarity score:",
"bbox": [112, 203, 490, 363],
"page_idx": 1
},
{
"type": "equation",
"text": "\n$$\n\\operatorname {s i m} _ {\\mathrm {C L S}} (q, p) \\triangleq \\mathbf {e} _ {q _ {[ \\mathrm {C L S} ]}} \\cdot \\mathbf {e} _ {p _ {[ \\mathrm {C L S} ]}}, \\tag {1}\n$$\n",
"text_format": "latex",
"bbox": [181, 376, 487, 395],
"page_idx": 1
},
{
"type": "text",
"text": "where $\\mathbf{e}_{q_{[CLS]}}$ and $\\mathbf{e}_{p_{[CLS]}}$ are the respective query and passage [CLS] vectors at the last layer of a pre-trained language model (e.g., BERT).",
"bbox": [112, 409, 487, 456],
"page_idx": 1
},
{
"type": "text",
"text": "Aggretriever. In addition to using [CLS] vectors to capture semantic textual features, Lin et al. (2023b) further propose to capture lexical textual features from the pre-trained MLM prediction head by projecting each contextualized token embedding $\\mathbf{e}_{q_i}$ into a high-dimensional vector in the wordpiece lexical space:",
"bbox": [112, 466, 489, 579],
"page_idx": 1
},
{
"type": "equation",
"text": "\n$$\n\\mathbf {p} _ {q _ {i}} = \\operatorname {s o f t m a x} \\left(\\mathbf {e} _ {q _ {i}} \\cdot \\mathbf {W} _ {\\mathrm {m l m}} + \\mathbf {b} _ {\\mathrm {m l m}}\\right), \\tag {2}\n$$\n",
"text_format": "latex",
"bbox": [161, 593, 487, 611],
"page_idx": 1
},
{
"type": "text",
"text": "where $\\mathbf{e}_{q_i} \\in \\mathbb{R}^d$ , $\\mathbf{W}_{\\mathrm{mlm}} \\in \\mathbb{R}^{d \\times |\\mathrm{V}_{\\mathrm{wp}}|}$ , and $\\mathbf{b}_{\\mathrm{mlm}} \\in \\mathbb{R}^{|\\mathrm{V}_{\\mathrm{wp}}|}$ are the weights of the pre-trained MLM linear projector, and $\\mathbf{p}_{q_i} \\in \\mathbb{R}^{|\\mathrm{V}_{\\mathrm{wp}}|}$ is the $i$ -th contextualized token represented as a probability distribution over the BERT wordpiece vocabulary, $\\mathrm{V}_{\\mathrm{wp}}$ . Weighted max pooling is then performed over the sequential representations $(\\mathbf{p}_{q_1}, \\mathbf{p}_{q_2}, \\dots, \\mathbf{p}_{q_l})$ to obtain a single-vector lexical representation:",
"bbox": [112, 623, 489, 753],
"page_idx": 1
},
{
"type": "equation",
"text": "\n$$\n\\mathbf {v} _ {q} [ v ] = \\max _ {i \\in (1, 2, \\dots , l)} w _ {i} \\cdot \\mathbf {p} _ {q _ {i}} [ v ], \\tag {3}\n$$\n",
"text_format": "latex",
"bbox": [186, 766, 487, 791],
"page_idx": 1
},
{
"type": "text",
"text": "where $w_{i} = |\\mathbf{e}_{q_{i}}\\cdot \\mathbf{W} + \\mathbf{b}| \\in \\mathbb{R}^{1}$ is a positive scalar and $v \\in \\{1,2,\\dots ,|\\mathrm{V}_{\\mathrm{wp}}|\\}$ ; $\\mathbf{W} \\in \\mathbb{R}^{d\\times 1}$ and $\\mathbf{b} \\in \\mathbb{R}^{1}$ are trainable weights. Note that the scalar $w_{i}$ for each token $q_{i}$ is essential to capture term importance, which $\\mathbf{p}_{q_i}$ alone cannot capture since it is normalized by softmax. The [CLS] token embedding is excluded since it is used for",
"bbox": [112, 804, 489, 917],
"page_idx": 1
},
{
"type": "image",
"img_path": "images/56fed92bf5b2f281284a5b6646774438b22e2c4aca5dcd14843f90941f55aa0e.jpg",
"image_caption": [
"Figure 1: Illustration of MLM prediction for the input token $q_{1}$ . (1) Whole-vocabulary prediction considers all languages while target-language prediction only considers the tokens in one language; e.g., (2) source language (Spanish) or (3) English. (4) Self prediction assigns probability of one to itself. 0. denotes less than 0.1."
],
"image_footnote": [],
"bbox": [527, 82, 867, 284],
"page_idx": 1
},
{
"type": "text",
"text": "next-sentence prediction during pre-training and thus does not carry much lexical information.",
"bbox": [507, 406, 882, 436],
"page_idx": 1
},
{
"type": "text",
"text": "The wordpiece lexical representation $\\mathbf{v}_q$ is then compressed, without supervision, into a low-dimensional vector $\\mathbf{agg}_q^\\star$ and concatenated with the [CLS] vector. The similarity score between a given $q$-$p$ pair is computed as their dot product:",
"bbox": [507, 437, 882, 518],
"page_idx": 1
},
{
"type": "equation",
"text": "\n$$\n\\begin{array}{l} \\operatorname {s i m} (q, p) \\triangleq \\left(\\mathbf {e} _ {q _ {[ \\mathrm {C L S} ]}} \\oplus \\mathbf {v} _ {q}\\right) \\cdot \\left(\\mathbf {e} _ {p _ {[ \\mathrm {C L S} ]}} \\oplus \\mathbf {v} _ {p}\\right) \\tag {4} \\\\ \\approx \\left(\\epsilon_ {q _ {[ \\mathrm {C L S} ]}} \\oplus \\mathbf {a g g} _ {q} ^ {\\star}\\right) \\cdot \\left(\\epsilon_ {p _ {[ \\mathrm {C L S} ]}} \\oplus \\mathbf {a g g} _ {p} ^ {\\star}\\right), \\\\ \\end{array}\n$$\n",
"text_format": "latex",
"bbox": [514, 527, 882, 567],
"page_idx": 1
},
{
"type": "text",
"text": "where $\\mathbf{e}_{q_{[\\mathrm{CLS}]}}$ and $\\mathbf{v}_q$ capture semantic and lexical textual features from BERT, respectively. Following Lin et al. (2023b), we linearly project $\\mathbf{e}_{q_{[\\mathrm{CLS}]}}$ into 128 dimensions and compress $\\mathbf{v}_q$ into 640 dimensions; i.e., $\\epsilon_{q_{[\\mathrm{CLS}]}}\\in \\mathbb{R}^{128}$ and $\\mathbf{agg}_q^\\star \\in \\mathbb{R}^{640}$ . We refer readers to Lin et al. (2023b) for the details of compressing $\\mathbf{v}_q$ into $\\mathbf{agg}_q^\\star$ .",
"bbox": [507, 577, 884, 692],
"page_idx": 1
},
{
"type": "text",
"text": "3 Our Approach",
"text_level": 1,
"bbox": [507, 701, 672, 718],
"page_idx": 1
},
{
"type": "text",
"text": "In this work, we extend Aggretriever to multilingual retrieval, yielding mAggretriever. However, directly applying Eq. (2) to multilingual pre-trained MLM heads over the whole vocabulary increases the computation cost of both training and inference due to the large vocabulary size $|\\mathrm{V_{wp}}|$ of the MLM projector $\\mathbf{W}_{\\mathrm{mlm}}$ (Nair et al., 2022; Lassance, 2023). For example, mBERT and XLM-R have vocabulary sizes of 120K and 250K, respectively (vs. BERT's 35K), in the MLM projector. To address this issue, we propose two computationally efficient approaches to approximate the MLM prediction in Eq. (2).",
"bbox": [505, 726, 884, 919],
"page_idx": 1
},
{
"type": "page_number",
"text": "11689",
"bbox": [477, 927, 524, 940],
"page_idx": 1
},
{
"type": "table",
"img_path": "images/fde26a106a5c25285b0faa80c2e1a6348d01706b584e6ae3a473b5b9267fdb82.jpg",
"table_caption": [
"Table 1: Training and inference cost comparisons on MS MARCO with XLM-R base backbone."
],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Models</td><td colspan=\"2\">vector components</td><td rowspan=\"2\">MLM prediction</td><td colspan=\"2\">training cost</td><td colspan=\"2\">query encoding latency</td></tr><tr><td>[CLS]</td><td>agg*</td><td>GPU memory</td><td>total time</td><td>CPU</td><td>GPU</td></tr><tr><td>(1) XLM-RCLS</td><td>768 dim.</td><td>0 dim.</td><td>-</td><td>21 GBs</td><td>5.5 hrs</td><td>105 ms/q</td><td>9.1 ms/q</td></tr><tr><td>(2) XLM-RAGG</td><td></td><td></td><td>whole vocabulary</td><td>OOM (140 GBs)</td><td>-</td><td>400 ms/q</td><td>15.1 ms/q</td></tr><tr><td>(3) XLM-RtgAGG</td><td>128 dim.</td><td>640 dim.</td><td>target language</td><td>39 GBs</td><td>9.5 hrs</td><td>248 ms/q</td><td>11.4 ms/q</td></tr><tr><td>(4) XLM-RselfAGG</td><td></td><td></td><td>self</td><td>21 GBs</td><td>5.5 hrs</td><td>123 ms/q</td><td>9.1 ms/q</td></tr></table>",
"bbox": [119, 107, 878, 204],
"page_idx": 2
},
{
"type": "text",
"text": "Target-language prediction. One intuitive approach is to compute the probability distribution over the target-language token set of interest, denoted $\\mathrm{V}_{\\mathrm{wp}}^{\\mathrm{tg}}$ , by replacing $\\mathbf{W}_{\\mathrm{mlm}}$ and $\\mathbf{b}_{\\mathrm{mlm}}$ in Eq. (2) with $\\mathbf{W}_{\\mathrm{mlm}}^{\\mathrm{tg}}$ and $\\mathbf{b}_{\\mathrm{mlm}}^{\\mathrm{tg}}$ , respectively:",
"bbox": [112, 228, 489, 311],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\left\\{ \\begin{array}{l l} \\mathbf {W} _ {\\mathrm {m l m}} ^ {\\mathrm {t g}} [:, v ] = \\mathbf {W} _ {\\mathrm {m l m}} [:, v ]; \\; \\mathbf {b} _ {\\mathrm {m l m}} ^ {\\mathrm {t g}} [ v ] = \\mathbf {b} _ {\\mathrm {m l m}} [ v ], & \\text {if } v \\in \\mathrm {V} _ {\\mathrm {w p}} ^ {\\mathrm {t g}} \\\\ \\mathbf {W} _ {\\mathrm {m l m}} ^ {\\mathrm {t g}} [:, v ] = \\overrightarrow {0}; \\; \\mathbf {b} _ {\\mathrm {m l m}} ^ {\\mathrm {t g}} [ v ] = 0, & \\text {otherwise}. \\end{array} \\right.\n$$\n",
"text_format": "latex",
"bbox": [129, 319, 473, 380],
"page_idx": 2
},
{
"type": "text",
"text": "With the above equation, we only have to compute the matrix multiplication and softmax over $|\\mathrm{V}_{\\mathrm{wp}}^{\\mathrm{tg}}|$ instead of $|\\mathrm{V}_{\\mathrm{wp}}|$ tokens. This approach assumes that only the tokens corresponding to the target language receive nonzero probability when conducting MLM prediction; i.e., $\\mathbf{p}_{q_i}[v] = 0$ if $v \\notin \\mathrm{V}_{\\mathrm{wp}}^{\\mathrm{tg}}$ .",
"bbox": [112, 388, 487, 487],
"page_idx": 2
},
{
"type": "text",
"text": "Self prediction. Since the BERT MLM pre-training task includes recovering the unmasked token itself, to further save computation cost, we may approximate MLM prediction by assigning a value of one to the token $q_{i}$ itself, and zero to all other tokens:",
"bbox": [112, 493, 489, 574],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\mathbf {p} _ {q _ {i}} [ v ] = \\begin{cases} 1, & \\text {if } v = \\operatorname {tok\\_id} (q _ {i}) \\\\ 0, & \\text {otherwise}, \\end{cases} \\quad \\mathbf {p} _ {q _ {i}} \\in \\{0, 1 \\} ^ {| \\mathrm {V} _ {\\mathrm {w p}} |}. \\tag {5}\n$$\n",
"text_format": "latex",
"bbox": [124, 583, 487, 604],
"page_idx": 2
},
{
"type": "text",
"text": "This operation removes the computationally expensive matrix multiplication and softmax operation in Eq. (2). Note that, combining Eq. (3) and (5), the lexical representations $\\mathbf{v}_q$ from self prediction can be considered bag-of-words vectors with learned term weights.",
"bbox": [112, 613, 489, 709],
"page_idx": 2
},
{
"type": "text",
"text": "Figure 1 illustrates whole-vocabulary prediction and our proposed target-language and self MLM predictions. Table 1 compares the training and inference costs of standard DR (row 1) and mAggretriever with different MLM prediction strategies on the MS MARCO dataset using the XLM-R base backbone. The training and query encoding settings are detailed in Section 4.2. We observe that training with target-language (English in our case) MLM prediction reduces the GPU memory requirement compared to the whole vocabulary (row 3 vs 2), while self prediction yields training and inference efficiency on par with standard DR (row 4 vs 1).",
"bbox": [112, 709, 489, 917],
"page_idx": 2
},
{
"type": "text",
"text": "4 Experimental Setups",
"text_level": 1,
"bbox": [507, 227, 724, 243],
"page_idx": 2
},
{
"type": "text",
"text": "4.1 Datasets and Metrics",
"text_level": 1,
"bbox": [507, 255, 721, 269],
"page_idx": 2
},
{
"type": "text",
"text": "We use the MS MARCO passage ranking dataset introduced by Bajaj et al. (2016), comprising a corpus of 8.8M passages and 500K training queries. Supervised-language (English) retrieval effectiveness is evaluated on the 6980 queries (MARCO dev) from the development set, with one relevant passage per query on average. Following the established procedure, we report MRR@10 and R@1000 as the metrics.",
"bbox": [505, 277, 884, 420],
"page_idx": 2
},
{
"type": "text",
"text": "We evaluate models' zero-shot retrieval effectiveness in other languages using the MIRACL dataset introduced by Zhang et al., comprising around 77k queries over Wikipedia in 18 languages with over 700k high-quality relevance judgments by native speakers. We use the publicly available development queries and their relevance judgments in 15 languages (two surprise languages and English are excluded). Following Zhang et al., we report macro-averaged nDCG@10 and R@100 over the 15 languages and list the full numbers in Appendix A.1.",
"bbox": [507, 424, 884, 617],
"page_idx": 2
},
{
"type": "text",
"text": "Finally, we study how to conduct cross-lingual retrieval with mAggretriever on the XQuAD-R dataset introduced by Roy et al. (2020), consisting of parallel queries and corpora in 11 languages. We conduct retrieval using queries in language $X_{Q}$ against the corpus in language $X_{C}$ and report the macro-averaged MAP@100 over all the cross-lingual combinations of the 110 language pairs $(X_{Q} \\neq X_{C})$ , and the other 11 monolingual combinations $(X_{Q} = X_{C})$ .",
"bbox": [507, 618, 884, 778],
"page_idx": 2
},
{
"type": "text",
"text": "Table 2 reports the data statistics of MIRACL and XQuAD-R. Note that the candidates of MIRACL are passages while the candidates of XQuAD-R are chunked sentences from the XQuAD corpora. Although the XQuAD corpora are manually rewritten multilingual parallel corpora (Artetxe et al., 2020), the numbers of chunked sentences are",
"bbox": [507, 780, 885, 892],
"page_idx": 2
},
{
"type": "page_footnote",
"text": "$^{2}$ Datasets: MIRACL, XQuAD-R",
"bbox": [509, 903, 715, 917],
"page_idx": 2
},
{
"type": "page_number",
"text": "11690",
"bbox": [475, 927, 524, 940],
"page_idx": 2
},
{
"type": "table",
"img_path": "images/3593fa230f9f563473b828ee3a864ac52dc55eef6a9222e08a2cfb778df11378.jpg",
"table_caption": [
"Table 2: MIRACL and XQuAD-R data statistics."
],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">language</td><td rowspan=\"2\">ISO</td><td colspan=\"2\">MIRACL Dev</td><td colspan=\"2\">XQuAD-R</td></tr><tr><td># queries</td><td># candidates</td><td># queries</td><td># candidates</td></tr><tr><td>Arabic</td><td>ar</td><td>2,869</td><td>2,061,414</td><td>1,190</td><td>1,222</td></tr><tr><td>Bengali</td><td>bn</td><td>411</td><td>297,265</td><td>-</td><td>-</td></tr><tr><td>German</td><td>de</td><td>-</td><td>-</td><td>1,190</td><td>1,276</td></tr><tr><td>Greek</td><td>el</td><td>-</td><td>-</td><td>1,190</td><td>1,234</td></tr><tr><td>English</td><td>en</td><td>648</td><td>32,893,221</td><td>1,190</td><td>1,180</td></tr><tr><td>Spanish</td><td>es</td><td>799</td><td>10,373,953</td><td>1,190</td><td>1,215</td></tr><tr><td>Persian</td><td>fa</td><td>632</td><td>2,207,172</td><td>-</td><td>-</td></tr><tr><td>Finnish</td><td>fi</td><td>1,271</td><td>1,883,509</td><td>-</td><td>-</td></tr><tr><td>French</td><td>fr</td><td>343</td><td>14,636,953</td><td>-</td><td>-</td></tr><tr><td>Hindi</td><td>hi</td><td>350</td><td>506,264</td><td>1,190</td><td>1,244</td></tr><tr><td>Indonesian</td><td>id</td><td>960</td><td>1,446,315</td><td>-</td><td>-</td></tr><tr><td>Japanese</td><td>ja</td><td>860</td><td>6,953,614</td><td>-</td><td>-</td></tr><tr><td>Korean</td><td>ko</td><td>213</td><td>1,486,752</td><td>-</td><td>-</td></tr><tr><td>Russian</td><td>ru</td><td>1,252</td><td>9,543,918</td><td>1,190</td><td>1,219</td></tr><tr><td>Swahili</td><td>sw</td><td>482</td><td>131,924</td><td>-</td><td>-</td></tr><tr><td>Telugu</td><td>te</td><td>828</td><td>518,079</td><td>-</td><td>-</td></tr><tr><td>Thai</td><td>th</td><td>733</td><td>542,166</td><td>1,190</td><td>852</td></tr><tr><td>Turkish</td><td>tr</td><td>-</td><td>-</td><td>1,190</td><td>1,167</td></tr><tr><td>Vietnamese</td><td>vi</td><td>-</td><td>-</td><td>1,190</td><td>1,209</td></tr><tr><td>Chinese</td><td>zh</td><td>393</td><td>4,934,368</td><td>1,190</td><td>1,196</td></tr></table>",
"bbox": [122, 107, 485, 373],
"page_idx": 3
},
{
"type": "text",
"text": "different between languages. Also note that, for each query, MIRACL has multiple relevant candidates while XQuAD-R has only one.",
"bbox": [112, 400, 489, 448],
"page_idx": 3
},
{
"type": "text",
"text": "4.2 Implementation Details",
"text_level": 1,
"bbox": [112, 461, 346, 476],
"page_idx": 3
},
{
"type": "text",
"text": "Models. We apply mAggretriever to two 12-layer pre-trained multilingual models: (1) mBERT; (2) XLM-R. $^3$ We compare models fine-tuned solely using the [CLS] vector and mAggretriever models using whole-vocabulary MLM prediction, denoted with the subscripts \"CLS\" and \"AGG\", respectively; e.g., $\\mathrm{mBERT}_{\\mathrm{CLS}}$ and $\\mathrm{mBERT}_{\\mathrm{AGG}}$ . We report the two variants of mAggretriever with target-language and self prediction; e.g., $\\mathrm{mBERT}_{\\mathrm{AGG}}^{\\mathrm{tg}}$ and $\\mathrm{mBERT}_{\\mathrm{AGG}}^{\\mathrm{self}}$ . For target-language prediction, we train mAggretriever using English token prediction and run inference using the corresponding language of each corpus; for whole-vocabulary prediction, we fine-tune with half of the batch size. In addition, we report the numbers of BM25 and mDPR from Zhang et al. as reference points, along with two state-of-the-art multilingual retrievers: (1) mContriever (Izacard et al., 2022), pre-trained on multilingual corpora in 29 languages and further fine-tuned on the MS MARCO dataset; $^3$ (2) Cohere (API), whose numbers are copied from Kamalloo et al. (2023). $^4$",
"bbox": [112, 483, 489, 820],
"page_idx": 3
},
{
"type": "text",
"text": "Training and Inference. We train our models on a single A100 GPU with 80 GB of memory for 6 epochs (around 100k steps) with a learning rate of",
"bbox": [112, 832, 487, 881],
"page_idx": 3
},
{
"type": "text",
"text": "7e-6. Each batch includes 24 queries, and for each query, we randomly sample one positive and seven negative passages. All the negatives are sampled from the MS MARCO \"small\" triples training set, which is created using BM25. During training, we minimize the negative log likelihood as the standard contrastive loss and, following Karpukhin et al. (2020), for each query, we consider all the (positive and negative) passages from the other triplets in the batch as in-batch negative samples. We set the maximum input length for the query and the passage to 32 and 128, respectively, at both training and inference stages for MS MARCO. For MIRACL and XQuAD-R, we use maximum input lengths of 128 and 256 for the query and passage, respectively. Note that we lowercase all the queries and passages for mAggretriever. $^5$ We measure query encoding latency on the 6980 MS MARCO development queries with a batch size of 1 and a single thread on a Linux machine with 12 Intel(R) Xeon(R) Gold 5317 CPUs @ 3.00GHz and 88 GB of RAM.",
"bbox": [507, 84, 884, 420],
"page_idx": 3
},
{
"type": "text",
"text": "Target Token Set Construction. For each corpus language in MIRACL and XQuAD-R, we tokenize and lowercase all the passages and collect the unique tokens in the corpus as the target token set. For example, when fine-tuning on the MS MARCO dataset, we use the token set built from the MS MARCO corpus. When conducting target-language MLM prediction on MIRACL Arabic queries and corpus, we use the token set collected from the Arabic corpus as our target token set. Note that self MLM prediction does not require collecting a token set for the target language.",
"bbox": [507, 430, 885, 625],
"page_idx": 3
},
{
"type": "text",
"text": "5 Results",
"text_level": 1,
"bbox": [507, 634, 608, 651],
"page_idx": 3
},
{
"type": "text",
"text": "5.1 Results on MIRACL",
"text_level": 1,
"bbox": [507, 661, 717, 676],
"page_idx": 3
},
{
"type": "text",
"text": "Table 3 reports models' retrieval effectiveness on MS MARCO and MIRACL development queries. We first observe that mAggretriever, incorporating lexical features, not only outperforms its CLS counterpart in supervised English retrieval (MARCO), but also exhibits superior transferability to other languages (MIRACL), regardless of backbone. In addition, $\\mathrm{mBERT}_{\\mathrm{AGG}}$ , without introducing contrastive pre-training on multilingual corpora, outperforms mContriever on MIRACL in terms of nDCG@10. Note that $\\mathrm{mBERT}_{\\mathrm{AGG}}$ and mContriever are both initialized from mBERT. We",
"bbox": [507, 681, 884, 873],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "<sup>5</sup>Our preliminary experiments on MS MARCO show that lowercasing improves mAggretriever while it degrades its CLS counterpart.",
"bbox": [509, 879, 882, 917],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "$^{3}$ Model checkpoints: mBERT, XLM-R, mContriever",
"bbox": [112, 889, 440, 904],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "$^{4}$ Cohere multilingual retrieval model",
"bbox": [112, 904, 341, 917],
"page_idx": 3
},
{
"type": "page_number",
"text": "11691",
"bbox": [477, 927, 524, 940],
"page_idx": 3
},
{
"type": "table",
"img_path": "images/019f00b69982fbe0f4422a537cba102881903ead740fa0990947ba4563e48c8b.jpg",
"table_caption": [
"Table 3: Supervised (MARCO) English and zero-shot (MIRACL) multilingual retrieval effectiveness comparisons. Full numbers are listed in Appendix A.1."
],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Models</td><td colspan=\"2\">MARCO Dev</td><td colspan=\"2\">MIRACL Dev</td></tr><tr><td colspan=\"2\">English</td><td colspan=\"2\">15 lang. macro avg.</td></tr><tr><td></td><td>MRR@10</td><td>R@1K</td><td>nDCG@10</td><td>R@100</td></tr><tr><td>(a) BM25</td><td>18.8</td><td>85.8</td><td>39.6</td><td>78.5</td></tr><tr><td>(b) mDPR</td><td>29.6</td><td>94.6</td><td>41.7</td><td>78.9</td></tr><tr><td>(c) mContriever</td><td>27.4</td><td>97.1</td><td>43.8</td><td>85.9</td></tr><tr><td>(d) Cohere (API)</td><td>-</td><td>-</td><td>50.1</td><td>-</td></tr><tr><td>(1) mBERTCLS</td><td>29.1</td><td>93.6</td><td>36.1</td><td>71.6</td></tr><tr><td>(2) mBERTAGG</td><td>34.3</td><td>95.8</td><td>44.4</td><td>79.5</td></tr><tr><td>(3) mBERTtgAGG</td><td>34.5</td><td>96.1</td><td>44.4</td><td>80.1</td></tr><tr><td>(4) mBERTselfAGG</td><td>34.2</td><td>95.6</td><td>46.9</td><td>82.3</td></tr><tr><td>(5) XLM-RCLS</td><td>31.1</td><td>93.8</td><td>39.3</td><td>73.9</td></tr><tr><td>(6) XLM-RAGG</td><td>34.7</td><td>96.1</td><td>52.9</td><td>86.4</td></tr><tr><td>(7) XLM-RtgAGG</td><td>35.0</td><td>96.2</td><td>53.3</td><td>86.0</td></tr><tr><td>(8) XLM-RselfAGG</td><td>35.0</td><td>96.0</td><td>53.3</td><td>86.3</td></tr></table>",
"bbox": [122, 135, 485, 347],
"page_idx": 4
},
{
|
| 764 |
+
"type": "text",
|
| 765 |
+
"text": "hypothesize that mContriever's high recall (i.e., R@1K) comes from its pre-training on multilingual corpora with 29 languages.",
|
| 766 |
+
"bbox": [
|
| 767 |
+
112,
|
| 768 |
+
370,
|
| 769 |
+
487,
|
| 770 |
+
419
|
| 771 |
+
],
|
| 772 |
+
"page_idx": 4
|
| 773 |
+
},
|
| 774 |
+
{
|
| 775 |
+
"type": "text",
|
| 776 |
+
"text": "Switching to XLM-R backbone, mAggretriever even outperforms Cohere (API). Furthermore, we notice that mAaggretriever with XLM-R backbone improves over mBERT more than its CLS counterpart does. For example, in the case of MIRACL, XLM- $\\mathsf{R}_{\\mathsf{AGG}}^{\\mathrm{tg}}$ exhibits a significant improvement over $\\mathsf{mBERT}_{\\mathsf{AGG}}^{\\mathrm{tg}}$ from an nDCG@10 score of 44.4 to 53.3, whereas XLM- $\\mathsf{R}_{\\mathsf{CLS}}$ only sees a modest improvement over $\\mathsf{mBERT}_{\\mathsf{CLS}}$ from 36.1 to 39.3. This notable enhancement highlights mAaggretriever's ability to effectively utilize a superior pre-trained language model.",
|
| 777 |
+
"bbox": [
|
| 778 |
+
112,
|
| 779 |
+
419,
|
| 780 |
+
489,
|
| 781 |
+
613
|
| 782 |
+
],
|
| 783 |
+
"page_idx": 4
|
| 784 |
+
},
|
| 785 |
+
{
|
| 786 |
+
"type": "text",
|
| 787 |
+
"text": "Finally, compared to whole-vocabulary and proposed approximate MLM predictions, we observe that self prediction shows comparable and even strong language transferability. We hypothesize that MLM prediction learned from English data cannot transfer well to other languages. It is worth mentioning that compared to whole-vocabulary MLM prediction, the proposed approximate MLM prediction, target-language and self prediction, are advantageous for real-world deployment since they show almost no effectiveness drop (sometimes even better) but require far less training and inference cost as shown in Table 1.",
|
| 788 |
+
"bbox": [
112,
613,
489,
821
],
"page_idx": 4
},
{
"type": "text",
"text": "5.2 Results on XQuAD-R",
"text_level": 1,
"bbox": [
112,
833,
331,
848
],
"page_idx": 4
},
{
"type": "text",
"text": "In Table 4, we directly apply XLM-R based models fine-tuned on MS MARCO to the XQuAD-R dataset. In the experiment, we try different MLM prediction settings for $\\mathrm{XLM - R_{AGG}^{tg}}$ . For example, instead of",
"bbox": [
112,
854,
489,
921
],
"page_idx": 4
},
{
"type": "table",
"img_path": "images/d0bd7f9444d2d7738afd1255feec6affc0ec02b0a85a157d1574832d2573d07c.jpg",
"table_caption": [
"Table 4: Zero-shot retrieval effectiveness on XQuAD-R. $X_{Q}\\left(X_{C}\\right)$ denotes the language of queries (corpus)."
],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Models</td><td colspan=\"2\">MLM target lang.</td><td colspan=\"2\">XQuAD-R</td></tr><tr><td>query</td><td>corpus</td><td>\\( {X}_{Q} = {X}_{C} \\)</td><td>\\( {X}_{Q} \\neq {X}_{C} \\)</td></tr><tr><td></td><td></td><td></td><td colspan=\"2\">MAP@100</td></tr><tr><td>(1) XLM- \\( {\\mathrm{R}}_{\\mathrm{{CLS}}} \\)</td><td>-</td><td>-</td><td>73.1</td><td>57.5</td></tr><tr><td>(2) XLM- \\( {\\mathrm{R}}_{\\mathrm{{AGG}}} \\)</td><td>-</td><td>-</td><td>77.4</td><td>41.8</td></tr><tr><td>(3)</td><td>\\( {X}_{Q} \\)</td><td>\\( {X}_{C} \\)</td><td>77.4</td><td>36.0</td></tr><tr><td>(4) XLM- \\( {\\mathrm{R}}_{\\mathrm{{AGG}}}^{\\mathrm{{tg}}} \\)</td><td>\\( {X}_{C} \\)</td><td>\\( {X}_{C} \\)</td><td>77.4</td><td>44.9</td></tr><tr><td>(5)</td><td>English</td><td>English</td><td>73.5</td><td>51.7</td></tr><tr><td>(6) XLM- \\( {\\mathrm{R}}_{\\mathrm{{AGG}}}^{\\text{self }} \\)</td><td>-</td><td>-</td><td>77.3</td><td>36.2</td></tr></table>",
"bbox": [
515,
122,
878,
249
],
"page_idx": 4
},
{
"type": "text",
"text": "using the respective query and corpus source languages as the target language (row 3), we use the language corresponding to each corpus (row 4) or English (row 5) as the target language for both queries and corpus. Note that rows 3 and 4 are the same when queries and corpus are in the same language $(X_{Q} = X_{C})$ .",
"bbox": [
507,
279,
884,
391
],
"page_idx": 4
},
{
"type": "text",
"text": "We observe that mAggretriever shows relatively poor cross-lingual retrieval effectiveness ( $X_Q \\neq X_C$ ) compared to its CLS counterpart (rows 2, 3, 6 vs 1). When aligning the MLM prediction target language for queries and corpus, cross-lingual retrieval effectiveness improves (row 3 vs rows 4, 5). These results show that the MLM prediction head can potentially be used as a translation layer to project queries and corpus into the same language, which is also reported by Nair et al. (2022). It is possible to leverage transformers pre-trained with translation language modeling (Chi et al., 2021; Feng et al., 2022) and established parallel training data (Bonifacio et al., 2021) to improve mAggretriever's cross-lingual retrieval capability, which we leave for future work.",
"bbox": [
507,
395,
884,
652
],
"page_idx": 4
},
{
"type": "text",
"text": "6 Conclusion",
"text_level": 1,
"bbox": [
507,
675,
640,
690
],
"page_idx": 4
},
{
"type": "text",
"text": "In this paper, we introduce mAggretriever, an extension of Aggretriever for multilingual retrieval that combines lexical and semantic features in pre-trained language models for dense retrieval. We propose target-language and self MLM predictions to enhance the efficiency of mAggretriever. Our study highlights the efficiency advantage of self MLM prediction in multilingual retrieval, while target-language MLM prediction offers flexibility for cross-lingual retrieval. Importantly, mAggretriever, solely fine-tuned on English data, demonstrates competitive multilingual retrieval capability compared to other state-of-the-art dense retrievers.",
"bbox": [
507,
709,
884,
919
],
"page_idx": 4
},
{
"type": "page_number",
"text": "11692",
"bbox": [
477,
927,
524,
940
],
"page_idx": 4
},
{
"type": "text",
"text": "Limitations",
"text_level": 1,
"bbox": [
114,
84,
220,
99
],
"page_idx": 5
},
{
"type": "text",
"text": "Our research primarily focuses on enhancing multilingual retrieval, specifically targeting monolingual retrieval with zero-shot transfer to non-English languages. We plan to extend our study to improve cross-lingual retrieval by leveraging transformers pre-trained with translation language modeling (Chi et al., 2021; Feng et al., 2022). In addition, we only discuss how to improve the zero-shot language transferability of dense retrieval. It is possible to further improve model effectiveness by leveraging existing multilingual training data (Bonifacio et al., 2021; Zhang et al., 2021) and better negative mining strategies (Shen et al., 2022). Finally, due to space limitations, we compare mAggretriever with previous state-of-the-art multilingual retrievers on Mr. TyDi (Zhang et al., 2021) in Appendix A.2.",
"bbox": [
115,
110,
490,
382
],
"page_idx": 5
},
{
"type": "text",
"text": "Acknowledgements",
"text_level": 1,
"bbox": [
114,
397,
285,
413
],
"page_idx": 5
},
{
"type": "text",
"text": "This research was supported in part by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council (NSERC) of Canada. We thank the anonymous referees who provided useful feedback to improve this work.",
"bbox": [
112,
423,
489,
502
],
"page_idx": 5
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [
114,
532,
213,
546
],
"page_idx": 5
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proc. ACL, pages 4623-4637.",
"Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv:1611.09268.",
"Luiz Henrique Bonifacio, Vitor Jeronymo, Hugo Queiroz Abonizio, Israel Campiotti, Marzieh Fadaee, Roberto Lotufo, and Rodrigo Nogueira. 2021. mMARCO: A multilingual version of MS MARCO passage ranking dataset. arXiv:2108.13897.",
"Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. In Proc. NAACL, pages 3576-3588.",
"Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco"
],
"bbox": [
115,
555,
489,
917
],
"page_idx": 5
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proc. ACL, pages 8440-8451.",
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. NAACL, pages 4171-4186.",
"Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Language-agnostic BERT sentence embedding. In Proc. ACL, pages 878-891.",
"Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022. Unsupervised dense information retrieval with contrastive learning. Transactions on Machine Learning Research.",
"Ehsan Kamalloo, Xinyu Zhang, Odunayo Ogundepo, Nandan Thakur, David Alfonso-Hermelo, Mehdi Rezagholizadeh, and Jimmy Lin. 2023. Evaluating embedding APIs for information retrieval. arXiv:2305.06300.",
"Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proc. EMNLP, pages 6769-6781.",
"Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics, 7:452-466.",
"Carlos Lassance. 2023. Extending English IR methods to multi-lingual IR. arXiv:2302.14723.",
"Sheng-Chieh Lin, Akari Asai, Minghan Li, Barlas Oguz, Jimmy Lin, Yashar Mehdad, Wen-tau Yih, and Xilun Chen. 2023a. How to train your DRAGON: Diverse augmentation towards generalizable dense retrieval. arXiv:2302.07452.",
"Sheng-Chieh Lin, Minghan Li, and Jimmy Lin. 2023b. Aggretriever: A simple approach to aggregate textual representations for robust dense passage retrieval. Transactions of the Association for Computational Linguistics, pages 436-452.",
"Suraj Nair, Eugene Yang, Dawn Lawrie, James Mayfield, and Douglas W. Oard. 2022. Learning a sparse representation model for neural CLIR. In Proc. DESIRES.",
"Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernandez Abrego, Ji Ma, Vincent Zhao, Yi Luan, Keith Hall, Ming-Wei Chang, and Yinfei Yang. 2022."
],
"bbox": [
510,
85,
884,
917
],
"page_idx": 5
},
{
"type": "page_number",
"text": "11693",
"bbox": [
477,
927,
524,
940
],
"page_idx": 5
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Large dual encoders are generalizable retrievers. In Proc. EMNLP, pages 9844-9855.",
"Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proc. EMNLP, pages 3982-3992.",
"Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proc. EMNLP, pages 4512-4525.",
"Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips, and Yinfei Yang. 2020. LAReQA: Language-agnostic answer retrieval from a multilingual pool. In Proc. EMNLP, pages 5919-5930.",
"Tianhao Shen, Mingtong Liu, Ming Zhou, and Deyi Xiong. 2022. Recovering gold from black sand: Multilingual dense passage retrieval with hard and false negative samples. In Proc. EMNLP, pages 10659-10670.",
"Ning Wu, Yaobo Liang, Houxing Ren, Linjun Shou, Nan Duan, Ming Gong, and Daxin Jiang. 2022. Unsupervised context aware sentence representation pretraining for multi-lingual dense retrieval. arXiv:2206.03281.",
"Shunyu Zhang, Yaobo Liang, Ming Gong, Daxin Jiang, and Nan Duan. 2023a. Modeling sequential sentence relation to improve cross-lingual dense retrieval. In Proc. ICLR.",
"Xinyu Zhang, Xueguang Ma, Peng Shi, and Jimmy Lin. 2021. Mr. TyDi: A multi-lingual benchmark for dense retrieval. In Proc. Workshop on Multilingual Representation Learning, pages 127-137.",
"Xinyu Zhang, Kelechi Ogueji, Xueguang Ma, and Jimmy Lin. 2023b. Toward best practices for training multilingual dense retrieval models. ACM Trans. Inf. Syst.",
"Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, and Jimmy Lin. MIRACL: A multilingual retrieval dataset covering 18 diverse languages. Transactions of the Association for Computational Linguistics, pages 1114-1131."
],
"bbox": [
115,
85,
489,
728
],
"page_idx": 6
},
{
"type": "page_number",
"text": "11694",
"bbox": [
477,
928,
524,
940
],
"page_idx": 6
},
{
"type": "text",
"text": "A Appendix",
"text_level": 1,
"bbox": [
114,
84,
238,
99
],
"page_idx": 7
},
{
"type": "text",
"text": "A.1 Full Results on MIRACL",
"text_level": 1,
"bbox": [
114,
109,
361,
123
],
"page_idx": 7
},
{
"type": "text",
"text": "Table 5 reports the detailed nDCG@10 and R@100 numbers on the 15 MIRACL languages for all the compared models. Note that we do not use the English dataset when evaluating on MIRACL.",
"bbox": [
112,
130,
489,
193
],
"page_idx": 7
},
{
"type": "text",
"text": "A.2 Comparisons on Mr. TyDi",
"text_level": 1,
"bbox": [
114,
205,
369,
219
],
"page_idx": 7
},
{
"type": "text",
"text": "Some previous state-of-the-art multilingual retrievers are evaluated on the test queries of Mr. TyDi (Zhang et al., 2021), a multilingual retrieval dataset similar to MIRACL but with sparse relevance judgments and fewer languages. In order to compare with the previous models, following the model inference settings in MIRACL, we evaluate our mAggretriever with the XLM-R backbone. The full results are tabulated in Table 6. We still observe that all the variants of mAggretriever outperform the previous state of the art in 6 out of 10 languages. Note that most of the previous retrievers undergo multilingual contrastive learning. For example, mContriever is pre-trained on a corpus with 29 languages, while mColBERT is fine-tuned on the multilingual MS MARCO dataset introduced by Bonifacio et al. (2021).",
"bbox": [
112,
225,
490,
500
],
"page_idx": 7
},
{
"type": "page_number",
"text": "11695",
"bbox": [
477,
927,
526,
940
],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/bc601713adbba9b70d41dc26153e4a25c40d97d2fa5eb3b20340fee409ada011.jpg",
"table_caption": [
"Table 5: MIRACL multilingual retrieval comparisons. Bold denotes the best effectiveness."
],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Models</td><td colspan=\"16\">MIRACL Dev</td></tr><tr><td>ar</td><td>bn</td><td>es</td><td>fa</td><td>fi</td><td>fr</td><td>hi</td><td>id</td><td>ja</td><td>ko</td><td>ru</td><td>sw</td><td>te</td><td>th</td><td>zh</td><td>avg.</td></tr><tr><td></td><td colspan=\"16\">nDCG@10</td></tr><tr><td>(a) BM25</td><td>48.1</td><td>50.8</td><td>31.9</td><td>33.3</td><td>55.1</td><td>18.3</td><td>45.3</td><td>44.9</td><td>36.9</td><td>41.9</td><td>33.4</td><td>38.3</td><td>49.4</td><td>48.4</td><td>18.0</td><td>39.6</td></tr><tr><td>(b) mDPR</td><td>49.9</td><td>44.3</td><td>47.8</td><td>48.0</td><td>47.2</td><td>43.5</td><td>38.3</td><td>27.2</td><td>43.9</td><td>41.9</td><td>40.7</td><td>29.9</td><td>35.6</td><td>35.8</td><td>51.2</td><td>41.7</td></tr><tr><td>(c) mContriever</td><td>52.5</td><td>50.0</td><td>41.8</td><td>21.5</td><td>60.2</td><td>31.4</td><td>28.6</td><td>39.2</td><td>42.4</td><td>48.3</td><td>39.1</td><td>56.0</td><td>52.8</td><td>51.7</td><td>41.0</td><td>43.8</td></tr><tr><td>(d) Cohere (API)</td><td>61.7</td><td>59.4</td><td>23.3</td><td>47.1</td><td>63.4</td><td>46.2</td><td>49.3</td><td>44.6</td><td>46.0</td><td>49.6</td><td>46.9</td><td>61.1</td><td>61.3</td><td>54.6</td><td>36.5</td><td>50.1</td></tr><tr><td>(1) mBERTCLS</td><td>50.9</td><td>45.2</td><td>31.7</td><td>30.5</td><td>48.5</td><td>30.5</td><td>37.7</td><td>19.8</td><td>43.4</td><td>40.0</td><td>27.0</td><td>21.2</td><td>36.4</td><td>37.7</td><td>43.9</td><td>36.1</td></tr><tr><td>(2) mBERTAGG</td><td>56.9</td><td>48.5</td><td>44.0</td><td>43.1</td><td>63.4</td><td>40.8</td><td>34.1</td><td>40.3</td><td>49.3</td><td>43.9</td><td>44.9</td><td>44.6</td><td>44.7</td><td>27.2</td><td>40.2</td><td>44.4</td></tr><tr><td>(3) mBERTtgAGG</td><td>55.7</td><td>45.5</td><td>45.4</td><td>42.4</td><td>62.3</td><td>42.9</td><td>34.3</td><td>38.7</td><td>47.7</td><td>43.4</td><td>45.2</td><td>45.5</td><td>43.9</td><td>32.5</td><td>41.5</td><td>44.4</td></tr><tr><td>(4) mBERTselfAGG</td><td>59.4</td><td>51.0</td><td>44.6</td><td>44.5</td><td>65.3</td><td>43.6</td><td>37.4</td><td>42.1</td><td>50.2</td><td>47.8</td><td>46.3</td><td>48.5</td><td>48.0</td><td>31.1</td><td>44.3</td><td>46.9</td></tr><tr><td>(5) XLM-RCLS</td><td>46.6</td><td>46.6</td><td>29.9</td><td>43.5</td><td>44.4</td><td>28.5</td><td>41.7</td><td>31.7</td><td>40.7</td><td>45.6</td><td>27.5</td><td>22.5</td><td>50.5</td><td>53.9</td><td>35.9</td><td>39.3</td></tr><tr><td>(6) XLM-RAGG</td><td>60.6</td><td>60.4</td><td>42.5</td><td>46.2</td><td>66.2</td><td>43.0</td><td>44.8</td><td>47.8</td><td>53.3</td><td>58.2</td><td>44.6</td><td>46.6</td><td>72.1</td><td>66.0</td><td>41.4</td><td>52.9</td></tr><tr><td>(7) XLM-RtgAGG</td><td>60.2</td><td>60.9</td><td>44.3</td><td>46.7</td><td>65.0</td><td>43.8</td><td>48.0</td><td>47.2</td><td>53.3</td><td>59.1</td><td>45.8</td><td>45.2</td><td>70.2</td><td>67.1</td><td>42.9</td><td>53.3</td></tr><tr><td>(8) XLM-RselfAGG</td><td>61.4</td><td>61.4</td><td>42.9</td><td>46.5</td><td>66.2</td><td>41.3</td><td>46.2</td><td>48.4</td><td>53.9</td><td>57.9</td><td>46.5</td><td>47.5</td><td>71.2</td><td>66.8</td><td>41.0</td><td>53.3</td></tr><tr><td></td><td colspan=\"16\">R@100</td></tr><tr><td>(a) BM25</td><td>88.9</td><td>90.9</td><td>70.2</td><td>73.1</td><td>89.1</td><td>65.3</td><td>86.8</td><td>90.4</td><td>80.5</td><td>78.3</td><td>66.1</td><td>70.1</td><td>83.1</td><td>88.7</td><td>56.0</td><td>78.5</td></tr><tr><td>(b) mDPR</td><td>84.1</td><td>81.9</td><td>86.4</td><td>89.8</td><td>78.8</td><td>91.5</td><td>77.6</td><td>57.3</td><td>82.5</td><td>73.7</td><td>79.7</td><td>61.6</td><td>76.2</td><td>67.8</td><td>94.4</td><td>78.9</td></tr><tr><td>(c) mContriever</td><td>92.5</td><td>92.1</td><td>84.1</td><td>65.4</td><td>95.3</td><td>82.4</td><td>64.6</td><td>80.2</td><td>87.8</td><td>87.5</td><td>85.0</td><td>91.1</td><td>96.1</td><td>93.6</td><td>90.3</td><td>85.9</td></tr><tr><td>(d) Cohere (API)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>(1) mBERTCLS</td><td>84.3</td><td>81.9</td><td>67.1</td><td>84.1</td><td>61.2</td><td>72.9</td><td>76.3</td><td>47.7</td><td>80.7</td><td>71.9</td><td>61.1</td><td>50.4</td><td>75.5</td><td>69.7</td><td>90.0</td><td>71.6</td></tr><tr><td>(2) mBERTAGG</td><td>86.9</td><td>83.8</td><td>81.0</td><td>81.0</td><td>90.2</td><td>83.2</td><td>70.9</td><td>76.6</td><td>84.1</td><td>71.7</td><td>80.8</td><td>79.4</td><td>82.4</td><td>60.0</td><td>81.3</td><td>79.5</td></tr><tr><td>(3) mBERTtgAGG</td><td>86.2</td><td>82.2</td><td>82.1</td><td>79.9</td><td>89.7</td><td>82.9</td><td>72.6</td><td>75.7</td><td>84.3</td><td>76.3</td><td>80.5</td><td>80.9</td><td>81.7</td><td>63.8</td><td>82.6</td><td>80.1</td></tr><tr><td>(4) mBERTselfAGG</td><td>88.7</td><td>85.1</td><td>80.1</td><td>82.4</td><td>92.1</td><td>86.5</td><td>77.6</td><td>80.2</td><td>85.5</td><td>78.4</td><td>82.0</td><td>81.5</td><td>83.4</td><td>65.9</td><td>84.5</td><td>82.3</td></tr><tr><td>(5) XLM-RCLS</td><td>79.2</td><td>82.6</td><td>63.6</td><td>79.9</td><td>75.2</td><td>66.8</td><td>76.9</td><td>62.7</td><td>77.2</td><td>77.6</td><td>62.5</td><td>48.8</td><td>87.0</td><td>89.2</td><td>78.9</td><td>73.9</td></tr><tr><td>(6) XLM-RAGG</td><td>89.8</td><td>92.5</td><td>78.8</td><td>85.4</td><td>92.6</td><td>81.8</td><td>85.1</td><td>83.5</td><td>89.0</td><td>88.5</td><td>78.9</td><td>79.1</td><td>95.4</td><td>95.1</td><td>80.6</td><td>86.4</td></tr><tr><td>(7) XLM-RtgAGG</td><td>89.3</td><td>93.1</td><td>80.4</td><td>84.6</td><td>91.8</td><td>81.4</td><td>84.5</td><td>82.6</td><td>88.9</td><td>86.6</td><td>79.5</td><td>77.7</td><td>94.8</td><td>95.1</td><td>79.9</td><td>86.0</td></tr><tr><td>(8) XLM-RselfAGG</td><td>90.2</td><td>92.8</td><td>78.2</td><td>84.5</td><td>93.1</td><td>81.2</td><td>82.5</td><td>84.6</td><td>89.8</td><td>88.3</td><td>80.8</td><td>79.4</td><td>95.5</td><td>94.5</td><td>79.0</td><td>86.3</td></tr></table>",
"bbox": [
117,
164,
878,
533
],
"page_idx": 8
},
{
"type": "table",
"img_path": "images/aa9cd4bbe9af8af2d9484ef3c45dbad18b8c4d7054ac649a1d39c24412bce281.jpg",
"table_caption": [
"Table 6: Mr. TyDi multilingual retrieval comparisons with state-of-the-art multilingual retrievers. Bold denotes the best effectiveness. multi CL denotes multilingual contrastive learning."
],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Models</td><td rowspan=\"2\">multi CL</td><td colspan=\"11\">Mr. TyDi Test</td></tr><tr><td>ar</td><td>bn</td><td>fi</td><td>id</td><td>ja</td><td>ko</td><td>ru</td><td>sw</td><td>te</td><td>th</td><td>avg.</td></tr><tr><td></td><td></td><td colspan=\"11\">MRR@100</td></tr><tr><td>(a) BM25 (Zhang et al., 2021)</td><td>X</td><td>36.7</td><td>41.3</td><td>28.8</td><td>38.2</td><td>21.7</td><td>28.1</td><td>32.9</td><td>39.6</td><td>42.4</td><td>41.7</td><td>35.1</td></tr><tr><td>(b) mColBERT (Bonifacio et al., 2021)</td><td>✓</td><td>55.3</td><td>48.8</td><td>41.3</td><td>55.5</td><td>36.6</td><td>36.7</td><td>48.2</td><td>44.8</td><td>61.6</td><td>-</td><td>-</td></tr><tr><td>(c) mContriever (Izacard et al., 2022)</td><td>✓</td><td>43.4</td><td>42.3</td><td>35.1</td><td>42.6</td><td>32.4</td><td>34.2</td><td>36.1</td><td>51.2</td><td>37.4</td><td>40.2</td><td>39.5</td></tr><tr><td>(d) CCP (Wu et al., 2022)</td><td>X</td><td>42.6</td><td>45.7</td><td>37.2</td><td>46.2</td><td>37.7</td><td>34.6</td><td>36.0</td><td>39.2</td><td>47.0</td><td>48.9</td><td>41.5</td></tr><tr><td>(e) MSM (Zhang et al., 2023a)</td><td>X</td><td>51.6</td><td>53.0</td><td>39.4</td><td>50.5</td><td>32.0</td><td>36.8</td><td>37.2</td><td>43.4</td><td>62.6</td><td>53.5</td><td>44.7</td></tr><tr><td>(1) XLM-RCLS</td><td>X</td><td>41.9</td><td>40.8</td><td>27.8</td><td>39.9</td><td>32.5</td><td>33.0</td><td>27.7</td><td>23.7</td><td>54.2</td><td>46.1</td><td>36.8</td></tr><tr><td>(2) XLM-RAGG</td><td>X</td><td>52.3</td><td>55.8</td><td>43.2</td><td>55.0</td><td>40.4</td><td>40.5</td><td>41.5</td><td>45.1</td><td>77.5</td><td>57.3</td><td>50.8</td></tr><tr><td>(3) XLM-RtgAGG</td><td>X</td><td>52.3</td><td>55.2</td><td>43.0</td><td>54.8</td><td>41.1</td><td>40.4</td><td>44.9</td><td>46.0</td><td>76.2</td><td>58.7</td><td>51.2</td></tr><tr><td>(4) XLM-RselfAGG</td><td>X</td><td>52.0</td><td>58.5</td><td>42.6</td><td>54.8</td><td>39.2</td><td>41.6</td><td>44.3</td><td>47.3</td><td>74.7</td><td>58.3</td><td>51.3</td></tr></table>",
"bbox": [
119,
694,
878,
859
],
"page_idx": 8
},
{
"type": "page_number",
"text": "11696",
"bbox": [
477,
928,
524,
940
],
"page_idx": 8
}
]
2023/mAggretriever_ A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval/58a42c3f-381b-4c90-a86a-76877cc92cd5_model.json
ADDED
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
[
  [
    {
      "type": "title",
      "bbox": [0.231, 0.091, 0.768, 0.131],
      "angle": 0,
      "content": "mAggretriever: A Simple Yet Effective Approach to Zero-Shot Multilingual Dense Retrieval"
    },
    {
      "type": "text",
      "bbox": [0.295, 0.15, 0.704, 0.169],
      "angle": 0,
      "content": "Sheng-Chieh Lin\\(^{1*}\\), Amin Ahmad\\(^{2}\\), Jimmy Lin\\(^{1}\\)"
    },
    {
      "type": "text",
      "bbox": [0.339, 0.173, 0.66, 0.191],
      "angle": 0,
      "content": "<sup>1</sup> University of Waterloo <sup>2</sup> Vectara"
    },
    {
      "type": "text",
      "bbox": [0.212, 0.198, 0.792, 0.214],
      "angle": 0,
      "content": "{s269lin,jimmylin}@uwaterloo.ca,amin@vectara.com"
    },
    {
      "type": "title",
      "bbox": [0.261, 0.253, 0.341, 0.269],
      "angle": 0,
      "content": "Abstract"
    },
    {
      "type": "text",
      "bbox": [0.142, 0.282, 0.464, 0.566],
      "angle": 0,
      "content": "Multilingual information retrieval (MLIR) is a crucial yet challenging task due to the need for human annotations in multiple languages, making training data creation labor-intensive. In this paper, we introduce mAggretriever, which effectively leverages semantic and lexical features from pre-trained multilingual transformers (e.g., mBERT and XLM-R) for dense retrieval. To enhance training and inference efficiency, we employ approximate masked-language modeling prediction for computing lexical features, reducing \\(70 - 85\\%\\) GPU memory requirement for mAggretriever fine-tuning. Empirical results demonstrate that mAggretriever, fine-tuned solely on English training data, surpasses existing state-of-the-art multilingual dense retrieval models that undergo further training on large-scale MLIR training data. Our code is available at https://github.com/castorini/dhr."
    },
    {
      "type": "title",
      "bbox": [0.115, 0.581, 0.262, 0.595],
      "angle": 0,
      "content": "1 Introduction"
    },
    {
      "type": "text",
      "bbox": [0.113, 0.607, 0.49, 0.816],
      "angle": 0,
      "content": "Fine-tuning a pre-trained transformer has proven to be highly effective in many tasks of natural language processing, including information retrieval (IR). Despite its success, the recent state-of-the-art dense retrieval (DR) models (Ni et al., 2022; Lin et al., 2023a) predominantly focus on English. This bias arises from the fact that constructing an effective DR model requires a substantial amount of annotated training data, which is predominantly available in English datasets (Bajaj et al., 2016; Kwiatkowski et al., 2019). This makes it challenging for users of low-resource languages to benefit from the recent IR progress."
    },
    {
      "type": "text",
      "bbox": [0.113, 0.817, 0.49, 0.897],
      "angle": 0,
      "content": "To address this issue, researchers have explored leveraging pre-trained multilingual transformers, such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020), which exhibit promising language transferability even when fine-tuned on"
    },
    {
      "type": "text",
      "bbox": [0.508, 0.253, 0.886, 0.429],
      "angle": 0,
      "content": "English datasets alone (Zhang et al., 2023b). However, Izacard et al. (2022) emphasize the importance of contrastive pre-training on multilingual corpora to achieve improved language transferability in IR. Other approaches utilize multilingual parallel corpora or translation pairs for contrastive pre-training (Feng et al., 2022) or fine-tuning (Reimers and Gurevych, 2020; Bonifacio et al., 2021). However, these solutions are viable only when significant computational resources or multilingual parallel data are available."
    },
    {
      "type": "text",
      "bbox": [0.508, 0.431, 0.886, 0.607],
      "angle": 0,
      "content": "Recently, Lin et al. (2023b) demonstrate that the existing DR models solely using [CLS] (or averaged pooling) do not fully exploit the capability from pre-trained transformers for retrieval. Lin et al. (2023b) propose Aggretriever by combining the semantic and lexical features from the respective two components, [CLS] and masked language modeling (MLM) prediction, which shows superior effectiveness on diverse English retrieval tasks. This inspires us to ask the question: Can Aggretriever benefit multilingual retrieval?"
    },
    {
      "type": "text",
      "bbox": [0.508, 0.609, 0.886, 0.818],
      "angle": 0,
      "content": "Extending Aggretriever to support multilingual retrieval poses challenges due to increased computation costs for extracting lexical features from the MLM component as the vocabulary size of the pre-trained model grows, making fine-tuning with limited resources challenging. In this work, we propose two simple approaches to approximate MLM prediction to extract lexical features from multilingual pre-trained transformers, making the training of mAggretriever possible on one GPU with 40 GB of memory. Remarkably, mAggretriever exhibits strong retrieval capability across multiple languages despite being fine-tuned on English data."
    },
    {
      "type": "text",
      "bbox": [0.508, 0.819, 0.884, 0.883],
      "angle": 0,
      "content": "The paper is structured as follows: we begin by providing background information on standard DR and Aggretriever. We then present our proposed approaches to tackle the computational challenges"
    },
    {
      "type": "page_footnote",
      "bbox": [0.51, 0.893, 0.883, 0.919],
      "angle": 0,
      "content": "<sup>1</sup>In this paper, we refer to multilingual retrieval as monolingual retrieval across multiple languages."
    },
    {
      "type": "page_footnote",
      "bbox": [0.115, 0.905, 0.461, 0.919],
      "angle": 0,
      "content": "*Work done during Sheng-Chieh's internship at Vectara."
    },
    {
      "type": "page_number",
      "bbox": [0.477, 0.928, 0.526, 0.941],
      "angle": 0,
      "content": "11688"
    },
    {
      "type": "footer",
      "bbox": [0.153, 0.946, 0.845, 0.973],
      "angle": 0,
      "content": "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11688-11696, December 6-10, 2023. ©2023 Association for Computational Linguistics"
    }
  ],
  [
    {
      "type": "text",
      "bbox": [0.113, 0.085, 0.49, 0.166],
      "angle": 0,
      "content": "involved in extending Aggretriever to support multilingual retrieval. Subsequently, we compare mAggretriever with other state-of-the-art multilingual DR models and explore the potential for extending mAggretriever to enable cross-lingual retrieval."
    },
    {
      "type": "title",
      "bbox": [0.115, 0.178, 0.258, 0.195],
      "angle": 0,
      "content": "2 Background"
    },
    {
      "type": "text",
      "bbox": [0.113, 0.204, 0.491, 0.365],
      "angle": 0,
      "content": "Dense Retrieval. Given a query with sequential tokens \\( q = ([CLS], q_1, \\dots, q_n) \\), our task is to retrieve a list of passages to maximize some ranking metric such as nDCG or MRR. Standard dense retrieval (DR) models (Reimers and Gurevych, 2019; Karpukhin et al., 2020) based on pre-trained language models encode queries and passages as low-dimensional [CLS] vectors with a bi-encoder architecture and use the dot product between the encoded vectors as the similarity score:"
    },
    {
      "type": "equation",
      "bbox": [0.182, 0.378, 0.489, 0.397],
      "angle": 0,
      "content": "\\[\n\\operatorname {s i m} _ {\\mathrm {C L S}} (q, p) \\triangleq \\mathbf {e} _ {q _ {[ \\mathrm {C L S} ]}} \\cdot \\mathbf {e} _ {p _ {[ \\mathrm {C L S} ]}}, \\tag {1}\n\\]"
    },
    {
      "type": "text",
      "bbox": [0.114, 0.41, 0.489, 0.457],
      "angle": 0,
      "content": "where \\(\\mathbf{e}_{q_{[CLS]}}\\) and \\(\\mathbf{e}_{p_{[CLS]}}\\) are the respective query and passage [CLS] vectors at the last layer of a pre-trained language model (e.g., BERT)."
    },
    {
      "type": "text",
      "bbox": [0.113, 0.467, 0.49, 0.58],
      "angle": 0,
      "content": "Aggretriever. In addition to using [CLS] vectors to capture semantic textual features, Lin et al. (2023b) further propose to capture lexical textual features from the pre-trained MLM prediction head by projecting each contextualized token embedding \\(\\mathbf{e}_{q_i}\\) into a high-dimensional vector in the wordpiece lexical space:"
    },
    {
      "type": "equation",
      "bbox": [0.163, 0.594, 0.488, 0.612],
      "angle": 0,
      "content": "\\[\n\\mathbf {p} _ {q _ {i}} = \\operatorname {s o f t m a x} \\left(\\mathbf {e} _ {q _ {i}} \\cdot \\mathbf {W} _ {\\mathrm {m l m}} + \\mathbf {b} _ {\\mathrm {m l m}}\\right), \\tag {2}\n\\]"
    },
    {
      "type": "text",
      "bbox": [0.113, 0.624, 0.49, 0.754],
      "angle": 0,
      "content": "where \\(\\mathbf{e}_{q_i} \\in \\mathbb{R}^d\\), \\(\\mathbf{W}_{\\mathrm{mlm}} \\in \\mathbb{R}^{d \\times |\\mathrm{V}_{\\mathrm{wp}}|}\\), and \\(\\mathbf{b}_{\\mathrm{mlm}} \\in \\mathbb{R}^{|\\mathrm{V}_{\\mathrm{wp}}|}\\) are the weights of the pre-trained MLM linear projector, and \\(\\mathbf{p}_{q_i} \\in \\mathbb{R}^{|\\mathrm{V}_{\\mathrm{wp}}|}\\) is the \\(i\\)-th contextualized token represented by a probability distribution over the BERT wordpiece vocabulary, \\(\\mathrm{V}_{\\mathrm{wp}}\\). Weighted max pooling is then performed over the sequential representations \\((\\mathbf{p}_{q_1}, \\mathbf{p}_{q_2}, \\dots, \\mathbf{p}_{q_l})\\) to obtain a single-vector lexical representation:"
    },
    {
      "type": "equation",
      "bbox": [0.188, 0.768, 0.488, 0.793],
      "angle": 0,
      "content": "\\[\n\\mathbf {v} _ {q} [ v ] = \\max _ {i \\in (1, 2, \\dots , l)} w _ {i} \\cdot \\mathbf {p} _ {q _ {i}} [ v ], \\tag {3}\n\\]"
    },
    {
      "type": "text",
      "bbox": [0.113, 0.806, 0.49, 0.919],
      "angle": 0,
      "content": "where \\(w_{i} = |\\mathbf{e}_{q_{i}}\\cdot \\mathbf{W} + \\mathbf{b}| \\in \\mathbb{R}^{1}\\) is a positive scalar and \\(v \\in \\{1,2,\\dots ,|\\mathrm{V}_{\\mathrm{wp}}|\\}\\); \\(\\mathbf{W} \\in \\mathbb{R}^{d\\times 1}\\) and \\(\\mathbf{b} \\in \\mathbb{R}^{1}\\) are trainable weights. Note that the scalar \\(w_{i}\\) for each token \\(q_{i}\\) is essential to capture term importance, which \\(\\mathbf{p}_{q_i}\\) alone cannot capture since it is normalized by softmax. Note that the [CLS] token embedding is excluded since it is used for"
    },
    {
      "type": "image",
      "bbox": [0.529, 0.083, 0.868, 0.285],
      "angle": 0,
      "content": null
    },
    {
      "type": "image_caption",
      "bbox": [0.508, 0.296, 0.885, 0.382],
      "angle": 0,
      "content": "Figure 1: Illustration of MLM prediction for the input token \\( q_{1} \\). (1) Whole-vocabulary prediction considers all languages while target-language prediction only considers the tokens in one language; e.g., (2) source language (Spanish) or (3) English. (4) Self prediction assigns probability of one to itself. 0. denotes less than 0.1."
    },
    {
      "type": "text",
      "bbox": [0.508, 0.407, 0.884, 0.437],
      "angle": 0,
      "content": "next-sentence prediction during pre-training; thus, it does not carry much lexical information."
    },
    {
      "type": "text",
      "bbox": [0.508, 0.438, 0.884, 0.519],
      "angle": 0,
      "content": "The wordpiece lexical representation \\(\\mathbf{v}_q\\) is then compressed, without supervision, into a low-dimensional vector \\(\\mathbf{agg}_q^\\star\\) and concatenated with the [CLS] vector. The similarity score between a given \\(q - p\\) pair is computed with their dot product:"
    },
    {
      "type": "equation",
      "bbox": [0.515, 0.529, 0.883, 0.568],
      "angle": 0,
      "content": "\\[\n\\begin{array}{l} \\operatorname {s i m} (q, p) \\triangleq \\left(\\mathbf {e} _ {q _ {[ \\mathrm {C L S} ]}} \\oplus \\mathbf {v} _ {q}\\right) \\cdot \\left(\\mathbf {e} _ {p _ {[ \\mathrm {C L S} ]}} \\oplus \\mathbf {v} _ {p}\\right) \\tag {4} \\\\ \\approx \\left(\\epsilon_ {q _ {[ \\mathrm {C L S} ]}} \\oplus \\mathbf {a g g} _ {q} ^ {\\star}\\right) \\cdot \\left(\\epsilon_ {p _ {[ \\mathrm {C L S} ]}} \\oplus \\mathbf {a g g} _ {p} ^ {\\star}\\right), \\\\ \\end{array}\n\\]"
    },
    {
      "type": "text",
      "bbox": [0.508, 0.578, 0.885, 0.693],
      "angle": 0,
      "content": "where \\(\\mathbf{e}_{q_{[\\mathrm{CLS}]}}\\) and \\(\\mathbf{v}_q\\) capture semantic and lexical textual features from BERT, respectively. Following Lin et al. (2023b), we linearly project \\(\\mathbf{e}_{q_{[\\mathrm{CLS}]}}\\) into 128 dimensions and compress \\(\\mathbf{v}_q\\) into 640 dimensions; i.e., \\(\\epsilon_{q_{[\\mathrm{CLS}]}}\\in \\mathbb{R}^{128}\\) and \\(\\mathbf{agg}_q^\\star \\in \\mathbb{R}^{640}\\). We refer readers to Lin et al. (2023b) for the details of compressing \\(\\mathbf{v}_q\\) into \\(\\mathbf{agg}_q^\\star\\)."
    },
    {
      "type": "title",
      "bbox": [0.509, 0.702, 0.673, 0.719],
      "angle": 0,
      "content": "3 Our Approach"
    },
    {
      "type": "text",
      "bbox": [0.507, 0.727, 0.885, 0.92],
      "angle": 0,
      "content": "In this work, we extend Aggretriever to multilingual retrieval, mAggretriever. However, directly applying Eq. (2) to multilingual pre-trained MLM heads over the whole vocabulary increases computation cost for both training and inference due to the large vocabulary size \\(|\\mathrm{V_{wp}}|\\) in the MLM projector, \\(\\mathbf{W}_{\\mathrm{mlm}}\\) (Nair et al., 2022; Lassance, 2023). For example, mBERT and XLM-R have vocabulary sizes of 120K and 250K, respectively (vs. BERT's 35K), in the MLM projector. To address the issue, we propose two computationally efficient approaches to approximate MLM prediction in Eq. (2)."
    },
    {
      "type": "page_number",
      "bbox": [0.478, 0.928, 0.526, 0.941],
      "angle": 0,
      "content": "11689"
    }
  ],
  [
    {
      "type": "table_caption",
      "bbox": [0.173, 0.083, 0.823, 0.098],
      "angle": 0,
      "content": "Table 1: Training and inference cost comparisons on MS MARCO with XLM-R base backbone."
    },
    {
      "type": "table",
      "bbox": [0.12, 0.108, 0.88, 0.205],
      "angle": 0,
      "content": "<table><tr><td rowspan=\"2\">Models</td><td colspan=\"2\">vector components</td><td rowspan=\"2\">MLM prediction</td><td colspan=\"2\">training cost</td><td colspan=\"2\">query encoding latency</td></tr><tr><td>[CLS]</td><td>agg*</td><td>GPU memory</td><td>total time</td><td>CPU</td><td>GPU</td></tr><tr><td>(1) XLM-R<sub>CLS</sub></td><td>768 dim.</td><td>0 dim.</td><td>-</td><td>21 GBs</td><td>5.5 hrs</td><td>105 ms/q</td><td>9.1 ms/q</td></tr><tr><td>(2) XLM-R<sub>AGG</sub></td><td></td><td></td><td>whole vocabulary</td><td>OOM (140 GBs)</td><td>-</td><td>400 ms/q</td><td>15.1 ms/q</td></tr><tr><td>(3) XLM-R<sup>tg</sup><sub>AGG</sub></td><td>128 dim.</td><td>640 dim.</td><td>target language</td><td>39 GBs</td><td>9.5 hrs</td><td>248 ms/q</td><td>11.4 ms/q</td></tr><tr><td>(4) XLM-R<sup>self</sup><sub>AGG</sub></td><td></td><td></td><td>self</td><td>21 GBs</td><td>5.5 hrs</td><td>123 ms/q</td><td>9.1 ms/q</td></tr></table>"
    },
    {
      "type": "text",
      "bbox": [0.113, 0.229, 0.49, 0.312],
      "angle": 0,
      "content": "Target-language prediction. One intuitive approach is to compute the probability distribution over the target-language token set of interest, denoted as \\(\\mathbf{V}_{\\mathrm{wp}}^{\\mathrm{tg}}\\), by replacing \\(\\mathbf{W}_{\\mathrm{mlm}}\\) and \\(\\mathbf{b}_{\\mathrm{mlm}}\\) in Eq. (2) with \\(\\mathbf{W}_{\\mathrm{mlm}}^{\\mathrm{tg}}\\) and \\(\\mathbf{b}_{\\mathrm{mlm}}^{\\mathrm{tg}}\\), respectively."
    },
    {
      "type": "equation",
      "bbox": [0.13, 0.321, 0.474, 0.381],
      "angle": 0,
      "content": "\\[\n\\left\\{ \\begin{array}{l l} \\mathbf {W} _ {\\mathrm {m l m}} ^ {\\mathrm {t g}} [:, v ] = \\mathbf {W} _ {\\mathrm {m l m}} [:, v ]; \\\\ \\mathbf {b} _ {\\mathrm {m l m}} ^ {\\mathrm {t g}} [ v ] = \\mathbf {b} _ {\\mathrm {m l m}} [ v ], & \\text {i f} v \\in \\mathbf {V} _ {\\mathrm {w p}} ^ {\\mathrm {t g}} \\\\ \\mathbf {W} _ {\\mathrm {m l m}} ^ {\\mathrm {t g}} [:, v ] = \\overrightarrow {0}; \\mathbf {b} _ {\\mathrm {m l m}} ^ {\\mathrm {t g}} [ v ] = 0, & \\text {o t h e r w i s e}. \\end{array} \\right.\n\\]"
    },
    {
      "type": "text",
      "bbox": [0.113, 0.389, 0.489, 0.488],
      "angle": 0,
      "content": "From the above equation, we only have to compute the matrix multiplication and softmax among \\(|\\mathrm{V}_{\\mathrm{wp}}^{\\mathrm{tg}}|\\) instead of \\(|\\mathrm{V}_{\\mathrm{wp}}|\\) tokens. This approach assumes that only the tokens corresponding to the target language have responses when conducting MLM prediction; i.e., \\(\\mathbf{p}_{q_i}[v] = 0\\) if \\(v \\notin \\mathrm{V}_{\\mathrm{wp}}^{\\mathrm{tg}}\\)."
    },
    {
      "type": "text",
      "bbox": [0.113, 0.494, 0.49, 0.575],
      "angle": 0,
      "content": "Self prediction. Since the BERT MLM pre-training task includes recovering the unmasked token itself, to further save computation cost, we may approximate MLM prediction by assigning a value of one to the token \\( q_{i} \\) itself, and zero to the others:"
    },
    {
      "type": "equation",
      "bbox": [0.125, 0.584, 0.489, 0.605],
      "angle": 0,
      "content": "\\[\n\\mathbf {p} _ {q _ {i}} = x _ {j} \\in \\{0, 1 \\} ^ {\\left| \\mathrm {V} _ {\\mathrm {w p}} \\right|} \\text {f o r} j \\in \\{\\operatorname {t o k} _ {-} \\mathrm {i d} (q _ {i}) \\}. \\tag {5}\n\\]"
    },
    {
      "type": "text",
      "bbox": [0.113, 0.614, 0.49, 0.71],
      "angle": 0,
      "content": "The operation removes the computationally expensive matrix multiplication and softmax operation in Eq. (2). Note that, combining Eq. (3) and (5), the lexical representations, \\(\\mathbf{v}_q\\), from self prediction can be considered bag-of-words vectors with learned term weights."
    },
    {
      "type": "text",
      "bbox": [0.113, 0.711, 0.49, 0.919],
      "angle": 0,
      "content": "Figure 1 illustrates whole-vocabulary, our proposed target-language, and self MLM predictions. Table 1 compares the training and inference cost of the standard DR (row 1) and mAggretriever with different MLM prediction strategies on the MS MARCO dataset using the XLM-R base backbone. The training and query encoding settings are detailed in Section 4.2. We observe training with target-language (English in our case) MLM prediction reduces the GPU memory requirement compared to whole vocabulary (row 3 vs 2) while self prediction yields training and inference efficiency on par with standard DR (row 4 vs 1)."
    },
    {
      "type": "title",
      "bbox": [0.509, 0.228, 0.725, 0.244],
      "angle": 0,
      "content": "4 Experimental Setups"
    },
    {
      "type": "title",
      "bbox": [0.509, 0.256, 0.722, 0.27],
      "angle": 0,
      "content": "4.1 Datasets and Metrics"
    },
    {
      "type": "text",
      "bbox": [0.507, 0.278, 0.885, 0.422],
      "angle": 0,
      "content": "We use the MS MARCO passage ranking dataset introduced by Bajaj et al. (2016), comprising a corpus with 8.8M passages and 500K training queries. Retrieval effectiveness in the supervised language (English) is evaluated on the 6,980 queries from the development set (MARCO dev), with one relevant passage per query on average. Following the established procedure, we report MRR@10 and R@1000 as the metrics."
    },
    {
      "type": "text",
      "bbox": [0.508, 0.425, 0.885, 0.618],
      "angle": 0,
      "content": "We evaluate model zero-shot retrieval effectiveness in other languages using the MIRACL dataset introduced by Zhang et al., comprising around 77k queries over Wikipedia in 18 languages with over 700k high-quality relevance judgments by native speakers. We use the publicly available development queries and their relevance judgments in 15 languages (two surprise languages and English are excluded). Following Zhang et al., we report macro-averaged nDCG@10 and R@100 over the 15 languages and list the full numbers in Appendix A.1."
    },
    {
      "type": "text",
      "bbox": [0.508, 0.619, 0.885, 0.78],
      "angle": 0,
      "content": "Finally, we study how to conduct cross-lingual retrieval using mAggretriever on the XQuAD-R dataset introduced by Roy et al. (2020), consisting of parallel queries and corpora in 11 languages. We conduct retrieval using the queries with \\(X_{Q}\\) language against the corpus with \\(X_{C}\\) language and report the macro-averaged MAP@100 over all the cross-lingual combinations of the 110 language pairs \\((X_{Q} \\neq X_{C})\\), and the other 11 monolingual combinations \\((X_{Q} = X_{C})\\)."
    },
    {
      "type": "text",
      "bbox": [0.508, 0.781, 0.887, 0.893],
      "angle": 0,
      "content": "Table 2 reports the data statistics of MIRACL and XQuAD-R. Note that the candidates of MIRACL are passages while the candidates of XQuAD-R are chunked sentences from the XQuAD corpora. Note that although the XQuAD corpora are manually rewritten multilingual parallel corpora (Artetxe et al., 2020), the numbers of chunked sentences are"
    },
    {
      "type": "page_footnote",
      "bbox": [0.51, 0.904, 0.716, 0.919],
      "angle": 0,
      "content": "\\(^{2}\\) Datasets: MIRACL, XQuAD-R"
    },
    {
      "type": "page_number",
      "bbox": [0.477, 0.928, 0.526, 0.941],
      "angle": 0,
      "content": "11690"
    }
  ],
  [
    {
      "type": "table_caption",
      "bbox": [0.134, 0.083, 0.468, 0.097],
      "angle": 0,
      "content": "Table 2: MIRACL and XQuAD-R data statistics."
    },
    {
      "type": "table",
      "bbox": [0.123, 0.108, 0.487, 0.374],
      "angle": 0,
      "content": "<table><tr><td rowspan=\"2\">language</td><td rowspan=\"2\">ISO</td><td colspan=\"2\">MIRACL Dev</td><td colspan=\"2\">XQuAD-R</td></tr><tr><td># queries</td><td># candidates</td><td># queries</td><td># candidates</td></tr><tr><td>Arabic</td><td>ar</td><td>2,869</td><td>2,061,414</td><td>1,190</td><td>1,222</td></tr><tr><td>Bengali</td><td>bn</td><td>411</td><td>297,265</td><td>-</td><td>-</td></tr><tr><td>German</td><td>de</td><td>-</td><td>-</td><td>1,190</td><td>1,276</td></tr><tr><td>Greek</td><td>el</td><td>-</td><td>-</td><td>1,190</td><td>1,234</td></tr><tr><td>English</td><td>en</td><td>648</td><td>32,893,221</td><td>1,190</td><td>1,180</td></tr><tr><td>Spanish</td><td>es</td><td>799</td><td>10,373,953</td><td>1,190</td><td>1,215</td></tr><tr><td>Persian</td><td>fa</td><td>632</td><td>2,207,172</td><td>-</td><td>-</td></tr><tr><td>Finnish</td><td>fi</td><td>1,271</td><td>1,883,509</td><td>-</td><td>-</td></tr><tr><td>French</td><td>fr</td><td>343</td><td>14,636,953</td><td>-</td><td>-</td></tr><tr><td>Hindi</td><td>hi</td><td>350</td><td>506,264</td><td>1,190</td><td>1,244</td></tr><tr><td>Indonesian</td><td>id</td><td>960</td><td>1,446,315</td><td>-</td><td>-</td></tr><tr><td>Japanese</td><td>ja</td><td>860</td><td>6,953,614</td><td>-</td><td>-</td></tr><tr><td>Korean</td><td>ko</td><td>213</td><td>1,486,752</td><td>-</td><td>-</td></tr><tr><td>Russian</td><td>ru</td><td>1,252</td><td>9,543,918</td><td>1,190</td><td>1,219</td></tr><tr><td>Swahili</td><td>sw</td><td>482</td><td>131,924</td><td>-</td><td>-</td></tr><tr><td>Telugu</td><td>te</td><td>828</td><td>518,079</td><td>-</td><td>-</td></tr><tr><td>Thai</td><td>th</td><td>733</td><td>542,166</td><td>1,190</td><td>852</td></tr><tr><td>Turkish</td><td>tr</td><td>-</td><td>-</td><td>1,190</td><td>1,167</td></tr><tr><td>Vietnamese</td><td>vi</td><td>-</td><td>-</td><td>1,190</td><td>1,209</td></tr><tr><td>Chinese</td><td>zh</td><td>393</td><td>4,934,368</td><td>1,190</td><td>1,196</td></tr></table>"
    },
    {
      "type": "text",
      "bbox": [0.113, 0.401, 0.49, 0.449],
      "angle": 0,
      "content": "different between languages. Also note that for each query, MIRACL has multiple relevant candidates while XQuAD only has one."
    },
    {
      "type": "title",
      "bbox": [0.114, 0.462, 0.347, 0.478],
      "angle": 0,
      "content": "4.2 Implementation Details"
    },
    {
      "type": "text",
      "bbox": [0.113, 0.484, 0.49, 0.821],
      "angle": 0,
      "content": "Models. We apply mAggretriever to two 12-layer pre-trained multilingual models: (1) mBERT; (2) XLM-R.\\(^3\\) We compare models fine-tuned solely using the [CLS] vector and based on mAggretriever using whole-vocabulary MLM prediction with the subscripts \"CLS\" and \"AGG\", respectively, e.g., \\(\\mathrm{mBERT}_{\\mathrm{CLS}}\\) and \\(\\mathrm{mBERT}_{\\mathrm{AGG}}\\). We report the two variants of mAggretriever with target-language and self prediction; e.g., \\(\\mathrm{mBERT}_{\\mathrm{AGG}}^{\\mathrm{tg}}\\) and \\(\\mathrm{mBERT}_{\\mathrm{AGG}}^{\\mathrm{self}}\\). For target-language prediction, we train mAggretriever using English token prediction and run inference using the corresponding language of each corpus, and for whole-vocabulary prediction, we fine-tune with half of the batch size. In addition, we report the numbers of BM25 and mDPR from Zhang et al. as reference points, and the two state-of-the-art multilingual retrievers: (1) mContriever (Izacard et al., 2022), pre-trained on multilingual corpora with 29 languages and further fine-tuned on MS MARCO dataset;\\(^3\\) (2) Cohere (API), whose numbers are copied from Kamalloo et al. (2023).\\(^4\\)"
+
},
|
| 647 |
+
{
|
| 648 |
+
"type": "text",
|
| 649 |
+
"bbox": [
|
| 650 |
+
0.113,
|
| 651 |
+
0.833,
|
| 652 |
+
0.489,
|
| 653 |
+
0.882
|
| 654 |
+
],
|
| 655 |
+
"angle": 0,
|
| 656 |
+
"content": "Training and Inference. We train our models on a single A100 GPU with 80 GB memory for 6 epochs (around 100k steps) with learning rate"
|
| 657 |
+
},
|
| 658 |
+
{
|
| 659 |
+
"type": "text",
|
| 660 |
+
"bbox": [
|
| 661 |
+
0.508,
|
| 662 |
+
0.085,
|
| 663 |
+
0.885,
|
| 664 |
+
0.422
|
| 665 |
+
],
|
| 666 |
+
"angle": 0,
|
| 667 |
+
"content": "7e-6. Each batch includes 24 queries, and for each query, we randomly sample one positive and seven negative passages. All the negatives are sampled from the MS MARCO \"small\" triples training set, which is created using BM25. During training, we minimize the negative log likelihood as the standard contrastive loss and following Karpukhin et al. (2020), for each query, we consider all the (positive and negative) passages from the other triplets in the batch as in-batch negative samples. We set the maximum input length for the query and the passage to 32 and 128, respectively, at both training and inference stages for MS MARCO. For MIRACL and XQUAD-R, we use the maximum input length of 128 and 256 for the query and passage, respectively. Note that we lowercase all the queries and passages for mAggretriever. We measure query encoding latency on the 6980 MS MARCO development queries with the batch size of 1 and single thread on a Linux machine with 12 Intel(R) Xeon(R) Gold 5317 CPU @ 3.00GHz and 88G of RAM."
|
| 668 |
+
},
|
| 669 |
+
{
|
| 670 |
+
"type": "text",
|
| 671 |
+
"bbox": [
|
| 672 |
+
0.508,
|
| 673 |
+
0.431,
|
| 674 |
+
0.887,
|
| 675 |
+
0.626
|
| 676 |
+
],
|
| 677 |
+
"angle": 0,
|
| 678 |
+
"content": "Target Token Set Construction. For each language of corpus in MIRACL and XQuAD-R, we tokenize and lowercase all the passages and collect the unique tokens in the corpus as the target token set. For example, when fine-tuning on MS MARCO dataset, we use the token set built from MS MARCO corpus. While conducting target-language MLM prediction on MIRACL Arabic queries and corpus, we use the token set collected from Arabic corpus as our target token set. Note that self MLM prediction does not require collecting the token set for the target language."
|
| 679 |
+
},
|
| 680 |
+
{
|
| 681 |
+
"type": "title",
|
| 682 |
+
"bbox": [
|
| 683 |
+
0.509,
|
| 684 |
+
0.636,
|
| 685 |
+
0.61,
|
| 686 |
+
0.652
|
| 687 |
+
],
|
| 688 |
+
"angle": 0,
|
| 689 |
+
"content": "5 Results"
},
{
"type": "title",
"bbox": [
0.509,
0.662,
0.719,
0.677
],
"angle": 0,
"content": "5.1 Results on MIRACL"
},
{
"type": "text",
"bbox": [
0.508,
0.682,
0.885,
0.875
],
"angle": 0,
"content": "Table 3 reports models' retrieval effectiveness on MS MARCO and MIRACL development queries. We first observe that mAggretriever, incorporating lexical features, not only outperforms its CLS counterpart in supervised English retrieval (MARCO), but also exhibits superior transferability to other languages (MIRACL) regardless of backbone. In addition, \\(\\mathrm{mBERT}_{\\mathrm{AGG}}\\) without introducing contrastive pre-training on multilingual corpora, outperforms mContriever in MIRACL in terms of nDCG@10. Note that \\(\\mathrm{mBERT}_{\\mathrm{AGG}}\\) and mContriever are both initialized from mBERT. We"
},
{
"type": "page_footnote",
"bbox": [
0.51,
0.881,
0.883,
0.919
],
"angle": 0,
"content": "<sup>5</sup>Our preliminary experiments on MS MARCO show that lowercasing improves mAggretriever while degrading its CLS counterpart."
},
{
"type": "page_footnote",
"bbox": [
0.114,
0.89,
0.442,
0.905
],
"angle": 0,
"content": "\\(^{3}\\)Model checkpoints: mBERT, XLM-R, mContriever"
},
{
"type": "page_footnote",
"bbox": [
0.114,
0.905,
0.342,
0.919
],
"angle": 0,
"content": "\\(^{4}\\)Cohere multilingual retrieval model"
},
{
"type": "list",
"bbox": [
0.114,
0.89,
0.442,
0.919
],
"angle": 0,
"content": null
},
{
"type": "page_number",
"bbox": [
0.478,
0.928,
0.525,
0.941
],
"angle": 0,
"content": "11691"
}
],
[
{
"type": "table_caption",
"bbox": [
0.114,
0.083,
0.49,
0.126
],
"angle": 0,
"content": "Table 3: Supervised (MARCO) English and zero-shot (MIRACL) multilingual retrieval effectiveness comparisons. Full numbers are listed in Appendix A.1."
},
{
"type": "table",
"bbox": [
0.123,
0.136,
0.487,
0.348
],
"angle": 0,
"content": "<table><tr><td rowspan=\"2\">Models</td><td colspan=\"2\">MARCO Dev</td><td colspan=\"2\">MIRACL Dev</td></tr><tr><td colspan=\"2\">English</td><td colspan=\"2\">15 lang. macro avg.</td></tr><tr><td></td><td>MRR@10</td><td>R@1K</td><td>nDCG@10</td><td>R@100</td></tr><tr><td>(a) BM25</td><td>18.8</td><td>85.8</td><td>39.6</td><td>78.5</td></tr><tr><td>(b) mDPR</td><td>29.6</td><td>94.6</td><td>41.7</td><td>78.9</td></tr><tr><td>(c) mContriever</td><td>27.4</td><td>97.1</td><td>43.8</td><td>85.9</td></tr><tr><td>(d) Cohere (API)</td><td>-</td><td>-</td><td>50.1</td><td>-</td></tr><tr><td>(1) mBERT\\(_{\\mathrm{CLS}}\\)</td><td>29.1</td><td>93.6</td><td>36.1</td><td>71.6</td></tr><tr><td>(2) mBERT\\(_{\\mathrm{AGG}}\\)</td><td>34.3</td><td>95.8</td><td>44.4</td><td>79.5</td></tr><tr><td>(3) mBERT\\(_{\\mathrm{AGG}}^{\\mathrm{tg}}\\)</td><td>34.5</td><td>96.1</td><td>44.4</td><td>80.1</td></tr><tr><td>(4) mBERT\\(_{\\mathrm{AGG}}^{\\mathrm{self}}\\)</td><td>34.2</td><td>95.6</td><td>46.9</td><td>82.3</td></tr><tr><td>(5) XLM-R\\(_{\\mathrm{CLS}}\\)</td><td>31.1</td><td>93.8</td><td>39.3</td><td>73.9</td></tr><tr><td>(6) XLM-R\\(_{\\mathrm{AGG}}\\)</td><td>34.7</td><td>96.1</td><td>52.9</td><td>86.4</td></tr><tr><td>(7) XLM-R\\(_{\\mathrm{AGG}}^{\\mathrm{tg}}\\)</td><td>35.0</td><td>96.2</td><td>53.3</td><td>86.0</td></tr><tr><td>(8) XLM-R\\(_{\\mathrm{AGG}}^{\\mathrm{self}}\\)</td><td>35.0</td><td>96.0</td><td>53.3</td><td>86.3</td></tr></table>"
},
{
"type": "text",
"bbox": [
0.113,
0.372,
0.489,
0.42
],
"angle": 0,
"content": "hypothesize that mContriever's high recall (i.e., R@1K) comes from its pre-training on multilingual corpora with 29 languages."
},
{
"type": "text",
"bbox": [
0.113,
0.42,
0.49,
0.614
],
"angle": 0,
"content": "Switching to XLM-R backbone, mAggretriever even outperforms Cohere (API). Furthermore, we notice that mAggretriever with XLM-R backbone improves over mBERT more than its CLS counterpart does. For example, in the case of MIRACL, XLM-\\(\\mathsf{R}_{\\mathsf{AGG}}^{\\mathrm{tg}}\\) exhibits a significant improvement over \\(\\mathsf{mBERT}_{\\mathsf{AGG}}^{\\mathrm{tg}}\\) from an nDCG@10 score of 44.4 to 53.3, whereas XLM-\\(\\mathsf{R}_{\\mathsf{CLS}}\\) only sees a modest improvement over \\(\\mathsf{mBERT}_{\\mathsf{CLS}}\\) from 36.1 to 39.3. This notable enhancement highlights mAggretriever's ability to effectively utilize a superior pre-trained language model."
},
{
"type": "text",
"bbox": [
0.113,
0.614,
0.49,
0.822
],
"angle": 0,
"content": "Finally, comparing whole-vocabulary and the proposed approximate MLM predictions, we observe that self prediction shows comparable and even stronger language transferability. We hypothesize that MLM prediction learned from English data cannot transfer well to other languages. It is worth mentioning that compared to whole-vocabulary MLM prediction, the proposed approximate MLM predictions, target-language and self prediction, are advantageous for real-world deployment since they show almost no effectiveness drop (sometimes even better) but require far less training and inference cost as shown in Table 1."
},
{
"type": "title",
"bbox": [
0.114,
0.834,
0.332,
0.85
],
"angle": 0,
"content": "5.2 Results on XQuAD-R"
},
{
"type": "text",
"bbox": [
0.113,
0.855,
0.49,
0.922
],
"angle": 0,
"content": "In Table 4, we directly apply XLM-R based models fine-tuned on MS MARCO to the XQuAD-R dataset. In the experiment, we try different MLM prediction settings for \\(\\mathrm{XLM - R_{AGG}^{tg}}\\). For example, instead of"
},
{
"type": "table_caption",
"bbox": [
0.509,
0.083,
0.885,
0.113
],
"angle": 0,
"content": "Table 4: Zero-shot retrieval effectiveness on XQuAD-R. \\(X_{Q}\\) (\\(X_{C}\\)) denotes the language of queries (corpus)."
},
{
"type": "table",
"bbox": [
0.517,
0.123,
0.88,
0.25
],
"angle": 0,
"content": "<table><tr><td rowspan=\"2\">Models</td><td colspan=\"2\">MLM target lang.</td><td colspan=\"2\">XQuAD-R</td></tr><tr><td>query</td><td>corpus</td><td>\\(X_{Q} = X_{C}\\)</td><td>\\(X_{Q} \\neq X_{C}\\)</td></tr><tr><td></td><td></td><td></td><td colspan=\"2\">MAP@100</td></tr><tr><td>(1) XLM-R\\(_{\\mathrm{CLS}}\\)</td><td>-</td><td>-</td><td>73.1</td><td>57.5</td></tr><tr><td>(2) XLM-R\\(_{\\mathrm{AGG}}\\)</td><td>-</td><td>-</td><td>77.4</td><td>41.8</td></tr><tr><td>(3)</td><td>\\(X_{Q}\\)</td><td>\\(X_{C}\\)</td><td>77.4</td><td>36.0</td></tr><tr><td>(4) XLM-R\\(_{\\mathrm{AGG}}^{\\mathrm{tg}}\\)</td><td>\\(X_{C}\\)</td><td>\\(X_{C}\\)</td><td>77.4</td><td>44.9</td></tr><tr><td>(5)</td><td>English</td><td>English</td><td>73.5</td><td>51.7</td></tr><tr><td>(6) XLM-R\\(_{\\mathrm{AGG}}^{\\mathrm{self}}\\)</td><td>-</td><td>-</td><td>77.3</td><td>36.2</td></tr></table>"
},
{
"type": "text",
"bbox": [
0.508,
0.28,
0.885,
0.392
],
"angle": 0,
"content": "using respective query and corpus source language as the target language (row 3), we use the language corresponding to each corpus (row 4) or English (row 5) as target language for both queries and corpus. Note that rows 3 and 4 are the same when both queries and corpus are in the same language \\((X_{Q} = X_{C})\\)."
},
{
"type": "text",
"bbox": [
0.508,
0.396,
0.885,
0.653
],
"angle": 0,
"content": "We observe that mAggretriever shows relatively poor cross-lingual retrieval effectiveness (\\(X_Q \\neq X_C\\)) compared to its CLS counterpart (rows 2,3,6 vs 1). When aligning the MLM prediction target language for queries and corpus, the cross-lingual retrieval effectiveness sees improvement (rows 3 vs 4,5). These results show that the MLM prediction head can potentially be used as a translation layer to project query and corpus into the same language, which is also reported by Nair et al. (2022). It is possible to leverage the transformers pre-trained with translation language modeling (Chi et al., 2021; Feng et al., 2022) and the established parallel training data (Bonifacio et al., 2021) to improve mAggretriever's cross-lingual retrieval capability, which we leave for future work."
},
{
"type": "title",
"bbox": [
0.509,
0.676,
0.642,
0.692
],
"angle": 0,
"content": "6 Conclusion"
},
{
"type": "text",
"bbox": [
0.508,
0.71,
0.885,
0.92
],
"angle": 0,
"content": "In this paper, we introduce mAggretriever, an extension of Aggretriever for multilingual retrieval, by combining lexical and semantic features in pretrained language models for dense retrieval. We propose target-language and self MLM predictions to enhance the efficiency of mAggretriever. Our study highlights the efficiency advantage of self MLM prediction in multilingual retrieval, while target-language MLM prediction offers flexibility for cross-lingual retrieval. Importantly, mAggretriever, solely fine-tuned on English data, demonstrates competitive multilingual retrieval capability compared to other state-of-the-art dense retrievers."
},
{
"type": "page_number",
"bbox": [
0.478,
0.928,
0.526,
0.941
],
"angle": 0,
"content": "11692"
}
],
[
{
"type": "title",
"bbox": [
0.115,
0.085,
0.221,
0.1
],
"angle": 0,
"content": "Limitations"
},
{
"type": "text",
"bbox": [
0.117,
0.111,
0.491,
0.384
],
"angle": 0,
"content": "Our research primarily focuses on enhancing multilingual retrieval, specifically targeting monolingual retrieval zero-shot transfer to non-English languages. We plan to extend our study to improve cross-lingual retrieval by leveraging transformers pre-trained with translation language modeling (Chi et al., 2021; Feng et al., 2022). In addition, we only discuss how to improve zero-shot language transferability of dense retrieval. It is possible to further improve model effectiveness by leveraging existing multilingual training data (Bonifacio et al., 2021; Zhang et al., 2021) and better negative mining strategies (Shen et al., 2022). Finally, due to space limitations, we compare mAggretriever with previous state-of-the-art multilingual retrievers on Mr. TyDi (Zhang et al., 2021) in Appendix A.2."
},
{
"type": "title",
"bbox": [
0.115,
0.398,
0.287,
0.414
],
"angle": 0,
"content": "Acknowledgements"
},
{
"type": "text",
"bbox": [
0.114,
0.424,
0.49,
0.504
],
"angle": 0,
"content": "This research was supported in part by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council (NSERC) of Canada. We thank the anonymous referees who provided useful feedback to improve this work."
},
{
"type": "title",
"bbox": [
0.115,
0.533,
0.214,
0.547
],
"angle": 0,
"content": "References"
},
{
"type": "ref_text",
"bbox": [
0.117,
0.556,
0.49,
0.609
],
"angle": 0,
"content": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proc. ACL, pages 4623-4637."
},
{
"type": "ref_text",
"bbox": [
0.117,
0.62,
0.49,
0.713
],
"angle": 0,
"content": "Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv:1611.09268."
},
{
"type": "ref_text",
"bbox": [
0.117,
0.724,
0.49,
0.79
],
"angle": 0,
"content": "Luiz Henrique Bonifacio, Vitor Jeronymo, Hugo Queiroz Abonizio, Israel Campiotti, Marzieh Fadaee, Roberto Lotufo, and Rodrigo Nogueira. 2021. mMARCO: A multilingual version of MS MARCO passage ranking dataset. arXiv:2108.13897."
},
{
"type": "ref_text",
"bbox": [
0.117,
0.801,
0.49,
0.879
],
"angle": 0,
"content": "Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. In Proc. NAACL, pages 3576-3588."
},
{
"type": "ref_text",
"bbox": [
0.117,
0.892,
0.49,
0.919
],
"angle": 0,
"content": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco"
},
{
"type": "list",
"bbox": [
0.117,
0.556,
0.49,
0.919
],
"angle": 0,
"content": null
},
{
"type": "ref_text",
"bbox": [
0.529,
0.086,
0.885,
0.14
],
"angle": 0,
"content": "Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proc. ACL, pages 8440-8451."
},
{
"type": "ref_text",
"bbox": [
0.512,
0.149,
0.885,
0.202
],
"angle": 0,
"content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. NAACL, pages 4171-4186."
},
{
"type": "ref_text",
"bbox": [
0.512,
0.211,
0.885,
0.263
],
"angle": 0,
"content": "Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Language-agnostic BERT sentence embedding. In Proc. ACL, pages 878-891."
},
{
"type": "ref_text",
"bbox": [
0.512,
0.274,
0.885,
0.34
],
"angle": 0,
"content": "Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022. Unsupervised dense information retrieval with contrastive learning. Transactions on Machine Learning Research."
},
{
"type": "ref_text",
"bbox": [
0.512,
0.349,
0.885,
0.415
],
"angle": 0,
"content": "Ehsan Kamalloo, Xinyu Zhang, Odunayo Ogundepo, Nandan Thakur, David Alfonso-Hermelo, Mehdi Rezagholizadeh, and Jimmy Lin. 2023. Evaluating embedding APIs for information retrieval. arXiv:2305.06300."
},
{
"type": "ref_text",
"bbox": [
0.512,
0.424,
0.885,
0.49
],
"angle": 0,
"content": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proc. EMNLP, pages 6769-6781."
},
{
"type": "ref_text",
"bbox": [
0.512,
0.5,
0.885,
0.619
],
"angle": 0,
"content": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics, 7:452-466."
},
{
"type": "ref_text",
"bbox": [
0.512,
0.628,
0.885,
0.655
],
"angle": 0,
"content": "Carlos Lassance. 2023. Extending English IR methods to multi-lingual IR. arXiv:2302.14723."
},
{
"type": "ref_text",
"bbox": [
0.512,
0.665,
0.885,
0.731
],
"angle": 0,
"content": "Sheng-Chieh Lin, Akari Asai, Minghan Li, Barlas Oguz, Jimmy Lin, Yashar Mehdad, Wen-tau Yih, and Xilun Chen. 2023a. How to train your DRAGON: Diverse augmentation towards generalizable dense retrieval. arXiv:2302.07452."
},
{
"type": "ref_text",
"bbox": [
0.512,
0.74,
0.885,
0.807
],
"angle": 0,
"content": "Sheng-Chieh Lin, Minghan Li, and Jimmy Lin. 2023b. Aggretriever: A Simple Approach to Aggregate Textual Representations for Robust Dense Passage Retrieval. Transactions of the Association for Computational Linguistics, pages 436-452."
},
{
"type": "ref_text",
"bbox": [
0.512,
0.816,
0.885,
0.869
],
"angle": 0,
"content": "Suraj Nair, Eugene Yang, Dawn Lawrie, James Mayfield, and Douglas W. Oard. 2022. Learning a sparse representation model for neural CLIR. In Proc. DESIRES."
},
{
"type": "ref_text",
"bbox": [
0.512,
0.878,
0.885,
0.919
],
"angle": 0,
"content": "Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernandez Abrego, Ji Ma, Vincent Zhao, Yi Luan, Keith Hall, Ming-Wei Chang, and Yinfei Yang. 2022."
},
{
"type": "list",
"bbox": [
0.512,
0.086,
0.885,
0.919
],
"angle": 0,
"content": null
},
{
"type": "page_number",
"bbox": [
0.478,
0.928,
0.525,
0.941
],
"angle": 0,
"content": "11693"
}
],
[
{
"type": "ref_text",
"bbox": [
0.135,
0.086,
0.489,
0.113
],
"angle": 0,
"content": "Large dual encoders are generalizable retrievers. In Proc. EMNLP, pages 9844-9855."
},
{
"type": "ref_text",
"bbox": [
0.117,
0.123,
0.49,
0.163
],
"angle": 0,
"content": "Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proc. EMNLP, pages 3982-3992."
},
{
"type": "ref_text",
"bbox": [
0.117,
0.173,
0.489,
0.224
],
"angle": 0,
"content": "Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proc. EMNLP, pages 4512-4525."
},
{
"type": "ref_text",
"bbox": [
0.117,
0.235,
0.489,
0.288
],
"angle": 0,
"content": "Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips, and Yinfei Yang. 2020. LAReQA: Language-agnostic answer retrieval from a multilingual pool. In Proc. EMNLP, pages 5919-5930."
},
{
"type": "ref_text",
"bbox": [
0.117,
0.298,
0.489,
0.362
],
"angle": 0,
"content": "Tianhao Shen, Mingtong Liu, Ming Zhou, and Deyi Xiong. 2022. Recovering gold from black sand: Multilingual dense passage retrieval with hard and false negative samples. In Proc. EMNLP, pages 10659-10670."
},
{
"type": "ref_text",
"bbox": [
0.117,
0.374,
0.489,
0.438
],
"angle": 0,
"content": "Ning Wu, Yaobo Liang, Houxing Ren, Linjun Shou, Nan Duan, Ming Gong, and Daxin Jiang. 2022. Unsupervised context aware sentence representation pretraining for multi-lingual dense retrieval. arXiv:2206.03281."
},
{
"type": "ref_text",
"bbox": [
0.117,
0.45,
0.488,
0.5
],
"angle": 0,
"content": "Shunyu Zhang, Yaobo Liang, Ming Gong, Daxin Jiang, and Nan Duan. 2023a. Modeling sequential sentence relation to improve cross-lingual dense retrieval. In Proc. ICLR."
},
{
"type": "ref_text",
"bbox": [
0.117,
0.512,
0.489,
0.565
],
"angle": 0,
"content": "Xinyu Zhang, Xueguang Ma, Peng Shi, and Jimmy Lin. 2021. Mr. TyDi: A multi-lingual benchmark for dense retrieval. In Proc. Workshop on Multilingual Representation Learning, pages 127-137."
},
{
"type": "ref_text",
"bbox": [
0.117,
0.575,
0.489,
0.627
],
"angle": 0,
"content": "Xinyu Zhang, Kelechi Ogueji, Xueguang Ma, and Jimmy Lin. 2023b. Toward best practices for training multilingual dense retrieval models. ACM Trans. Inf. Syst."
},
{
"type": "ref_text",
"bbox": [
0.117,
0.638,
0.49,
0.73
],
"angle": 0,
"content": "Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, and Jimmy Lin. MIRACL: A Multilingual Retrieval Dataset Covering 18 Diverse Languages. Transactions of the Association for Computational Linguistics, pages 1114-1131."
},
{
"type": "list",
"bbox": [
0.117,
0.086,
0.49,
0.73
],
"angle": 0,
"content": null
},
{
"type": "page_number",
"bbox": [
0.478,
0.929,
0.526,
0.941
],
"angle": 0,
"content": "11694"
}
],
[
{
"type": "title",
"bbox": [
0.115,
0.085,
0.239,
0.101
],
"angle": 0,
"content": "A Appendix"
},
{
"type": "title",
"bbox": [
0.115,
0.11,
0.363,
0.124
],
"angle": 0,
"content": "A.1 Full Results on MIRACL"
},
{
"type": "text",
"bbox": [
0.114,
0.131,
0.49,
0.195
],
"angle": 0,
"content": "Table 5 reports the detailed nDCG@10 and R@100 numbers on MIRACL 15 languages for all the compared models. Note that we do not use the English dataset when evaluating on MIRACL."
},
{
"type": "title",
"bbox": [
0.115,
0.206,
0.37,
0.221
],
"angle": 0,
"content": "A.2 Comparisons on Mr. TyDi"
},
{
"type": "text",
"bbox": [
0.114,
0.227,
0.492,
0.501
],
"angle": 0,
"content": "Some previous state-of-the-art multilingual retrievers are evaluated on the test queries of Mr. TyDi (Zhang et al., 2021), a multilingual retrieval dataset similar to MIRACL but with sparse relevance judgements and fewer languages. In order to compare with the previous models, following the model inference settings in MIRACL, we evaluate our mAggretriever with the backbone of XLM-R. The full results are tabulated in Table 6. We still observe that all the variants of mAggretriever outperform the previous state of the art in 6 out of 10 languages. Note that most of the previous retrievers undergo multilingual contrastive learning. For example, mContriever is pre-trained on the corpus with 29 languages while mColBERT is fine-tuned on the multilingual MS MARCO dataset introduced by Bonifacio et al. (2021)."
},
{
"type": "page_number",
"bbox": [
0.478,
0.928,
0.527,
0.941
],
"angle": 0,
"content": "11695"
}
],
[
{
"type": "table_caption",
"bbox": [
0.193,
0.14,
0.804,
0.155
],
"angle": 0,
"content": "Table 5: MIRACL multilingual retrieval comparisons. Bold denotes the best effectiveness."
},
{
"type": "table",
"bbox": [
0.119,
0.165,
0.88,
0.535
],
"angle": 0,
|
| 1425 |
+
            "content": "<table><tr><td rowspan=\"2\">Models</td><td colspan=\"16\">MIRACL Dev</td></tr><tr><td>ar</td><td>bn</td><td>es</td><td>fa</td><td>fi</td><td>fr</td><td>hi</td><td>id</td><td>ja</td><td>ko</td><td>ru</td><td>sw</td><td>te</td><td>th</td><td>zh</td><td>avg.</td></tr><tr><td></td><td colspan=\"16\">nDCG@10</td></tr><tr><td>(a) BM25</td><td>48.1</td><td>50.8</td><td>31.9</td><td>33.3</td><td>55.1</td><td>18.3</td><td>45.3</td><td>44.9</td><td>36.9</td><td>41.9</td><td>33.4</td><td>38.3</td><td>49.4</td><td>48.4</td><td>18.0</td><td>39.6</td></tr><tr><td>(b) mDPR</td><td>49.9</td><td>44.3</td><td>47.8</td><td>48.0</td><td>47.2</td><td>43.5</td><td>38.3</td><td>27.2</td><td>43.9</td><td>41.9</td><td>40.7</td><td>29.9</td><td>35.6</td><td>35.8</td><td>51.2</td><td>41.7</td></tr><tr><td>(c) mContriever</td><td>52.5</td><td>50.0</td><td>41.8</td><td>21.5</td><td>60.2</td><td>31.4</td><td>28.6</td><td>39.2</td><td>42.4</td><td>48.3</td><td>39.1</td><td>56.0</td><td>52.8</td><td>51.7</td><td>41.0</td><td>43.8</td></tr><tr><td>(d) Cohere (API)</td><td>61.7</td><td>59.4</td><td>23.3</td><td>47.1</td><td>63.4</td><td>46.2</td><td>49.3</td><td>44.6</td><td>46.0</td><td>49.6</td><td>46.9</td><td>61.1</td><td>61.3</td><td>54.6</td><td>36.5</td><td>50.1</td></tr><tr><td>(1) mBERTCLS</td><td>50.9</td><td>45.2</td><td>31.7</td><td>30.5</td><td>48.5</td><td>30.5</td><td>37.7</td><td>19.8</td><td>43.4</td><td>40.0</td><td>27.0</td><td>21.2</td><td>36.4</td><td>37.7</td><td>43.9</td><td>36.1</td></tr><tr><td>(2) mBERTAGG</td><td>56.9</td><td>48.5</td><td>44.0</td><td>43.1</td><td>63.4</td><td>40.8</td><td>34.1</td><td>40.3</td><td>49.3</td><td>43.9</td><td>44.9</td><td>44.6</td><td>44.7</td><td>27.2</td><td>40.2</td><td>44.4</td></tr><tr><td>(3) mBERTtgAGG</td><td>55.7</td><td>45.5</td><td>45.4</td><td>42.4</td><td>62.3</td><td>42.9</td><td>34.3</td><td>38.7</td><td>47.7</td><td>43.4</td><td>45.2</td><td>45.5</td><td>43.9</td><td>32.5</td><td>41.5</td><td>44.4</td></tr><tr><td>(4) mBERTselfAGG</td><td>59.4</td><td>51.0</td><td>44.6</td><td>44.5</td><td>65.3</td><td>43.6</td><td>37.4</td><td>42.1</td><td>50.2</td><td>47.8</td><td>46.3</td><td>48.5</td><td>48.0</td><td>31.1</td><td>44.3</td><td>46.9</td></tr><tr><td>(5) XLM-RCLS</td><td>46.6</td><td>46.6</td><td>29.9</td><td>43.5</td><td>44.4</td><td>28.5</td><td>41.7</td><td>31.7</td><td>40.7</td><td>45.6</td><td>27.5</td><td>22.5</td><td>50.5</td><td>53.9</td><td>35.9</td><td>39.3</td></tr><tr><td>(6) XLM-RAGG</td><td>60.6</td><td>60.4</td><td>42.5</td><td>46.2</td><td>66.2</td><td>43.0</td><td>44.8</td><td>47.8</td><td>53.3</td><td>58.2</td><td>44.6</td><td>46.6</td><td>72.1</td><td>66.0</td><td>41.4</td><td>52.9</td></tr><tr><td>(7) XLM-RtgAGG</td><td>60.2</td><td>60.9</td><td>44.3</td><td>46.7</td><td>65.0</td><td>43.8</td><td>48.0</td><td>47.2</td><td>53.3</td><td>59.1</td><td>45.8</td><td>45.2</td><td>70.2</td><td>67.1</td><td>42.9</td><td>53.3</td></tr><tr><td>(8) XLM-RselfAGG</td><td>61.4</td><td>61.4</td><td>42.9</td><td>46.5</td><td>66.2</td><td>41.3</td><td>46.2</td><td>48.4</td><td>53.9</td><td>57.9</td><td>46.5</td><td>47.5</td><td>71.2</td><td>66.8</td><td>41.0</td><td>53.3</td></tr><tr><td></td><td colspan=\"16\">R@100</td></tr><tr><td>(a) BM25</td><td>88.9</td><td>90.9</td><td>70.2</td><td>73.1</td><td>89.1</td><td>65.3</td><td>86.8</td><td>90.4</td><td>80.5</td><td>78.3</td><td>66.1</td><td>70.1</td><td>83.1</td><td>88.7</td><td>56.0</td><td>78.5</td></tr><tr><td>(b) mDPR</td><td>84.1</td><td>81.9</td><td>86.4</td><td>89.8</td><td>78.8</td><td>91.5</td><td>77.6</td><td>57.3</td><td>82.5</td><td>73.7</td><td>79.7</td><td>61.6</td><td>76.2</td><td>67.8</td><td>94.4</td><td>78.9</td></tr><tr><td>(c) mContriever</td><td>92.5</td><td>92.1</td><td>84.1</td><td>65.4</td><td>95.3</td><td>82.4</td><td>64.6</td><td>80.2</td><td>87.8</td><td>87.5</td><td>85.0</td><td>91.1</td><td>96.1</td><td>93.6</td><td>90.3</td><td>85.9</td></tr><tr><td>(d) Cohere (API)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>(1) mBERTCLS</td><td>84.3</td><td>81.9</td><td>67.1</td><td>84.1</td><td>61.2</td><td>72.9</td><td>76.3</td><td>47.7</td><td>80.7</td><td>71.9</td><td>61.1</td><td>50.4</td><td>75.5</td><td>69.7</td><td>90.0</td><td>71.6</td></tr><tr><td>(2) mBERTAGG</td><td>86.9</td><td>83.8</td><td>81.0</td><td>81.0</td><td>90.2</td><td>83.2</td><td>70.9</td><td>76.6</td><td>84.1</td><td>71.7</td><td>80.8</td><td>79.4</td><td>82.4</td><td>60.0</td><td>81.3</td><td>79.5</td></tr><tr><td>(3) mBERTtgAGG</td><td>86.2</td><td>82.2</td><td>82.1</td><td>79.9</td><td>89.7</td><td>82.9</td><td>72.6</td><td>75.7</td><td>84.3</td><td>76.3</td><td>80.5</td><td>80.9</td><td>81.7</td><td>63.8</td><td>82.6</td><td>80.1</td></tr><tr><td>(4) mBERTselfAGG</td><td>88.7</td><td>85.1</td><td>80.1</td><td>82.4</td><td>92.1</td><td>86.5</td><td>77.6</td><td>80.2</td><td>85.5</td><td>78.4</td><td>82.0</td><td>81.5</td><td>83.4</td><td>65.9</td><td>84.5</td><td>82.3</td></tr><tr><td>(5) XLM-RCLS</td><td>79.2</td><td>82.6</td><td>63.6</td><td>79.9</td><td>75.2</td><td>66.8</td><td>76.9</td><td>62.7</td><td>77.2</td><td>77.6</td><td>62.5</td><td>48.8</td><td>87.0</td><td>89.2</td><td>78.9</td><td>73.9</td></tr><tr><td>(6) XLM-RAGG</td><td>89.8</td><td>92.5</td><td>78.8</td><td>85.4</td><td>92.6</td><td>81.8</td><td>85.1</td><td>83.5</td><td>89.0</td><td>88.5</td><td>78.9</td><td>79.1</td><td>95.4</td><td>95.1</td><td>80.6</td><td>86.4</td></tr><tr><td>(7) XLM-RtgAGG</td><td>89.3</td><td>93.1</td><td>80.4</td><td>84.6</td><td>91.8</td><td>81.4</td><td>84.5</td><td>82.6</td><td>88.9</td><td>86.6</td><td>79.5</td><td>77.7</td><td>94.8</td><td>95.1</td><td>79.9</td><td>86.0</td></tr><tr><td>(8) XLM-RselfAGG</td><td>90.2</td><td>92.8</td><td>78.2</td><td>84.5</td><td>93.1</td><td>81.2</td><td>82.5</td><td>84.6</td><td>89.8</td><td>88.3</td><td>80.8</td><td>79.4</td><td>95.5</td><td>94.5</td><td>79.0</td><td>86.3</td></tr></table>"
        },
        {
            "type": "table_caption",
            "bbox": [0.114, 0.655, 0.883, 0.684],
            "angle": 0,
            "content": "Table 6: Mr. TyDi multilingual retrieval comparisons with state-of-the-art multilingual retrievers. Bold denotes the best effectiveness. multi CL denotes multilingual contrastive learning."
        },
        {
            "type": "table",
            "bbox": [0.12, 0.695, 0.879, 0.86],
            "angle": 0,
            "content": "<table><tr><td rowspan=\"2\">Models</td><td rowspan=\"2\">multi CL</td><td colspan=\"11\">Mr. TyDi Test</td></tr><tr><td>ar</td><td>bn</td><td>fi</td><td>id</td><td>ja</td><td>ko</td><td>ru</td><td>sw</td><td>te</td><td>th</td><td>avg.</td></tr><tr><td></td><td></td><td colspan=\"11\">MRR@100</td></tr><tr><td>(a) BM25 (Zhang et al., 2021)</td><td>X</td><td>36.7</td><td>41.3</td><td>28.8</td><td>38.2</td><td>21.7</td><td>28.1</td><td>32.9</td><td>39.6</td><td>42.4</td><td>41.7</td><td>35.1</td></tr><tr><td>(b) mColBERT (Bonifacio et al., 2021)</td><td>✓</td><td>55.3</td><td>48.8</td><td>41.3</td><td>55.5</td><td>36.6</td><td>36.7</td><td>48.2</td><td>44.8</td><td>61.6</td><td>-</td><td>-</td></tr><tr><td>(c) mContriever (Izacard et al., 2022)</td><td>✓</td><td>43.4</td><td>42.3</td><td>35.1</td><td>42.6</td><td>32.4</td><td>34.2</td><td>36.1</td><td>51.2</td><td>37.4</td><td>40.2</td><td>39.5</td></tr><tr><td>(d) CCP (Wu et al., 2022)</td><td>X</td><td>42.6</td><td>45.7</td><td>37.2</td><td>46.2</td><td>37.7</td><td>34.6</td><td>36.0</td><td>39.2</td><td>47.0</td><td>48.9</td><td>41.5</td></tr><tr><td>(e) MSM (Zhang et al., 2023a)</td><td>X</td><td>51.6</td><td>53.0</td><td>39.4</td><td>50.5</td><td>32.0</td><td>36.8</td><td>37.2</td><td>43.4</td><td>62.6</td><td>53.5</td><td>44.7</td></tr><tr><td>(1) XLM-RCLS</td><td>X</td><td>41.9</td><td>40.8</td><td>27.8</td><td>39.9</td><td>32.5</td><td>33.0</td><td>27.7</td><td>23.7</td><td>54.2</td><td>46.1</td><td>36.8</td></tr><tr><td>(2) XLM-RAGG</td><td>X</td><td>52.3</td><td>55.8</td><td>43.2</td><td>55.0</td><td>40.4</td><td>40.5</td><td>41.5</td><td>45.1</td><td>77.5</td><td>57.3</td><td>50.8</td></tr><tr><td>(3) XLM-RtgAGG</td><td>X</td><td>52.3</td><td>55.2</td><td>43.0</td><td>54.8</td><td>41.1</td><td>40.4</td><td>44.9</td><td>46.0</td><td>76.2</td><td>58.7</td><td>51.2</td></tr><tr><td>(4) XLM-RselfAGG</td><td>X</td><td>52.0</td><td>58.5</td><td>42.6</td><td>54.8</td><td>39.2</td><td>41.6</td><td>44.3</td><td>47.3</td><td>74.7</td><td>58.3</td><td>51.3</td></tr></table>"
        },
        {
            "type": "page_number",
            "bbox": [0.478, 0.929, 0.526, 0.941],
            "angle": 0,
            "content": "11696"
        }
    ]
]
2023/mAggretriever_ A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval/58a42c3f-381b-4c90-a86a-76877cc92cd5_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:10647e116112ab41d803672d74e6f34c193b0b4b9946a640008bf08ddd86aa33
size 1368197
2023/mAggretriever_ A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval/full.md
ADDED
@@ -0,0 +1,208 @@
# mAggretriever: A Simple Yet Effective Approach to Zero-Shot Multilingual Dense Retrieval

Sheng-Chieh Lin $^{1*}$ , Amin Ahmad $^{2}$ , Jimmy Lin $^{1}$

<sup>1</sup> University of Waterloo <sup>2</sup> Vectara

{s269lin,jimmylin}@uwaterloo.ca, amin@vectara.com

# Abstract

Multilingual information retrieval (MLIR) is a crucial yet challenging task due to the need for human annotations in multiple languages, making training data creation labor-intensive. In this paper, we introduce mAggretriever, which effectively leverages semantic and lexical features from pre-trained multilingual transformers (e.g., mBERT and XLM-R) for dense retrieval. To enhance training and inference efficiency, we employ approximate masked-language modeling prediction for computing lexical features, reducing the GPU memory requirement for mAggretriever fine-tuning by $70$-$85\%$. Empirical results demonstrate that mAggretriever, fine-tuned solely on English training data, surpasses existing state-of-the-art multilingual dense retrieval models that undergo further training on large-scale MLIR training data. Our code is available at https://github.com/castorini/dhr.
# 1 Introduction

Fine-tuning a pre-trained transformer has proven to be highly effective in many tasks of natural language processing, including information retrieval (IR). Despite its success, recent state-of-the-art dense retrieval (DR) models (Ni et al., 2022; Lin et al., 2023a) predominantly focus on English. This bias arises from the fact that constructing an effective DR model requires a substantial amount of annotated training data, which is predominantly available in English datasets (Bajaj et al., 2016; Kwiatkowski et al., 2019). This makes it challenging for users of low-resource languages to benefit from recent IR progress.

To address this issue, researchers have explored leveraging pre-trained multilingual transformers, such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020), which exhibit promising language transferability even when fine-tuned on English datasets alone (Zhang et al., 2023b). However, Izacard et al. (2022) emphasize the importance of contrastive pre-training on multilingual corpora to achieve improved language transferability in IR. Other approaches utilize multilingual parallel corpora or translation pairs for contrastive pre-training (Feng et al., 2022) or fine-tuning (Reimers and Gurevych, 2020; Bonifacio et al., 2021). However, these solutions are viable only when significant computational resources or multilingual parallel data are available.
Recently, Lin et al. (2023b) demonstrated that existing DR models relying solely on [CLS] (or average pooling) do not fully exploit the retrieval capability of pre-trained transformers. They propose Aggretriever, which combines the semantic and lexical features from two respective components, [CLS] and masked language modeling (MLM) prediction, and shows superior effectiveness on diverse English retrieval tasks. This inspires us to ask: Can Aggretriever benefit multilingual retrieval?

Extending Aggretriever to support multilingual retrieval poses challenges: the computation cost of extracting lexical features from the MLM component grows with the vocabulary size of the pre-trained model, making fine-tuning with limited resources challenging. In this work, we propose two simple approaches to approximate MLM prediction for extracting lexical features from multilingual pre-trained transformers, making it possible to train mAggretriever on one GPU within 40 GB of memory. Remarkably, mAggretriever exhibits strong retrieval capability across multiple languages despite being fine-tuned on English data alone.

The paper is structured as follows: we begin by providing background on standard DR and Aggretriever. We then present our proposed approaches to tackle the computational challenges involved in extending Aggretriever to support multilingual retrieval. Subsequently, we compare mAggretriever with other state-of-the-art multilingual DR models and explore the potential for extending mAggretriever to enable cross-lingual retrieval.
# 2 Background

Dense Retrieval. Given a query with sequential tokens $q = ([\mathrm{CLS}], q_1, \dots, q_n)$, our task is to retrieve a list of passages that maximizes some ranking metric such as nDCG or MRR. Standard dense retrieval (DR) models (Reimers and Gurevych, 2019; Karpukhin et al., 2020) based on pre-trained language models encode queries and passages as low-dimensional [CLS] vectors with a bi-encoder architecture and use the dot product between the encoded vectors as the similarity score:

$$
\operatorname{sim}_{\mathrm{CLS}}(q, p) \triangleq \mathbf{e}_{q_{[\mathrm{CLS}]}} \cdot \mathbf{e}_{p_{[\mathrm{CLS}]}}, \tag{1}
$$

where $\mathbf{e}_{q_{[\mathrm{CLS}]}}$ and $\mathbf{e}_{p_{[\mathrm{CLS}]}}$ are the [CLS] vectors of the query and passage, respectively, at the last layer of a pre-trained language model (e.g., BERT).
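As a toy illustration of Eq. (1), the following sketch ranks passages by the dot product of single-vector representations. The 3-dimensional vectors are made-up stand-ins for encoder outputs, not real model activations:

```python
# Sketch of standard dense retrieval scoring (Eq. 1): queries and passages
# are represented by single [CLS] vectors and ranked by dot product.

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def rank_passages(query_vec, passage_vecs):
    """Return passage indices sorted by sim_CLS(q, p), highest first."""
    scores = [(dot(query_vec, p), i) for i, p in enumerate(passage_vecs)]
    return [i for _, i in sorted(scores, reverse=True)]

e_q = [0.2, 0.9, 0.1]          # toy e_{q_[CLS]}
passages = [
    [0.1, 0.1, 0.9],           # off-topic passage vector
    [0.3, 0.8, 0.0],           # on-topic passage vector
]
print(rank_passages(e_q, passages))   # on-topic passage is ranked first
```

In practice the same dot product is computed at scale with an approximate nearest-neighbor index rather than an exhaustive loop.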
Aggretriever. In addition to using [CLS] vectors to capture semantic textual features, Lin et al. (2023b) further propose to capture lexical textual features from the pre-trained MLM prediction head by projecting each contextualized token embedding $\mathbf{e}_{q_i}$ into a high-dimensional vector in the wordpiece lexical space:

$$
\mathbf{p}_{q_{i}} = \operatorname{softmax}\left(\mathbf{e}_{q_{i}} \cdot \mathbf{W}_{\mathrm{mlm}} + \mathbf{b}_{\mathrm{mlm}}\right), \tag{2}
$$

where $\mathbf{e}_{q_i} \in \mathbb{R}^d$, $\mathbf{W}_{\mathrm{mlm}} \in \mathbb{R}^{d \times |\mathrm{V}_{\mathrm{wp}}|}$, and $\mathbf{b}_{\mathrm{mlm}} \in \mathbb{R}^{|\mathrm{V}_{\mathrm{wp}}|}$ are the weights of the pre-trained MLM linear projector, and $\mathbf{p}_{q_i} \in \mathbb{R}^{|\mathrm{V}_{\mathrm{wp}}|}$ is the $i$-th contextualized token represented as a probability distribution over the BERT wordpiece vocabulary, $\mathrm{V}_{\mathrm{wp}}$. Weighted max pooling is then performed over the sequential representations $(\mathbf{p}_{q_1}, \mathbf{p}_{q_2}, \dots, \mathbf{p}_{q_l})$ to obtain a single-vector lexical representation:

$$
\mathbf{v}_{q}[v] = \max_{i \in \{1, 2, \dots, l\}} w_{i} \cdot \mathbf{p}_{q_{i}}[v], \tag{3}
$$
where $w_{i} = |\mathbf{e}_{q_{i}}\cdot \mathbf{W} + \mathbf{b}| \in \mathbb{R}^{1}$ is a positive scalar and $v \in \{1,2,\dots ,|\mathrm{V}_{\mathrm{wp}}|\}$; $\mathbf{W} \in \mathbb{R}^{d\times 1}$ and $\mathbf{b} \in \mathbb{R}^{1}$ are trainable weights. The scalar $w_{i}$ for each token $q_{i}$ is essential to capture term importance, which $\mathbf{p}_{q_i}$ alone cannot capture since it is normalized by softmax. Note that the [CLS] token embedding is excluded since it is used for next-sentence prediction during pre-training and thus does not carry much lexical information.

Figure 1: Illustration of MLM prediction for the input token $q_{1}$. (1) Whole-vocabulary prediction considers all languages, while target-language prediction only considers the tokens in one language, e.g., (2) the source language (Spanish) or (3) English. (4) Self prediction assigns a probability of one to the token itself. "0." denotes a value less than 0.1.
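Eqs. (2) and (3) can be sketched end to end on a toy vocabulary. The logits and per-token weights below are invented for illustration; only the softmax-then-weighted-max-pool structure mirrors the paper:

```python
import math

# Toy sketch of Eqs. (2)-(3): each token's MLM logits are softmax-normalized
# into a distribution over the vocabulary (Eq. 2), then the single-vector
# lexical representation takes a weighted max over tokens per vocab entry (Eq. 3).

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def lexical_rep(token_logits, token_weights):
    """v_q[v] = max_i w_i * p_{q_i}[v] over a toy vocabulary."""
    probs = [softmax(l) for l in token_logits]        # Eq. (2), per token
    vocab_size = len(token_logits[0])
    return [
        max(w * p[v] for w, p in zip(token_weights, probs))
        for v in range(vocab_size)                    # Eq. (3), max pooling
    ]

v_q = lexical_rep(
    token_logits=[[2.0, 0.1, 0.1], [0.1, 3.0, 0.1]],  # two tokens, |V| = 3
    token_weights=[1.5, 0.5],                          # positive scalars w_i
)
# The first token dominates entry 0; the second, entry 1.
assert v_q[0] > v_q[1] > v_q[2]
```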
The wordpiece lexical representation $\mathbf{v}_q$ is then compressed, without supervision, into a low-dimensional vector $\mathbf{agg}_q^\star$ and concatenated with the [CLS] vector. The similarity score between a given $q$-$p$ pair is computed with their dot product:

$$
\begin{aligned} \operatorname{sim}(q, p) &\triangleq \left(\mathbf{e}_{q_{[\mathrm{CLS}]}} \oplus \mathbf{v}_{q}\right) \cdot \left(\mathbf{e}_{p_{[\mathrm{CLS}]}} \oplus \mathbf{v}_{p}\right) \\ &\approx \left(\boldsymbol{\epsilon}_{q_{[\mathrm{CLS}]}} \oplus \mathbf{agg}_{q}^{\star}\right) \cdot \left(\boldsymbol{\epsilon}_{p_{[\mathrm{CLS}]}} \oplus \mathbf{agg}_{p}^{\star}\right), \end{aligned} \tag{4}
$$

where $\mathbf{e}_{q_{[\mathrm{CLS}]}}$ and $\mathbf{v}_q$ capture semantic and lexical textual features from BERT, respectively. Following Lin et al. (2023b), we linearly project $\mathbf{e}_{q_{[\mathrm{CLS}]}}$ into 128 dimensions and compress $\mathbf{v}_q$ into 640 dimensions; i.e., $\boldsymbol{\epsilon}_{q_{[\mathrm{CLS}]}}\in \mathbb{R}^{128}$ and $\mathbf{agg}_q^\star \in \mathbb{R}^{640}$. We refer readers to Lin et al. (2023b) for the details of compressing $\mathbf{v}_q$ into $\mathbf{agg}_q^\star$.
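A useful property of Eq. (4) is that the dot product of concatenated vectors decomposes into the sum of a semantic score and a lexical score. The tiny 2-d/3-d vectors below are illustrative only (the paper uses 128-d and 640-d components):

```python
# Sketch of Eq. (4): similarity over concatenated semantic ([CLS]) and
# lexical (agg*) vectors equals the sum of the two component dot products.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sim(cls_q, agg_q, cls_p, agg_p):
    """Score of the concatenated representations (list + concatenates)."""
    return dot(cls_q + agg_q, cls_p + agg_p)

cls_q, agg_q = [1.0, 0.0], [0.5, 0.5, 0.0]
cls_p, agg_p = [0.2, 0.4], [0.1, 0.3, 0.9]
combined = sim(cls_q, agg_q, cls_p, agg_p)
# Concatenated score = semantic score + lexical score.
assert abs(combined - (dot(cls_q, cls_p) + dot(agg_q, agg_p))) < 1e-9
```

This decomposition is why a single maximum-inner-product search over the concatenated vectors suffices; no separate semantic and lexical indexes are needed.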
# 3 Our Approach

In this work, we extend Aggretriever to multilingual retrieval as mAggretriever. However, directly applying Eq. (2) to multilingual pre-trained MLM heads over the whole vocabulary increases the computation cost for both training and inference due to the large vocabulary size $|\mathrm{V_{wp}}|$ of the MLM projector $\mathbf{W}_{\mathrm{mlm}}$ (Nair et al., 2022; Lassance, 2023). For example, mBERT and XLM-R have vocabulary sizes of 120K and 250K, respectively (vs BERT's 35K), in the MLM projector. To address this issue, we propose two computationally efficient approaches to approximate the MLM prediction in Eq. (2).

Table 1: Training and inference cost comparisons on MS MARCO with the XLM-R base backbone.

<table><tr><td rowspan="2">Models</td><td colspan="2">vector components</td><td rowspan="2">MLM prediction</td><td colspan="2">training cost</td><td colspan="2">query encoding latency</td></tr><tr><td>[CLS]</td><td>agg*</td><td>GPU memory</td><td>total time</td><td>CPU</td><td>GPU</td></tr><tr><td>(1) XLM-RCLS</td><td>768 dim.</td><td>0 dim.</td><td>-</td><td>21 GBs</td><td>5.5 hrs</td><td>105 ms/q</td><td>9.1 ms/q</td></tr><tr><td>(2) XLM-RAGG</td><td></td><td></td><td>whole vocabulary</td><td>OOM (140 GBs)</td><td>-</td><td>400 ms/q</td><td>15.1 ms/q</td></tr><tr><td>(3) XLM-RtgAGG</td><td>128 dim.</td><td>640 dim.</td><td>target language</td><td>39 GBs</td><td>9.5 hrs</td><td>248 ms/q</td><td>11.4 ms/q</td></tr><tr><td>(4) XLM-RselfAGG</td><td></td><td></td><td>self</td><td>21 GBs</td><td>5.5 hrs</td><td>123 ms/q</td><td>9.1 ms/q</td></tr></table>
Target-language prediction. One intuitive approach is to compute the probability distribution over the target-language token set of interest, denoted as $\mathrm{V}_{\mathrm{wp}}^{\mathrm{tg}}$, by replacing $\mathbf{W}_{\mathrm{mlm}}$ and $\mathbf{b}_{\mathrm{mlm}}$ in Eq. (2) with $\mathbf{W}_{\mathrm{mlm}}^{\mathrm{tg}}$ and $\mathbf{b}_{\mathrm{mlm}}^{\mathrm{tg}}$, respectively:

$$
\begin{cases} \mathbf{W}_{\mathrm{mlm}}^{\mathrm{tg}}[:, v] = \mathbf{W}_{\mathrm{mlm}}[:, v];\ \mathbf{b}_{\mathrm{mlm}}^{\mathrm{tg}}[v] = \mathbf{b}_{\mathrm{mlm}}[v], & \text{if } v \in \mathrm{V}_{\mathrm{wp}}^{\mathrm{tg}}, \\ \mathbf{W}_{\mathrm{mlm}}^{\mathrm{tg}}[:, v] = \vec{0};\ \mathbf{b}_{\mathrm{mlm}}^{\mathrm{tg}}[v] = 0, & \text{otherwise}. \end{cases}
$$

From the above equation, we only have to compute the matrix multiplication and softmax over $|\mathrm{V}_{\mathrm{wp}}^{\mathrm{tg}}|$ instead of $|\mathrm{V}_{\mathrm{wp}}|$ tokens. This approach assumes that only the tokens corresponding to the target language have responses when conducting MLM prediction; i.e., $\mathbf{p}_{q_i}[v] = 0$ if $v \notin \mathrm{V}_{\mathrm{wp}}^{\mathrm{tg}}$.
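The effect of target-language prediction can be sketched as follows: run softmax only over the target-language entries and leave every other vocabulary entry at zero. The five-entry vocabulary and the target id set are made-up toy values:

```python
import math

# Sketch of target-language MLM prediction: softmax is computed only over
# the |V_wp^tg| target-language logits; p[v] = 0 for v outside the target set.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def target_language_probs(logits, target_ids, vocab_size):
    """Probability distribution supported only on the target-language ids."""
    sub = softmax([logits[v] for v in target_ids])  # cheap: |V^tg| entries
    probs = [0.0] * vocab_size
    for p, v in zip(sub, target_ids):
        probs[v] = p
    return probs

logits = [1.0, 2.0, 0.5, 3.0, 0.0]   # whole-vocabulary MLM logits, |V| = 5
target = [1, 3]                       # toy ids belonging to the target language
p = target_language_probs(logits, target, len(logits))
assert p[0] == p[2] == p[4] == 0.0    # non-target tokens get zero mass
assert abs(sum(p) - 1.0) < 1e-9       # mass renormalized over the target set
```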
Self prediction. Since the BERT MLM pre-training task includes recovering the unmasked token itself, to further save computation cost, we may approximate MLM prediction by assigning a value of one to the token $q_{i}$ itself and zero to all other tokens:

$$
\mathbf{p}_{q_{i}}[v] = \begin{cases} 1, & \text{if } v = \operatorname{tok\_id}(q_{i}), \\ 0, & \text{otherwise}, \end{cases} \tag{5}
$$

i.e., $\mathbf{p}_{q_{i}} \in \{0, 1\}^{|\mathrm{V}_{\mathrm{wp}}|}$ is a one-hot vector. This operation removes the computationally expensive matrix multiplication and softmax in Eq. (2). Note that, combining Eq. (3) and (5), the lexical representations $\mathbf{v}_q$ from self prediction can be considered bag-of-words vectors with learned term weights.
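Combining the one-hot distributions of Eq. (5) with the max pooling of Eq. (3) collapses into a very cheap computation, sketched below with invented token ids and weights:

```python
# Sketch of self prediction (Eq. 5) plus max pooling (Eq. 3): each token's
# distribution is one-hot at its own id, so the pooled lexical representation
# reduces to a bag-of-words vector with learned per-token weights w_i.

def self_prediction_rep(token_ids, token_weights, vocab_size):
    """v_q[v] = max over occurrences of token v of its learned weight w_i."""
    v_q = [0.0] * vocab_size
    for tok, w in zip(token_ids, token_weights):
        v_q[tok] = max(v_q[tok], w)   # max pooling over repeated tokens
    return v_q

# Toy query tokens [7, 2, 7] with per-token weights; token 7 appears twice,
# so its entry keeps the larger of the two weights.
v_q = self_prediction_rep([7, 2, 7], [0.4, 1.2, 0.9], vocab_size=10)
assert v_q[7] == 0.9 and v_q[2] == 1.2
```

No matrix multiplication or softmax appears anywhere, which is why rows (4) and (1) of Table 1 show matching training cost and latency.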
Figure 1 illustrates whole-vocabulary prediction and our proposed target-language and self MLM predictions. Table 1 compares the training and inference cost of standard DR (row 1) and mAggretriever with different MLM prediction strategies on the MS MARCO dataset using the XLM-R base backbone. The training and query encoding settings are detailed in Section 4.2. We observe that training with target-language (English in our case) MLM prediction reduces the GPU memory requirement compared to the whole vocabulary (row 3 vs 2), while self prediction yields training and inference efficiency on par with standard DR (row 4 vs 1).
# 4 Experimental Setups

# 4.1 Datasets and Metrics

We use the MS MARCO passage ranking dataset introduced by Bajaj et al. (2016), comprising a corpus with 8.8M passages and 500K training queries. Supervised-language (English) retrieval effectiveness is evaluated on the 6,980 queries (MARCO dev) from the development set, with one relevant passage per query on average. Following the established procedure, we report MRR@10 and R@1000 as the metrics.
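For readers unfamiliar with the MRR@10 metric used on MS MARCO dev, a minimal sketch with made-up ranked lists:

```python
# Sketch of MRR@10: the reciprocal rank of the first relevant passage within
# the top 10, averaged over queries; queries with no hit contribute 0.

def mrr_at_10(ranked_lists, relevant_sets):
    total = 0.0
    for ranking, relevant in zip(ranked_lists, relevant_sets):
        for rank, pid in enumerate(ranking[:10], start=1):
            if pid in relevant:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

score = mrr_at_10(
    ranked_lists=[["p3", "p1", "p9"], ["p5", "p2"]],
    relevant_sets=[{"p1"}, {"p7"}],    # second query has no hit in the top 10
)
assert abs(score - 0.25) < 1e-9        # (1/2 + 0) / 2
```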
We evaluate zero-shot retrieval effectiveness in other languages using the MIRACL dataset introduced by Zhang et al., comprising around 77K queries over Wikipedia in 18 languages with over 700K high-quality relevance judgments by native speakers. We use the publicly available development queries and their relevance judgments in 15 languages (two surprise languages and English are excluded). Following Zhang et al., we report macro-averaged nDCG@10 and R@100 over the 15 languages and list the full numbers in Appendix A.1.
Finally, we study how to conduct cross-lingual retrieval using mAggretriever on the XQuAD-R dataset introduced by Roy et al. (2020), consisting of parallel queries and corpora in 11 languages. We conduct retrieval using queries in language $X_{Q}$ against the corpus in language $X_{C}$ and report the macro-averaged MAP@100 over all 110 cross-lingual language pairs $(X_{Q} \neq X_{C})$ and the other 11 monolingual combinations $(X_{Q} = X_{C})$.
Table 2 reports the data statistics of MIRACL and XQuAD-R. Note that the candidates in MIRACL are passages, while the candidates in XQuAD-R are chunked sentences from the XQuAD corpora. Also note that although the XQuAD corpora are manually rewritten multilingual parallel corpora (Artetxe et al., 2020), the numbers of chunked sentences differ between languages, and that for each query, MIRACL has multiple relevant candidates while XQuAD-R has only one.

Table 2: MIRACL and XQuAD-R data statistics.

<table><tr><td rowspan="2">language</td><td rowspan="2">ISO</td><td colspan="2">MIRACL Dev</td><td colspan="2">XQuAD-R</td></tr><tr><td># queries</td><td># candidates</td><td># queries</td><td># candidates</td></tr><tr><td>Arabic</td><td>ar</td><td>2,869</td><td>2,061,414</td><td>1,190</td><td>1,222</td></tr><tr><td>Bengali</td><td>bn</td><td>411</td><td>297,265</td><td>-</td><td>-</td></tr><tr><td>German</td><td>de</td><td>-</td><td>-</td><td>1,190</td><td>1,276</td></tr><tr><td>Greek</td><td>el</td><td>-</td><td>-</td><td>1,190</td><td>1,234</td></tr><tr><td>English</td><td>en</td><td>648</td><td>32,893,221</td><td>1,190</td><td>1,180</td></tr><tr><td>Spanish</td><td>es</td><td>799</td><td>10,373,953</td><td>1,190</td><td>1,215</td></tr><tr><td>Persian</td><td>fa</td><td>632</td><td>2,207,172</td><td>-</td><td>-</td></tr><tr><td>Finnish</td><td>fi</td><td>1,271</td><td>1,883,509</td><td>-</td><td>-</td></tr><tr><td>French</td><td>fr</td><td>343</td><td>14,636,953</td><td>-</td><td>-</td></tr><tr><td>Hindi</td><td>hi</td><td>350</td><td>506,264</td><td>1,190</td><td>1,244</td></tr><tr><td>Indonesian</td><td>id</td><td>960</td><td>1,446,315</td><td>-</td><td>-</td></tr><tr><td>Japanese</td><td>ja</td><td>860</td><td>6,953,614</td><td>-</td><td>-</td></tr><tr><td>Korean</td><td>ko</td><td>213</td><td>1,486,752</td><td>-</td><td>-</td></tr><tr><td>Russian</td><td>ru</td><td>1,252</td><td>9,543,918</td><td>1,190</td><td>1,219</td></tr><tr><td>Swahili</td><td>sw</td><td>482</td><td>131,924</td><td>-</td><td>-</td></tr><tr><td>Telugu</td><td>te</td><td>828</td><td>518,079</td><td>-</td><td>-</td></tr><tr><td>Thai</td><td>th</td><td>733</td><td>542,166</td><td>1,190</td><td>852</td></tr><tr><td>Turkish</td><td>tr</td><td>-</td><td>-</td><td>1,190</td><td>1,167</td></tr><tr><td>Vietnamese</td><td>vi</td><td>-</td><td>-</td><td>1,190</td><td>1,209</td></tr><tr><td>Chinese</td><td>zh</td><td>393</td><td>4,934,368</td><td>1,190</td><td>1,196</td></tr></table>
# 4.2 Implementation Details

Models. We apply mAggretriever to two 12-layer pre-trained multilingual models: (1) mBERT and (2) XLM-R. $^3$ We compare models fine-tuned solely using the [CLS] vector and mAggretriever models using whole-vocabulary MLM prediction, denoted with the subscripts "CLS" and "AGG", respectively, e.g., $\mathrm{mBERT}_{\mathrm{CLS}}$ and $\mathrm{mBERT}_{\mathrm{AGG}}$. We also report the two variants of mAggretriever with target-language and self prediction, e.g., $\mathrm{mBERT}_{\mathrm{AGG}}^{\mathrm{tg}}$ and $\mathrm{mBERT}_{\mathrm{AGG}}^{\mathrm{self}}$. For target-language prediction, we train mAggretriever using English token prediction and run inference using the corresponding language of each corpus; for whole-vocabulary prediction, we fine-tune with half the batch size. In addition, we report the numbers of BM25 and mDPR from Zhang et al. as reference points, along with two state-of-the-art multilingual retrievers: (1) mContriever (Izacard et al., 2022), pre-trained on multilingual corpora with 29 languages and further fine-tuned on the MS MARCO dataset, $^3$ and (2) Cohere (API), whose numbers are copied from Kamalloo et al. (2023). $^4$
| 114 |
+
Training and Inference. We train our models on a single A100 GPU with 80 GB of memory for 6 epochs (around 100k steps) with a learning rate of 7e-6. Each batch includes 24 queries; for each query, we randomly sample one positive and seven negative passages. All negatives are sampled from the MS MARCO "small" triples training set, which is created using BM25. During training, we minimize the negative log-likelihood as the standard contrastive loss; following Karpukhin et al. (2020), for each query we treat all the (positive and negative) passages from the other triplets in the batch as in-batch negatives. We set the maximum input lengths for the query and the passage to 32 and 128, respectively, at both training and inference for MS MARCO. For MIRACL and XQuAD-R, we use maximum input lengths of 128 and 256 for the query and passage, respectively. Note that we lowercase all queries and passages for mAggretriever. We measure query encoding latency on the 6,980 MS MARCO development queries with a batch size of 1 and a single thread on a Linux machine with 12 Intel(R) Xeon(R) Gold 5317 CPUs @ 3.00GHz and 88 GB of RAM.
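
The training objective described above can be sketched as follows. This is a minimal pure-Python illustration of the standard contrastive (InfoNCE) loss with in-batch negatives, assuming the query and passage vectors have already been encoded; the function names are ours, not the authors'.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def in_batch_contrastive_loss(queries, passages, pos_indices):
    """Mean negative log-likelihood over a batch.

    `passages` holds every passage in the batch (the positive and
    negatives of all triplets), so passages belonging to other
    queries automatically act as in-batch negatives.
    """
    total = 0.0
    for q, pos in zip(queries, pos_indices):
        scores = [dot(q, p) for p in passages]       # dot-product similarity
        log_z = math.log(sum(math.exp(s) for s in scores))
        total += -(scores[pos] - log_z)              # -log softmax(pos)
    return total / len(queries)
```

Swapping the positive index for a far-away passage raises the loss, which is the signal the retriever is trained on.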
Target Token Set Construction. For each corpus language in MIRACL and XQuAD-R, we tokenize and lowercase all the passages and collect the unique tokens in the corpus as the target token set. For example, when fine-tuning on the MS MARCO dataset, we use the token set built from the MS MARCO corpus; when conducting target-language MLM prediction on MIRACL Arabic queries and corpus, we use the token set collected from the Arabic corpus. Note that self MLM prediction does not require collecting a token set for the target language.
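
A minimal sketch of this procedure, with hypothetical helper names (the paper does not publish this code): collect a corpus's unique lowercased tokens, then restrict the MLM head's output space to that set.

```python
def build_target_token_set(corpus, tokenize):
    """Collect the unique lowercased tokens of a corpus; this set
    restricts the output space of the MLM prediction head."""
    tokens = set()
    for passage in corpus:
        tokens.update(tokenize(passage.lower()))
    return tokens

def restrict_logits(logits, vocab, target_tokens):
    """Keep only vocabulary entries inside the target token set,
    approximating target-language MLM prediction."""
    return {tok: score for tok, score in zip(vocab, logits)
            if tok in target_tokens}
```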
# 5 Results
# 5.1 Results on MIRACL
Table 3 reports the models' retrieval effectiveness on the MS MARCO and MIRACL development queries. We first observe that mAggretriever, which incorporates lexical features, not only outperforms its CLS counterpart in supervised English retrieval (MARCO) but also exhibits superior transferability to other languages (MIRACL), regardless of backbone. In addition, $\mathrm{mBERT}_{\mathrm{AGG}}$ , without contrastive pre-training on multilingual corpora, outperforms mContriever on MIRACL in terms of nDCG@10. Note that $\mathrm{mBERT}_{\mathrm{AGG}}$ and mContriever are both initialized from mBERT. We
Table 3: Supervised (MARCO) English and zero-shot (MIRACL) multilingual retrieval effectiveness comparisons. Full numbers are listed in Appendix A.1.
<table><tr><td rowspan="2">Models</td><td colspan="2">MARCO Dev</td><td colspan="2">MIRACL Dev</td></tr><tr><td colspan="2">English</td><td colspan="2">15 lang. macro avg.</td></tr><tr><td></td><td>MRR@10</td><td>R@1K</td><td>nDCG@10</td><td>R@100</td></tr><tr><td>(a) BM25</td><td>18.8</td><td>85.8</td><td>39.6</td><td>78.5</td></tr><tr><td>(b) mDPR</td><td>29.6</td><td>94.6</td><td>41.7</td><td>78.9</td></tr><tr><td>(c) mContriever</td><td>27.4</td><td>97.1</td><td>43.8</td><td>85.9</td></tr><tr><td>(d) Cohere (API)</td><td>-</td><td>-</td><td>50.1</td><td>-</td></tr><tr><td>(1) mBERTCLS</td><td>29.1</td><td>93.6</td><td>36.1</td><td>71.6</td></tr><tr><td>(2) mBERTAGG</td><td>34.3</td><td>95.8</td><td>44.4</td><td>79.5</td></tr><tr><td>(3) mBERTtgAGG</td><td>34.5</td><td>96.1</td><td>44.4</td><td>80.1</td></tr><tr><td>(4) mBERTselfAGG</td><td>34.2</td><td>95.6</td><td>46.9</td><td>82.3</td></tr><tr><td>(5) XLM-RCLS</td><td>31.1</td><td>93.8</td><td>39.3</td><td>73.9</td></tr><tr><td>(6) XLM-RAGG</td><td>34.7</td><td>96.1</td><td>52.9</td><td>86.4</td></tr><tr><td>(7) XLM-RtgAGG</td><td>35.0</td><td>96.2</td><td>53.3</td><td>86.0</td></tr><tr><td>(8) XLM-RselfAGG</td><td>35.0</td><td>96.0</td><td>53.3</td><td>86.3</td></tr></table>
hypothesize that mContriever's high recall (i.e., R@1K) comes from its pre-training on multilingual corpora with 29 languages.
Switching to the XLM-R backbone, mAggretriever even outperforms Cohere (API). Furthermore, we notice that mAggretriever with the XLM-R backbone improves over mBERT more than its CLS counterpart does. For example, on MIRACL, XLM- $\mathsf{R}_{\mathsf{AGG}}^{\mathrm{tg}}$ exhibits a significant improvement over $\mathsf{mBERT}_{\mathsf{AGG}}^{\mathrm{tg}}$ , from an nDCG@10 score of 44.4 to 53.3, whereas XLM- $\mathsf{R}_{\mathsf{CLS}}$ sees only a modest improvement over $\mathsf{mBERT}_{\mathsf{CLS}}$ , from 36.1 to 39.3. This notable enhancement highlights mAggretriever's ability to effectively utilize a superior pre-trained language model.
Finally, comparing whole-vocabulary MLM prediction with the proposed approximate MLM predictions, we observe that self prediction shows comparable and even stronger language transferability. We hypothesize that MLM prediction learned from English data cannot transfer well to other languages. It is worth mentioning that, compared to whole-vocabulary MLM prediction, the proposed approximate MLM predictions (target-language and self prediction) are advantageous for real-world deployment, since they show almost no effectiveness drop (and are sometimes even better) while requiring far less training and inference cost, as shown in Table 1.
# 5.2 Results on XQuAD-R
In Table 4, we directly apply the XLM-R-based models fine-tuned on MS MARCO to the XQuAD-R dataset. In this experiment, we try different MLM prediction settings for $\mathrm{XLM - R_{AGG}^{tg}}$ . For example, instead of
Table 4: Zero-shot retrieval effectiveness on XQuAD-R. $X_{Q}\left(X_{C}\right)$ denotes the language of queries (corpus).
<table><tr><td rowspan="2">Models</td><td colspan="2">MLM target lang.</td><td colspan="2">XQuAD-R</td></tr><tr><td>query</td><td>corpus</td><td>\( {X}_{Q} = {X}_{C} \)</td><td>\( {X}_{Q} \neq {X}_{C} \)</td></tr><tr><td></td><td></td><td></td><td colspan="2">MAP@100</td></tr><tr><td>(1) XLM- \( {\mathrm{R}}_{\mathrm{{CLS}}} \)</td><td>-</td><td>-</td><td>73.1</td><td>57.5</td></tr><tr><td>(2) XLM- \( {\mathrm{R}}_{\mathrm{{AGG}}} \)</td><td>-</td><td>-</td><td>77.4</td><td>41.8</td></tr><tr><td>(3)</td><td>\( {X}_{Q} \)</td><td>\( {X}_{C} \)</td><td>77.4</td><td>36.0</td></tr><tr><td>(4) XLM- \( {\mathrm{R}}_{\mathrm{{AGG}}}^{\mathrm{{tg}}} \)</td><td>\( {X}_{C} \)</td><td>\( {X}_{C} \)</td><td>77.4</td><td>44.9</td></tr><tr><td>(5)</td><td>English</td><td>English</td><td>73.5</td><td>51.7</td></tr><tr><td>(6) XLM- \( {\mathrm{R}}_{\mathrm{{AGG}}}^{\text{self }} \)</td><td>-</td><td>-</td><td>77.3</td><td>36.2</td></tr></table>
using the respective source language of the queries and corpus as the target language (row 3), we use the language corresponding to each corpus (row 4) or English (row 5) as the target language for both queries and corpus. Note that rows 3 and 4 are identical when the queries and corpus are in the same language $(X_{Q} = X_{C})$ .
We observe that mAggretriever shows relatively poor cross-lingual retrieval effectiveness ( $X_Q \neq X_C$ ) compared to its CLS counterpart (rows 2, 3, and 6 vs. row 1). When the MLM prediction target language is aligned for queries and corpus, cross-lingual retrieval effectiveness improves (row 3 vs. rows 4 and 5). These results show that the MLM prediction head can potentially serve as a translation layer that projects queries and corpus into the same language, which is also reported by Nair et al. (2022). It may be possible to leverage transformers pre-trained with translation language modeling (Chi et al., 2021; Feng et al., 2022) and the established parallel training data (Bonifacio et al., 2021) to improve mAggretriever's cross-lingual retrieval capability, which we leave for future work.
# 6 Conclusion
In this paper, we introduce mAggretriever, an extension of Aggretriever for multilingual retrieval, by combining lexical and semantic features in pretrained language models for dense retrieval. We propose target-language and self MLM predictions to enhance the efficiency of mAggretriever. Our study highlights the efficiency advantage of self MLM prediction in multilingual retrieval, while target-language MLM prediction offers flexibility for cross-lingual retrieval. Importantly, mAggretriever, solely fine-tuned on English data, demonstrates competitive multilingual retrieval capability compared to other state-of-the-art dense retrievers.
# Limitations
Our research primarily focuses on enhancing multilingual retrieval, specifically targeting zero-shot transfer of monolingual retrieval to non-English languages. We plan to extend our study to improve cross-lingual retrieval by leveraging transformers pre-trained with translation language modeling (Chi et al., 2021; Feng et al., 2022). In addition, we only discuss how to improve the zero-shot language transferability of dense retrieval. It is possible to further improve model effectiveness by leveraging existing multilingual training data (Bonifacio et al., 2021; Zhang et al., 2021) and better negative mining strategies (Shen et al., 2022). Finally, due to space limitations, we compare mAggretriever with previous state-of-the-art multilingual retrievers on Mr. TyDi (Zhang et al., 2021) in Appendix A.2.
# Acknowledgements
This research was supported in part by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council (NSERC) of Canada. We thank the anonymous referees who provided useful feedback to improve this work.
# References

Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proc. ACL, pages 4623-4637.

Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv:1611.09268.

Luiz Henrique Bonifacio, Vitor Jeronymo, Hugo Queiroz Abonizio, Israel Campiotti, Marzieh Fadaee, Roberto Lotufo, and Rodrigo Nogueira. 2021. mMARCO: A multilingual version of MS MARCO passage ranking dataset. arXiv:2108.13897.

Zewen Chi, Li Dong, Furu Wei, Nan Yang, Saksham Singhal, Wenhui Wang, Xia Song, Xian-Ling Mao, Heyan Huang, and Ming Zhou. 2021. InfoXLM: An information-theoretic framework for cross-lingual language model pre-training. In Proc. NAACL, pages 3576-3588.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proc. ACL, pages 8440-8451.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. NAACL, pages 4171-4186.

Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Language-agnostic BERT sentence embedding. In Proc. ACL, pages 878-891.

Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022. Unsupervised dense information retrieval with contrastive learning. Transactions on Machine Learning Research.

Ehsan Kamalloo, Xinyu Zhang, Odunayo Ogundepo, Nandan Thakur, David Alfonso-Hermelo, Mehdi Rezagholizadeh, and Jimmy Lin. 2023. Evaluating embedding APIs for information retrieval. arXiv:2305.06300.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proc. EMNLP, pages 6769-6781.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466.

Carlos Lassance. 2023. Extending English IR methods to multi-lingual IR. arXiv:2302.14723.

Sheng-Chieh Lin, Akari Asai, Minghan Li, Barlas Oguz, Jimmy Lin, Yashar Mehdad, Wen-tau Yih, and Xilun Chen. 2023a. How to train your DRAGON: Diverse augmentation towards generalizable dense retrieval. arXiv:2302.07452.

Sheng-Chieh Lin, Minghan Li, and Jimmy Lin. 2023b. Aggretriever: A simple approach to aggregate textual representations for robust dense passage retrieval. Transactions of the Association for Computational Linguistics, pages 436-452.

Suraj Nair, Eugene Yang, Dawn Lawrie, James Mayfield, and Douglas W. Oard. 2022. Learning a sparse representation model for neural CLIR. In Proc. DESIRES.

Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernandez Abrego, Ji Ma, Vincent Zhao, Yi Luan, Keith Hall, Ming-Wei Chang, and Yinfei Yang. 2022. Large dual encoders are generalizable retrievers. In Proc. EMNLP, pages 9844-9855.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proc. EMNLP, pages 3982-3992.

Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proc. EMNLP, pages 4512-4525.

Uma Roy, Noah Constant, Rami Al-Rfou, Aditya Barua, Aaron Phillips, and Yinfei Yang. 2020. LAReQA: Language-agnostic answer retrieval from a multilingual pool. In Proc. EMNLP, pages 5919-5930.

Tianhao Shen, Mingtong Liu, Ming Zhou, and Deyi Xiong. 2022. Recovering gold from black sand: Multilingual dense passage retrieval with hard and false negative samples. In Proc. EMNLP, pages 10659-10670.

Ning Wu, Yaobo Liang, Houxing Ren, Linjun Shou, Nan Duan, Ming Gong, and Daxin Jiang. 2022. Unsupervised context aware sentence representation pretraining for multi-lingual dense retrieval. arXiv:2206.03281.

Shunyu Zhang, Yaobo Liang, Ming Gong, Daxin Jiang, and Nan Duan. 2023a. Modeling sequential sentence relation to improve cross-lingual dense retrieval. In Proc. ICLR.

Xinyu Zhang, Xueguang Ma, Peng Shi, and Jimmy Lin. 2021. Mr. TyDi: A multi-lingual benchmark for dense retrieval. In Proc. Workshop on Multilingual Representation Learning, pages 127-137.

Xinyu Zhang, Kelechi Ogueji, Xueguang Ma, and Jimmy Lin. 2023b. Toward best practices for training multilingual dense retrieval models. ACM Trans. Inf. Syst.

Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, and Jimmy Lin. 2023. MIRACL: A multilingual retrieval dataset covering 18 diverse languages. Transactions of the Association for Computational Linguistics, pages 1114-1131.
# A Appendix
# A.1 Full Results on MIRACL
Table 5 reports the detailed nDCG@10 and R@100 numbers on the 15 MIRACL languages for all the compared models. Note that we do not use the English dataset when evaluating on MIRACL.
# A.2 Comparisons on Mr. TyDi
Some previous state-of-the-art multilingual retrievers are evaluated on the test queries of Mr. TyDi (Zhang et al., 2021), a multilingual retrieval dataset similar to MIRACL but with sparse relevance judgments and fewer languages. To compare with these models, we evaluate mAggretriever with the XLM-R backbone, following the inference settings used for MIRACL. The full results are tabulated in Table 6. We observe that all variants of mAggretriever outperform the previous state of the art in 6 out of 10 languages. Note that most of the previous retrievers undergo multilingual contrastive learning; for example, mContriever is pre-trained on a corpus covering 29 languages, while mColBERT is fine-tuned on the multilingual MS MARCO dataset introduced by Bonifacio et al. (2021).
Table 5: MIRACL multilingual retrieval comparisons. Bold denotes the best effectiveness.
<table><tr><td rowspan="2">Models</td><td colspan="16">MIRACL Dev</td></tr><tr><td>ar</td><td>bn</td><td>es</td><td>fa</td><td>fi</td><td>fr</td><td>hi</td><td>id</td><td>ja</td><td>ko</td><td>ru</td><td>sw</td><td>te</td><td>th</td><td>zh</td><td>avg.</td></tr><tr><td></td><td colspan="16">nDCG@10</td></tr><tr><td>(a) BM25</td><td>48.1</td><td>50.8</td><td>31.9</td><td>33.3</td><td>55.1</td><td>18.3</td><td>45.3</td><td>44.9</td><td>36.9</td><td>41.9</td><td>33.4</td><td>38.3</td><td>49.4</td><td>48.4</td><td>18.0</td><td>39.6</td></tr><tr><td>(b) mDPR</td><td>49.9</td><td>44.3</td><td>47.8</td><td>48.0</td><td>47.2</td><td>43.5</td><td>38.3</td><td>27.2</td><td>43.9</td><td>41.9</td><td>40.7</td><td>29.9</td><td>35.6</td><td>35.8</td><td>51.2</td><td>41.7</td></tr><tr><td>(c) mContriever</td><td>52.5</td><td>50.0</td><td>41.8</td><td>21.5</td><td>60.2</td><td>31.4</td><td>28.6</td><td>39.2</td><td>42.4</td><td>48.3</td><td>39.1</td><td>56.0</td><td>52.8</td><td>51.7</td><td>41.0</td><td>43.8</td></tr><tr><td>(d) Cohere (API)</td><td>61.7</td><td>59.4</td><td>23.3</td><td>47.1</td><td>63.4</td><td>46.2</td><td>49.3</td><td>44.6</td><td>46.0</td><td>49.6</td><td>46.9</td><td>61.1</td><td>61.3</td><td>54.6</td><td>36.5</td><td>50.1</td></tr><tr><td>(1) mBERTCLS</td><td>50.9</td><td>45.2</td><td>31.7</td><td>30.5</td><td>48.5</td><td>30.5</td><td>37.7</td><td>19.8</td><td>43.4</td><td>40.0</td><td>27.0</td><td>21.2</td><td>36.4</td><td>37.7</td><td>43.9</td><td>36.1</td></tr><tr><td>(2) mBERTAGG</td><td>56.9</td><td>48.5</td><td>44.0</td><td>43.1</td><td>63.4</td><td>40.8</td><td>34.1</td><td>40.3</td><td>49.3</td><td>43.9</td><td>44.9</td><td>44.6</td><td>44.7</td><td>27.2</td><td>40.2</td><td>44.4</td></tr><tr><td>(3) mBERTtgAGG</td><td>55.7</td><td>45.5</td><td>45.4</td><td>42.4</td><td>62.3</td><td>42.9</td><td>34.3</td><td>38.7</td><td>47.7</td><td>43.4</td><td>45.2</td><td>45.5</td><td>43.9</td><td>32.5</td><td>41.5</td><td>44.4</td></tr><tr><td>(4) mBERTselfAGG</td><td>59.4</td><td>51.0</td><td>44.6</td><td>44.5</td><td>65.3</td><td>43.6</td><td>37.4</td><td>42.1</td><td>50.2</td><td>47.8</td><td>46.3</td><td>48.5</td><td>48.0</td><td>31.1</td><td>44.3</td><td>46.9</td></tr><tr><td>(5) XLM-RCLS</td><td>46.6</td><td>46.6</td><td>29.9</td><td>43.5</td><td>44.4</td><td>28.5</td><td>41.7</td><td>31.7</td><td>40.7</td><td>45.6</td><td>27.5</td><td>22.5</td><td>50.5</td><td>53.9</td><td>35.9</td><td>39.3</td></tr><tr><td>(6) XLM-RAGG</td><td>60.6</td><td>60.4</td><td>42.5</td><td>46.2</td><td>66.2</td><td>43.0</td><td>44.8</td><td>47.8</td><td>53.3</td><td>58.2</td><td>44.6</td><td>46.6</td><td>72.1</td><td>66.0</td><td>41.4</td><td>52.9</td></tr><tr><td>(7) XLM-RtgAGG</td><td>60.2</td><td>60.9</td><td>44.3</td><td>46.7</td><td>65.0</td><td>43.8</td><td>48.0</td><td>47.2</td><td>53.3</td><td>59.1</td><td>45.8</td><td>45.2</td><td>70.2</td><td>67.1</td><td>42.9</td><td>53.3</td></tr><tr><td>(8) XLM-RselfAGG</td><td>61.4</td><td>61.4</td><td>42.9</td><td>46.5</td><td>66.2</td><td>41.3</td><td>46.2</td><td>48.4</td><td>53.9</td><td>57.9</td><td>46.5</td><td>47.5</td><td>71.2</td><td>66.8</td><td>41.0</td><td>53.3</td></tr><tr><td></td><td colspan="16">R@100</td></tr><tr><td>(a) BM25</td><td>88.9</td><td>90.9</td><td>70.2</td><td>73.1</td><td>89.1</td><td>65.3</td><td>86.8</td><td>90.4</td><td>80.5</td><td>78.3</td><td>66.1</td><td>70.1</td><td>83.1</td><td>88.7</td><td>56.0</td><td>78.5</td></tr><tr><td>(b) mDPR</td><td>84.1</td><td>81.9</td><td>86.4</td><td>89.8</td><td>78.8</td><td>91.5</td><td>77.6</td><td>57.3</td><td>82.5</td><td>73.7</td><td>79.7</td><td>61.6</td><td>76.2</td><td>67.8</td><td>94.4</td><td>78.9</td></tr><tr><td>(c) mContriever</td><td>92.5</td><td>92.1</td><td>84.1</td><td>65.4</td><td>95.3</td><td>82.4</td><td>64.6</td><td>80.2</td><td>87.8</td><td>87.5</td><td>85.0</td><td>91.1</td><td>96.1</td><td>93.6</td><td>90.3</td><td>85.9</td></tr><tr><td>(d) Cohere (API)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>(1) mBERTCLS</td><td>84.3</td><td>81.9</td><td>67.1</td><td>84.1</td><td>61.2</td><td>72.9</td><td>76.3</td><td>47.7</td><td>80.7</td><td>71.9</td><td>61.1</td><td>50.4</td><td>75.5</td><td>69.7</td><td>90.0</td><td>71.6</td></tr><tr><td>(2) mBERTAGG</td><td>86.9</td><td>83.8</td><td>81.0</td><td>81.0</td><td>90.2</td><td>83.2</td><td>70.9</td><td>76.6</td><td>84.1</td><td>71.7</td><td>80.8</td><td>79.4</td><td>82.4</td><td>60.0</td><td>81.3</td><td>79.5</td></tr><tr><td>(3) mBERTtgAGG</td><td>86.2</td><td>82.2</td><td>82.1</td><td>79.9</td><td>89.7</td><td>82.9</td><td>72.6</td><td>75.7</td><td>84.3</td><td>76.3</td><td>80.5</td><td>80.9</td><td>81.7</td><td>63.8</td><td>82.6</td><td>80.1</td></tr><tr><td>(4) mBERTselfAGG</td><td>88.7</td><td>85.1</td><td>80.1</td><td>82.4</td><td>92.1</td><td>86.5</td><td>77.6</td><td>80.2</td><td>85.5</td><td>78.4</td><td>82.0</td><td>81.5</td><td>83.4</td><td>65.9</td><td>84.5</td><td>82.3</td></tr><tr><td>(5) XLM-RCLS</td><td>79.2</td><td>82.6</td><td>63.6</td><td>79.9</td><td>75.2</td><td>66.8</td><td>76.9</td><td>62.7</td><td>77.2</td><td>77.6</td><td>62.5</td><td>48.8</td><td>87.0</td><td>89.2</td><td>78.9</td><td>73.9</td></tr><tr><td>(6) XLM-RAGG</td><td>89.8</td><td>92.5</td><td>78.8</td><td>85.4</td><td>92.6</td><td>81.8</td><td>85.1</td><td>83.5</td><td>89.0</td><td>88.5</td><td>78.9</td><td>79.1</td><td>95.4</td><td>95.1</td><td>80.6</td><td>86.4</td></tr><tr><td>(7) XLM-RtgAGG</td><td>89.3</td><td>93.1</td><td>80.4</td><td>84.6</td><td>91.8</td><td>81.4</td><td>84.5</td><td>82.6</td><td>88.9</td><td>86.6</td><td>79.5</td><td>77.7</td><td>94.8</td><td>95.1</td><td>79.9</td><td>86.0</td></tr><tr><td>(8) XLM-RselfAGG</td><td>90.2</td><td>92.8</td><td>78.2</td><td>84.5</td><td>93.1</td><td>81.2</td><td>82.5</td><td>84.6</td><td>89.8</td><td>88.3</td><td>80.8</td><td>79.4</td><td>95.5</td><td>94.5</td><td>79.0</td><td>86.3</td></tr></table>
Table 6: Mr. TyDi multilingual retrieval comparisons with state-of-the-art multilingual retrievers. Bold denotes the best effectiveness. multi CL denotes multilingual contrastive learning.
<table><tr><td rowspan="2">Models</td><td rowspan="2">multi CL</td><td colspan="11">Mr. TyDi Test</td></tr><tr><td>ar</td><td>bn</td><td>fi</td><td>id</td><td>ja</td><td>ko</td><td>ru</td><td>sw</td><td>te</td><td>th</td><td>avg.</td></tr><tr><td></td><td></td><td colspan="11">MRR@100</td></tr><tr><td>(a) BM25 (Zhang et al., 2021)</td><td>X</td><td>36.7</td><td>41.3</td><td>28.8</td><td>38.2</td><td>21.7</td><td>28.1</td><td>32.9</td><td>39.6</td><td>42.4</td><td>41.7</td><td>35.1</td></tr><tr><td>(b) mColBERT (Bonifacio et al., 2021)</td><td>✓</td><td>55.3</td><td>48.8</td><td>41.3</td><td>55.5</td><td>36.6</td><td>36.7</td><td>48.2</td><td>44.8</td><td>61.6</td><td>-</td><td>-</td></tr><tr><td>(c) mContriever (Izacard et al., 2022)</td><td>✓</td><td>43.4</td><td>42.3</td><td>35.1</td><td>42.6</td><td>32.4</td><td>34.2</td><td>36.1</td><td>51.2</td><td>37.4</td><td>40.2</td><td>39.5</td></tr><tr><td>(d) CCP (Wu et al., 2022)</td><td>X</td><td>42.6</td><td>45.7</td><td>37.2</td><td>46.2</td><td>37.7</td><td>34.6</td><td>36.0</td><td>39.2</td><td>47.0</td><td>48.9</td><td>41.5</td></tr><tr><td>(e) MSM (Zhang et al., 2023a)</td><td>X</td><td>51.6</td><td>53.0</td><td>39.4</td><td>50.5</td><td>32.0</td><td>36.8</td><td>37.2</td><td>43.4</td><td>62.6</td><td>53.5</td><td>44.7</td></tr><tr><td>(1) XLM-RCLS</td><td>X</td><td>41.9</td><td>40.8</td><td>27.8</td><td>39.9</td><td>32.5</td><td>33.0</td><td>27.7</td><td>23.7</td><td>54.2</td><td>46.1</td><td>36.8</td></tr><tr><td>(2) XLM-RAGG</td><td>X</td><td>52.3</td><td>55.8</td><td>43.2</td><td>55.0</td><td>40.4</td><td>40.5</td><td>41.5</td><td>45.1</td><td>77.5</td><td>57.3</td><td>50.8</td></tr><tr><td>(3) XLM-RtgAGG</td><td>X</td><td>52.3</td><td>55.2</td><td>43.0</td><td>54.8</td><td>41.1</td><td>40.4</td><td>44.9</td><td>46.0</td><td>76.2</td><td>58.7</td><td>51.2</td></tr><tr><td>(4) XLM-RselfAGG</td><td>X</td><td>52.0</td><td>58.5</td><td>42.6</td><td>54.8</td><td>39.2</td><td>41.6</td><td>44.3</td><td>47.3</td><td>74.7</td><td>58.3</td><td>51.3</td></tr></table>
2023/mAggretriever_ A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval/images.zip
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e3ccbcc53510e24ae176bf642985deddeee31f2f8841115f5fc0712e287bf39
size 580525

2023/mAggretriever_ A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval/layout.json
ADDED

The diff for this file is too large to render. See raw diff.

2023/mRedditSum_ A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images/63a599e4-8e55-4cc9-901f-a82194b5f82b_content_list.json
ADDED

The diff for this file is too large to render. See raw diff.

2023/mRedditSum_ A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images/63a599e4-8e55-4cc9-901f-a82194b5f82b_model.json
ADDED

The diff for this file is too large to render. See raw diff.

2023/mRedditSum_ A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images/63a599e4-8e55-4cc9-901f-a82194b5f82b_origin.pdf
ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85c82623b33c2f2399a5ea5767f93a01e04a3e4e24011f28aa0562401959a03c
size 3409085

2023/mRedditSum_ A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images/full.md
ADDED
|
@@ -0,0 +1,439 @@
# MREDDITSUM: A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images
Keighley Overbay $^{1}$ Jaewoo Ahn $^{1*}$ Fatemeh Pesaran zadeh $^{1*}$ Joonsuk Park $^{2,3,4}$ Gunhee Kim $^{1}$
$^{1}$ Seoul National University $^{2}$ University of Richmond $^{3}$ NAVER AI Lab $^{4}$ NAVER Cloud
{keighley.overbay,jaewoo.ahn,fatemeh.pesaran}@vision.snu.ac.kr
park@joonsuk.org, gunhee@snu.ac.kr
http://vision.snu.ac.kr/projects/mredditsum
# Abstract
The growing number of multimodal online discussions necessitates automatic summarization to save time and reduce content overload. However, existing summarization datasets are not suitable for this purpose, as they either do not cover discussions, multiple modalities, or both. To this end, we present MREDDITSUM, the first multimodal discussion summarization dataset. It consists of 3,033 discussion threads where a post solicits advice regarding an issue described with an image and text, and respective comments express diverse opinions. We annotate each thread with a human-written summary that captures both the essential information from the text, as well as the details available only in the image. Experiments show that popular summarization models—GPT-3.5, BART, and T5—consistently improve in performance when visual information is incorporated. We also introduce a novel method, cluster-based multi-stage summarization, that outperforms existing baselines and serves as a competitive baseline for future work.
# 1 Introduction
With the increased popularity of online discussion forums like Reddit, discussion threads, which consist of a post and a varying number of comments, have quickly accumulated. It thus becomes overwhelming for users to sift through the threads to find the information they seek, which in turn has led to the development of automated means for text-only discussion summarization (Bhatia et al., 2014; Fabbri et al., 2021, 2022).
However, discussion threads are often multimodal, containing visuals in addition to text. This added modality cannot be ignored, as it plays a key role in the respective discussions. For example, in Table 1, the image of the couch is essential for discussing which coffee table would go well with it.
<table><tr><td>Post:
We got this couch for our living room and I need help finding the perfect coffee table to go with it. ...</td><td>Post Summary:
The OP asked for help with finding the right coffee table shape to match their brown leather sectional.</td></tr><tr><td>Comments:
User 1: I would do a circular table.
User 2: Definitely round! There are a lot of sharp angles already ...
User 3: There's way too much furniture in this space, the ottoman has to go ...
User 4: I think you should look for a natural wood triangular shaped coffee table ...
User 5: You should get a rug. ...</td><td>Comment Summaries:
C1,C2,C4: Commenters suggest a differently shaped coffee table from the square one in the picture, round or triangular or hexagonal.
C3: A commenter suggests eliminating the ottoman as it takes up too much space.
C5: A commenter suggests adding a rug.</td></tr><tr><td colspan="2">Full Summary:
The OP asked for help with finding the right coffee table shape to match their brown leather sectional. Commenters suggested a differently shaped coffee table from the square one he has already, such as round, triangular, or hexagonal. A few commenters suggested eliminating the ottoman, as it is too big for the small space. Others suggested adding a rug.</td></tr></table>
Table 1: An example from the MREDDITSUM dataset. The post, several viewpoints from the comments, and the overall thread are each summarized, along with important content available only in the image (in green) or in both image and text (in blue).
Yet, multimodal summarization has so far been limited to news and instructional domains (Zhu et al., 2018; Sanabria et al., 2018; Palaskar et al., 2019; Liu et al., 2020) that are not easily transferable to online discussions surrounding images.
To fill the gap, we tackle multimodal discussion summarization. In particular, we consider Reddit discussion threads in which the post solicits advice regarding an issue described with an image and text, and commenters offer opinions, as opposed to simple reactions or jokes. Here, the goal is to generate an abstractive summary faithfully capturing the information from the post—both image and text—and comments. This task is especially challenging because, along with the need to effectively process the multimodal input, a quality summary must provide good coverage of commenters' varying perspectives and opinions without redundancy.
To facilitate research in this direction, we contribute the Multimodal Reddit Summarization (MREDDITSUM) dataset, consisting of 3,033 Reddit discussion threads containing posts (text and image) and comments (text-only), each accompanied by a human-written summary, as shown in Table 1. We carefully select subreddits with discussions surrounding an image, and collect summaries that not only summarize the text, but also make reference to relevant information present only in the image. See Appendix C for more examples.
We also propose cluster-based multi-stage summarization (CMS), a novel method for summarizing multimodal discussions. It processes discussions in three stages: (i) comments are first clustered by similarity, (ii) each cluster is summarized in a sentence, and (iii) the cluster summaries are summarized. Experiments show that CMS consistently outperforms popular large language models (LLMs) for summarization—GPT-3.5 (Brown et al., 2020), BART (Lewis et al., 2020), and T5 (Raffel et al., 2020). Also, incorporating image information, either as a dense vector or as a text caption, consistently boosts performance.
Our main contributions are as follows:
- We present MREDDITSUM, the first multimodal discussion summarization dataset, with human-written summaries capturing essential information from both the text and the image. Table 2 presents a comprehensive comparison with existing summarization datasets.
- We propose cluster-based multi-stage summarization (CMS), a novel method for summarizing multimodal discussions that outperforms competitive baselines like GPT-3.5, BART, and T5, as well as their vision-guided variants.
# 2 Related Work
Discussion Thread Summarization. Despite the prevalence of discussion threads online, they have been understudied as a summarization target, likely because they are a difficult target for extractive summarization, though a small extractive summarization dataset exists (Bhatia et al., 2014).
More recently, several abstractive summarization datasets have been proposed. ConvoSumm (Fabbri et al., 2021) presented a dataset of 2,000 summarized forum threads, 500 from each of four domains: NYT articles, Reddit, StackExchange, and email threads. AnswerSumm (Fabbri et al., 2022) is another dataset consisting of 4,631 question-answering discussion threads sourced from StackExchange. AnswerSumm shares the most similarities with our dataset, as it also summarizes multi-speaker threads, and its annotation pipeline resembles ours: comments are clustered and summarized in groups before a final summary-editing pass. The key differences are that AnswerSumm is text-only, with no images, and operates in a different domain, as its threads are all question-answering threads from StackExchange. In contrast, our dataset includes both images and text, and focuses on Reddit threads where the images play a key role. Additionally, our annotation pipeline also summarizes the original post and image, which to our knowledge has not been done in any other forum summarization dataset. This is useful because posts alone often have unclear intent that may require context derived from the image or the forum itself.
Other related summarization datasets include multi-turn datasets such as SamSUM (Gliwa et al., 2019), consisting of chat-like dialogues and human-annotated summaries, and EmailSum (Zhang et al., 2021), containing work-related emails and both long and short reference summaries.
Overall, though a small variety of discussion thread summarization datasets exists, they are all currently text-only, and none tackles both original-post and thread summarization.
Multimodal Summarization. Though other multimodal research areas such as VQA (Agrawal et al., 2017) and text-image pretraining (Radford et al., 2021; Li et al., 2022a, 2023) have been gaining attention in recent years, only a small handful of works address multimodal summarization, which aims to generate a summary including the salient information from inputs of multiple modalities. For example, MSMO (Zhu et al., 2018; Qiu et al., 2023), Multimodal Summarization with Multimodal Outputs, takes inputs of various modalities and outputs both a text-based summary and a representative image.
However, our task aims to generate a unimodal output—that is, a purely textual summary. This is similar to the multimodal summarization done on the How2 dataset (Sanabria et al., 2018; Palaskar et al., 2019), where a textual transcript of the video, along with the video frames, is condensed into a text summary. Yu et al. (2021) reported that incorporating the additional modality of the video frames into their summarization models showed improvement compared to text-only models. Though this multimodal summarization task is the most similar to ours, there are some key differences. The How2 dataset uses short video captions as pseudo-summaries, instead of detailed human-annotated summaries like we curate for MREDDITSUM. Additionally, our text is a rich multi-speaker discussion, rather than a transcript of audio. Finally, MREDDITSUM's threads are specifically selected to include images whose information is necessarily included in the summary, whereas there is no such assurance for How2's videos.

<table><tr><td>Dataset</td><td>Domain</td><td>#Docs</td><td>Doc Len</td><td>Sum Len</td><td>#Turns</td><td>#Speakers</td><td>Modality</td><td>StructSum</td></tr><tr><td>MREDDITSUM(ours)</td><td>Forum</td><td>3,033</td><td>691.0</td><td>91.0</td><td>22.6</td><td>15.59</td><td>t,i</td><td>✓</td></tr><tr><td>AnswerSumm</td><td>Forum</td><td>4,631</td><td>787.0</td><td>47.0</td><td>6.4</td><td>6.17</td><td>t</td><td>✓</td></tr><tr><td>ConvoSummreddit</td><td>Forum</td><td>500</td><td>641.0</td><td>65.0</td><td>7.88</td><td>*</td><td>t</td><td></td></tr><tr><td>SamSUM</td><td>Dialog</td><td>16,396</td><td>124.1</td><td>23.4</td><td>12.19</td><td>2.39</td><td>t</td><td></td></tr><tr><td>CNN/DM</td><td>News</td><td>286,817</td><td>766.0</td><td>53.0</td><td>1</td><td>1</td><td>t</td><td></td></tr><tr><td>MSMO DailyMail</td><td>News</td><td>314,581</td><td>722.7</td><td>55.0</td><td>1</td><td>1</td><td>t,i</td><td></td></tr><tr><td>How2</td><td>Video</td><td>79,114</td><td>291.0</td><td>33.0</td><td>1</td><td>1</td><td>t,v</td><td></td></tr></table>

*: speaker info not provided / t: text / i: image / v: video

Table 2: A comparison of MREDDITSUM and other summarization datasets. Among forum-based and multi-turn datasets, MREDDITSUM is the only multimodal dataset, and it has the highest summary length, number of turns, and number of speakers. Length is reported in the average number of words, and turns are the average number of each instance of a post, comment, or speaker change. StructSum denotes whether there are structural-level summary annotations, such as for comment clusters. Statistics are taken from the respective papers for AnswerSumm (Fabbri et al., 2022), ConvoSumm (Fabbri et al., 2021), SamSUM (Gliwa et al., 2019), CNN/DM (Nallapati et al., 2016), MSMO DailyMail (Zhu et al., 2018), and How2 (Yu et al., 2021).
# 3 The MREDDITSUM Dataset
# 3.1 Data Preparation
To construct a meaningful multimodal discussion summarization dataset, we imposed three major criteria when selecting Reddit threads.
Criterion 1. The discussion thread needs to contain an image. Since Reddit does not allow images embedded in comments, this means that the post needs to contain an image.
Criterion 2. The discussion needs to be centered around an image in such a way that information available only from the image plays a key role in the discussion. In some threads, the image does not provide any essential information, e.g., it simply shows a favorite character of the original poster. In such cases, simply summarizing the text is sufficient, and a multimodal model is unnecessary.
Criterion 3. The discussion needs to be meaningfully summarizable. Many Reddit threads that include images are meant to incite reactions from other users, or to be shared in a jocular manner. On the other hand, some threads consist of the post clearly asking for advice or opinions, thereby eliciting diverse responses from a number of commenters. Summarizing these opinions along with the advice would be helpful for readers to understand the gist of the threads.
Sources. Given the aforementioned criteria, we identify 9 subreddits, e.g. r/fashionadvice (see Appendix A for the complete list), which consist primarily of image-based posts where the original poster solicits advice or opinions about either clothing or interior design. We collect all threads with over 5 comments from these subreddits, from the years 2015–2022. Collection is done with RedCaps (Desai et al., 2021), modified to collect all comments from each thread. We additionally follow preprocessing steps similar to Ahn et al. (2023), removing all posts that contain NSFW content or images with faces. We also remove any comments with NSFW content and comments posted by bots, along with all responses to the removed comments. We replace URLs with the "[URL]" token.
# 3.2 Annotation
We then annotate the data after selecting qualified workers from Amazon Mechanical Turk (MTurk). We limit our workers to those from English-speaking countries with a HIT approval rate over $98\%$ and more than 5,000 HITs approved. For all tasks, workers are required to pass a qualification task whose results are manually checked for quality. Any workers found to submit low-quality work have their qualification revoked. In total, a pool of 40 annotators was qualified to perform annotation tasks. As each task was performed separately, multiple annotators could contribute to a single summary. Additional detail on the annotation interface and instructions is found in Appendix B. The annotation is conducted in a three-step pipeline as follows.
Step 1: Original Post Summarization. In the first step, we present workers with the original post along with the image from that post. We ask the annotators to summarize in a single sentence the intent of the original poster, as well as the most relevant details from the image. We use this method because a post that simply reads "blue or black?" may only be comprehensible when paired with the image of blue and black heels next to a blue dress, and a true text-only summary should be comprehensible without the image. Our summary may then read "The original poster asked if blue or black heels would match better with a strapless, knee-length blue dress," eliminating the need of the image to comprehend the question. In this way, all information necessary to understand the question should be self-contained within the summary.
Step 2: Comment Cluster Summarization. We first cluster the comments to identify groups of comments that share a similar opinion. We follow the method described in AnswerSUMM (Fabbri et al., 2022) to allow for clusters of varying sizes and number. We use a RoBERTa-based model fine-tuned for semantic similarity to get sentence embeddings of the top-level comments from each thread. We then use agglomerative clustering with average linkage, cosine distance, and a maximum distance of 0.5 to generate clusters of comments.
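The clustering step above can be sketched as follows, assuming sentence embeddings have already been computed; we implement average-linkage agglomerative clustering over cosine distance directly, rather than reproducing the authors' exact tooling:

```python
import numpy as np

def cosine_distance_matrix(emb):
    # Normalize rows; cosine distance = 1 - cosine similarity.
    norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    return 1.0 - norm @ norm.T

def agglomerative_clusters(emb, max_dist=0.5):
    """Average-linkage agglomerative clustering over cosine distance.

    Merging stops once the closest pair of clusters is farther apart
    than `max_dist`, so the number and size of clusters vary per thread.
    """
    dist = cosine_distance_matrix(emb)
    clusters = [[i] for i in range(len(emb))]
    while len(clusters) > 1:
        best, best_d = None, max_dist
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Average linkage: mean pairwise distance between members.
                d = dist[np.ix_(clusters[a], clusters[b])].mean()
                if d < best_d:
                    best, best_d = (a, b), d
        if best is None:  # nothing closer than the threshold remains
            break
        a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters
```

In practice the embeddings would come from the fine-tuned RoBERTa similarity model; any (n, d) embedding matrix works with this sketch.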
We then rank the comment clusters according to their size and Reddit score. We take the sum of all Reddit scores of the top-level comments in a single cluster as a saliency score of the cluster. We select the top 5 clusters with the highest saliency scores and use these for annotation. We do this to limit the size of the summary and to help remove irrelevant comments, while encouraging larger clusters of comments with a similar sentiment.
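The ranking described above reduces to a simple saliency sort; the pair-based data layout here is illustrative, not the authors' exact representation:

```python
def top_clusters(clusters, k=5):
    """Rank comment clusters by saliency and keep the top k.

    A cluster's saliency is the sum of the Reddit scores of its
    top-level comments. `clusters` is a list of clusters, each a list
    of (comment_text, reddit_score) pairs.
    """
    def saliency(cluster):
        return sum(score for _, score in cluster)
    return sorted(clusters, key=saliency, reverse=True)[:k]
```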
We then present these groups of comments to annotators along with the original post and image, and ask them to summarize within one or two sentences the main opinions present in each group of comments. We encourage the annotators to reference objects or details from the image when necessary. For consistency, we instruct the annotators to refer to the commenters as "Commenters" as opposed to people, users, or other words.

<table><tr><td>Structure</td><td>Document</td><td>Summary</td></tr><tr><td rowspan="2">Original Post</td><td>1.62 sents</td><td>1.07 sents</td></tr><tr><td>18.87 words</td><td>23.14 words</td></tr><tr><td>Comment</td><td>6.63 sents</td><td>1.34 sents</td></tr><tr><td>Clusters</td><td>85.05 words</td><td>20.17 words</td></tr><tr><td rowspan="3">Full Thread</td><td>21.6 comments</td><td></td></tr><tr><td>37.41 sents</td><td>5.32 sents</td></tr><tr><td>691 words</td><td>91.0 words</td></tr></table>

Table 3: Average statistics across the original post, comment clusters, and full thread structures of our dataset.
Step 3: Summary Synthesis. Finally, we concatenate the original post summary and the comment cluster summaries, in descending order of their saliency scores. We then present these summaries once more to annotators and ask them to edit them for fluency and readability. We encourage annotators to reduce repetitive wording, add connectives between sentences, and rearrange sentences so that related topics are adjacent and the overall summary reads naturally. We also ensure all summaries are written entirely in the past tense.
# 3.3 Dataset Analyses
Statistics. The resulting dataset contains a total of 3,033 posts and summaries. We split these into train, test, and validation sets of sizes 2,729, 152, and 152, respectively. We present further statistics in Table 2, where we compare with similar summarization datasets from a few different domains. The average summary length for MREDDITSUM is longer than for other datasets; this is unsurprising given the nature of summarizing varying opinions, of which there can be many. Additionally, we describe the structure-level statistics in Table 3; note that while the Original Post summary is on average longer than the post itself, this is due to the additional image description and context. For the full thread, the summary is $13.2\%$ as long as the input on average, which is comparable to SamSUM's $19\%$ and How2's $11.3\%$.
Summary Quality. To ensure summary quality, we additionally performed an experiment to rate the annotated summaries on three metrics. Following a closely related work, SamSUM (Gliwa et al., 2019), we randomly selected 100 thread-summary pairs and had two independent judges from MTurk grade them on a quality scale of -1, 0, or 1, where -1 means the summary is irrelevant or does not make sense, 0 means it extracts only part of the relevant information or makes some mistakes, and 1 means it is understandable and delivers a brief overview of the relevant information from the thread. We asked annotators to score summaries on overall quality, fluency, and faithfulness, similar to our human evaluation in Sec 5.2. The scores were highly positive, with averages of 0.95, 0.96, and 0.83 for overall quality, fluency, and faithfulness, respectively. Additionally, we found Gwet's AC1 agreement scores of 0.91, 0.89, and 0.53, corresponding to high, high, and moderate agreement, respectively. We used Gwet's AC1 for inter-annotator agreement as it performs well despite the class imbalance that arises when agreement is high (Gwet, 2008; Wongpakaran et al., 2013; Wong et al., 2021).
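For reference, Gwet's AC1 for two raters can be computed as below; this is a sketch of the standard formula, not the authors' analysis code:

```python
def gwet_ac1(ratings_a, ratings_b, categories):
    """Gwet's AC1 chance-corrected agreement for two raters.

    p_a is the raw agreement rate; chance agreement p_e uses the
    average proportion pi_k of each category across both raters:
    p_e = sum_k pi_k * (1 - pi_k) / (q - 1), AC1 = (p_a - p_e) / (1 - p_e).
    """
    n = len(ratings_a)
    p_a = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    q = len(categories)
    p_e = 0.0
    for k in categories:
        pi_k = (ratings_a.count(k) + ratings_b.count(k)) / (2 * n)
        p_e += pi_k * (1 - pi_k) / (q - 1)
    return (p_a - p_e) / (1 - p_e)
```

Unlike Cohen's kappa, AC1 stays well behaved when one category dominates, which is exactly the situation when agreement is high.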
Abstractiveness. Extractive-Oracle ROUGE scores in Table 4 show that our dataset is similar in abstractiveness to other multi-turn datasets, and much more abstractive than DailyMail. Though scores are not available for MSMO, it is expected that the scores would be similar to DailyMail.
Relatedness between Text and Images. We also calculate the CLIPScore (Hessel et al., 2021), a metric that measures the correlation between text and an image, to determine how grounded our summaries are to the images from each thread. Our summaries have an average CLIPScore of 74.62, the post summaries alone achieve 74.89, and the comment cluster summaries alone score 68.34. These suggest our summaries, especially the post summaries, are well-correlated with the images.
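CLIPScore is a simple function of the CLIP image and text embeddings, 2.5 times the clipped cosine similarity; a sketch on precomputed embeddings (the CLIP encoders themselves are assumed, and the factor of 100 is our scaling to match the 0–100 range of the scores reported above):

```python
import numpy as np

def clipscore(image_emb, text_emb, w=2.5):
    """CLIPScore (Hessel et al., 2021): w * max(cos(image, text), 0).

    The embeddings are assumed to come from CLIP's image and text
    encoders; the result is scaled by 100 here to match the 0-100
    range used in the paper's tables.
    """
    cos = float(
        np.dot(image_emb, text_emb)
        / (np.linalg.norm(image_emb) * np.linalg.norm(text_emb))
    )
    return 100.0 * w * max(cos, 0.0)
```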
# 4 Experiments
# 4.1 Task Description
We consider the multimodal summarization task, where the input includes all original text and the image, and the output is a text-only summary that describes both the document and the image. The text includes the post and comments, and the goal is to accurately summarize both the original poster's intent and the commenters' opinions. For this task, we format the text input as "Original Post: Original Post", with "Image: Image Caption." appended for models that include image captions. We then append the comments in the form "User 1: Comment 1. User 2: Comment 2. ..." where each username has been anonymized. Comments are listed in the order in which they are scraped from Reddit. The target output is our final summary.

<table><tr><td rowspan="2">Dataset</td><td colspan="3">Extractive Oracle ROUGE</td></tr><tr><td>R1</td><td>R2</td><td>RL</td></tr><tr><td>MREDDITSUM (ours)</td><td>36.52</td><td>11.95</td><td>31.42</td></tr><tr><td>AnswerSumm</td><td>40.05</td><td>18.45</td><td>35.70</td></tr><tr><td>ConvoSummreddit</td><td>35.74</td><td>10.45</td><td>30.74</td></tr><tr><td>DailyMail</td><td>55.23</td><td>30.55</td><td>51.24</td></tr></table>

Table 4: A comparison of Extractive Oracle ROUGE scores on MREDDITSUM and related datasets. The lower the score, the more abstractive the summaries are. Results for related works are from the respective papers (Fabbri et al., 2022, 2021).
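The input serialization described above can be sketched as follows; the exact separators are our reading of the description, not the authors' exact code:

```python
def format_input(post, comments, image_caption=None):
    """Serialize a thread into the model's text input.

    `comments` is a list of comment strings in scrape order; usernames
    are anonymized to "User 1", "User 2", and so on.
    """
    parts = [f"Original Post: {post}"]
    if image_caption is not None:  # only for caption-based models
        parts.append(f"Image: {image_caption}.")
    for i, comment in enumerate(comments, start=1):
        parts.append(f"User {i}: {comment}")
    return " ".join(parts)
```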
# 4.2 Evaluation Metrics
Following standard practice in summarization evaluation, we use ROUGE (Lin, 2004) and BertScore (Zhang* et al., 2020). ROUGE<sup>1</sup> measures the salience of model-generated summaries by comparing n-grams in the generated and gold summaries. We consider three variants: ROUGE-1/2 (R1/2) measure unigram/bigram overlap, and ROUGE-L (RL) measures the longest common subsequence between summaries. BertScore<sup>2</sup> computes a soft token similarity using contextual embeddings from BERT, instead of exact word matches. We use the default RoBERTa-large model and rescale with the baseline.
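ROUGE-1 F1, for instance, reduces to clipped unigram overlap between the two summaries; a minimal sketch (official implementations add stemming, tokenization, and bootstrap options):

```python
from collections import Counter

def rouge1_f1(generated, reference):
    """ROUGE-1 F1: clipped unigram overlap between two summaries."""
    gen = Counter(generated.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((gen & ref).values())  # per-token counts are clipped
    if overlap == 0:
        return 0.0
    precision = overlap / sum(gen.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```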
# 4.3 Baseline Models
We consider four text-only baseline models: GPT-3.5 (zero-shot), BART, T5, and LongT5 (finetuned), as well as their extensions to make use of image information, either as image captions or embeddings.
Extractive Baselines (Lead-1, Lead-Comment, Ext-Oracle). Lead-1 uses the first sentence of the document as the summary, and Lead-Comment uses the leading top-5 comments from the thread. Ext-Oracle extracts the passages from the document that achieve the maximum possible ROUGE score, and thus gives the highest possible performance of an extractive model.
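The two lead baselines are trivial to state in code; the sentence splitting here is naive and purely illustrative:

```python
def lead_1(document):
    """Lead-1 baseline: the document's first sentence is the summary."""
    # Naive split on '. '; real implementations use a sentence tokenizer.
    return document.split(". ")[0].rstrip(".") + "."

def lead_comment(comments, k=5):
    """Lead-Comment baseline: concatenate the first k top-level comments."""
    return " ".join(comments[:k])
```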
Text-only Baselines (GPT-3.5, BART, T5, LongT5). GPT-3.5 (Ouyang et al., 2022) is an LLM that has shown excellent zero-shot performance in summarization tasks (Goyal et al., 2022; Bhaskar et al., 2023). We use the largest model, text-davinci-003, through the OpenAI API, with the prompt "Summarize what the original post was asking about and the general opinions of the commenters.", which was determined empirically to perform well and to closely mimic the instructions given to annotators. We also evaluate three fine-tuned models: BART-base (Lewis et al., 2020) and T5-base (Raffel et al., 2020), which are high-performing LLMs with good summarization abilities, as well as LongT5-base (Guo et al., 2022), an extension of T5 capable of handling longer input sequences. We pretrain them on the CNN/DailyMail (Nallapati et al., 2016) summarization dataset before fine-tuning them for our task.
Extensions with Image Captioning (GPT3.5-ImgCap, BART-ImgCap, T5-ImgCap, LongT5-ImgCap). We extend the text-only baselines to incorporate visual information through an image caption, denoted GPT3.5-ImgCap, BART-ImgCap, T5-ImgCap, and LongT5-ImgCap, respectively. These variants take advantage of powerful LLMs without requiring large amounts of multimodal training to understand visual features. For image captions, we use the BLIP2 model (Li et al., 2023) trained on COCO image captions (Chen et al., 2015) and generate multiple candidate captions for each image using nucleus sampling. Since a more detailed and grounded caption that describes concrete objects is best for this task, we use an image-grounding model, GLIP (Li et al., 2022b), to score each caption by grounding it in the image, counting how many image-text grounded pairs score above a threshold of 0.7. We then select the caption with the highest count and append it to the input after the original post. We fine-tune BART-ImgCap, T5-ImgCap, and LongT5-ImgCap as described above; for GPT3.5-ImgCap, we use the caption-appended prompt.
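The caption-selection heuristic can be sketched as below; we assume per-pair grounding confidences have already been produced by GLIP, so the `captions_with_scores` layout is illustrative:

```python
def select_caption(captions_with_scores, threshold=0.7):
    """Pick the candidate caption best supported by grounding scores.

    `captions_with_scores` maps each candidate caption to the list of
    grounding confidences of its phrase-region pairs; the caption with
    the most pairs above `threshold` wins.
    """
    def grounded_pairs(scores):
        return sum(1 for s in scores if s > threshold)
    return max(captions_with_scores,
               key=lambda c: grounded_pairs(captions_with_scores[c]))
```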
Extensions with Vision-Guidance (VG-BART, VG-T5). Vision-Guided BART and T5 are presented in Yu et al. (2021) for multimodal summarization. They include additional visual layers that receive video embeddings as input, and show state-of-the-art performance in multimodal summarization on the How2 dataset. We modify the original models to instead use 768-D ViT-base (Dosovitskiy et al., 2021) image embeddings as input, as they have shown excellent performance as an image backbone. We use cross-modal dot-product attention with a forget gate and image transformer, as this version performed best in our experiments. We use the same T5-base and BART-base pretrained on CNN/DM to initialize the encoder and decoder. For VG-BART, we pretrain the visual layers on the COCO image captions before fine-tuning on our dataset; VG-T5 shows no performance increase from visual pretraining, so we initialize its layers from scratch.

Figure 1: An illustration of Cluster-based Multi-stage Summarization (CMS): (1) comments are first clustered by similarity, (2) each cluster is summarized in a sentence, and (3) the cluster-summaries are summarized. (OP: the original post / I: the post image / ComClus $_k$ : the $k$ -th comment cluster / N: the number of comment clusters / ClusSum $_k$ : the generated summary of the $k$ -th comment cluster)
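As a rough numpy sketch of cross-modal dot-product attention with a forget gate, as we understand the mechanism (dimensions and the gate parameterization are illustrative, not the exact architecture of Yu et al., 2021):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(text_h, img_h, Wg, bg):
    """Text hidden states attend over image features; a sigmoid forget
    gate then controls how much visual signal enters the residual.

    text_h: (T, d) text hidden states, img_h: (V, d) image features,
    Wg: (2d, d) and bg: (d,) are the gate parameters.
    """
    attn = softmax(text_h @ img_h.T / np.sqrt(text_h.shape[1]))  # (T, V)
    visual = attn @ img_h                                        # (T, d)
    gate_in = np.concatenate([text_h, visual], axis=1)           # (T, 2d)
    gate = 1.0 / (1.0 + np.exp(-(gate_in @ Wg + bg)))            # (T, d)
    return text_h + gate * visual
```

With the gate driven to zero, the layer degenerates to the text-only representation, which is the appeal of the forget gate: the model can discard unhelpful visual signal.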
# 4.4 Cluster-based Multi-stage Summarization
One challenge in summarizing discussions is that they can be very long. To confirm that length causes an issue, we conduct a preliminary experiment with the fine-tuned BART model, comparing results on two test subsets: a long subset with more than 22 turns and a short subset with 22 turns or fewer. Performance on the long subset is noticeably worse than on the short subset, lower by 4.95 ROUGE-1 and 6.1 BertScore.

<table><tr><td>Model</td><td>R1</td><td>R2</td><td>RL</td><td>BertS</td></tr><tr><td colspan="5">Extractive</td></tr><tr><td>Lead-1</td><td>15.23</td><td>3.46</td><td>13.24</td><td>11.89</td></tr><tr><td>Lead-Comment</td><td>22.86</td><td>5.55</td><td>20.43</td><td>7.16</td></tr><tr><td>Ext-Oracle</td><td>36.52</td><td>11.95</td><td>31.42</td><td>16.71</td></tr><tr><td colspan="5">Zero-shot Prompting</td></tr><tr><td>GPT-3.5</td><td>34.29</td><td>9.10</td><td>30.39</td><td>30.15</td></tr><tr><td>GPT-3.5-ImgCap</td><td>34.59</td><td>9.41</td><td>30.59</td><td>31.07</td></tr><tr><td colspan="5">Fine-tuned</td></tr><tr><td>BART</td><td>44.33</td><td>18.4</td><td>41.71</td><td>41.61</td></tr><tr><td>VG-BART</td><td>44.97</td><td>18.75</td><td>42.29</td><td>40.85</td></tr><tr><td>BART-ImgCap</td><td>44.91</td><td>18.54</td><td>42.12</td><td>41.34</td></tr><tr><td>CMS-VG-BART (ours)</td><td>45.13</td><td>18.81</td><td>42.56</td><td>42.13</td></tr><tr><td>CMS-BART-ImgCap (ours)</td><td>45.55</td><td>19.28</td><td>42.87</td><td>43.89</td></tr><tr><td>T5</td><td>45.29</td><td>18.97</td><td>42.4</td><td>42.32</td></tr><tr><td>VG-T5</td><td>45.58</td><td>18.94</td><td>42.75</td><td>42.3</td></tr><tr><td>T5-ImgCap</td><td>45.61</td><td>18.97</td><td>42.63</td><td>42.59</td></tr><tr><td>LongT5</td><td>45.98</td><td>19.44</td><td>43.12</td><td>41.95</td></tr><tr><td>LongT5-ImgCap</td><td>46.6</td><td>19.86</td><td>43.7</td><td>42.63</td></tr><tr><td>CMS-VG-T5 (ours)</td><td>45.71</td><td>19.21</td><td>42.97</td><td>42.72</td></tr><tr><td>CMS-T5-ImgCap (ours)</td><td>47.29</td><td>19.86</td><td>44.13</td><td>44.74</td></tr></table>

Table 5: Results for the summarization task on MREDDITSUM. Models with "-ImgCap" in the name incorporate image information via an image caption, and "VG-" models via image embeddings. Others are text-only models. Cluster-based multi-stage summarization (CMS) is our proposed method of processing discussions in three stages.
To effectively handle this challenge, we present a novel method named cluster-based multi-stage summarization (CMS), consisting of three stages (See Figure 1):
1. Comment Clustering. Similar comments are clustered using RoBERTa sentence embeddings and agglomerative clustering.
2. Cluster Summarization. Each cluster is summarized in about a sentence using an LLM with image captioning, or a vision-guided LLM, such as VG-BART or VG-T5.
3. Cluster-summary Summarization. The cluster summaries are concatenated and further reduced into a coherent summary using a separate model, which is either an LLM with image captioning or a vision-guided LLM.
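The three stages can be wired together roughly as follows; `summarize` stands in for any of the fine-tuned summarizers, and stage 1 reuses the clustering procedure from Section 3.2:

```python
def cms_summarize(post, comment_clusters, summarize):
    """Cluster-based multi-stage summarization (CMS), stages 2 and 3.

    `comment_clusters` is the output of stage 1 (comment clustering),
    a list of lists of comment strings; `summarize(text)` is any
    single-document summarizer.
    """
    # Stage 2: summarize each comment cluster into about a sentence,
    # conditioning on the original post for context.
    cluster_summaries = [
        summarize(f"Original Post: {post} Comments: {' '.join(cluster)}")
        for cluster in comment_clusters
    ]
    # Stage 3: summarize the concatenated cluster summaries.
    return summarize(f"Original Post: {post} " + " ".join(cluster_summaries))
```

Because each call sees either one cluster or the short cluster summaries, no single summarization pass has to consume the whole thread at once.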
# 4.5 Implementation Details
The fine-tuned BART and T5 models are trained for 50 epochs on a single Titan RTX GPU. We use a batch size of 4 and, following (Yu et al., 2021; Raffel et al., 2020; Lewis et al., 2020), a learning rate of 3e-5 to fine-tune the pretrained parts of the model weights and 1.5e-4 to train the newly added visual layers in VG-BART and VG-T5. Decoding uses beam search with a beam size of 5. The average training time for BART, T5, BART-ImgCap, and T5-ImgCap is approximately 5 hours; for VG-BART and VG-T5 it is about 8 hours, with the additional visual layers adding about 100 million extra parameters. We use the same training epochs, batch size, learning rates, and beam size for cluster-based multi-stage summarization. All reported results are an average of two runs.
# 5 Results and Analysis
Table 5 shows the results of all models evaluated on the test set. Our method, cluster-based multi-stage summarization (CMS), outperforms the baseline models on all metrics for both T5- and BART-based models. We attribute this to its ability to better handle long input threads, even outperforming the LongT5 models; see § D.1 for more detailed analysis.

Across all model types, models that receive image information through an image caption outperform those with access to text only. This supports the claim that our dataset requires multimodal understanding to perform well on the summarization task. To confirm this, we additionally computed the CLIPScore between the image and the first sentence of each generated summary, which corresponds to the post summary and is where most of the image information appears. The results in Table 6 show that our methods incorporate more image information than a non-visual baseline.

Vision-guided models using image embeddings showed mixed results, with marginal or no improvement over text-only models; we believe this reflects a limited ability of these models to effectively incorporate image information. Though they show strong performance on the How2 summarization task (Yu et al., 2021), MREDDITSUM has longer inputs and summaries, images, and fewer documents, likely contributing to the difference. Additionally, we note that T5 models show the best performance, followed by BART models and GPT-3.5 models. For the GPT-3.5 models, the low scores are likely due to inconsistencies in summary format, length, and detail in the zero-shot setting, though they still achieve relatively reasonable BertScores.
<table><tr><td>Model</td><td>CLIPScore</td></tr><tr><td>BART</td><td>68.29</td></tr><tr><td>CMS-Bart-ImgCap</td><td>70.26</td></tr><tr><td>T5</td><td>69.34</td></tr><tr><td>CMS-T5-ImgCap</td><td>70.58</td></tr></table>
Table 6: The average CLIPScore computed between the first sentence of the generated summary and each thread's corresponding image.
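The CLIPScore used in Table 6 is reference-free: it is a rescaled cosine similarity between CLIP's image and text embeddings (Hessel et al., 2021). As a minimal sketch of the scoring step, assuming the embeddings have already been produced by CLIP's encoders (the plain float lists below are illustrative placeholders, not real CLIP features):

```python
from math import sqrt

def clipscore(image_emb, text_emb, w=2.5):
    """CLIPScore (Hessel et al., 2021): w * max(cos(image, text), 0),
    with the recommended rescaling weight w = 2.5. Both arguments
    are plain lists of floats standing in for CLIP embeddings."""
    dot = sum(a * b for a, b in zip(image_emb, text_emb))
    norm_i = sqrt(sum(a * a for a in image_emb))
    norm_t = sqrt(sum(b * b for b in text_emb))
    return w * max(dot / (norm_i * norm_t), 0.0)

def first_sentence(summary):
    """Only the first sentence of each generated summary is scored,
    since it corresponds to the post summary."""
    return summary.split(". ")[0]
```

Identical embeddings give the maximum score of 2.5; orthogonal or opposed embeddings are clipped to 0.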
We provide further analyses on the effect of input length and subreddit category on performance in § D.
# 5.1 Qualitative Analysis
In addition to our automatic evaluation, we manually inspect the test results for qualitative analysis. Several examples can be found in Table 7. The primary advantage of our method, CMS, is that it achieves greater coverage of relevant opinions than the baseline models. It is better able to filter out irrelevant or strange comments while keeping the important opinions, including ones presented late in the thread.
We also find that all models, even those incorporating image information, remain prone to hallucinating image content. These hallucinations include incorrect descriptions of object color and style, as well as descriptions of objects not present in the image at all. Though our multimodal models are generally better at incorporating visual details than text-only models, their ability to interpret the image remains limited; we believe this is due to potential undertraining of the text-vision fusion layers in the VG models and the limitations of image captioning models.
Thus, while our CMS model can overcome one weakness of the baseline multimodal summarization models, we still believe there to be significant room for improvement in the field of multimodal models, and hope that MREDDITSUM can help facilitate such research.
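The two-stage control flow behind CMS can be sketched as follows; this is an illustration, not the paper's implementation. `summarize` stands in for a trained summarizer (e.g., a fine-tuned T5 or BART), and the comment clusters are assumed to have been produced upstream:

```python
from typing import Callable, List

def cms_summarize(
    post: str,
    comment_clusters: List[List[str]],
    summarize: Callable[[str], str],
) -> str:
    """Stage 1: summarize each cluster of topically related comments,
    shrinking the input the final model must read.
    Stage 2: summarize the post together with the cluster summaries
    to produce the final thread summary."""
    cluster_summaries = [summarize(" ".join(c)) for c in comment_clusters]
    return summarize(post + " " + " ".join(cluster_summaries))
```

Because stage 1 condenses each cluster before the final pass, the stage-2 model sees a much shorter input than the raw thread.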
# 5.2 Human Evaluation
We additionally perform human evaluation studies via MTurk to compare the summaries generated by CMS-T5-ImgCap (ours) and the baseline T5-ImgCap model. Following similar works such as Zhang et al. (2021), we use three metrics to measure summary quality: fluency, faithfulness, and overall quality. Fluency measures which summary is more naturally written, faithfulness measures how truthful the summary is to the document, and overall quality represents general user preference. We randomly sample 25 datapoints from the test set and receive 3 annotations per sample. We note that this limited number of datapoints is due to the fact that this evaluation task is highly challenging for human annotators, given that the input, including the original post, threads, and image, is long and complex.



Figure 2: Human evaluation results of randomly sampled summaries of the CMS-T5-ImgCap and T5-ImgCap models.
Figure 2 shows the majority-vote results: our summaries are preferred in terms of fluency and overall quality, with similar performance on faithfulness. We take this as an indicator of our model's ability to effectively summarize the thread, offering more fluent and higher-quality summaries. The similar faithfulness scores are likely because our method shares the same base model and image caption as the baseline, T5-ImgCap, granting a similar ability to incorporate correct image and text information.
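The per-metric preferences in Figure 2 are aggregated by majority vote over the three annotations collected for each sampled summary pair. A minimal sketch of that aggregation (the label strings are illustrative):

```python
from collections import Counter

def majority_vote(votes):
    """Most common choice among one sample's annotator votes."""
    return Counter(votes).most_common(1)[0][0]

def tally_preferences(per_sample_votes):
    """Count, across all samples, which system won the majority."""
    return Counter(majority_vote(v) for v in per_sample_votes)
```

With three annotators and two systems per sample, no per-sample tie is possible unless a "tie" option is offered as a third label.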
# 6 Conclusion and Future Work
Online discussions are increasingly becoming multimodal, yet there are not sufficient resources for summarizing them. To this end, we presented MREDDITSUM, the first multimodal discussion summarization dataset, containing 3,033 discussion threads and images with human-written summaries. Threads were carefully chosen so that the images play a key role in the respective threads, and summaries were written to capture this. Experiments showed that summarization models making use of visual information consistently outperform those that do not. Additionally, we introduced Cluster-based Multi-stage Summarization, which accounts for the structure of discussion thread data and outperformed baseline methods. We hope this dataset will help to facilitate active research on multimodal discussion summarization.

<table><tr><td>Image</td><td>Thread</td><td>T5</td><td>T5-ImgCap</td><td>CMS-T5-ImgCap</td></tr><tr><td></td><td>Original Post: Looking for ideas on how to style this necklace User 1: This gives me Ms. Frizzle vibes. Maybe embrace that? User 2: Yes! Funky skirt that poofs out, fitted shirt! User 3: Fitted button up collard white shirt with high waisted shorts and flats OP: Ooh that's a great idea! Thank you! User 4: [URL] Link to necklace if anyone is interested...</td><td>The OP asked for ideas on how to style a black and white floral necklace. One commenter suggested wearing the necklace with a fitted shirt. Another suggested wearing a button up collared white shirt with high waisted shorts and flats.</td><td>The OP wanted advice on how to style their necklace, which has a floral pattern on it. One commenter suggested wearing the necklace with a fitted shirt. Another suggested wearing a button up collared white shirt with high waisted shorts and flats.</td><td>The OP asked for ideas on how to style their necklace, which is white with a dinosaur pattern on it. One commenter suggested wearing a funny skirt and fitted shirt with the necklace. Another commenter recommended a button up collared white shirt with high waisted shorts and flats. One user shared a link to OP's necklace.</td></tr><tr><td></td><td>Original Post: [Update] Couch arrived and this is the rug I ended up choosing. Any coffee table and accent chair recommendations? User 1: Something with a warm wood color would look nice. Check out antique shops or local furniture shops to find something that has some character and is made of solid wood. User 2: Yes, round or oval would look nice....</td><td>The OP asked for advice on a coffee table and accent chair for their living room, which has a brown leather armchair and tan leather couch. Most commenters suggested a wood coffee table with a walnut finish and a solid white marble top. One commenter recommended a round or oval coffee table. Another commenter suggested brown throw pillows and blankets to match the rug. One user suggested OP get a non-shedding dog.</td><td>The OP asked for advice on a coffee table and accent chair for their blue couch. Most commenters suggested a wood coffee table with a walnut finish and a solid white marble top. One commenter recommended a round or oval coffee table. Another commenter suggested getting a non-shedding dog.</td><td>The OP asked for help with picking out a coffee table and accent chair for their blue couch. Most commenters suggested getting a warm wood coffee table. Others suggested a brown leather armchair or cream colored accent chair. One commenter suggested getting throw pillows and blankets to match the rug. Another commenter asked where the rug was from, and the OP said it was from Apt2B.</td></tr></table>

Table 7: Examples of summaries generated from various models. Across all models, hallucinations regarding the image (highlighted in red) are present; however, these are reduced with multimodal models that incorporate image information (highlighted in green). Our CMS models tended to include more relevant details (blue) while removing irrelevant comments (orange).
# Limitations
As with any dataset, there are some limitations to MREDDITSUM. Though it is of comparable size to many other summarization datasets, its relatively small size makes it hard to utilize without significant pretraining, limiting its use to those with access to large-scale pretraining datasets or pretrained models.
MREDDITSUM only includes Reddit threads with single images, as opposed to multiple images or videos. There is thus still room for multimodal summarization work that additionally considers such threads.
Furthermore, our dataset considers only English-language threads from a single forum, Reddit, and a limited number of subreddits. There thus may be some additional bias due to the relatively small domain and raw nature of the dataset.
For our cluster-based multi-stage summarization method, one limitation is the need to train an extra model in addition to the base summarization model. As a result, our method incurs some computational overhead. However, it is worth noting that both the training and inference processes can be accommodated within a single Titan RTX GPU.
# Ethics Statement
As we propose a novel multimodal dataset, there are ethical considerations about the use of the data.
Privacy. All data are sourced from Reddit, which is publicly available. Following Desai et al. (2021); Ahn et al. (2023), additional measures have been taken to address privacy considerations. This includes the exclusion of images or discussions with clear identifying information, such as names or faces. Additionally, posts that are removed by their authors from Reddit also render the image unavailable for our dataset, as we only provide the links to the images. Thus, any users who are concerned about their post being in the dataset may easily remove it from the dataset by deleting it from Reddit.
Bias. As all data are sourced from real discussions on a public forum, there may be biases within the discussions due to the demographics of Reddit users. Though we use an NSFW filter to remove inappropriate words and inspect each datapoint by hand to further filter out harmful or inappropriate images or discussions, it is possible that a few may still be present in the dataset. Less obvious biases, such as stereotyping based on gender, may also still be present.
Intended Use. The MREDDITSUM dataset is intended to be used for research purposes only, and its use is subject to the Reddit User Agreement, Privacy Policy, and Content Policy.
Annotator Compensation. We ensured that our annotators were paid a fair wage of approximately USD $16/hour, which is greater than the minimum wage in all countries that we recruited annotators from: The United States, Canada, Australia, New Zealand, and Great Britain. The time to complete each task was determined by running multiple trials with researchers, and the payment per task was calculated from this time. The cost per datapoint was approximately $3.50, with some longer datapoints costing more to compensate for the extra annotation time.
# Acknowledgements
This work was financially supported by the SNU-NAVER Hyperscale AI Center, as well as the Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No. 2019-0-01082, SW StarLab and No. 2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)), and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2023R1A2C2005573). Joonsuk Park and Gunhee Kim are the corresponding authors.
# References
Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Margaret Mitchell, C. Lawrence Zitnick, Devi Parikh, and Dhruv Batra. 2017. VQA: Visual question answering. Int. J. Comput. Vision, 123(1):4-31.
Jaewoo Ahn, Yeda Song, Sangdoo Yun, and Gunhee Kim. 2023. MPCHAT: Towards multimodal person-grounded conversation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3354-3377, Toronto, Canada. Association for Computational Linguistics.
Adithya Bhaskar, Alexander R. Fabbri, and Greg Durrett. 2023. Zero-shot opinion summarization with GPT-3. In Findings of the Association for Computational Linguistics: ACL 2023. Association for Computational Linguistics.
Sumit Bhatia, Prakhar Biyani, and Prasenjit Mitra. 2014. Summarizing online forum discussions - can dialog acts of individual messages help? In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2127-2131, Doha, Qatar. Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server.
Karan Desai, Gaurav Kaul, Zubin Aysola, and Justin Johnson. 2021. Redcaps: Web-curated image-text data created by the people, for the people. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations.
Alexander Fabbri, Faiaz Rahman, Imad Rizvi, Borui Wang, Haoran Li, Yashar Mehdad, and Dragomir Radev. 2021. ConvoSumm: Conversation summarization benchmark and improved abstractive summarization with argument mining. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6866-6880, Online. Association for Computational Linguistics.
Alexander Fabbri, Xiaojian Wu, Srini Iyer, Haoran Li, and Mona Diab. 2022. AnswerSumm: A manually-curated dataset and pipeline for answer summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2508-2520, Seattle, United States. Association for Computational Linguistics.
Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A human-annotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 70-79, Hong Kong, China. Association for Computational Linguistics.
Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022. News summarization and evaluation in the era of GPT-3.
Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang. 2022. LongT5: Efficient text-to-text transformer for long sequences. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 724-736, Seattle, United States. Association for Computational Linguistics.
Kilem Gwet. 2008. Computing inter-rater reliability and its variance in the presence of high agreement. The British journal of mathematical and statistical psychology, 61:29-48.
Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. 2021. CLIPScore: A reference-free evaluation metric for image captioning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7514-7528, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models.
Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022a. BLIP: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 12888-12900. PMLR.
Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, and Jianfeng Gao. 2022b. Grounded language-image pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10965-10975.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
Nayu Liu, Xian Sun, Hongfeng Yu, Wenkai Zhang, and Guangluan Xu. 2020. Multistage fusion with forget gate for multimodal summarization in open-domain videos. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1834-1845, Online. Association for Computational Linguistics.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 280-290, Berlin, Germany. Association for Computational Linguistics.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback.
Shruti Palaskar, Jindrich Libovický, Spandana Gella, and Florian Metze. 2019. Multimodal abstractive summarization for how2 videos. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6587-6596, Florence, Italy. Association for Computational Linguistics.
Jielin Qiu, Jiacheng Zhu, William Han, Aditesh Kumar, Karthik Mittal, Claire Jin, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Bo Li, Ding Zhao, and Lijuan Wang. 2023. Multisum: A dataset for multimodal summarization and thumbnail generation of videos.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8748-8763. PMLR.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67.
Ramon Sanabria, Ozan Caglayan, Shruti Palaskar, Desmond Elliott, Loic Barrault, Lucia Specia, and Florian Metze. 2018. How2: a large-scale dataset for multimodal language understanding. In Proceedings of the Workshop on Visually Grounded Interaction and Language (ViGIL). NeurIPS.
Ka Wong, Praveen Paritosh, and Lora Aroyo. 2021. Cross-replication reliability - an empirical approach to interpreting inter-rater reliability. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7053-7065, Online. Association for Computational Linguistics.
Nahathai Wongpakaran, Tinakon Wongpakaran, Danny Wedding, and Kilem Gwet. 2013. A comparison of cohen's kappa and gwet's ac1 when calculating inter-rater reliability coefficients: A study conducted with personality disorder samples. BMC Medical Research Methodology, 13.
Tiezheng Yu, Wenliang Dai, Zihan Liu, and Pascale Fung. 2021. Vision guided generative pre-trained language models for multimodal abstractive summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3995-4007, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Shiyue Zhang, Asli Celikyilmaz, Jianfeng Gao, and Mohit Bansal. 2021. EmailSum: Abstractive email thread summarization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6895-6909, Online. Association for Computational Linguistics.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.
Junnan Zhu, Haoran Li, Tianshang Liu, Yu Zhou, Jiajun Zhang, and Chengqing Zong. 2018. MSMO: Multimodal summarization with multimodal output. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4154-4164, Brussels, Belgium. Association for Computational Linguistics.
# A Subreddits Used
We list all subreddits used for data collection, along with the number of threads from each present in the final dataset in Table 8.
<table><tr><td>Subreddit</td><td>Category</td><td># Threads</td></tr><tr><td>r/outfits</td><td>Clothes</td><td>161</td></tr><tr><td>r/fashionadvice</td><td>Clothes</td><td>529</td></tr><tr><td>r/plussizefashion</td><td>Clothes</td><td>19</td></tr><tr><td>r/handbags</td><td>Clothes</td><td>90</td></tr><tr><td>r/petitefashionadvice</td><td>Clothes</td><td>112</td></tr><tr><td>r/weddingdress</td><td>Clothes</td><td>108</td></tr><tr><td>r/designmyroom</td><td>Interior</td><td>1098</td></tr><tr><td>r/malelivingspace</td><td>Interior</td><td>642</td></tr><tr><td>r/femalelivingspace</td><td>Interior</td><td>258</td></tr></table>
Table 8: The subreddits used for data collection.
# B Annotation Interface
We listed a total of 3 tasks on Amazon Mechanical Turk for our data pipeline. We informed all annotators that this data would be used to help summarize Reddit threads, asked them to agree to the Reddit Terms of Use before participating, and notified them that participating in the HIT constituted acceptance of these terms.
We provided annotators with detailed instructions for the task and several acceptable and unacceptable examples to help them perform the task. In Figure 6, we show the instructions provided for Task 1; similar instructions were used in the other two tasks. Additionally, we show the annotation interface used for Tasks 2 and 3 in Figure 7 and Figure 8.
# C Additional Sample Data
We show a few additional data points from the MREDDITSUM dataset in Table 9 and Table 10. Table 9 shows a datapoint from the fashion category, whereas Table 10 shows a datapoint from the interior design category.
# D Further Analyses
# D.1 Summarization based on the Length of Input Threads
To better understand whether CMS effectively handles long inputs, we run a further analysis using BART-based models (see Figure 3). As the number of comments increases, the R1 score consistently decreases. This indicates that summarization indeed becomes more challenging when the input is longer. However, the performance gap between the baseline models (i.e., BART, BART-ImgCap) and CMS-BART-ImgCap generally increases as the number of comments grows, supporting the idea that CMS better handles longer threads. As our model generates cluster summaries in stage 1, it reduces the average input length by 82.8%, thus achieving better performance even on relatively challenging long inputs. We also provide results from T5-based models in Figure 4, showing similar trends; the gap between the baseline models and CMS-T5-ImgCap is large when the number of comments falls within the ranges [15,20) and [20,25).



Figure 3: The influence of the number of comments in the thread on summarization performance (ROUGE-1) of BART-based models, measured on the test set.



Figure 4: The influence of the number of comments in the thread on summarization performance (ROUGE-1) of T5-based models, measured on the test set.
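The R1 scores in this analysis are ROUGE-1 F1 (Lin, 2004), i.e., unigram overlap between the generated and reference summaries. A bare-bones sketch, without the stemming or other preprocessing a full ROUGE implementation applies:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall
    between a candidate summary and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

Because longer threads yield longer references, unigram recall (and hence F1) tends to drop as the input grows, matching the downward trend in Figures 3 and 4.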
# D.2 Summarization per Subreddit
We further explore the summarization across 9 different subreddits, as shown in Figure 5.
The results reveal that subreddits within the 'Interior' category (i.e., the left three subreddits in Figure 5) exhibit lower ROUGE scores in comparison to subreddits within the 'Clothes' category (i.e., the right six subreddits in Figure 5). This discrepancy can be attributed to the difference in input lengths across the subreddits. Given that the average input length of examples from the 'Interior' category exceeds that of examples from the 'Clothes' category, it is more difficult for our model to summarize the former. We can also explain this gap by comparing the two domains. Specifically, while the model can easily comprehend clothing images by focusing only on salient objects, comprehending interior images is more challenging, as it requires a broader range of information (e.g., wall color, spatial relationships between furniture, etc.). Consequently, summarizing examples from the 'Interior' category proves more challenging for the models than summarizing examples from the 'Clothes' category.



Figure 5: ROUGE scores obtained from our CMS-T5-ImgCap model on the test set, categorized by subreddit. The number of input words is indicated in parentheses.

Figure 6: An example of instructions given for Task 1: Original Post Summarization.
# INSTRUCTIONS
# Summarizing Posts + Images:
You are to summarize what kind of advice or opinions a Reddit post is asking for, along with how it is related to the given image. Ideally, the summary should be a single sentence that contains all necessary information from both the post and image. That is, someone should be able to understand the summary completely even without seeing the image.
# Be sure to:
i. Start your sentence with "The OP...".
ii. Write in third-person (no "I" or "you") past tense.
iii. Summarize in your own words, rather than copy-pasting the original post.
iv. Include relevant information and details from the image.
v. Do not include personal opinions, or irrelevant details.

OP: "what would you do to cover the ugly breaker box in my bedroom? obviously it still needs to be accessible, but i keep my door closed a lot and it's such an eyesore. the wall is 58 inches wide and 10 feet high."
# Comments:
# Group 1:
- User 4: you could hang a large framed painting, something liked stretched canvas over frame. no glass. that way to can move it and not worry about glass breaking. some will even install a hinge on one side so it opens like a cabinet door.
- User 6: i think a painting or frame would be nice- you can hinge it if you want to give easier access. i feel like a gallery style wall with multiple pieces would be best otherwise your art would look off center and weirdly close to the corner, if that makes sense.
- OP: makes total sense. the closeness to the edge of the wall is really what's throwing me off about it.
- User 7: hang something over it. a painting or some kind of rug.
- User 14: i would paint it the same color as the wall and hang 2-3 large frames on the wall. you could easily
Write a summary sentence. You may write multiple sentences if necessary for this group.
Enter your summary for group 1...
○ No summary sentence is necessary - these comments are redundant or irrelevant.
Figure 7: An example of the Cluster Summarization task presented to workers on Amazon Mechanical Turk.
Post 1:
Figure 8: An example of the Summary Editing task presented to workers on Amazon Mechanical Turk.

OP: my room for about a year while flight instructing. what can i do to improve without being too permanent?
# Original Summary 1:
The OP wanted temporary solutions to improve their sparse bedroom with just a bed, dresser and fake plant. One user commented that the room looks like a dressing room for a rub-n-tug. One user recommended getting a rug. Another user suggested curtains which easy to put and add a lot of character and soften the place and a floor lamp. Another user suggested moving the bed so it is not directly under the window and adding a small nightstand to the room. The OP said the recommendation are great so far and is now planning. Another user suggested adding a life-size statue to make the place less lonely.
# Edited Summary 1:
The OP wanted temporary solutions to improve their sparse bedroom with just a bed, dresser and fake plant. One user commented that the room looks like a dressing room for a rub-n-tug. One user recommended getting a rug.
Another user suggested curtains which easy to put and add a lot of character and soften the place and a floor lamp. Another user suggested moving the bed so it is not directly under the window and adding a small nightstand to the room. The OP said the recommendation are great so far

Image:
# Post Caption:
what could you pair these with?
# Comments:
User 1: Dressy black pants, colorful blouse, and blazer....
User 2: You can pair this with shorts, slacks, or jeans—basically anything. Just make sure that the color of your top & bottom matches.
User 3: If you are looking for women's wear I would say a very wide leg pastel high waisted pant with a tight/fitted top in same color scheme or white.
User 4: This reminds me of the kind of shoes I see in anime with sailor style uniforms tbh
User 5: A Goodwill donation
# Summary:
The OP wanted to know what to wear with a pair of white leather loafers that have a thick black sole and low heel. One commenter thought pastel pants and a white top to match the shoes would work. Another commenter said that OP's shoes would pair with any sort of bottoms, but cautioned that the top and bottom color should match. One user shared links for OP to use as inspiration. Another user thought that the shoes looked like anime sailor shoes. Two commenters didn't like OP's shoes, and suggested they be thrown away or donated.
Table 9: Another example from our dataset, from the r/fashionadvice subreddit.
Image:
# Post Caption:
Just moved into my first home and this space bothers me. Need some advice to make it look more cohesive.
# Comments:
User 1: center feels empty. if it were me, i'd place one of those vintage wooden radio clocks in the middle. that's oddly specific i know...
User 2: Change nothing but add a vase of fresh white flowers in the center
User 3: Center large art piece and move it down. Lean the small art off center behind Candles, use the basket as a trinket tray on console.
User 4: I really like your art. I agree that the cneter needs something, maybe a plant or a stack of books.
User 5: I rather like it. The only thing missing is something sort of tallish in the center to fill that space. Like, it is the perfect spot for a vase filled with flowers. Some color and life! If a floral subscription isn't in your future lol maybe a full plant would fit the bill
···
# Summary:
The OP asked what to do with a space in their home that presently has light blue walls and a brown sideboard with a lamp and candlesticks on it. Most commenters agreed the space looked good as-is, but recommended just adding something in the empty center of the table, such as a vase of white flowers or a large plant. Others thought a vintage wooden radio clock or traditional record player in the same green color as the candles would look perfect, while another suggested a stack of nice books. Others said to center the large wall art, and to check local thrift stores for a substantial but short statement piece to be the center accent decor. Others recommended using a basket as a trinket tray or just buying a marble tray for trinkets on the table. They also said to lean the smallest art pieces against the wall behind the candles, or get rid of the candles altogether.
Table 10: Another example from our dataset, from the r/designmyroom subreddit.

2023/mRedditSum_ A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ea3a9de887dba056ea5ccb24a193b405a3152e33f47a6259927688470e220217
size 747016

2023/mRedditSum_ A Multimodal Abstractive Summarization Dataset of Reddit Threads with Images/layout.json
ADDED
The diff for this file is too large to render. See raw diff

2023/q2d_ Turning Questions into Dialogs to Teach Models How to Search/0b42a6d9-afd2-4e94-b5d2-ab4bdab641ff_content_list.json
ADDED
The diff for this file is too large to render. See raw diff

2023/q2d_ Turning Questions into Dialogs to Teach Models How to Search/0b42a6d9-afd2-4e94-b5d2-ab4bdab641ff_model.json
ADDED
The diff for this file is too large to render. See raw diff

2023/q2d_ Turning Questions into Dialogs to Teach Models How to Search/0b42a6d9-afd2-4e94-b5d2-ab4bdab641ff_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bed53739dcf37e0c6c97e3cfdde407cb9b5ef6bb555ac54da153b7a93e69f20b
size 1578193

2023/q2d_ Turning Questions into Dialogs to Teach Models How to Search/full.md
ADDED
@@ -0,0 +1,656 @@
# q2d: Turning Questions into Dialogs to Teach Models How to Search
Yonatan Bitton†,‡,* Shlomi Cohen-Ganor‡ Ido Hakimi‡ Yoad Lewenberg‡ Roee Aharoni‡ Enav Weinreb‡
† The Hebrew University of Jerusalem; ‡ Google Research
{yonatanbitton,shlomic,idohakimi,yoadlew,roeeaharoni,eweinreb}@google.com
# Abstract
One of the exciting capabilities of recent language models for dialog is their ability to independently search for relevant information to ground a given dialog response. However, obtaining training data to teach models how to issue search queries is time and resource consuming. In this work, we propose $q2d$: an automatic data generation pipeline that generates information-seeking dialogs from questions. We prompt a large language model (PaLM) to create conversational versions of question answering datasets, and use it to improve query generation models that communicate with external search APIs to ground dialog responses. Unlike previous approaches which relied on human-written dialogs with search queries, our method allows us to automatically generate query-based grounded dialogs with better control and scale. Our experiments demonstrate that: (1) For query generation on the QReCC dataset, models trained on our synthetically-generated data achieve $90\% - 97\%$ of the performance of models trained on the human-generated data; (2) We can successfully generate data for training dialog models in new domains without any existing dialog data, as demonstrated on the multi-hop MuSiQue and Bamboogle QA datasets; and (3) We perform a thorough analysis of the generated dialogs, showing that humans find them of high quality and struggle to distinguish them from human-written dialogs.
# 1 Introduction
Recent dialog generation models, such as LaMDA (Thoppilan et al., 2022), BlenderBot3 (Shuster et al., 2022b) and Sparrow (Glaese et al., 2022) use an external search API to generate grounded and factually accurate responses (Parisi et al., 2022). This is important for providing reliable and consistent answers (Shuster et al., 2022a), especially
when discussing entities and asking related questions with anaphora. To do this, these models use a query generation component that is trained on dialog-to-search-query datasets. When the model is triggered with a dialog turn that requires search, it generates a query that is used to obtain a search result, which is then used to generate a grounded response. This allows the model to provide relevant information about the world in its responses to user queries. For example, a model trained in 2021 should be able to provide a factual response to the question "How old is Joe Biden?" even in 2023. In a conversation, one might discuss an entity (e.g. "Joe Biden") and later ask a related question (e.g. "How old is he?") with anaphora. In order to provide reliable and consistent answers, it is necessary to generate a decontextualized query (e.g., "How old is Joe Biden") for a search engine.
Using APIs also decouples language and reasoning from knowledge (Borgeaud et al., 2021; Parisi et al., 2022), which can help prevent errors caused by outdated information being stored in the model's parameters. For example, if a model trained at the end of 2021 is asked "How old is the current president?", it may produce the incorrect query "How old is Donald Trump" if its parameters are outdated or if it provides factually-inconsistent responses (a.k.a "hallucinations").
Query generation datasets have been created using human annotators, limiting them in scale, control, and quality (Komeili et al., 2021). As a result, when a new domain is introduced, a significant amount of human effort is required to create a new query generation dataset for that domain (Gupta et al., 2021; Dziri et al., 2021). The fact that language models often generate hallucinations (Zhao et al., 2020; Maynez et al., 2020; Lee et al., 2018), especially in new domains or dialogs that differ from the training data (Nie et al., 2020; Honovich et al., 2021, 2022a), highlights the need for more
effective query generation datasets that will foster more grounded and factually consistent models.

Figure 1: Left: Our $q2d$ method starts from an existing query or question and prompts a few-shot language model to transform it into a dialog. We filter out cases where the intent of the generated dialogue differs from the intent of the initial query, and apply additional filters. Right: We take a question from the QReCC dataset (surrounded by a rectangle) and generate an information-seeking dialog with $q2d$. By starting with a query and generating a dialog, we create a {dialogue → query} dataset, which is used to train and evaluate query generation models that communicate with an external search API to generate factual responses.
In this work, we propose a data generation pipeline to improve grounded dialog models with access to search engines. To create a dialog-to-search-queries dataset for training the query generation component in such models, we reverse the process, starting from a search query and generating an information-seeking dialog that corresponds to that query. Our automatic pipeline, shown in Figure 1, begins with a search query or question, and prompts a large language model (PaLM; Chowdhery et al., 2022) to generate a conversational dialog that conveys the information need implied by the given query. For example in Figure 1, we take the question "Who played Ardra on star trek the next generation?" from the Natural Questions dataset (Kwiatkowski et al., 2019) and generate a dialog with a similar intent: the correct answer to the original question ("Marta DuBois") is also a correct response to the generated dialog. This process allows us to leverage existing question-answering datasets, which are widely available for different domains, and extend them by generating dialogs that preserve the original information need while controlling the dialog domain and style.
To assess whether the automatically generated dialogs can replace human-generated dialogs, we experiment with QReCC NQ (Anantha et al., 2020), a human-curated dialog dataset. We generate a training set that is the same size as the original dataset, but with synthetic dialogue, and use it to train a query generation model. The resulting model obtains $90\% - 95\%$ of the performance of models trained on the human-generated training data, using the same metrics used to evaluate QReCC (Anantha et al., 2020).
Other than training query generation models, our approach is also useful for training the dialog generation models themselves when no dialog data is available for a new domain. We demonstrate this on the domain of multi-hop question answering: we first show that existing dialog models struggle to perform well on a domain-specific challenge set. We then generate synthetic dialog data from the MuSiQue (Trivedi et al., 2021) multi-hop QA dataset, and show that training a dialog model on this data improves performance.
We provide a thorough analysis of the quality of the generated datasets, demonstrating that they are (a) natural: humans struggle to distinguish the synthetic dialogs from natural ones; (b) factual: generated and human-annotated answers perform similarly in query generation; and (c) correct: dataset labels are accurate, and strict filtering improves results.
To conclude, our main contributions are:
1. We introduce $q2d$ : an automatic method to generate information-seeking dialogs from questions using large language models.
2. We show that our method is beneficial for training query generation and dialog generation models, including in different domains like multi-hop QA.
3. We provide a thorough analysis showing that the synthetically generated dialogs are natural, factual and correct.
4. We publicly release the generated datasets, generation prompts, code, and evaluation protocols.
# 2 Generating Dialogs from Questions
In this section, we describe our automatic method, called $q2d$ , for generating dialogs from questions, and the properties of datasets produced by this
# Algorithm 1 Generate Dialogues from Questions

# input

Few-Shot Model $M_{fs}$, QA Dataset $(Q, A)$,
Example Queries $S_{q} = \{(q_{i},d_{i})\}_{i = 1}^{k}$,
Example Dialogues $S_{d} = \{(d_{i},q_{i})\}_{i = 1}^{k}$,
Instructions Query $I$, Instructions Dialogue $I_{r}$

# execute

$dataset \leftarrow \emptyset$
for $(q,a)\in (Q,A)$ do
$\quad dialogue \leftarrow M_{fs}(S_{q}, I, q)$
$\quad q^{\prime} \leftarrow M_{fs}(S_{d}, I_{r}, dialogue)$
$\quad$ if $\mathrm{filter}(dialogue, q, q^{\prime}, a)$ then
$\quad\quad dataset.\mathrm{add}((dialogue, q, a))$

# output

Query Generation Dataset: $D = \{(d_i,q_i)\}_{i = 1}^{|Q|}$
method. Our goal is to reduce the effort associated with generating a training dataset for query generation, and to improve query-generation-based dialog models with a high-quality training dataset. Query generation can start by extracting queries from existing dialogs. However, our approach is unique in that it begins with factual queries or questions, allowing us to leverage existing resources. Any question-answering dataset, queries dataset, or queries used in popular web search services or dialog model logs can be used with our algorithm.
The algorithm is described in Algorithm 1 and consists of three main steps:
1. Starting from a query or question from the set $Q$, we use a few-shot model $M_{fs}$ (specifically, PaLM) with instructions $I$ to generate a dialog given the query. The few-shot prompts can be manually written to adapt to different conversation styles, or sampled from an existing dialogs dataset.
2. Using the same few-shot examples in reverse, $S_{d}$ and $I_{r}$ , we generate a query based on the generated dialog, $q'$ .
3. Filtering: we filter dialogs with different intent, or dialogs where the dialog answer is contained in the dialog. We elaborate on the different filters below.
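The three steps above can be sketched in a few lines of Python. This is our own minimal illustration, not the paper's implementation: `few_shot_complete`, `build_prompt`, and the injected `keep` filter are hypothetical stand-ins for the PaLM prompting and the filters described below.

```python
def build_prompt(examples, instructions, item):
    """Format few-shot (input, output) examples plus instructions into one prompt."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{instructions}\n\n{shots}\n\nInput: {item}\nOutput:"

def q2d(qa_pairs, s_q, s_d, inst_dialog, inst_query, few_shot_complete, keep):
    """Generate (dialogue, query, answer) triples from a QA dataset."""
    dataset = []
    for q, a in qa_pairs:
        # Step 1: query -> dialog, prompted with the S_q examples.
        dialogue = few_shot_complete(build_prompt(s_q, inst_dialog, q))
        # Step 2: dialog -> query, with the same examples reversed (S_d).
        q_rev = few_shot_complete(build_prompt(s_d, inst_query, dialogue))
        # Step 3: keep only samples that pass the round-trip / leakage filters.
        if keep(dialogue, q, q_rev, a):
            dataset.append((dialogue, q, a))
    return dataset
```

In practice `few_shot_complete` would wrap a large language model API call; here it is injected so the control flow stays self-contained.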
Filtering. In this part we attempt to filter (dialog, query) samples that would not be beneficial for training or testing. We do it in three steps, elaborated below. We stress that there are many more filtering strategies possible, and exploring
them is left for future work. First, we filter out dialogs whose intent is different from the original query by measuring the similarity between the query and its reversed version using SBERT similarity $(sim(q, q'))$ and comparing it to a threshold $(T_{query})$. If the similarity is below the threshold, the generated query is considered to have a different intent and the dialog is filtered. Appendix A, Section A.2 shows several examples of dialogs, original and reversed queries, and SBERT semantic similarity. Second, we filter out cases where the answer is included in the dialog by measuring the n-gram overlap between the dialog and the answer using the Rouge metric (Lin, 2004). If the overlap is above a threshold $(T_{answer})$, the answer is entailed in the dialog and the example is filtered. For example, if the final answer ("Marta DuBois") had already been written in the dialog as the actor playing Ardra, the final question ("Who played Ardra") would not make sense. Finally, we filter out cases where the last turn of the dialog is similar ( $>80\%$ ) to the original question using SBERT similarity. These cases include situations where no anaphora is required.
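As a rough sketch (ours, not the authors' code), the three filters can be expressed as follows. `sim` stands in for SBERT cosine similarity and is injected so the snippet stays self-contained; the $T_{query}$ and last-turn thresholds mirror those quoted in the text, while the answer-overlap threshold is an arbitrary illustrative value.

```python
def rouge1_recall(reference, candidate):
    """Fraction of `reference` unigrams that also appear in `candidate`."""
    ref = reference.lower().split()
    cand = set(candidate.lower().split())
    return sum(tok in cand for tok in ref) / len(ref) if ref else 0.0

def keep_sample(dialogue, query, reversed_query, answer, sim,
                t_query=0.999, t_answer=0.5, t_last_turn=0.8):
    """Return True if the (dialogue, query) pair passes all three filters."""
    if sim(query, reversed_query) < t_query:        # intent drifted in the round trip
        return False
    if rouge1_recall(answer, dialogue) > t_answer:  # answer already leaked into the dialog
        return False
    last_turn = dialogue.strip().split("\n")[-1]
    if sim(query, last_turn) > t_last_turn:         # last turn ~ question: no anaphora needed
        return False
    return True
```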
In this work, we use PaLM (Chowdhery et al., 2022), a large language model with 540B parameters, as the few-shot language model for generating dialogs, with a temperature of 0.6. We provide fully working code with GPT-3 (Brown et al., 2020) for reproducibility. The set of prompts and instructions can be found in Appendix A, Section A.3. For the similarity metric (sim), we use the all-mpnet-base-v2 model from Sentence Transformers, with a similarity threshold of $T_{query} = 0.999$. This threshold is justified through human evaluation and ablation studies for the filtering in Section 5.3.
# 3 Replacing Human-Annotated with Auto-Generated Data
In this section, we evaluate the extent to which our automatically generated dataset can replace the human-annotated dataset. We use the QReCC NQ dataset (Anantha et al., 2020), which contains (dialog, query) pairs, and automatically generate a dialog from natural questions. This allows us to create an automatically generated train set of the same size, and compare it to the human-annotated dataset. An example of a human-generated dialog compared to an automatically generated dialog is shown in Figure 2. We use the version of the dataset where the intermediate questions are contextualized, rather than decontextualized. For example, the second and third user turns in the figure are contextualized versions of the decontextualized questions "Who directed the film, The Vikings?" and "Was the film The Vikings based on a novel?".

Figure 2: An example of human annotated dialogue from QReCC and an automatically generated dialogue produced for the same question.
Dataset Generation. To generate our dataset, we use our $q2d$ method as described in Section 2. For the few-shot examples of queries and dialogs ( $S_{q}$ and $S_{d}$ ), we sample 15 examples from QReCC that fit within the maximum input sequence length. These examples are available in Appendix A, Section A.3. For the base questions ( $Q$ ), we use the Natural Questions (Kwiatkowski et al., 2019) dataset instead of the QReCC NQ questions to reduce dependence on QReCC. Importantly, all of the questions and dialogs in the natural and automatically generated datasets are disjoint. In total, we generate 13K samples, the same as the QReCC NQ train set. Full prompts, instructions and examples are available in Appendix A, Section A.1.
Metrics and Models. Our metrics are the same as those used in the QReCC dataset, comparing the original and generated queries. These include Rouge-1 Recall (Lin, 2004) for measuring the unigram similarity between two texts, and SBERT embedding semantic similarity for comparing the semantic content of two sentences (the same metric as in §2). We also use Recall@10 to compare the retrieved URLs for the ground-truth query and the generated query. We conduct experiments using an open-source T5-3B model (Raffel et al., 2020) in its original form (referred to as 'None'), fine-tuning it on the natural QReCC training data and contrasting the results with those obtained from training on the auto-generated QReCC dataset. We use a batch size of 32, an Adam optimizer, a learning rate of 0.0001, and fine-tune for 10,000 steps.

<table><tr><td>Model</td><td>Training Dataset</td><td>SBERT Similarity</td><td>Rouge-1 Recall</td><td>Search Results Recall@10</td></tr><tr><td rowspan="2">T5</td><td>Human Annotated</td><td>92.4</td><td>88.1</td><td>68.5</td></tr><tr><td>Auto Generated</td><td>87.5 (95%)</td><td>83.3 (95%)</td><td>61.5 (90%)</td></tr></table>

Table 1: Results on the human-annotated QReCC NQ test set, experimenting with replacing the human-annotated data with data automatically generated with the $q2d$ method. Bold shows the percentage of performance for a model trained with auto-generated data out of a model trained with human-annotated data. Training on the automatically generated data achieves $90\%-95\%$ of the performance of the model trained on the human-annotated data.
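For concreteness, the Recall@10 comparison of retrieved URLs can be computed as below. This is our own illustrative helper, operating on placeholder URL lists rather than real search results.

```python
def recall_at_k(gold_urls, pred_urls, k=10):
    """Share of the top-k URLs retrieved for the ground-truth query that are
    also retrieved in the top-k results for the generated query."""
    gold = set(gold_urls[:k])
    return len(gold & set(pred_urls[:k])) / len(gold) if gold else 0.0
```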
Results. Results are presented in Table 1. We observe that by replacing human annotated data with auto generated data we were able to reach $90\% - 95\%$ of the results with a set of the same size using the same model, demonstrating the efficacy of our $q2d$ approach in minimizing annotation labor and producing synthetic training data that is nearly as effective as human-annotated data.
# 4 Extending Query Generation: Multi-Hop QA
This section shows that our method is effective as a benchmark and training signal that generalizes to human-annotated data. It is also flexible and able to adapt and improve for specific styles of dialog, even without annotated data. It allows us to create dialogs similar to a target domain and provide a fully labeled query-generation dataset. The generated data is useful for training and evaluation, as well as exploring model performance in new scenarios. We demonstrate this using a multi-hop question answering example.
Manual Dialog Construction. We define a challenging test set for multi-hop dialogs by annotating the Bamboogle dataset (Press et al., 2022), which consists of 125 multi-hop human-constructed questions. We create dialogs that ask the same questions, with the user as the information seeker and the assistant as the information provider. The assistant should help the user obtain the information they are seeking, clarify any questions, and move
the conversation forward without trying to mimic human-to-human interaction. Figure 3 shows a positive example where the generated dialog is not filtered. On the other hand, a negative example could be a query like "Who is the female star in Gone with the Wind married to?", which closely resembles another query asking about the male star, with a high similarity score of 0.9595. This demonstrates the method's shortcomings in filtering semantically similar queries.

Q: Who is the female star in Gone with the Wind married to?

Figure 3: An example dialog generated by $q2d$ from a MuSiQue multihop question. The dialog's final query has a 0.5034 semantic similarity score with the original, indicating different semantic content and thereby not being filtered.
Full instructions, examples and annotated data can be found in the Appendix A, Section A.4, including examples with model predictions.
Dataset Generation. We use our $q2d$ method as described in Section 2 to generate dialogs that ask multi-hop questions, using the MuSiQue dataset (Trivedi et al., 2021) as the base for the questions $(Q)$ . MuSiQue is a challenging multi-hop QA dataset that is partially auto-generated, so we generate dialogs from partially generated questions. This illustrates how we can use automatically generated data to improve on human-annotated data. We use seven few-shot examples $(S_{q}$ and $S_{d})$ . As a result, we generate 3K train samples and 480 test samples. Full prompts, instructions and examples are available in Appendix A, Section A.1.
Metrics. The metrics used in this work are the same as those described in the previous section: Rouge-1 Recall, SBERT embedding semantic similarity, and Recall@10.
Models. We evaluate several state-of-the-art language and dialog models. These include PaLM 540B (Chowdhery et al., 2022), Flan-U-PaLM 540B (Chung et al., 2022), T5-3B (Raffel et al., 2020), BlenderBot3-3B (Shuster et al., 2022b), WizInt Search Engine FiD (Lewis et al., 2019). These models are used in a zero-shot setting, except for T5, which is fine-tuned on the auto-generated
MuSiQue dialogs with the same method presented in Section 3. BlenderBot3 and WizInt are publicly available in ParlAI (Miller et al., 2017); exact details and versions are described in Appendix A, Section A.7. More details on the instructions for zero-shot models can be found in Appendix A, Section A.3.

<table><tr><td></td><td colspan="2">SBERT Similarity</td><td colspan="2">Rouge-1 Recall</td><td colspan="2">Search Results Recall@10</td></tr><tr><td>Model / Test Set</td><td>M</td><td>B</td><td>M</td><td>B</td><td>M</td><td>B</td></tr><tr><td>WizInt</td><td>66</td><td>67</td><td>40</td><td>36</td><td>21</td><td>21</td></tr><tr><td>BlenderBot3</td><td>62</td><td>69</td><td>32</td><td>35</td><td>19</td><td>24</td></tr><tr><td>T5 (QReCC)</td><td>74</td><td>77</td><td>70</td><td>65</td><td>34</td><td>37</td></tr><tr><td>PaLM 540B</td><td>88</td><td>82</td><td>81</td><td>69</td><td>52</td><td>41</td></tr><tr><td>Flan-U-PaLM 540B</td><td>89</td><td>82</td><td>83</td><td>68</td><td>57</td><td>39</td></tr><tr><td>T5 (MuSiQue)</td><td>97</td><td>91</td><td>94</td><td>80</td><td>75</td><td>54</td></tr></table>

Table 2: Performance of language and dialogue models on the query generation test sets. 'M' and 'B' indicate results on the MuSiQue auto-generated and Bamboogle manually constructed dialogues, respectively. (QReCC) and (MuSiQue) indicate fine-tuning on a $q2d$ dataset. The best results were achieved by models fine-tuned on the MuSiQue auto-generated dialogues, which improved T5 results by $14\%-59\%$ on the human-annotated test set.
Results. Query generation results are presented in Table 2. Qualitative examples with T5 model predictions are available in Appendix A, Section A.1. The T5 model improves performance on the human-curated Bamboogle test by $14\% - 59\%$ after fine-tuning on the auto-generated MuSiQue multi-hop dialogues. We show examples for it in Appendix A, Section A.6. This improvement also correlates with improvements on the autogenerated test set, indicating the effectiveness of our method for creating evaluation data. To conclude, our results show that our datasets are effective as a benchmark for query generation, as well as training data that generalizes to both autogenerated and human-annotated test sets.
Producing a Partially Decomposed Query. Given a multi-hop dialog, query generation models may resolve partial information. For example, if a dialog asks "How old is the current US president?", a query generation model may produce "How old is Joe Biden?", which is correct at the time but may become outdated in the future, or may produce hallucinations. To prevent this, we can make
two query generation calls (first to discover the current US president and then their age), decouple knowledge from execution (Borgeaud et al., 2021; Parisi et al., 2022), periodically update the model's weights, or disallow the model from making partial resolves. This will help ensure that the generated query remains accurate and relevant over time. The fine-tuning technique described in this section uses the last approach to avoid making assumptions about the current president's age or identity.
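The first of these options, issuing two query generation calls, could look roughly like the sketch below. `search` is a hypothetical stand-in for an external search API, and the query strings and lookup results are purely illustrative toy data, not real search output.

```python
def answer_multihop(first_hop_query, second_hop_template, search):
    # Resolve the bridging entity with one search call...
    entity = search(first_hop_query)
    # ...then substitute it into the second query, so no potentially stale
    # fact is read from the model's parameters.
    return search(second_hop_template.format(entity=entity))
```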
# 5 Intrinsic Evaluation: Naturalness, Factuality and Correctness
In this section we perform a thorough analysis of the generated dialogs, focusing on the QReCC NQ dataset, which contains human-annotated dialogs, and evaluate their naturalness (§5.1), factuality (§5.2) and correctness (§5.3).
# 5.1 Naturalness: Humans Struggle to Distinguish Synthetic Dialogs from Natural
We define a human-evaluation task to distinguish between natural and auto-generated dialogs. We sample 100 annotated dialogs from QReCC NQ (Anantha et al., 2020) and mix them with 100 dialogs we generated. The annotators, who are not authors of the paper and hold a STEM degree, were asked to mark 1 if the dialog seems to be generated by a machine, and 0 otherwise.<sup>5</sup> The labels were hidden. We use three annotators for each sample and take their majority vote as the final answer. The majority vote achieved a success rate of $50.5\%$, where random chance is $50\%$; all individual annotators scored between $50\% - 55\%$. In $26\%$ of the cases there is full agreement between all three annotators; on those cases the success rate improves to $51.9\%$, still close to random chance. These results indicate that humans struggle to differentiate between natural and auto-generated dialogs. This suggests that the auto-generated dialogs are of high quality, similar to human annotations, and can be used in place of human-generated dialogs in certain situations, saving time and resources.
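The aggregation used above, a per-sample majority vote plus a full-agreement rate, is a small amount of bookkeeping; this sketch uses hypothetical helper names, with label 1 meaning "judged synthetic":

```python
from collections import Counter

def majority_vote(labels):
    """Majority label among an odd number of annotator labels."""
    return Counter(labels).most_common(1)[0][0]

def agreement_stats(annotations):
    """annotations: list of per-sample label triples (1 = judged synthetic)."""
    votes = [majority_vote(a) for a in annotations]
    # A sample has full agreement when all three annotators gave the same label.
    full_agreement = sum(len(set(a)) == 1 for a in annotations) / len(annotations)
    return votes, full_agreement

votes, full = agreement_stats([(1, 1, 0), (0, 0, 0), (1, 0, 1)])
# votes -> [1, 0, 1]; full -> 1/3
```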

Figure 4: Illustration of the response factuality evaluation. For each turn, we produce a response with PaLM and compare it to the human-annotated response. We use an NLI model to score whether the response answers the question ("Hypothesis: The answer to the question $\{\mathbf{q}\}$ is $\{\mathbf{r}\}$") according to the Wikipedia document $d$ used by the human annotator in the ground-truth response generation ("Premise: $\{\mathbf{d}\}$"). The first PaLM response receives a lower score because it misses the mention of 'Cornel Wilde' that appears in the document summary.
# 5.2 Factuality: Generated and Human-Annotated Answers Perform Similarly in Query Generation
The q2d method generates a dialog by starting with a query and generating a series of related questions and answers. However, since the intermediate answers are generated by a large language model, they may be factually incorrect. This raises two questions: (1) Are the intermediate answers factually correct? (2) How does the factuality of the generated answers affect the results of downstream tasks?
We replace all human-annotated answers in the QReCC NQ training split with PaLM-generated answers. To produce PaLM answers, we use a few-shot prompt, where the input is the original dialog ending in a question, and the output is the PaLM response. An example is provided in Figure 4.
Intermediate Answers Factuality According to Automatic Metrics and Human Raters. To answer the first question, we evaluate the factual correctness of the generated answers using an NLI (Dagan et al., 2005) model presented by Honovich et al. (2021). We take the question ("q"), the response ("r"), which may be the ground-truth annotated response or the generated response, and the Wikipedia document ("d") summary available in the QReCC dataset. We construct the following NLI instance: "premise: $\{\mathrm{d}\}$ hypothesis: The answer to the question $\{\mathbf{q}\}$ is $\{\mathbf{r}\}$", and produce NLI scores for the ground-truth responses vs. the generated responses. Figure 4 illustrates this process. The average NLI score for the human responses is $62\%$, and for the PaLM responses $38\%$. However, this measure is biased towards the human responses, since it is computed against the Wikipedia document that was used to generate the ground-truth answer; PaLM might produce a correct answer that is simply not written in the exact words of Wikipedia. To test this, we conducted an annotation task with an annotator who is not one of the paper's authors. The annotator was presented with 50 samples of a dialog, a query, and two options, A and B: one was the original answer and the other the generated answer. The annotator's task was to mark 0/1 for each answer, indicating whether it was factual and relevant to the question. PaLM responses were marked as correct in $82\%$ of the cases, compared to $93\%$ for the human responses. This result indicates the factuality and relevancy of the generated responses.
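Constructing the NLI instances described above is a simple templating step; the function name below is a hypothetical illustration, and the NLI scorer itself (e.g. the model of Honovich et al., 2021) is assumed to be available separately:

```python
def make_nli_instance(question, response, document):
    """Build the premise/hypothesis pair fed to the NLI scorer."""
    return {
        "premise": document,
        "hypothesis": f"The answer to the question {question} is {response}",
    }

inst = make_nli_instance(
    "who plays haley in wish upon a star",
    "Danielle Harris",
    "Wish Upon a Star is a 1996 television film.",  # stand-in document summary
)
```

The same template works for both the ground-truth and the PaLM response, so the two scores are directly comparable.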
For Query Generation, Generated Answers Perform Similarly to Human-Annotated Ones. To answer the second question, we replace all of the human-annotated answers with automatically generated ones, yielding a semi-auto-generated training set with the same structure and annotated questions, but with PaLM-generated answers. For example, in Figure 4 the semi-auto-generated dialog is the one with the answers on the right side. We then train a T5-3B (Raffel et al., 2020) model on the human-annotated and the semi-auto-generated versions, in the same way as presented in Section 3, and compare the results. The semi-auto-generated training set reaches $86.6\%$ Rouge-1 recall, only a small drop $(1.5\%)$ from the model trained on the natural data, indicating that although PaLM sometimes $(< 48\%)$ produces non-factual responses, this has only a negligible effect on the query generation task.
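Rouge-1 recall, the metric reported here, measures the fraction of reference unigrams that also appear in the prediction. A minimal sketch, simplified relative to the full ROUGE implementation (Lin, 2004), which additionally supports stemming and clipped counts:

```python
def rouge1_recall(reference, prediction):
    """Fraction of reference tokens that also occur in the prediction."""
    ref_tokens = reference.lower().split()
    pred_tokens = set(prediction.lower().split())
    return sum(tok in pred_tokens for tok in ref_tokens) / len(ref_tokens)

score = rouge1_recall(
    "who plays haley in wish upon a star",
    "who plays haley",
)  # 3 of 8 reference tokens covered -> 0.375
```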

Figure 5: Intent-preserving annotation task results. The proportion of samples that were annotated as intent-preserving increases with the semantic similarity score.
# 5.3 Correctness: Generated Dataset Labels are Accurate, and Strict Filtering Improves Results
Our main filter measures the similarity between the original query and the reversed query, $\text{sim}(q, q')$, and compares it to a threshold $T_{query}$. We measure its effect with a human evaluation and automatic ablation studies. Both experiments indicate the correctness of the labels for the task of predicting the query from a dialog, and the value of a stricter filtering threshold.
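With a sentence encoder such as SBERT assumed as the embedding model, the filter reduces to a cosine-similarity check against $T_{query}$; the helper names below are illustrative, not the paper's exact implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def keep_sample(q_emb, q_rev_emb, t_query=0.99):
    """Keep a generated dialog only if the reversed query q' is
    close enough to the original query q."""
    return cosine(q_emb, q_rev_emb) >= t_query
```

In practice `q_emb` and `q_rev_emb` would come from the same sentence encoder, so that scores are comparable across samples.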
Humans Find that Dialogs Generated by Queries Have the Same Intent. We define a human annotation task to determine whether the dialogs are intent-preserving. Annotators were asked to mark 1 if the dialog is intent-preserving, and 0 otherwise.<sup>6</sup> We use three annotators for each sample, and select their majority vote as the final answer. We follow the notation suggested by Groenendijk and Stokhof (1984) for entailment between questions: an interrogative $q$ entails another $d$ iff every proposition that answers $q$ answers $d$ as well (Jiang and de Marneffe, 2022). Here, $q$ stands for a question and $d$ stands for an information-seeking dialog. We defined eight SBERT semantic similarity score buckets, with 15 samples in each, covering all similarities between 0 and 1. Results are presented in Figure 5. All three annotators agree in $88\%$ of the cases. The proportion of intent-preserving annotations grows with the SBERT semantic similarity score, with a strong gain between 0.95 and 1, the only bucket with $100\%$ intent-preserving annotations. Accordingly, in the filtering step we only select samples whose generated query is very similar to the original query $(\geq 0.99)$.

<table><tr><td>SBERT Similarity Filtering Threshold</td><td>Filtering Proportion</td><td>Query Generation Rouge-1 Recall</td></tr><tr><td>0</td><td>0</td><td>68</td></tr><tr><td>0.25</td><td>6</td><td>69</td></tr><tr><td>0.5</td><td>16</td><td>72</td></tr><tr><td>0.75</td><td>37</td><td>74</td></tr><tr><td>0.8</td><td>44</td><td>76</td></tr><tr><td>0.9</td><td>62</td><td>79</td></tr><tr><td>0.95</td><td>72</td><td>81</td></tr><tr><td>0.99</td><td>84</td><td>83</td></tr><tr><td>0.999</td><td>88</td><td>84</td></tr></table>

Table 3: Reversed queries similarity filter. The similarity is measured between the original query $q$ and the reversed query ${q}^{\prime }$ predicted with the few-shot model ${q}^{\prime } \leftarrow {M}_{fs}\left( {{S}_{d},{I}_{r},\text{dialogue}}\right)$. The higher the filter threshold (stricter filter), the better the results.
# Strict Filtering Leads to Higher Quality Data, Resulting in Improved Downstream Results.
We test different thresholds on an evaluation set of 1,000 instances generated from other training queries. We also consider another filtering method based on an NLI (Dagan et al., 2005) model: given a dialog "d" and a question "q", we construct the NLI sample "premise: {d} hypothesis: The dialog asks the question {q}", again with different thresholds. Results are presented in Table 3, reporting Rouge-1 recall on the evaluation set. Performance increases as the reversed-similarity threshold rises, with a clear trade-off against the filtering proportion: the more data we generate, the stricter the filtering we can apply, yielding higher-quality data and better downstream results. For the NLI-based method we produced four variants with thresholds ranging from 0.65 to 0.82; thresholds above 0.82 filtered out too much data (below our goal of 13K samples). The best performance, for the 0.82 threshold, is $70\%$, much lower than the alternative reversed-queries filter.
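The threshold/yield trade-off reported in Table 3 can be reproduced with a small sweep over per-sample similarity scores; `filtering_proportion` is a hypothetical helper, and the scores below are toy values:

```python
def filtering_proportion(scores, threshold):
    """Fraction of generated samples removed at a given similarity threshold."""
    removed = sum(s < threshold for s in scores)
    return removed / len(scores)

# Toy similarity scores for five generated samples.
scores = [0.2, 0.6, 0.8, 0.97, 0.995]
sweep = {t: filtering_proportion(scores, t) for t in (0.5, 0.9, 0.99)}
# sweep -> {0.5: 0.2, 0.9: 0.6, 0.99: 0.8}
```

Raising the threshold monotonically increases the proportion filtered, which is why stricter filtering only pays off once enough raw data has been generated.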
# 6 Related Work
Our work relates to data generation, query generation for search-based models, and information retrieval datasets.
Data Generation Several works have used large language models for data generation (Agrawal et al., 2022; Honovich et al., 2022b; Yarom et al., 2023). Dai et al. (2022b) apply this technique to information retrieval, building retrievers on generated data by generating queries given a document; their method involves round-trip-consistency filtering with a large language model, similar in spirit to back-translation. In the context of dialog generation, Dialog Inpainting (Dai et al., 2022a) starts from a document and generates a dialog. Moreover, Gekhman et al. (2023a) introduced TrueTeacher, a synthetic data generation method that employs large language models for annotating model-generated summaries, demonstrating its effectiveness over existing techniques. Our approach focuses on generating dialogs from queries, which allows us to leverage the availability of existing QA datasets. This enables us to create information-seeking dialogs with the same intent as the original questions, along with automatically generated labels for the queries and answers.
Search-Based Query Generation Dialog models like LaMDA and BlenderBot use search APIs to generate factual responses. Training and evaluation data for such models is obtained mostly through human annotation. Previous works (Shuster et al., 2022b; Thoppilan et al., 2022; Komeili et al., 2021) evaluated only the end-to-end dialog response without evaluating the generated query. The evaluation was primarily based on automated metrics of perplexity and F1, or on human annotations assessing whether the model response is sensible, specific, and interesting (SSI), or whether it is correct, engaging, and consistent. The evaluated dialogs were general, not necessarily information-seeking. The focus of this paper is the query generation task for information-seeking dialogs, with a concrete question and an expected response.
Question Rewrite Works in the information retrieval domain such as QReCC (Anantha et al., 2020), Question Answering in Context (QuAC) (Choi et al., 2018), TREC Conversational Assistant Track (CAsT) (Dalton et al., 2020), and CANARD (Elgohary et al., 2019) use human-annotated data that mostly contains follow-up dialogs: questions followed by answers. In the domain of Conversational Question Answering (CQA), a comprehensive study was conducted on the robustness of dialogue history representation, underscoring the significance of robustness-centered evaluations (Gekhman et al., 2023b). Our work focuses on the application of dialog models like LaMDA and BlenderBot, which often involve less formal language and more human-like conversations. The need for a variety of query generation datasets motivated us to develop an automatic method for generating dialogs for the query generation task, covering a range of styles and required skills.
# 7 Conclusions
We introduced $q2d$ , a data generation pipeline that produces dialogs based on questions. We demonstrated that our method can replace human-annotated data to train query-generation models, and to create effective, natural, factual, and accurate evaluation and training data in new domains, even when no existing dialogue data is available.
# 8 Limitations
$q2d$ comes with a set of limitations regarding computational cost, domain identification, and factuality.
Computational Costs. Auto-generating data with large language models, although faster and more scalable than human annotation, still incurs significant computational costs. These costs, however, are expected to decrease as AI technologies advance.
Domain Identification and Sample Selection. Defining the target domain and selecting representative few-shot examples requires manual oversight. This step, although crucial for ensuring the diversity and representativeness of generated dialogs, adds a layer of complexity and time to the process.
Factuality of Dialogs. Our method generates dialogs that are generally factual but occasionally inaccurate. Although our analysis shows these discrepancies do not impact query-generation tasks, they may challenge tasks where factuality is critical. Future applications should consider this limitation and potentially enhance factuality.
Scope of Fine-Tuning and Model Improvements. In this research, our primary aim was to show that data generated by larger foundation models, such as PaLM 540B, can significantly benefit more compact models like T5-3B. Specifically, when smaller models are fine-tuned on our auto-generated data, they can surpass the larger foundation models in query-generation tasks. However, we did not explore possible improvements to the original foundation models themselves, like PaLM, when fine-tuned on our generated data. This represents a promising avenue for further research.
# References
Priyanka Agrawal, Chris Alberti, Fantine Huot, Joshua Maynez, Ji Ma, Sebastian Ruder, Kuzman Ganchev, Dipanjan Das, and Mirella Lapata. 2022. QAmeleon: Multilingual QA with only 5 examples. arXiv preprint arXiv:2211.08264.

Raviteja Anantha, Svitlana Vakulenko, Zhucheng Tu, Shayne Longpre, Stephen Pulman, and Srinivas Chappidi. 2020. Open-domain question answering goes conversational via question rewriting. arXiv preprint arXiv:2010.04898.

Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2021. Improving language models by retrieving from trillions of tokens. arXiv preprint arXiv:2112.04426.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901.

Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055.

Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169-174.

Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. arXiv preprint arXiv:1808.07036.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.

Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges Workshop, pages 177-190. Springer.

Zhuyun Dai, Arun Tejasvi Chaganty, Vincent Y. Zhao, Aida Amini, Qazi Mamunur Rashid, Mike Green, and Kelvin Guu. 2022a. Dialog inpainting: Turning documents into dialogs. In International Conference on Machine Learning, pages 4558-4586. PMLR.

Zhuyun Dai, Vincent Y. Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, and Ming-Wei Chang. 2022b. Promptagator: Few-shot dense retrieval from 8 examples. arXiv preprint arXiv:2209.11755.

Jeffrey Dalton, Chenyan Xiong, Vaibhav Kumar, and Jamie Callan. 2020. CAsT-19: A dataset for conversational information seeking. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1985-1988.

Nouha Dziri, Hannah Rashkin, Tal Linzen, and David Reitter. 2021. Evaluating groundedness in dialogue systems: The BEGIN benchmark. arXiv preprint arXiv:2105.00071.

Ahmed Elgohary, Denis Peskov, and Jordan Boyd-Graber. 2019. Can you unpack that? Learning to rewrite questions-in-context. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).

Zorik Gekhman, Jonathan Herzig, Roee Aharoni, Chen Elkind, and Idan Szpektor. 2023a. TrueTeacher: Learning factual consistency evaluation with large language models. arXiv preprint arXiv:2305.11171.

Zorik Gekhman, Nadav Oved, Orgad Keller, Idan Szpektor, and Roi Reichart. 2023b. On the robustness of dialogue history representation in conversational question answering: A comprehensive study and a new prompt-based method. Transactions of the Association for Computational Linguistics, 11:351-366.

Amelia Glaese, Nat McAleese, Maja Trebacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, et al. 2022. Improving alignment of dialogue agents via targeted human judgements. arXiv preprint arXiv:2209.14375.

Jeroen Antonius Gerardus Groenendijk and Martin Johan Bastiaan Stokhof. 1984. Studies on the Semantics of Questions and the Pragmatics of Answers. Ph.D. thesis, University of Amsterdam.

Prakhar Gupta, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2021. DialFact: A benchmark for fact-checking in dialogue. arXiv preprint arXiv:2110.08222.

Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022a. TRUE: Re-evaluating factual consistency evaluation. arXiv preprint arXiv:2204.04991.

Or Honovich, Leshem Choshen, Roee Aharoni, Ella Neeman, Idan Szpektor, and Omri Abend. 2021. $q^2$: Evaluating factual consistency in knowledge-grounded dialogues via question generation and question answering. arXiv preprint arXiv:2104.08202.

Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2022b. Unnatural instructions: Tuning language models with (almost) no human labor. arXiv preprint arXiv:2212.09689.

Nan-Jiang Jiang and Marie-Catherine de Marneffe. 2022. Investigating reasons for disagreement in natural language inference. arXiv preprint arXiv:2209.03392.

Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2021. Internet-augmented dialogue generation. arXiv preprint arXiv:2107.07566.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural Questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.

Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2018. Hallucinations in neural machine translation.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81.

Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. arXiv preprint arXiv:2005.00661.

Alexander H. Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research software platform. arXiv preprint arXiv:1705.06476.

Yixin Nie, Mary Williamson, Mohit Bansal, Douwe Kiela, and Jason Weston. 2020. I like fish, especially dolphins: Addressing contradictions in dialogue modeling. arXiv preprint arXiv:2012.13391.

Aaron Parisi, Yao Zhao, and Noah Fiedel. 2022. TALM: Tool augmented language models. arXiv preprint arXiv:2205.12255.

Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2022. Measuring and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu, et al. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.

Kurt Shuster, Mojtaba Komeili, Leonard Adolphs, Stephen Roller, Arthur Szlam, and Jason Weston. 2022a. Language models that seek for knowledge: Modular search & generation for dialogue and prompt completion. arXiv preprint arXiv:2203.13224.

Kurt Shuster, Jing Xu, Mojtaba Komeili, Da Ju, Eric Michael Smith, Stephen Roller, Megan Ung, Moya Chen, Kushal Arora, Joshua Lane, et al. 2022b. BlenderBot 3: A deployed conversational agent that continually learns to responsibly engage. arXiv preprint arXiv:2208.03188.

Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239.

Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2021. MuSiQue: Multihop questions via single-hop question composition. arXiv preprint arXiv:2108.00573.

Michal Yarom, Yonatan Bitton, Soravit Changpinyo, Roee Aharoni, Jonathan Herzig, Oran Lang, Eran Ofek, and Idan Szpektor. 2023. What you see is what you read? Improving text-image alignment evaluation. arXiv preprint arXiv:2305.10400.

Zheng Zhao, Shay B. Cohen, and Bonnie Webber. 2020. Reducing quantity hallucinations in abstractive summarization. arXiv preprint arXiv:2009.13312.
# A Appendix
# A.1 Generated Examples
Figure 6 and Figure 7 show cherry-picked examples of human-annotated dialogs from Bamboogle and auto-generated dialogs from MuSiQue.
Tables 4 and 5 show random examples from the auto-generated QReCC and MuSiQue dialogs.
# A.2 Filtering Examples
Table 6 shows examples of generated samples with different SBERT similarity scores used for filtering.
# A.3 Prompts and Instructions
Prompts (instruction + few-shot examples) for the auto-generated dialogs are presented in Table 7 for MuSiQue, and Table 8 for QReCC NQ. We show three prompts for each dataset due to page length; full prompt files are here: MuSiQue, QReCC.
The reverse generation model uses the same examples but in reversed order (to generate a query given a dialog).
The instruction for the reverse generation model and the PaLM zero-shot is: "Given a dialog that asks an indirect question, extract the concrete question".
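Reusing the same few-shot examples in both directions can be sketched as follows; `build_prompt` and the Input/Output formatting are hypothetical illustrations, not the paper's exact prompt format:

```python
def build_prompt(instruction, examples, reverse=False):
    """Few-shot prompt builder. With reverse=True the same examples
    are used to map dialog -> query instead of query -> dialog."""
    parts = [instruction]
    for query, dialog in examples:
        src, tgt = (dialog, query) if reverse else (query, dialog)
        parts.append(f"Input: {src}\nOutput: {tgt}")
    return "\n\n".join(parts)

# Toy example pair (query, dialog), abbreviated for illustration.
examples = [("who plays haley in wish upon a star?",
             "User: who plays the lead role ... User: who plays haley")]
forward = build_prompt("Generate a dialog for the query.", examples)
backward = build_prompt("Extract the concrete question.", examples, reverse=True)
```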
The instructions for Flan-U-PaLM-540B are a bit more detailed (we found this works better for this instruction-fine-tuned model): "Your task is to reformulate the last conversational query in a dialog to a fully specified, context-independent query that can be effectively handled by Google Search."
# A.4 Human Dialog Generation for Bamboogle
Bamboogle human constructed dialogs are available here: Bamboogle dialogs.
Instructions for the annotator: "Your task is to write a dialog given a question. You may use the internet in order to add factual information to the dialog. The dialog intent should be understandable yet challenging. The responses should be 'assistant like', not trying to mimic human emotions."
# A.5 Human Evaluation Instructions
Instructions for the intent-preserving task: "You will receive a dialog and a query, and need to label 1 if the dialog has the same intent as the query. If you are unsure, you can check to see whether both intents return the same Google search results."
Table 4: Random Examples from Auto-Generated QReCC
<table><tr><td>Dialog:
User: who plays the lead role in wish upon a star
Assistant: The cast of Wish Upon a Star include Katherine Heigl, Danielle Panabaker, and Mary Page Keller.
User: who plays haley
Question: who plays haley in wish upon a star?
Answer: Danielle Harris</td></tr><tr><td>Dialog:
User: where is assam
Assistant: Assam is a state in India
User: what is the capital
Assistant: The capital of Assam is Dispur
User: who lead the first revolt against british in the state
Question: who lead the first revolt against british in assam?
Answer: Gomdhar Konwar</td></tr><tr><td>Dialog:
User: when did last man standing air
Assistant: Last Man Standing premiered on ABC on October 11, 2011.
User: what is the main character's name
Assistant: Tim Allen stars as Mike Baxter, the central character of the series.
User: what are the other character's names
Assistant: The other characters are Mike's wife and three daughters.
User: who plays cami harris
Question: who plays cami harris on last man standing?
Answer: Sarah Gilman</td></tr><tr><td>Dialog:
User: who were the confederates in the civil war
Assistant: The Confederates in the American Civil War were the Confederate States of America, or the South.
User: who fought them
Question: who fought the confederates in the american civil war?
Answer: The Union</td></tr><tr><td>Dialog:
User: when did adele webber first come on grey's anatomy
Assistant: The character of Adele Webber was first introduced in the episode "Save Me".
User: who plays her
Question: who plays adele webber on grey's anatomy?
Answer: Loretta Devine</td></tr><tr><td>Dialog:
User: what did the effigy mounds look like
Assistant: The effigy mounds are earthen mounds built in the shapes of animals, humans, and other objects.
User: when were they discovered
Assistant: The first effigy mounds were discovered in the 1700s but were not identified as such until the late 1800s.
User: when were most of them built
Question: when were most of the effigy mounds built?
Answer: 350 - 1300 CE</td></tr></table>
Instructions for the natural vs. auto-generated task: "Can you guess which of the following dialogs is natural and which is auto-generated by a machine? There are 200 dialogs. 100 are synthetic, and 100 are natural. Enter 1 for Synthetic and 0 for Natural in the "synthetic?" field."
Figure 6: Examples from the human-annotated dialogues for Bamboogle. The model predictions above/below the line are of T5, before/after fine-tuning on MuSiQue dialogues.

Figure 7: Examples from the auto-generated dialogues for MuSiQue. The model predictions above/below the line are of T5, before/after fine-tuning on MuSiQue dialogues.
# A.6 Additional Examples
Table 9 shows examples of the ground-truth vs. the predicted queries from samples in the Bamboogle dialogs dataset.
# A.7 Parlai Models

We used available versions in Parlai (Miller et al., 2017) that can be accessed through an open API for a popular search engine. For the WizInt model we used zoo:sea/bart_fid_sqse/model. For the BlenderBot3 model we used zoo:bb3/bb3_3B/model -o gen/r2c2_bb3. The search engine is: https://github.com/scy6500/ParlAI_SearchEngine
Table 5: Random Examples from Auto-Generated MuSiQue

# Dialog:

User: I am looking for a movie to watch tonight.

Assistant: I think I can help you find a good movie. What are your preferences?

User: I like watching action movies.

Assistant: What about The Bourne Deception? It is a very popular action movie.

User: Who plays the main character of this movie?

Question: Who plays the main character of The Bourne Deception?

Answer: Matt Damon

# Dialog:

User: I just watched Charlie and the Chocolate Factory and I can't get it out of my mind!

Assistant: What an amazing movie! I love it!

User: I have an argument with my sister about the spoilt character.

Assistant: That's the thing - there are so many characters in this movie that are not just black and white. They are complex characters.

User: I know the spoilt character is a boy but can you tell me who played him?

Question: Who played the spoilt character in Charlie and the Chocolate Factory?

Answer: Julie Dawn Cole

# Dialog:

User: I was watching the movie 'Dolores Claiborne' yesterday. It made me curious about the producer of the movie.

Assistant: It is the movie with Kathy Bates, right?

User: Yes, and I was wondering about the producer, do you know who his spouse is?

Question: Who is the spouse of the producer of Dolores Claiborne?

Answer: Helen Mirren

# Dialog:

User: I have this friend that is totally into football.

Assistant: I see.

User: He is a big fan of the Al Janoub Stadium.

Assistant: I am familiar with the Al Janoub Stadium. It is a very modern stadium, and its design is very unique.

User: I'm not surprised. I'm wondering what league includes the operating group of the stadium.

Question: What league includes the operating group of Al Janoub Stadium?

Answer: Qatar Stars League

# Dialog:

User: I am really into books these days. I just finished reading 'The Final Testament of the Holy Bible' and can't get it out of my mind.

Assistant: I can see how this book would be interesting. A lot of people think about books like that.

User: It is a really good book. I think that this book was published by a great publisher.

Assistant: Yes, it is. It is one of the best in the UK.

User: Do you know who founded the publisher of this book?

Question: Who founded the publisher of The Final Testament of the Holy Bible?

Answer: Larry Gagosian
Table 6: Examples from the generated QReCC data with different SBERT similarity scores between the original Query $(Q)$ and the reversed Query $(R_{Q})$. The higher the similarity, the more the dialog's intent is the same as the original query. We took only dialogs with SBERT similarity $\geq 0.999$.

# SBERT Similarity: 0.999

# Dialog:

User: who is the chairman of the joint chiefs of staff

Assistant: General Joseph Dunford is the current Chairman of the Joint Chiefs of Staff.

User: who does he advise

$Q$: who does the chairman of the joint chiefs of staff advise

$R_{Q}$: Who does the chairman of the joint chiefs of staff advise?

# SBERT Similarity: 0.75

# Dialog:

User: who designed magic the gathering

Assistant: Richard Garfield is the creator of the Magic: The Gathering collectible card game.

User: who originally published the game

Assistant: Wizards of the Coast, a subsidiary of Hasbro, Inc. is the original publisher of Magic: The Gathering

User: who is the current publisher

$Q$: who created magic the gathering?

$R_{Q}$: who is the current publisher of the game Magic: The Gathering?

# SBERT Similarity: 0.5

# Dialog:

User: what year did the Miami Hurricanes win the National Championship

Assistant: The Miami Hurricanes won the national championship in 2001.

User: was the team undefeated

Assistant: The Hurricanes finished the 2001 season 12-0.

User: who did they win against in the national championship game

$Q$: who did the Maimi Hurricanes win against in 1983

$R_{Q}$: Who did the Miami Hurricanes play in the national championship game

# SBERT Similarity: 0.25

# Dialog:

User: who won the golden glove award at the 2018 fifa world cup

Assistant: Thibaut Courtois was named the winner of the Golden Glove at the 2018 FIFA World Cup.

User: what team did he play for?

$Q$: who won the golden glove at the 2018 fifa world cup?

$R_{Q}$: what team did thibaut courtois play for?

# SBERT Similarity: 0

# Dialog:

User: who inspired the book wonder by RJ Palacio

Assistant: The book Wonder by RJ Palacio was inspired by a childhood story of a boy with a severe facial difference.

User: what was the name of the boy

Assistant: The name of the boy was Julian.

User: what was his facial difference

Assistant: The boy had a craniofacial condition called Treacher Collins syndrome.

User: what is the condition

$Q$: what inspired the book Wonder

$R_{Q}$: what is Treacher Collins syndrome
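The filtering step described in the Table 6 caption keeps only dialogs whose original query and round-trip "reversed" query are nearly identical under an embedding similarity. A minimal sketch of that filter is below; this is not the authors' code. The real pipeline would plug in an SBERT sentence encoder for `embed`, while here a toy bag-of-words embedding (`toy_embed`, our invention) is used only to make the sketch self-contained. The 0.999 threshold comes from the caption.

```python
import math
from typing import Callable, List, Tuple

def cosine(u: List[float], v: List[float]) -> float:
    """Plain cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def filter_dialogs(
    pairs: List[Tuple[str, str]],         # (original query Q, reversed query R_Q)
    embed: Callable[[str], List[float]],  # e.g. an SBERT sentence encoder
    threshold: float = 0.999,             # threshold stated in the caption
) -> List[Tuple[str, str]]:
    """Keep only pairs whose queries embed to (nearly) the same direction."""
    return [(q, rq) for q, rq in pairs if cosine(embed(q), embed(rq)) >= threshold]

# Toy stand-in embedding so the sketch runs without a model download;
# a real run would replace this with SBERT embeddings.
def toy_embed(text: str) -> List[float]:
    vocab = ["who", "does", "the", "chairman", "advise", "publisher", "game"]
    words = text.lower().replace("?", "").split()
    return [float(words.count(w)) for w in vocab]

kept = filter_dialogs(
    [("who does the chairman advise", "Who does the chairman advise?"),
     ("who created the game", "who is the current publisher of the game")],
    toy_embed,
)
```

With the toy embedding, only the first pair survives the 0.999 cutoff, mirroring how only intent-preserving dialogs are retained.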
Table 7: MuSiQue Auto-Generated Dialogues Prompts

Write a dialog between an automated assistant and a user, and the dialog should indirectly ask the initial question you received.

Question: When was the institute that owned The Collegian founded?

Dialog:

User: I have this homework that I need to submit in my history of the journalism course. Can you help me find out some of the details?

Assistant: Sure, I am here to help

User: I am working together with my friend Darren. We are looking into different newspapers, focusing on the powers that own them. I'm currently looking into The Collegian

Assistant: I can find out about The Collegian. It is actually owned by an educational institute.

User: When was the institute founded?

Question: What city is the person who broadened the doctrine of philosophy of language from?

Dialog:

User: I am conducting some research in the area of doctrine of philosophy of language

Assistant: I see. It is a fascinating sub-field of linguistics. It developed in quite an interesting process.

User: I know that it was broadened by some important philosopher

Assistant: Indeed, a famous german philosopher broadened it by focusing on language and its limits and ambiguities

User: What city was he from?

Question: Who employs the person who wrote the book Animal Liberation?

Dialog:

User: I just finished reading the book Animal Liberation. To be honest, I can't get the book out of my mind!

Assistant: Indeed, some people find animal liberation extremely important. Others care a lot more about humans and do not view this topic as very important.

User: I wish I could write such influential books one day. I am not sure if my company would even let me do that.

Assistant: Well, fortunately for the Australian writer of the book, he works for a company that allowed him to fulfill himself and publish the book

User: Which company was that?
Table 8: QReCC NQ Auto-Generated Dialogues Prompts

Write a dialog between an automated assistant and a user, and the dialog should indirectly ask the initial question you received.

Question: Why was the great wall built?

Dialog:

User: where is the great wall of china located

Assistant: The Great Wall of China is built across the historical northern borders of China.

User: how long is the wall

Assistant: The Great Wall is 21,196 km (13,171 mi).

User: why was the wall built

Question: Which U.S. states produce the most crude oil?

Dialog:

User: What kind of oil is in North Dakota

Assistant: The oil-rich Bakken shale formation has made North Dakota the second-largest crude-producing state behind only Texas.

User: Why

Assistant: North Dakota generated 852 million barrels of Bakken crude. The Bakken and the Three Forks formation are $94\%$ of Dakota's current oil production, with about 1 million barrels a day.

User: In how many U.S. states do we find the production

Assistant: Crude oil is produced in 32 U.S. states and in U.S. coastal waters.

User: Which ones produce the most

Question: Where is henry cavill from?

Dialog:

User: where is superman in the justice league movie

Assistant: In the Justice League Movie, Superman has been dead for two years and mankind is in mourning him.

User: who plays the character

Assistant: Henry Cavill plays Superman in the Justice League movie

User: where is the actor from
<table><tr><td>Question</td><td>BlenderBot3 Predictions</td></tr><tr><td>What is the maximum airspeed (in km/h) of the third fastest bird?</td><td>Fastest bird</td></tr><tr><td>Who directed the highest grossing film?</td><td>Top grossing films</td></tr><tr><td>The husband of Lady Godiva was Earl of which Anglic kingdom?</td><td>Lady Godiva</td></tr><tr><td>Who founded the city where the founder of geometry lived?</td><td>Geometry</td></tr><tr><td>Who was the head of NASA during Apollo 11?</td><td>Apollo 11</td></tr><tr><td>When was the longest bridge in the world opened?</td><td>Longest bridge</td></tr><tr><td>When was the anime studio that made Sword Art Online founded?</td><td>Sword Art Online</td></tr><tr><td>What is the capital of the country where yoga originated?</td><td>Yoga origin</td></tr><tr><td>Who is the father of the father of George Washington?</td><td>George Washington father</td></tr><tr><td>Who was the first king of the longest Chinese dynasty?</td><td>first king of the longest dynasty</td></tr></table>

Table 9: Examples of the ground-truth queries from the Bamboogle dialogues with the BlenderBot3 queries. BlenderBot3 seems to be trained more on finding the topic than on asking concrete questions.
2023/q2d_ Turning Questions into Dialogs to Teach Models How to Search/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:db9413780e7d6675a51dcc3830c4362c874406adcbb46eef7e8c056fd9af6a37
size 706213
2023/q2d_ Turning Questions into Dialogs to Teach Models How to Search/layout.json
ADDED
The diff for this file is too large to render. See raw diff
2023/trlX_ A Framework for Large Scale Reinforcement Learning from Human Feedback/d0cd9fc5-d30b-4d39-a5f4-81d0ab79e251_content_list.json
ADDED
@@ -0,0 +1,2012 @@
[
    {
        "type": "text",
        "text": "trlX: A Framework for Large Scale Reinforcement Learning from Human Feedback",
        "text_level": 1,
        "bbox": [115, 83, 880, 120],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Alexander Havrilla $^{1,2}$ , Maksym Zhuravinskyi $^{1,3}$ , Duy Van Phung $^{1,3}$ , Aman Tiwari $^{4}$ , Jonathan Tow $^{1,3}$ , Shivanshu Purohit $^{5}$ , Stella Biderman $^{5,6}$ , Quentin Gregory Anthony $^{5,7}$ , Ethan Kim $^{8}$ , and Louis Castricato $^{1}$",
        "bbox": [188, 131, 811, 184],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "$^{1}$ CarperAI, $^{2}$ Georgia Tech, $^{3}$ Stability AI, $^{4}$ Independent Researcher*, $^{5}$ EleutherAI, $^{6}$ Booz Allen Hamilton, $^{7}$ Ohio State University, $^{8}$ vectorshift.ai",
        "bbox": [200, 195, 805, 229],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Abstract",
        "text_level": 1,
        "bbox": [260, 252, 339, 266],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Reinforcement learning from human feedback (RLHF) utilizes human feedback to better align large language models with human preferences via online optimization against a learned reward model. Current RLHF paradigms rely on Proximal Policy Optimization (PPO), which quickly becomes a challenge to implement and scale up to large architectures. To address this difficulty we present the trlX library as a feature-complete open-source framework for RLHF fine-tuning of models up to and exceeding 70 billion parameters. We implement support for multiple types of distributed training including distributed data parallel, model sharded, as well as tensor, sequential, and pipeline parallelism.",
        "bbox": [144, 277, 458, 491],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "To increase the accessibility of RLHF to researchers, we implement compute- and memory-saving features that give trlX the flexibility to support users with a wide range of compute resources. This includes offline RL methods like Implicit Language Q Learning (ILQL), low-rank adapters, and the Hydra architecture. We find offline fine-tuning offers competitive performance relative to online algorithms while being easier to implement, train, and scale. To evaluate our framework we train RLHF models on two separate well-known tasks using publicly available human preference data. Models trained with trlX achieve preference win-rates over baselines at rates comparable to the original works.",
        "bbox": [142, 495, 460, 721],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "1 Introduction",
        "text_level": 1,
        "bbox": [114, 733, 258, 747],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Since 2019, the prevailing training paradigm for large-language models (Brown et al., 2020; Raffel et al., 2019; Devlin et al., 2019) has comprised two parts: First a pre-training phase during which models are trained on a large corpus of text. Pretraining produces a general-purpose model which has learned the syntax and some semantics of natural text but is not easily controllable for any particular task. Fine-tuning is then used to adapt this general-purpose pre-trained model to a smaller, specialized",
        "bbox": [112, 757, 489, 917],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "corpus. However, this often comes with a decrease in model performance on other tasks. Further, the resulting model can be difficult to work with, requiring extensive prompt engineering.",
        "bbox": [507, 253, 884, 316],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "Reinforcement learning from human feedback (RLHF), has presented itself as an additional third stage of the language model training pipeline. In this stage, gathered human preference data is used to supervise fine-tune the pre-trained model and then train a reward model. The reward model assigns scalar values to (prompt, response) pairs that correspond to human preference. The supervised fine-tuned model can then be trained using Proximal Policy Optimization (PPO) (Schulman et al., 2017), an online reinforcement learning algorithm, to optimize against the learned reward model. The resulting models are better aligned with human preferences, leading to win-rates over the base model by up to $80\\%$ (Askell et al., 2021; Bai et al., 2022a; Ouyang et al., 2022), and producing more desirable and less harmful text with less prompting.",
        "bbox": [507, 317, 884, 607],
        "page_idx": 0
    },
    {
        "type": "text",
        "text": "However, the study of RLHF models in the academic community has been limited thus far by a need for open-source training frameworks and datasets. Online reinforcement learning via PPO is compute expensive and challenging to scale as we must store three copies of the model in memory during training. To address these issues we present trlX: a library supporting online and offline RLHF fine-tuning for language models of up to and exceeding 70 billion parameters. At smaller scales, we emphasize low-resource accessibility by incorporating features such as Hydra model architectures (Glaese et al., 2022), LoRA adapters (Hu et al., 2021), and DeepSpeed (Rajbhandari et al., 2019), which, when combined, can reduce memory overhead on GPT-J (Wang & Komatsuzaki, 2021) by up to $75\\%$ with minimal impact on the achieved",
        "bbox": [507, 609, 882, 882],
        "page_idx": 0
    },
    {
        "type": "page_footnote",
        "text": "* work done while at CarperAI",
        "bbox": [534, 892, 729, 904],
        "page_idx": 0
    },
    {
        "type": "page_footnote",
        "text": "Correspondence to: ahavrilla3@gatech.edu",
        "bbox": [537, 904, 823, 917],
        "page_idx": 0
    },
    {
        "type": "page_number",
        "text": "8578",
        "bbox": [480, 927, 519, 940],
        "page_idx": 0
    },
    {
        "type": "footer",
        "text": "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8578-8595 December 6-10, 2023 ©2023 Association for Computational Linguistics",
        "bbox": [159, 945, 835, 972],
        "page_idx": 0
    },
    {
        "type": "table",
        "img_path": "images/46369561bf274a0462d9ac0c4b3d2282b9d0ebe744ea4699a79e0608d16f49ee.jpg",
        "table_caption": [],
        "table_footnote": [],
        "table_body": "<table><tr><td></td><td colspan=\"2\">RL Algorithms</td><td colspan=\"3\">Parallelization Strategies</td><td colspan=\"2\">Features</td></tr><tr><td></td><td>Online</td><td>Offline</td><td>Tensor</td><td>Pipeline</td><td>Sequence</td><td>LoRA</td><td>Sweeps</td></tr><tr><td>RL4LM</td><td>✓</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>trl</td><td>✓</td><td></td><td>✓</td><td>*</td><td></td><td>✓</td><td></td></tr><tr><td>DS Chat</td><td>✓</td><td></td><td>✓</td><td>✓</td><td></td><td>✓</td><td></td></tr><tr><td>trlX (ours)</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr></table>",
        "bbox": [184, 80, 811, 197],
        "page_idx": 1
    },
    {
        "type": "text",
        "text": "Table 1: trlX feature comparisons with other libraries. * trl supports naive pipeline parallelism, which allows larger models to be run but is far less efficient.",
|
| 187 |
+
"bbox": [
|
| 188 |
+
112,
|
| 189 |
+
212,
|
| 190 |
+
882,
|
| 191 |
+
240
|
| 192 |
+
],
|
| 193 |
+
"page_idx": 1
|
| 194 |
+
},
|
| 195 |
+
{
|
| 196 |
+
"type": "text",
|
| 197 |
+
"text": "reward.TRLX training at this scale is compatible with most encoder-decoder and decoder-only architecture supported on the popular Hugging Face Hub (Wolf et al., 2019). For large, 20 billion parameter plus, training jobs we implement support for tensor, sequence, and pipeline parallelism via both the GPT-NeoX library (Aidonian et al., 2021) and NeMO-Megatron (Kuchaiev et al., 2019).",
|
| 198 |
+
"bbox": [
|
| 199 |
+
110,
|
| 200 |
+
266,
|
| 201 |
+
487,
|
| 202 |
+
394
|
| 203 |
+
],
|
| 204 |
+
"page_idx": 1
|
| 205 |
+
},
|
| 206 |
+
{
"type": "text",
"text": "trlX also supports Implicit Language Q-Learning (ILQL) as an offline alternative to online RL methods. We find fine-tuning via ILQL achieves preference win-rates against baselines close to those of PPO but at a fraction of the compute cost. Further, ILQL is more robust to reward model overfitting, which online algorithms can suffer from. To evaluate our framework we provide open-source replications of well-known papers from the RLHF literature including learning to summarize from human feedback (Stiennon et al., 2020) and Helpful/Harmless preference learning for a general purpose language assistant (Bai et al., 2022a). We find models trained with trlX achieve preference win-rates, as judged by human annotators, over baselines at rates comparable to the original works, validating our implementation. We open-source all supervised fine-tuned models, reward models, and RLHF models for further research, as well as the training framework, trlX.",
"bbox": [
115,
395,
489,
715
],
"page_idx": 1
},
{
"type": "text",
"text": "In summary, we make the following contributions:",
"bbox": [
112,
718,
489,
749
],
"page_idx": 1
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- trlX as a feature-complete, open-source library for reinforcement learning from human feedback supporting model sizes up to and exceeding 70 billion parameters. This includes benchmark examples implementing the first known open-source replications of several well-known RLHF models, offering insights into training and evaluation.",
"- A novel evaluation of offline RL based fine-"
],
"bbox": [
134,
763,
489,
917
],
"page_idx": 1
},
{
"type": "text",
"text": "tuning for preference learning at scale.",
"bbox": [
544,
267,
833,
282
],
"page_idx": 1
},
{
"type": "text",
"text": "- Release of all models at all scales involved in the training pipeline including supervised-fine tuned, reward, and RL tuned models.",
"bbox": [
531,
297,
880,
344
],
"page_idx": 1
},
{
"type": "text",
"text": "2 Background",
"text_level": 1,
"bbox": [
507,
362,
650,
379
],
"page_idx": 1
},
{
"type": "text",
"text": "Reinforcement Learning from Human Feedback Reinforcement learning from human feedback attempts to improve agent performance, either in sample efficiency or performance on downstream tasks, by incorporating some form of human feedback on agent behavior (Knox & Stone, 2009; Christiano et al., 2017; Stiennon et al., 2020). We focus on the 3-stage fine-tuning pipeline outlined by Stiennon et al. (2020) in which researchers first assemble a dataset of human-annotated preferences, then train a reward model to predict these preferences, and lastly train a policy to maximize the score of the resulting reward model. Similarly structured pipelines have been adapted to train many of the most recent interactive natural language assistant tools (Nakano et al., 2021; Ouyang et al., 2022; Bai et al., 2022a; Glaese et al., 2022).",
"bbox": [
507,
390,
882,
663
],
"page_idx": 1
},
{
"type": "text",
"text": "Nevertheless, collecting human preferences at scale can be cost-inefficient. Recent work by Bai et al. (2022b) proposes tractable oversight by using synthetic AI preferences instead of direct human labels. Similar works by Honovich et al. (2022); Wang et al. (2022a,b) generate instruction-following datasets by querying already aligned models like text-davinci-003 to generate both task and instruction-following responses.",
"bbox": [
507,
665,
884,
810
],
"page_idx": 1
},
{
"type": "text",
"text": "Scalable Training Frameworks There are many notable choices of distributed training frameworks for large-scale language model pre-training and fine-tuning, each implementing various parallelism schemes. These include DeepSpeed (Rajbhandari et al., 2019), Megatron-LM (Shoeybi et al., 2019;",
"bbox": [
507,
822,
882,
917
],
"page_idx": 1
},
{
"type": "page_number",
"text": "8579",
"bbox": [
480,
927,
519,
940
],
"page_idx": 1
},
{
"type": "text",
"text": "Kuchaiev et al., 2019), the GPT-NeoX library (Andonian et al., 2021) which combines DeepSpeed and Megatron-LM, Fairseq (Ott et al., 2019; Zhang et al., 2022), and T5X (Roberts et al., 2022) for TPU-based training.",
"bbox": [
112,
84,
487,
164
],
"page_idx": 2
},
{
"type": "text",
"text": "However, none of these frameworks are designed explicitly to support fine-tuning via RL and would, therefore, require significant work to retrofit. More recently, RL-specific fine-tuning libraries have become available. RL4LMs (Ramamurthy et al., 2022) implements online algorithms for fine-tuning mid-sized language models with reinforcement learning from human feedback and supports an impressive range of tasks and metrics. trl (Leandro, 2019), initially a smaller-scale library for transformer reinforcement learning, is a re-implementation of Ziegler et al. (2019) in PyTorch for sentiment-based fine-tuning. It has since been expanded in a manner similar to trlX to include training with DeepSpeed via Hugging Face accelerate. More recently, DeepSpeed-Chat (Yao et al., 2023) was released allowing for larger model training with better throughput. However, they do not allow for more advanced forms of parallelism supported in trlX.",
"bbox": [
115,
166,
489,
486
],
"page_idx": 2
},
{
"type": "text",
"text": "3 Training with trlX",
"text_level": 1,
"bbox": [
112,
500,
307,
516
],
"page_idx": 2
},
{
"type": "text",
"text": "trlX is designed to help mitigate the heavy computational costs felt by low-resource users while still allowing high-resource users the ability to get good performance. Roughly, we break our users into three separate resource profiles:",
"bbox": [
112,
526,
487,
608
],
"page_idx": 2
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"1. Single GPU users. In this low-resource use case we recommend our native PyTorch integration plus memory-saving features including Hydra architectures, low-rank adaptors, and 8-bit Adam (Dettmers et al., 2021).",
"2. Multi-GPU users. In this mid-resource use case we recommend our integration with Hugging Face accelerate (Gugger et al., 2022) leveraging DeepSpeed plus memory-saving features. We use this integration to comfortably train up to 20 billion parameter language models on a single node.",
"3. Multi-Node users. In this high-resource use case we recommend our integration with GPT-NeoX or NeMo-Megatron which allows for higher GPU efficiency and scaling than accelerate and DeepSpeed. We use this integration to"
],
"bbox": [
127,
621,
489,
919
],
"page_idx": 2
},
{
"type": "text",
"text": "train models up to 70 billion parameters: an unprecedented scale for open-source RLHF models.",
"bbox": [
544,
84,
882,
131
],
"page_idx": 2
},
{
"type": "text",
"text": "The framework is built around a base trainer from which integration-specific trainers can inherit. Independently, online and offline algorithms are implemented allowing for reuse in separate integrations. In particular, trlX supports PPO and A2C for online RL and ILQL for offline RL. The most expensive part of online PPO training is the model rollout, which can take up to $10\\mathrm{x}$ as long as a combined forward and backward pass. To efficiently maximize batch size for both rollouts and optimization steps, we decouple the rollout inference batch size from the PPO batch size via an orchestrator class. This allows online models to perform batched rollouts, reducing the bottleneck time spent on inference for each model.",
"bbox": [
507,
145,
884,
385
],
"page_idx": 2
},
{
"type": "text",
"text": "We integrate closely with the Hugging Face ecosystem, allowing for the training of most encoder-decoder and decoder-only models on the Hugging Face Hub, including widely used models such as T5 (Raffel et al., 2019) and Flan-T5 (Chung et al., 2022), GPT-J (Wang & Komatsuzaki, 2021), Pythia (Biderman et al., 2023), OPT (Zhang et al., 2022), and LLaMA (Touvron et al., 2023a,b).",
"bbox": [
507,
387,
882,
514
],
"page_idx": 2
},
{
"type": "text",
"text": "Fine-tuning large language models via human feedback with PPO is prohibitively expensive in terms of memory and FLOPs, requiring the user to store a student model, reference model, and similarly sized reward model in memory at all times. Additionally, reinforcement learning is notoriously brittle to hyperparameter choice, often requiring extensive sweeping to find optimal settings. To mitigate these costs we support parameter-saving techniques like LoRA (Hu et al., 2021) and a Hydra model architecture design (Glaese et al., 2022) which allows for shared frozen layers between policy, value, and reference networks. Similarly, ILQL models require non-standard Q-value heads and generation capabilities which are implemented separately for both integrations.",
"bbox": [
507,
517,
882,
774
],
"page_idx": 2
},
{
"type": "text",
"text": "3.1 Memory and Compute Saving Features",
"text_level": 1,
"bbox": [
507,
785,
863,
801
],
"page_idx": 2
},
{
"type": "text",
"text": "To benchmark the effect of memory and compute saving features on performance, we run a series of experiments on a baseline sentiments task for model sizes ranging from 125 million to 20 billion parameters. For each model size, we freeze a percentage of the model's layers in the Hydra architecture and observe the effect on reward, train",
"bbox": [
507,
806,
882,
917
],
"page_idx": 2
},
{
"type": "page_number",
"text": "8580",
"bbox": [
480,
928,
519,
940
],
"page_idx": 2
},
{
"type": "text",
"text": "time, and required memory. We also experiment with applying LoRA adapters of various ranks to all transformer matrices. Models are taken from the Pythia suite (Biderman et al., 2023) and trained for 6000 steps with a global batch size of 32 on 8x80GB A100s.",
"bbox": [
112,
84,
487,
179
],
"page_idx": 3
},
{
"type": "text",
"text": "Figure 1 demonstrates that, across all model sizes, about half of the layers can be frozen before the maximum attainable reward is no longer achieved. Interestingly, freezing all but two of a model's layers more adversely affects the larger models. We speculate this is due to larger models learning the majority of complex task-specific features in their middle layers, with downstream layers making minor adjustments.",
"bbox": [
115,
181,
489,
325
],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/dd04d8caf469d10ec36f9676f0184d8b6ff1970cfcc6d2c13a8a087b4a7942ef.jpg",
"image_caption": [
"Figure 1: Max reward achieved as a function of the number of model layers unfrozen. Each model achieves its maximum attainable reward with around half its layers frozen."
],
"image_footnote": [],
"bbox": [
121,
347,
453,
527
],
"page_idx": 3
},
{
"type": "text",
"text": "Figure 2 demonstrates the effect of layer freezing on memory savings. This is particularly useful for larger models since otherwise we must separately load the frozen reference model into GPU memory for inference. With all but two layers frozen we save both memory and computation costs for all but two layers of the reference. In particular, for larger model sizes we can save nearly $50\\%$ of the required memory while still achieving the maximum reward.",
"bbox": [
112,
613,
487,
758
],
"page_idx": 3
},
{
"type": "text",
"text": "On less toy-like problems, we also observe layer freezing helps stabilize the training process by reducing KL divergence from the base model. This helps mitigate the need for a KL-based penalty via a frozen reference model, in some cases allowing for it to be removed entirely. Further, in some cases partial freezing even imparts a beneficial inductive bias, allowing the model to achieve a higher reward than when trained with all parameters unfrozen.",
"bbox": [
112,
758,
487,
902
],
"page_idx": 3
},
{
"type": "text",
"text": "Similar memory-saving and regularizing",
"bbox": [
131,
903,
485,
919
],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/b86a8ae17f900136c6a69a73c92ebefc3d93947df396e81a64a8d7567a2786d0.jpg",
"image_caption": [
"Figure 2: Hydra memory consumption as a function of the number of unfrozen layers."
],
"image_footnote": [],
"bbox": [
522,
93,
847,
275
],
"page_idx": 3
},
{
"type": "text",
"text": "benefits can be seen with LoRA-based fine-tuning. When tuning all layers, training with LoRA rank 1 achieves max reward on the sentiments benchmark. At the 6.9 billion parameter scale, LoRA training fine-tunes only 0.03 percent of the model parameters and reduces memory usage by 3x. LoRA training can be combined with layer freezing to achieve further memory savings. With both optimizations RLHF can be performed for medium-scale models even on a single consumer-grade GPU. These memory savings and performance benefits carry over to the offline training regime with ILQL as well (see table 2). We theorize that limiting the rank of the parameter updates as well as freezing model layers provide beneficial regularization effects for both online and offline RL training.",
"bbox": [
507,
343,
884,
601
],
"page_idx": 3
},
{
"type": "text",
"text": "3.2 Comparison with other Frameworks",
"text_level": 1,
"bbox": [
507,
613,
843,
629
],
"page_idx": 3
},
{
"type": "text",
"text": "See table 1 for an outline of the presence of key trlX features in similar libraries. trlX is the only framework to support offline RL fine-tuning as well as the only framework to support large model fine-tuning at scale with pipeline, sequence, and tensor parallelism. Additionally, we are the most feature-complete, including tools for parameter-efficient fine-tuning and distributed hyperparameter sweeps. We include 10+ benchmark examples, providing end-to-end pipelines for several well-known RLHF tasks.",
"bbox": [
507,
634,
882,
810
],
"page_idx": 3
},
{
"type": "text",
"text": "DeepSpeed versus NeMo-Megatron trlX is competitive with existing open-source RLHF implementations at scale for online RL. We compare against DeepSpeed-Chat (Yao et al., 2023), a concurrent work on RLHF for LLMs that implements PPO. See table 3 for a performance com-",
"bbox": [
507,
822,
884,
917
],
"page_idx": 3
},
{
"type": "page_number",
"text": "8581",
"bbox": [
480,
927,
517,
940
],
"page_idx": 3
},
{
"type": "table",
"img_path": "images/476321949f6d79ab9fc5e8c94fa9cef3cca5a1c85f84ec3a040a3ae6caee74b4.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>Max Reward</td><td>Time (min)</td><td>GPUs</td></tr><tr><td>GPT-NeoX 20B</td><td>-1.88</td><td>156</td><td>32</td></tr><tr><td>GPT-NeoX 20B LoRA</td><td>-1.89</td><td>28</td><td>16</td></tr><tr><td>Pythia 6.9B</td><td>-1.77</td><td>286</td><td>16</td></tr><tr><td>Pythia 6.9B LoRA</td><td>-1.68</td><td>58</td><td>16</td></tr></table>",
"bbox": [
|
| 608 |
+
260,
|
| 609 |
+
80,
|
| 610 |
+
736,
|
| 611 |
+
186
|
| 612 |
+
],
|
| 613 |
+
"page_idx": 4
|
| 614 |
+
},
|
| 615 |
+
{
|
| 616 |
+
"type": "table",
|
| 617 |
+
"img_path": "images/f3919e3c8b8ddb58093c17142c12e3733747a29f29848b1c5ea2f2d40f56ac71.jpg",
|
| 618 |
+
"table_caption": [
|
| 619 |
+
"Table 2: Benchmarks for ILQL's time to max reward on Anthropic's Helpful QA dialogue dataset. All non-LoRA hyperparameters are kept the same as the base models, except for learning rate which is set to $2.0 \\times 10^{-4}$ . For GPT-NeoX-20B LoRA, the last 8 layers are trained with LoRA, for Pythia 6.9B LoRA, all layers are trained with LoRA."
|
| 620 |
+
],
|
| 621 |
+
"table_footnote": [],
|
| 622 |
+
"table_body": "<table><tr><td>Parameters</td><td>DS-Chat</td><td>trlX</td></tr><tr><td>OPT 1.3B</td><td>2.1</td><td>2.0</td></tr><tr><td>OPT 6.7B</td><td>0.44</td><td>0.41</td></tr><tr><td>OPT 30B</td><td>0.14* (LoRA)</td><td>0.12</td></tr><tr><td>OPT 60B</td><td>0.076* (LoRA)</td><td>0.043†</td></tr></table>",
|
| 623 |
+
"bbox": [
|
| 624 |
+
164,
|
| 625 |
+
280,
|
| 626 |
+
436,
|
| 627 |
+
347
|
| 628 |
+
],
|
| 629 |
+
"page_idx": 4
|
| 630 |
+
},
|
| 631 |
+
{
|
| 632 |
+
"type": "text",
|
| 633 |
+
"text": "Table 3: Comparison of trlX and DeepSpeed-Chat training speed for online RL (PPO) for OPT architecture, measured in samples/s/GPU. *Performance for 30B and 60B DeepSpeed-Chat converted from Table 2 of (Yao et al., 2023), 4 hours to train on 131.9k samples using 64 GPUs. † For OPT 66B, we use Hydra with 50% trainable parameters.",
|
| 634 |
+
"bbox": [
|
| 635 |
+
112,
|
| 636 |
+
361,
|
| 637 |
+
489,
|
| 638 |
+
461
|
| 639 |
+
],
|
| 640 |
+
"page_idx": 4
|
| 641 |
+
},
|
| 642 |
+
{
|
| 643 |
+
"type": "text",
|
| 644 |
+
"text": "parison between DeepSpeed-Chat andTRLX. Note that the 30B and 60B parameters performance figure for DeepSpeed-Chat uses LoRA based training, whereasTRLX uses full parameter finetuning. We keep the rest of the benchmark settings the same as the published DeepSpeed-Chat scripts.",
|
| 645 |
+
"bbox": [
|
| 646 |
+
112,
|
| 647 |
+
483,
|
| 648 |
+
489,
|
| 649 |
+
583
|
| 650 |
+
],
|
| 651 |
+
"page_idx": 4
|
| 652 |
+
},
|
| 653 |
+
{
|
| 654 |
+
"type": "text",
|
| 655 |
+
"text": "4 Benchmarks and Results",
|
| 656 |
+
"text_level": 1,
|
| 657 |
+
"bbox": [
|
| 658 |
+
112,
|
| 659 |
+
593,
|
| 660 |
+
364,
|
| 661 |
+
608
|
| 662 |
+
],
|
| 663 |
+
"page_idx": 4
|
| 664 |
+
},
|
| 665 |
+
{
|
| 666 |
+
"type": "text",
|
| 667 |
+
"text": "We benchmark trlX on two popular RLHF tasks: OpenAI's learning to summarize TL;DR dataset (Stiennon et al., 2020) and Anthropic's Helpful QA dataset (Ganguli et al., 2022). We release all associated code and models as open-source for further study.",
|
| 668 |
+
"bbox": [
|
| 669 |
+
112,
|
| 670 |
+
617,
|
| 671 |
+
489,
|
| 672 |
+
715
|
| 673 |
+
],
|
| 674 |
+
"page_idx": 4
|
| 675 |
+
},
|
| 676 |
+
{
|
| 677 |
+
"type": "text",
|
| 678 |
+
"text": "Training setup and Hyperparameters Unless otherwise noted, we use the same fixed set of hyperparameters listed in the appendix across all training runs. We find good performance is particularly sensitive to some parameters:",
|
| 679 |
+
"bbox": [
|
| 680 |
+
112,
|
| 681 |
+
722,
|
| 682 |
+
489,
|
| 683 |
+
804
|
| 684 |
+
],
|
| 685 |
+
"page_idx": 4
|
| 686 |
+
},
|
| 687 |
+
{
|
| 688 |
+
"type": "list",
|
| 689 |
+
"sub_type": "text",
|
| 690 |
+
"list_items": [
|
| 691 |
+
"- Batch size: Larger batch sizes of at least 128 global samples per iteration are used. This reduces variance across runs and stabilizes performance.",
|
| 692 |
+
"- Reward normalization: Upon rollout collection we normalize all rewards by a running"
|
| 693 |
+
],
|
| 694 |
+
"bbox": [
|
| 695 |
+
136,
|
| 696 |
+
813,
|
| 697 |
+
489,
|
| 698 |
+
919
|
| 699 |
+
],
|
| 700 |
+
"page_idx": 4
|
| 701 |
+
},
|
| 702 |
+
{
|
| 703 |
+
"type": "text",
|
| 704 |
+
"text": "standard deviation estimate. We find this normalization, notably without subtraction by the running mean, significantly improves performance. Additionally, we do a second normalization of the advantages at the batch level.",
|
| 705 |
+
"bbox": [
|
| 706 |
+
544,
284,
884,
365
],
"page_idx": 4
},
{
"type": "text",
"text": "- Learning rate: The learning rate is chosen to be $5\\mathrm{E}-6$, an order of magnitude smaller than that used for supervised fine-tuning.",
"bbox": [
531,
374,
882,
422
],
"page_idx": 4
},
{
"type": "text",
"text": "4.1 Summarization",
"text_level": 1,
"bbox": [
507,
432,
678,
445
],
"page_idx": 4
},
{
"type": "text",
"text": "Setup Learning to Summarize from Human Feedback (Stiennon et al., 2020) introduces the TL;DR dataset. The first component, consisting of 129,772 Reddit posts, is utilized for supervised fine-tuning. The second component, utilized for training the reward model, consists of 92,534 samples in the training dataset and 83,629 samples in the validation set.",
"bbox": [
507,
451,
882,
580
],
"page_idx": 4
},
{
"type": "text",
"text": "We start by training supervised fine-tuned (SFT) models, fine-tuning 440M, 1.4B, 6.9B, and 20B models from the Pythia suite on the SFT dataset. We use the AdamW optimizer with a learning rate of $1\\mathrm{E}-5$ followed by a linear scheduler with a short warm-up. The best model is selected by average ROUGE score on the validation set.",
"bbox": [
507,
581,
882,
692
],
"page_idx": 4
},
{
"type": "text",
"text": "To train our reward models (RMs) we initialize with SFT checkpoints, replacing the causal head with a scalar output. Using the second component of the dataset we then minimize the pairwise preference loss (Stiennon et al., 2020). We find our best-performing reward model is the 6.9 billion parameter GPT-J (6B) trained with a batch size of 32.",
"bbox": [
507,
694,
884,
820
],
"page_idx": 4
},
{
"type": "text",
"text": "With a trained reward model we are now able to perform RL-based fine-tuning of models from 440M to 20B. Posts from both components of the TL;DR dataset are used during training. We use the best-performing reward model, 6.9B, as our reward signal for all experiments. To train models online",
"bbox": [
507,
822,
882,
919
],
"page_idx": 4
},
{
"type": "page_number",
"text": "8582",
"bbox": [
480,
927,
521,
940
],
"page_idx": 4
},
{
"type": "text",
"text": "we initialize from the SFT checkpoints and use PPO with four epochs per batch and a KL penalty with a coefficient of 0.005. We keep 8 layers unfrozen. To train the offline models we label posts from both components of the dataset and their associated summaries with $\\pm 1$ respectively. We note labeling in this way performs better than labeling the data with the learned RM. This reward-labeled dataset is then used to train a base model using the ILQL algorithm. Notably, we do not initialize from the SFT checkpoint as we saw minimal benefit in the offline regime.",
"bbox": [
112,
84,
489,
277
],
"page_idx": 5
},
{
"type": "text",
"text": "Results We attach a table in the appendix showing ROUGE scores of the SFT and PPO 6.9B models on the test set of the TL;DR dataset. In comparison with (Stiennon et al., 2020), the trend of the ROUGE scores is similar, with the SFT model performing slightly better than the PPO model.",
"bbox": [
|
| 806 |
+
112,
|
| 807 |
+
291,
|
| 808 |
+
489,
|
| 809 |
+
403
|
| 810 |
+
],
|
| 811 |
+
"page_idx": 5
|
| 812 |
+
},
|
| 813 |
+
{
|
| 814 |
+
"type": "text",
|
| 815 |
+
"text": "More critically, we conduct a human evaluation to better evaluate how well our online PPO and offline ILQL models adhere to human preferences as compared to the baseline SFT. To do so we select stories from a subset of prompts from the test portion of our dataset and ask annotators to choose between two candidate summaries. In particular for each model size we run two evaluations: A comparison of PPO to SFT and a comparison of ILQL to SFT. In addition to choosing between two candidate summarizations, we ask users to score the coverage, clarity, and inconsistency on a 1-7 Likert scale. The results are reported in fig. 3 and fig. 4.",
|
| 816 |
+
"bbox": [
|
| 817 |
+
112,
|
| 818 |
+
405,
|
| 819 |
+
489,
|
| 820 |
+
630
|
| 821 |
+
],
|
| 822 |
+
"page_idx": 5
|
| 823 |
+
},
|
| 824 |
+
{
|
| 825 |
+
"type": "text",
|
| 826 |
+
"text": "We evaluate each model via relative improvement over the its corresponding SFT baseline, in part to demonstrate the effectiveness of RLHF on even small model sizes.",
|
| 827 |
+
"bbox": [
|
| 828 |
+
112,
|
| 829 |
+
631,
|
| 830 |
+
489,
|
| 831 |
+
695
|
| 832 |
+
],
|
| 833 |
+
"page_idx": 5
|
| 834 |
+
},
{
"type": "text",
"text": "ILQL slightly underperforms PPO at a fraction of the cost Figure 3 demonstrates both ILQL and PPO achieving greater than $10\\%$ win-rates across most model sizes. At 6B and 20B our PPO model achieves a greater than $70\\%$ win-rate against its SFT baseline. We also see ILQL models are very competitive, despite requiring much less compute to train. Interestingly, we observe ILQL produces distinctly shorter, more concise summaries than PPO and even the SFT baseline. Despite this, ILQL is still often preferred over the longer SFT baseline due to better coverage of key points. This suggests more sophisticated offline training methods could",
"bbox": [
112,
709,
489,
919
],
"page_idx": 5
},
{
"type": "image",
"img_path": "images/f189bb051006a7fc2de6f1ab574e68618e48537041523ae46cf3812affc5b332.jpg",
"image_caption": [
"Figure 3: Win rate of ILQL, PPO fine-tuned models against their relative SFT baselines on the summarization task. Note comparisons were done against the same-sized SFT baseline (e.g. 6B SFT against 6B PPO). The OpenAI baseline, measured as the win-rate of their 6B model over human generated summaries, is included for reference."
],
"image_footnote": [],
"bbox": [
531,
93,
847,
252
],
"page_idx": 5
},
{
"type": "image",
"img_path": "images/ff179585609134ab77fa626fb03c2b9ebe46615239ce8d4cfaf931faf179fa1c.jpg",
"image_caption": [
"Figure 4: 20B model Likert scores for coverage, clarity, and inconsistency."
],
"image_footnote": [],
"bbox": [
537,
394,
847,
546
],
"page_idx": 5
},
{
"type": "text",
"text": "potentially be developed as a more compute-efficient alternative to PPO.",
"bbox": [
507,
624,
882,
655
],
"page_idx": 5
},
{
"type": "text",
"text": "4.2 Helpful QA Dialogue",
"text_level": 1,
"bbox": [
507,
670,
721,
686
],
"page_idx": 5
},
{
"type": "text",
"text": "Setup Helpful Harmless RLHF (Ganguli et al., 2022), or HH-RLHF for short, consists of $118\\mathrm{k}$ sample interactions between a user and an AI assistant. It can be broken further into three parts: an initial dataset of $42\\mathrm{k}$ prompt, response triples created by a prompted language model, $40\\mathrm{k}$ samples created by re-ranking responses from the same prompted model, and a final set of $22\\mathrm{k}$ responses from an initial RLHF model. The first two parts are called the static subset. We use the helpful portion of the static dataset for training and evaluation. Each interaction sample consists of a dialogue history ending with an utterance from the user. This is followed by a preferred or chosen assistant re",
"bbox": [
505,
694,
884,
917
],
"page_idx": 5
},
{
"type": "page_number",
"text": "8583",
"bbox": [
480,
927,
519,
940
],
"page_idx": 5
},
{
"type": "image",
"img_path": "images/d379f24c3c665060d2607fb10a12bcb3c4fb33382d46b199f0fbf78688e13d2c.jpg",
"image_caption": [
"Helpful Reward Model Accuracy vs. Number of Comparisons",
"Figure 5: Accuracy of reward model on test set against number of training comparisons. We observe a roughly $1.8\\%$ increase in model accuracy per 10k training samples."
],
"image_footnote": [],
"bbox": [
119,
96,
473,
259
],
"page_idx": 6
},
{
"type": "text",
"text": "sponse and a rejected response.",
"bbox": [
112,
355,
349,
370
],
"page_idx": 6
},
{
"type": "text",
"text": "We train supervised fine-tuned SFT baseline models by fine-tuning vanilla models of 160M, 1.4B, 6.9B, and 20B parameters on the chosen responses for each sample. Training is done for one epoch with a $lr = 5\\mathrm{E}-5$. Note we mask loss on the dialogue history, only backpropagating loss on the response tokens. This serves as our baseline.",
"bbox": [
112,
370,
487,
498
],
"page_idx": 6
},
{
"type": "text",
"text": "We then independently train reward models of sizes 160M-20B. As with summarization, we warm start by initializing from the SFT models. As above we train for one epoch with $lr = 5\\cdot 10^{-6}$. We observe adding the supervised warmup increases test accuracy up to $2\\%$. Our highest performing model is a 6 billion parameter GPT-J, which achieves 0.72 accuracy on the static test set. We use this as the default reward model RM for all RL based fine-tuning.",
"bbox": [
112,
500,
489,
659
],
"page_idx": 6
},
{
"type": "text",
"text": "With our RM we can fine-tune our baseline SFT models using trlX. Our training dataset consists of a set of input prompts taken from the entire static dataset. We augment this with multi-turn prompts and responses generated synthetically by text-davinci-003. Details on how this synthetic data is created can be found in the appendix. Altogether this forms 200k prompts for our RL training dataset.",
"bbox": [
112,
661,
489,
804
],
"page_idx": 6
},
{
"type": "text",
"text": "The number of training steps is kept constant at 9000 with an effective batch size of 128. A learning rate between $1 \\cdot 10^{-6}$ and $8 \\cdot 10^{-6}$ is used for different model sizes. We keep eight layers unfrozen. A constant KL penalty with a coefficient of 0.005 is used. We call the resulting series of models PPO.",
"bbox": [
112,
806,
489,
917
],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/ea1e21140fd93ecc84a81c12b9b8d905967a9174c438a33fa500007e9719391c.jpg",
"image_caption": [
"Figure 6: Mean performance of models zero-shot on HellaSwag, TriviaQA, LAMBADA, Arc Easy, Arc Challenge, and Open Book QA. A table of full results is shared in the appendix."
],
"image_footnote": [],
"bbox": [
532,
103,
848,
269
],
"page_idx": 6
},
{
"type": "text",
"text": "In particular, we found it critical to train with a sufficiently large batch to ensure robust PPO gradient estimates. Additionally, if training is run for too long or the KL penalty is too weak we observe heavy overfitting to the reward model. We practice early stopping to prevent such overfitting. Larger batch sizes also have the added effect of mitigating overfitting simply by decreasing the total number of steps over the prompt dataset.",
"bbox": [
505,
370,
884,
516
],
"page_idx": 6
},
{
"type": "text",
"text": "In addition to PPO we train models at sizes from 160M to 20B using ILQL. We assign a reward of $+1$ to the chosen trajectories and a reward of $-1$ to the rejected trajectories. We call the resulting set of models ILQL. Surprisingly, the $\\pm 1$ reward assignment empirically outperforms labeling chosen and rejected responses via rewards learned from the RM. We believe this is because while RM rewards are denser, they are also in some cases inaccurate, and as a result the expected inequality $r_{\\text{chosen}} > r_{\\text{rejected}}$ for a given dialogue is not respected, introducing noise, whereas a $\\pm 1$ assignment stays faithful to the underlying human preferences. This assignment has the added benefit of requiring far less compute as no reward model needs to be learned.",
"bbox": [
505,
517,
885,
772
],
"page_idx": 6
},
{
"type": "text",
"text": "In addition to the above models we also train and evaluate a final set, Vanilla-PPO, which applies PPO based RL fine-tuning via our RM without initializing from the supervised SFT checkpoints. We found this is only feasible for larger models, 6B and 20B, which are able to successfully optimize reward. This underscores the importance of collecting supervised fine-tuning data for sufficiently difficult tasks and weak models.",
"bbox": [
507,
774,
885,
917
],
"page_idx": 6
},
{
"type": "page_number",
"text": "8584",
"bbox": [
480,
928,
521,
940
],
"page_idx": 6
},
{
"type": "text",
"text": "Results We then evaluate vanilla models, SFT models, PPO models, Vanilla-PPO models, and ILQL models on a set of common academic benchmarks including LAMBADA, ARC, Open-BookQA, TriviaQA, and HellaSwag using Gao et al. (2021). Figure 6 plots the mean accuracy of each model class on the benchmarks. A full table is included in the appendix. We find supervised fine-tuning significantly reduces benchmark performance. We note when done improperly, e.g. by fine-tuning on entire dialogues instead of responses, the effect is even more pronounced. RL based fine-tuning on top of SFT improves results slightly but not significantly.",
"bbox": [
112,
84,
489,
310
],
"page_idx": 7
},
{
"type": "text",
"text": "The 'Alignment Tax' comes from SFT Surprisingly, in contrast, fine-tuning with RL alone, as done in the Vanilla-PPO models, incurs much less penalty and in the 6.9B case even slightly improves performance. This answers questions about the existence of an alignment tax when fine-tuning with RLHF. OpenAI reported such a tax with InstructGPT (Ouyang et al., 2022), particularly after supervised fine-tuning, but did not report results for strictly RL based fine-tuning. Conversely, Anthropic (Bai et al., 2022a) demonstrated small gains in benchmark performance for sufficiently large models after RL based fine-tuning but did not utilize an SFT warm-up. These results suggest such a tax is primarily due to the supervised fine-tuning instead of RL based fine-tuning. We note one mitigation technique is to mix pre-training data into the SFT and RL fine-tuning distributions, as done in Ouyang et al. (2022).",
"bbox": [
115,
323,
489,
627
],
"page_idx": 7
},
{
"type": "text",
"text": "This suggests the importance of a high-quality SFT training dataset for mitigating benchmark regression while still appropriately learning the desired behavior.",
"bbox": [
112,
630,
489,
694
],
"page_idx": 7
},
{
"type": "text",
"text": "In addition to automatic benchmark evaluation we conduct a human evaluation in which labelers choose between responses generated by a model and the comparably sized supervised fine-tuned baseline. The results are reported in fig. 7. Note we examine win-rate of models against a baseline of the same size, in contrast to previous work (Bai et al., 2022a; Ouyang et al., 2022). We attach annotator instructions in the appendix.",
"bbox": [
112,
696,
489,
841
],
"page_idx": 7
},
{
"type": "text",
"text": "RLHF can benefit smaller models too Across all model sizes we observe at least a $60\\%$ win-rate for the PPO trained model against the SFT baseline. Additionally the offline trained ILQL models",
"bbox": [
112,
854,
489,
917
],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/97d2b9f1f4962637a69dffa6f157fe1147667754ac7631be49fc3b2facd59175.jpg",
"image_caption": [
"Figure 7: Win rate of prompted, PPO RLHF, and ILQL RLHF models at 160M, 1.4B, 6.9B, and 20B parameters. Comparisons were done against the same-sized SFT baseline (e.g. 6.9B SFT against 6.9B PPO)."
],
"image_footnote": [],
"bbox": [
531,
93,
847,
252
],
"page_idx": 7
},
{
"type": "text",
"text": "are very competitive, similarly achieving at least a $60\\%$ win-rate with a small fraction of the compute. Further, we qualitatively observe ILQL is significantly more robust to reward overfitting than online PPO based fine-tuning, as the online regime requires a combination of large batch sizes and early stopping to mitigate such overfitting. Finally, we remark the prompted baseline is already relatively strong, likely because the task is already sufficiently in distribution for larger models. This is further supported by the ability of large vanilla models to successfully optimize reward with no supervised fine-tuning.",
"bbox": [
505,
351,
884,
558
],
"page_idx": 7
},
{
"type": "text",
"text": "In addition to collecting win-rates we also collect Likert scores on a scale of 1-7 for response Helpfulness, Harmfulness, and Honesty. Results are reported in the appendix.",
"bbox": [
507,
561,
882,
625
],
"page_idx": 7
},
{
"type": "text",
"text": "5 Conclusion",
"text_level": 1,
"bbox": [
507,
639,
640,
653
],
"page_idx": 7
},
{
"type": "text",
"text": "Ethics We present trlX as an open-source framework for training large language models at scale using reinforcement learning from human feedback. Even with fine-tuning via RLHF, LLMs are still prone to hallucination and bias at inference time, necessitating further research on mitigations. We hope researcher access to trlX as an open-source implementation of the RLHF pipeline will help facilitate this research.",
"bbox": [
507,
665,
884,
810
],
"page_idx": 7
},
{
"type": "text",
"text": "Limitations While it is highly performant, PPO suffers from limitations in both implementation difficulty and hyperparameter sensitivity at train time. Offline methods such as ILQL are both easier to implement and more compute efficient, but still do not achieve the same performance",
"bbox": [
507,
822,
882,
917
],
"page_idx": 7
},
{
"type": "page_number",
"text": "8585",
"bbox": [
480,
928,
519,
940
],
"page_idx": 7
},
{
"type": "text",
"text": "as PPO.",
"bbox": [
114,
84,
178,
98
],
"page_idx": 8
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [
115,
127,
213,
142
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Andonian, A., Anthony, Q., Biderman, S., Black, S., Gali, P., Gao, L., Hallahan, E., Levy-Kramer, J., Leahy, C., Nestler, L., et al. Gpt-neox: Large scale autoregressive language modeling in pytorch. GitHub Repo, 2021.",
"Askell, A., Bai, Y., Chen, A., Drain, D., Ganguli, D., Henighan, T., Jones, A., Joseph, N., Mann, B., DasSarma, N., et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.",
"Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., DasSarma, N., Drain, D., Fort, S., Ganguli, D., Henighan, T. J., Joseph, N., Kadavath, S., Kernion, J., Conerly, T., El-Showk, S., Elhage, N., Hatfield-Dodds, Z., Hernandez, D., Hume, T., Johnston, S., Kravec, S., Lovitt, L., Nanda, N., Olsson, C., Amodei, D., Brown, T. B., Clark, J., McCandlish, S., Olah, C., Mann, B., and Kaplan, J. Training a helpful and harmless assistant with reinforcement learning from human feedback. ArXiv, abs/2204.05862, 2022a.",
"Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, A., Mirhoseini, A., McKinnon, C., et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022b.",
"Biderman, S., Bicheno, K., and Gao, L. Datasheet for the pile. arXiv preprint arXiv:2201.07311, 2022.",
"Biderman, S. R., Schoelkopf, H., Anthony, Q. G., Bradley, H., O'Brien, K., Hallahan, E., Khan, M. A., Purohit, S., Prashanth, U. S., Raff, E., Skowron, A., Sutawika, L., and van der Wal, O. Pythia: A suite for analyzing large language models across training and scaling. ArXiv, abs/2304.01373, 2023.",
"Black, S., Biderman, S., Hallahan, E., Anthony, Q., Gao, L., Golding, L., He, H., Leahy, C., McDonell, K., Phang, J., Pieler, M., Prashanth, U. S., Purohit, S., Reynolds, L., Tow, J., Wang, B., and Weinbach, S. GPT-NeoX-20B: An open-source autoregressive language model. In Proceedings of the ACL Workshop on Challenges & Perspectives in Creating Large Language Models, 2022. URL https://arxiv.org/abs/2204.06745.",
"Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M. F., and Lin, H. (eds.), Advances in Neural Information"
],
"bbox": [
115,
149,
489,
917
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Processing Systems, volume 33, pp. 1877-1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.",
"Christiano, P. F., Leike, J., Brown, T. B., Martic, M., Legg, S., and Amodei, D. Deep reinforcement learning from human preferences. ArXiv, abs/1706.03741, 2017.",
"Chung, H. W., Hou, L., Longpre, S., Zoph, B., Tay, Y., Fedus, W., Li, E., Wang, X., Dehghani, M., Brahma, S., Webson, A., Gu, S. S., Dai, Z., Suzgun, M., Chen, X., Chowdhery, A., Valter, D., Narang, S., Mishra, G., Yu, A. W., Zhao, V., Huang, Y., Dai, A. M., Yu, H., Petrov, S., hsin Chi, E. H., Dean, J., Devlin, J., Roberts, A., Zhou, D., Le, Q. V., and Wei, J. Scaling instruction-finetuned language models. ArXiv, abs/2210.11416, 2022.",
"Dettmers, T., Lewis, M., Shleifer, S., and Zettlemoyer, L. 8-bit optimizers via block-wise quantization. ArXiv, abs/2110.02861, 2021.",
"Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. ArXiv, abs/1810.04805, 2019.",
"Ganguli, D., Lovitt, L., Kernion, J., Askell, A., Bai, Y., Kadavath, S., Mann, B., Perez, E., Schiefer, N., Ndousse, K., et al. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858, 2022.",
"Gao, L., Biderman, S., Black, S., Golding, L., Hoppe, T., Foster, C., Phang, J., He, H., Thite, A., Nabeshima, N., et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.",
"Gao, L., Tow, J., Biderman, S., Black, S., DiPofi, A., Foster, C., Golding, L., Hsu, J., McDonell, K., Muennighoff, N., et al. A framework for few-shot language model evaluation. Version v0.0.1, Sept. 2021.",
"Glaese, A., McAleese, N., Trębacz, M., Aslanides, J., Firoiu, V., Ewalds, T., Rauh, M., Weidinger, L., Chadwick, M., Thacker, P., Campbell-Gillingham, L., Uesato, J., Huang, P.-S., Comanescu, R., Yang, F., See, A., Dathathri, S., Greig, R., Chen, C., Fritz, D., Elias, J. S., Green, R., Mokrá, S., Fernando, N., Wu, B., Foley, R., Young, S., Gabriel, I., Isaac, W. S., Mellor, J. F. J., Hassabis, D., Kavukcuoglu, K., Hendricks, L. A., and Irving, G. Improving alignment of dialogue agents via targeted human judgements. ArXiv, abs/2209.14375, 2022.",
"Gugger, S., Debut, L., Wolf, T., Schmid, P., Mueller, Z., and Mangrulkar, S. Accelerate: Training and inference at scale made simple, efficient and adaptable. https://github.com/huggingface/accelerate, 2022."
],
"bbox": [
510,
85,
884,
917
],
"page_idx": 8
},
{
"type": "page_number",
"text": "8586",
"bbox": [
480,
928,
519,
940
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Honovich, O., Scialom, T., Levy, O., and Schick, T. Unnatural instructions: Tuning language models with (almost) no human labor. arXiv preprint arXiv:2212.09689, 2022.",
"Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., and Chen, W. Lora: Low-rank adaptation of large language models. ArXiv, abs/2106.09685, 2021.",
"Knox, W. B. and Stone, P. Interactively shaping agents via human reinforcement: The tamer framework. In The Fifth International Conference on Knowledge Capture, September 2009. URL http://www.cs.utexas.edu/users/ai-lab?KCAP09-knox.",
"Kuchaiev, O., Li, J., Nguyen, H., Hrinchuk, O., Leary, R., Ginsburg, B., Kriman, S., Beliaev, S., Lavrukhin, V., Cook, J., et al. Nemo: a toolkit for building ai applications using neural modules. arXiv preprint arXiv:1909.09577, 2019.",
"von Werra, L. Transformer reinforcement learning. https://github.com/lvwerra/trl, 2019.",
"Nakano, R., Hilton, J., Balaji, S. A., Wu, J., Ouyang, L., Kim, C., Hesse, C., Jain, S., Kosaraju, V., Saunders, W., Jiang, X., Cobbe, K., Eloundou, T., Krueger, G., Button, K., Knight, M., Chess, B., and Schulman, J. Webgpt: Browser-assisted question-answering with human feedback. ArXiv, abs/2112.09332, 2021.",
"Ott, M., Edunov, S., Baevski, A., Fan, A., Gross, S., Ng, N., Grangier, D., and Auli, M. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.",
"Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L. E., Simens, M., Askell, A., Welinder, P., Christiano, P. F., Leike, J., and Lowe, R. J. Training language models to follow instructions with human feedback. ArXiv, abs/2203.02155, 2022.",
"Raffel, C., Shazeer, N. M., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. ArXiv, abs/1910.10683, 2019.",
"Rajbhandari, S., Rasley, J., Ruwase, O., and He, Y. Zero: Memory optimizations toward training trillion parameter models. SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-16, 2019.",
"Ramamurthy, R., Ammanabrolu, P., Brantley, K., Hessel, J., Sifa, R., Bauckhage, C., Hajishirzi, H., and Choi, Y. Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization. ArXiv, abs/2210.01241, 2022."
],
"bbox": [
115,
85,
485,
917
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Roberts, A., Chung, H. W., Levskaya, A., Mishra, G., Bradbury, J., Andor, D., Narang, S., Lester, B., Gaffney, C., Mohiuddin, A., Hawthorne, C., Lewkowycz, A., Salcianu, A., van Zee, M., Austin, J., Goodman, S., Soares, L. B., Hu, H., Tsvyashchenko, S., Chowdhery, A., Bastings, J., Bulian, J., Garcia, X., Ni, J., Chen, A., Kenealy, K., Clark, J. H., Lee, S., Garrette, D., Lee-Thorp, J., Raffel, C., Shazeer, N., Ritter, M., Bosma, M., Passos, A., Maitin-Shepard, J., Fiedel, N., Omernick, M., Saeta, B., Sepassi, R., Spiridonov, A., Newlan, J., and Gesmundo, A. Scaling up models and data with t5x and seqio. arXiv preprint arXiv:2203.17189, 2022. URL https://arxiv.org/abs/2203.17189.",
"Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. ArXiv, abs/1707.06347, 2017.",
"Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. Megatron-lm: Training multi-billion parameter language models using model parallelism. ArXiv, abs/1909.08053, 2019.",
"Stiennon, N., Ouyang, L., Wu, J., Ziegler, D., Lowe, R., Voss, C., Radford, A., Amodei, D., and Christiano, P. F. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008-3021, 2020.",
"Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., and Lample, G. Llama: Open and efficient foundation language models. ArXiv, abs/2302.13971, 2023a.",
"Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b.",
"Wang, B. and Komatsuzaki, A. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax, May 2021.",
"Wang, Y., Kordi, Y., Mishra, S., Liu, A., Smith, N. A., Khashabi, D., and Hajishirzi, H. Self-instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560, 2022a.",
"Wang, Y., Mishra, S., Alipoormolabashi, P., Kordi, Y., Mirzaei, A., Arunkumar, A., Ashok, A., Dhanasekaran, A. S., Naik, A., Stap, D., et al. Super-naturalinstructions: Generalization via declarative instructions on 1600+ nlp tasks. URL https://arxiv.org/abs/2204.07705, 2022b.",
"Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., et al. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019."
],
"bbox": [
510,
85,
880,
917
],
"page_idx": 9
},
{
"type": "page_number",
"text": "8587",
"bbox": [
480,
928,
519,
940
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Yao, Z., Aminabadi, R. Y., Ruwase, O., Rajbhandari, S., Wu, X., Awan, A. A., Rasley, J., Zhang, M., Li, C., Holmes, C., Zhou, Z., Wyatt, M., Smith, M., Kurilenko, L., Qin, H., Tanaka, M., Che, S., Song, S. L., and He, Y. DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales. arXiv preprint arXiv:2308.01320, 2023.",
"Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., Mihaylov, T., Ott, M., Shleifer, S., Shuster, K., Simig, D., Koura, P. S., Sridhar, A., Wang, T., and Zettlemoyer, L. Opt: Open pre-trained transformer language models. ArXiv, abs/2205.01068, 2022.",
"Ziegler, D. M., Stiennon, N., Wu, J., Brown, T. B., Radford, A., Amodei, D., Christiano, P., and Irving, G. Fine-tuning language models from human preferences. ArXiv, abs/1909.08593, 2019."
],
"bbox": [
115,
85,
489,
341
],
"page_idx": 10
},
{
"type": "page_number",
"text": "8588",
"bbox": [
480,
928,
519,
940
],
"page_idx": 10
},
{
"type": "text",
"text": "A Model Hyperparameters",
"text_level": 1,
"bbox": [
114,
84,
366,
99
],
"page_idx": 11
},
{
"type": "text",
"text": "See table 4 for a full list of hyperparameters across all types of model training. In particular, we find a large batch size to be crucial for PPO. The KL coefficient can be relaxed for HH training but is important for summarization. Additionally, scaling the reward by a running standard deviation estimate provides a small boost.",
"bbox": [
112,
109,
882,
171
],
"page_idx": 11
},
{
"type": "table",
"img_path": "images/8f094b6d13a3a409920fb3f406ef53e8cf1e35b5266eda32d4a2f50d23734482.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td></td><td>SFT</td><td>RM</td><td>PPO</td></tr><tr><td>lr</td><td>5E-5</td><td>5E-6</td><td>5E-6</td></tr><tr><td>bs</td><td>64</td><td>64</td><td>256</td></tr><tr><td>Layers frozen</td><td>N/A</td><td>50%</td><td>8</td></tr><tr><td>reward normalization</td><td>N/A</td><td>N/A</td><td>scaling</td></tr><tr><td>Target KL</td><td>N/A</td><td>N/A</td><td>6</td></tr><tr><td>λ (GAE)</td><td>N/A</td><td>N/A</td><td>0.95</td></tr><tr><td>γ (discount)</td><td>N/A</td><td>N/A</td><td>1</td></tr><tr><td>Mini-batch normalization</td><td>N/A</td><td>N/A</td><td>True</td></tr><tr><td>PPO epochs</td><td>N/A</td><td>N/A</td><td>4</td></tr><tr><td>KL coefficient</td><td>N/A</td><td>N/A</td><td>0.01</td></tr></table>",
"bbox": [
309,
179,
687,
367
],
"page_idx": 11
},
{
"type": "text",
"text": "B LM Eval Results",
"text_level": 1,
"bbox": [
114,
420,
297,
434
],
"page_idx": 11
},
{
"type": "text",
"text": "A full list of scores for all HH models on considered lm-eval benchmarks can be found in table 5. Note how, similarly to the findings of InstructGPT, SFT models perform poorly. In contrast, pure RL fine-tuning negligibly impacts benchmark scores.",
"bbox": [
112,
444,
882,
492
],
"page_idx": 11
+
},
|
| 1411 |
+
{
|
| 1412 |
+
"type": "table",
|
| 1413 |
+
"img_path": "images/7374f2261dfad361b115a216f445a836d59547cd35f059c0dec73908b2f1dab1.jpg",
|
| 1414 |
+
"table_caption": [
|
| 1415 |
+
"Table 4: Training hparams."
|
| 1416 |
+
],
|
| 1417 |
+
"table_footnote": [],
|
| 1418 |
+
"table_body": "<table><tr><td>Model</td><td>HellaSwag</td><td>LAMBADA</td><td>ARC Easy</td><td>ARC Challenge</td><td>OpenBookQA</td><td>TriviaQA</td></tr><tr><td>Pythia 160M Vanilla</td><td>0.294</td><td>0.248</td><td>0.451</td><td>0.203</td><td>0.172</td><td>0.011</td></tr><tr><td>Pythia 160M SFT</td><td>0.291</td><td>0.215</td><td>0.453</td><td>0.206</td><td>0.17</td><td>0.013</td></tr><tr><td>Pythia 160M PPO</td><td>0.292</td><td>0.218</td><td>0.454</td><td>0.209</td><td>0.162</td><td>0.013</td></tr><tr><td>Pythia 160M ILQL</td><td>0.292</td><td>0.217</td><td>0.455</td><td>0.205</td><td>0.167</td><td>0.015</td></tr><tr><td>Pythia 1.4B Vanilla</td><td>0.402</td><td>0.458</td><td>0.581</td><td>0.265</td><td>0.198</td><td>0.048</td></tr><tr><td>Pythia 1.4B SFT</td><td>0.374</td><td>0.344</td><td>0.547</td><td>0.255</td><td>0.192</td><td>0.016</td></tr><tr><td>Pythia 1.4B PPO</td><td>0.369</td><td>0.360</td><td>0.543</td><td>0.257</td><td>0.196</td><td>0.011</td></tr><tr><td>Pythia 1.4B ILQL</td><td>0.392</td><td>0.439</td><td>0.563</td><td>0.255</td><td>0.189</td><td>0.015</td></tr><tr><td>Pythia 6.9B Vanilla</td><td>0.488</td><td>0.564</td><td>0.667</td><td>0.319</td><td>0.252</td><td>0.151</td></tr><tr><td>Pythia 6.9B SFT</td><td>0.432</td><td>0.398</td><td>0.606</td><td>0.309</td><td>0.236</td><td>0.027</td></tr><tr><td>Pythia 6.9B PPO</td><td>0.421</td><td>0.409</td><td>0.619</td><td>0.291</td><td>0.25</td><td>0.030</td></tr><tr><td>Pythia 6.9B ILQL</td><td>0.469</td><td>0.557</td><td>0.654</td><td>0.301</td><td>0.250</td><td>0.149</td></tr><tr><td>Pythia 6.9B Vanilla-PPO</td><td>0.495</td><td>0.605</td><td>0.670</td><td>0.312</td><td>0.28</td><td>0.149</td></tr><tr><td>GPT-NeoX Vanilla</td><td>0.535</td><td>0.72</td><td>0.723</td><td>0.38</td><td>0.29</td><td>0.259</td></tr><tr><td>GPT-NeoX SFT</td><td>0.462</td><td>0.505</td><td>0.664</td><td>0.343</td><td>0.252</td><td>0.041</td></tr><tr><td>GPT-NeoX PPO</td><td>0.463</td><td>0.529</td><td>0.666</td><td>0.347</td><td>0.256</td><td>0.037</td></tr><tr><td>GPT-NeoX ILQL</td><td>0.541</td><td>0.709</td><td>0.720</td><td>0.375</td><td>0.294</td><td>0.251</td></tr><tr><td>GPT-NeoX Vanilla-PPO</td><td>0.548</td><td>0.618</td><td>0.731</td><td>0.383</td><td>0.3</td><td>0.227</td></tr></table>",
|
| 1419 |
+
"bbox": [
|
| 1420 |
+
144,
|
| 1421 |
+
502,
|
| 1422 |
+
850,
|
| 1423 |
+
822
|
| 1424 |
+
],
|
| 1425 |
+
"page_idx": 11
|
| 1426 |
+
},
|
| 1427 |
+
{
|
| 1428 |
+
"type": "text",
|
| 1429 |
+
"text": "Table 5: Results from the lm-eval-harness.",
|
| 1430 |
+
"bbox": [
|
| 1431 |
+
324,
|
| 1432 |
+
838,
|
| 1433 |
+
668,
|
| 1434 |
+
851
|
| 1435 |
+
],
|
| 1436 |
+
"page_idx": 11
|
| 1437 |
+
},
|
| 1438 |
+
{
|
| 1439 |
+
"type": "text",
|
| 1440 |
+
"text": "C Annotator Instructions",
|
| 1441 |
+
"text_level": 1,
|
| 1442 |
+
"bbox": [
|
| 1443 |
+
112,
|
| 1444 |
+
878,
|
| 1445 |
+
352,
|
| 1446 |
+
892
|
| 1447 |
+
],
|
| 1448 |
+
"page_idx": 11
|
| 1449 |
+
},
|
| 1450 |
+
{
|
| 1451 |
+
"type": "text",
|
| 1452 |
+
"text": "Link to annotator instructions here. They are also displayed in fig. 8 for convenience.",
|
| 1453 |
+
"bbox": [
|
| 1454 |
+
112,
|
| 1455 |
+
903,
|
| 1456 |
+
742,
|
| 1457 |
+
917
|
| 1458 |
+
],
|
| 1459 |
+
"page_idx": 11
|
| 1460 |
+
},
|
| 1461 |
+
{
|
| 1462 |
+
"type": "page_number",
|
| 1463 |
+
"text": "8589",
|
| 1464 |
+
"bbox": [
|
| 1465 |
+
480,
|
| 1466 |
+
928,
|
| 1467 |
+
519,
|
| 1468 |
+
940
|
| 1469 |
+
],
|
| 1470 |
+
"page_idx": 11
|
| 1471 |
+
},
|
| 1472 |
+
{
|
| 1473 |
+
"type": "text",
|
| 1474 |
+
"text": "Annotation Guidelines:",
|
| 1475 |
+
"text_level": 1,
|
| 1476 |
+
"bbox": [
|
| 1477 |
+
203,
|
| 1478 |
+
214,
|
| 1479 |
+
455,
|
| 1480 |
+
231
|
| 1481 |
+
],
|
| 1482 |
+
"page_idx": 12
|
| 1483 |
+
},
|
| 1484 |
+
{
|
| 1485 |
+
"type": "text",
|
| 1486 |
+
"text": "Task: You will be shown a prompt to a language assistant and two possible responses to the prompt: A and B. You will choose between response A and response B which is overall the most helpful response to the prompt. Sometimes you will be asked to explain your reasoning for this choice.",
|
| 1487 |
+
"bbox": [
|
| 1488 |
+
203,
|
| 1489 |
+
255,
|
| 1490 |
+
789,
|
| 1491 |
+
310
|
| 1492 |
+
],
|
| 1493 |
+
"page_idx": 12
|
| 1494 |
+
},
|
| 1495 |
+
{
|
| 1496 |
+
"type": "text",
|
| 1497 |
+
"text": "Helpfulness",
|
| 1498 |
+
"text_level": 1,
|
| 1499 |
+
"bbox": [
|
| 1500 |
+
203,
|
| 1501 |
+
325,
|
| 1502 |
+
292,
|
| 1503 |
+
338
|
| 1504 |
+
],
|
| 1505 |
+
"page_idx": 12
|
| 1506 |
+
},
|
| 1507 |
+
{
|
| 1508 |
+
"type": "text",
|
| 1509 |
+
"text": "A helpful response:",
|
| 1510 |
+
"bbox": [
|
| 1511 |
+
203,
|
| 1512 |
+
338,
|
| 1513 |
+
337,
|
| 1514 |
+
351
|
| 1515 |
+
],
|
| 1516 |
+
"page_idx": 12
|
| 1517 |
+
},
|
| 1518 |
+
{
|
| 1519 |
+
"type": "list",
|
| 1520 |
+
"sub_type": "text",
|
| 1521 |
+
"list_items": [
|
| 1522 |
+
"- Is easily understandable by the reader",
|
| 1523 |
+
"- Stays on topic with the user's intention",
|
| 1524 |
+
"- Is straightforward and to the point",
|
| 1525 |
+
"- Provides a satisfying response to the reader"
|
| 1526 |
+
],
|
| 1527 |
+
"bbox": [
|
| 1528 |
+
226,
|
| 1529 |
+
353,
|
| 1530 |
+
544,
|
| 1531 |
+
407
|
| 1532 |
+
],
|
| 1533 |
+
"page_idx": 12
|
| 1534 |
+
},
|
| 1535 |
+
{
|
| 1536 |
+
"type": "text",
|
| 1537 |
+
"text": "You will also rate helpfulness on a scale of 1-5. Additionally for both responses you will rate on a scale of 1-5 the correctness of the response and the potential harmfulness of the response. Note: If the user's intention is to request something harmful, then harmfulness SHOULD NOT affect one's choice between A and B for Helpfulness, because that response is \"helpful\" to the user.",
|
| 1538 |
+
"bbox": [
|
| 1539 |
+
203,
|
| 1540 |
+
423,
|
| 1541 |
+
781,
|
| 1542 |
+
492
|
| 1543 |
+
],
|
| 1544 |
+
"page_idx": 12
|
| 1545 |
+
},
|
| 1546 |
+
{
|
| 1547 |
+
"type": "text",
|
| 1548 |
+
"text": "Drafts of definitions adapted from InstructGPT Doc:",
|
| 1549 |
+
"bbox": [
|
| 1550 |
+
203,
|
| 1551 |
+
506,
|
| 1552 |
+
547,
|
| 1553 |
+
520
|
| 1554 |
+
],
|
| 1555 |
+
"page_idx": 12
|
| 1556 |
+
},
|
| 1557 |
+
{
|
| 1558 |
+
"type": "text",
|
| 1559 |
+
"text": "Correctness",
|
| 1560 |
+
"text_level": 1,
|
| 1561 |
+
"bbox": [
|
| 1562 |
+
203,
|
| 1563 |
+
533,
|
| 1564 |
+
295,
|
| 1565 |
+
546
|
| 1566 |
+
],
|
| 1567 |
+
"page_idx": 12
|
| 1568 |
+
},
|
| 1569 |
+
{
|
| 1570 |
+
"type": "text",
|
| 1571 |
+
"text": "A correct response:",
|
| 1572 |
+
"bbox": [
|
| 1573 |
+
203,
|
| 1574 |
+
549,
|
| 1575 |
+
337,
|
| 1576 |
+
561
|
| 1577 |
+
],
|
| 1578 |
+
"page_idx": 12
|
| 1579 |
+
},
|
| 1580 |
+
{
|
| 1581 |
+
"type": "list",
|
| 1582 |
+
"sub_type": "text",
|
| 1583 |
+
"list_items": [
|
| 1584 |
+
"- Contains accurate facts",
|
| 1585 |
+
"- Doesn't contain any inaccurate facts",
|
| 1586 |
+
"- Doesn't mislead the user",
|
| 1587 |
+
"- Doesn't invent spurious details",
|
| 1588 |
+
"- If the input prompt contains incorrect facts, the model refutes or corrects the prompt facts and refuses to answer the question."
|
| 1589 |
+
],
|
| 1590 |
+
"bbox": [
|
| 1591 |
+
226,
|
| 1592 |
+
563,
|
| 1593 |
+
752,
|
| 1594 |
+
645
|
| 1595 |
+
],
|
| 1596 |
+
"page_idx": 12
|
| 1597 |
+
},
|
| 1598 |
+
{
|
| 1599 |
+
"type": "text",
|
| 1600 |
+
"text": "Harmful",
|
| 1601 |
+
"text_level": 1,
|
| 1602 |
+
"bbox": [
|
| 1603 |
+
203,
|
| 1604 |
+
659,
|
| 1605 |
+
265,
|
| 1606 |
+
671
|
| 1607 |
+
],
|
| 1608 |
+
"page_idx": 12
|
| 1609 |
+
},
|
| 1610 |
+
{
|
| 1611 |
+
"type": "text",
|
| 1612 |
+
"text": "A harmful response, in and of itself, or if taken into action, includes any of the following:",
|
| 1613 |
+
"bbox": [
|
| 1614 |
+
203,
|
| 1615 |
+
674,
|
| 1616 |
+
783,
|
| 1617 |
+
687
|
| 1618 |
+
],
|
| 1619 |
+
"page_idx": 12
|
| 1620 |
+
},
|
| 1621 |
+
{
|
| 1622 |
+
"type": "list",
|
| 1623 |
+
"sub_type": "text",
|
| 1624 |
+
"list_items": [
|
| 1625 |
+
"- Could cause physical, psychological, or social harm to people",
|
| 1626 |
+
"- Could cause damage or loss of equipment or property",
|
| 1627 |
+
"- Could harm the environment",
|
| 1628 |
+
"- Could harm institutions or resources necessary to human wellbeing, such as schools, governments, medicine, electricity, etc.",
|
| 1629 |
+
"- Denigrates members of groups, or uses prejudicial terms against groups"
|
| 1630 |
+
],
|
| 1631 |
+
"bbox": [
|
| 1632 |
+
226,
|
| 1633 |
+
688,
|
| 1634 |
+
756,
|
| 1635 |
+
771
|
| 1636 |
+
],
|
| 1637 |
+
"page_idx": 12
|
| 1638 |
+
},
|
| 1639 |
+
{
|
| 1640 |
+
"type": "page_number",
|
| 1641 |
+
"text": "8590",
|
| 1642 |
+
"bbox": [
|
| 1643 |
+
480,
|
| 1644 |
+
928,
|
| 1645 |
+
519,
|
| 1646 |
+
940
|
| 1647 |
+
],
|
| 1648 |
+
"page_idx": 12
|
| 1649 |
+
},
|
| 1650 |
+
{
|
| 1651 |
+
"type": "text",
|
| 1652 |
+
"text": "An example is shown below:",
|
| 1653 |
+
"bbox": [
|
| 1654 |
+
200,
|
| 1655 |
+
175,
|
| 1656 |
+
460,
|
| 1657 |
+
193
|
| 1658 |
+
],
|
| 1659 |
+
"page_idx": 13
|
| 1660 |
+
},
|
| 1661 |
+
{
|
| 1662 |
+
"type": "text",
|
| 1663 |
+
"text": "Prompt: Human: Why is the sky blue?",
|
| 1664 |
+
"bbox": [
|
| 1665 |
+
200,
|
| 1666 |
+
208,
|
| 1667 |
+
463,
|
| 1668 |
+
223
|
| 1669 |
+
],
|
| 1670 |
+
"page_idx": 13
|
| 1671 |
+
},
|
| 1672 |
+
{
|
| 1673 |
+
"type": "text",
|
| 1674 |
+
"text": "Response A: Assistant: The sky is blue because blue light has shorter wavelengths which get scattered in the atmosphere.",
|
| 1675 |
+
"bbox": [
|
| 1676 |
+
198,
|
| 1677 |
+
293,
|
| 1678 |
+
766,
|
| 1679 |
+
322
|
| 1680 |
+
],
|
| 1681 |
+
"page_idx": 13
|
| 1682 |
+
},
|
| 1683 |
+
{
|
| 1684 |
+
"type": "text",
|
| 1685 |
+
"text": "Response B: Assistant: I am not completely sure but I think it has something to do with the physics of different light wavelengths and the current composition of the earth's atmosphere. In fact the sky was orange billions of years ago!",
|
| 1686 |
+
"bbox": [
|
| 1687 |
+
198,
|
| 1688 |
+
335,
|
| 1689 |
+
789,
|
| 1690 |
+
379
|
| 1691 |
+
],
|
| 1692 |
+
"page_idx": 13
|
| 1693 |
+
},
|
| 1694 |
+
{
|
| 1695 |
+
"type": "text",
|
| 1696 |
+
"text": "Which of the two responses is most helpful towards addressing the prompt?: [A, B]",
|
| 1697 |
+
"bbox": [
|
| 1698 |
+
200,
|
| 1699 |
+
392,
|
| 1700 |
+
717,
|
| 1701 |
+
420
|
| 1702 |
+
],
|
| 1703 |
+
"page_idx": 13
|
| 1704 |
+
},
|
| 1705 |
+
{
|
| 1706 |
+
"type": "text",
|
| 1707 |
+
"text": "Response A is factually correct: [Strong disagree, Disagree, Unsure, Agree, Strong Agree]",
|
| 1708 |
+
"bbox": [
|
| 1709 |
+
200,
|
| 1710 |
+
463,
|
| 1711 |
+
684,
|
| 1712 |
+
492
|
| 1713 |
+
],
|
| 1714 |
+
"page_idx": 13
|
| 1715 |
+
},
|
| 1716 |
+
{
|
| 1717 |
+
"type": "text",
|
| 1718 |
+
"text": "Response A is helpful towards answering the prompt: [Strong disagree, Disagree, Unsure, Agree, Strong Agree]",
|
| 1719 |
+
"bbox": [
|
| 1720 |
+
200,
|
| 1721 |
+
505,
|
| 1722 |
+
684,
|
| 1723 |
+
533
|
| 1724 |
+
],
|
| 1725 |
+
"page_idx": 13
|
| 1726 |
+
},
|
| 1727 |
+
{
|
| 1728 |
+
"type": "text",
|
| 1729 |
+
"text": "Response A is harmful: [Strong disagree, Disagree, Unsure, Agree, Strong Agree]",
|
| 1730 |
+
"bbox": [
|
| 1731 |
+
200,
|
| 1732 |
+
546,
|
| 1733 |
+
684,
|
| 1734 |
+
576
|
| 1735 |
+
],
|
| 1736 |
+
"page_idx": 13
|
| 1737 |
+
},
|
| 1738 |
+
{
|
| 1739 |
+
"type": "text",
|
| 1740 |
+
"text": "Response B is factually correct: [Strong disagree, Disagree, Unsure, Agree, Strong Agree]",
|
| 1741 |
+
"bbox": [
|
| 1742 |
+
200,
|
| 1743 |
+
633,
|
| 1744 |
+
684,
|
| 1745 |
+
661
|
| 1746 |
+
],
|
| 1747 |
+
"page_idx": 13
|
| 1748 |
+
},
|
| 1749 |
+
{
|
| 1750 |
+
"type": "text",
|
| 1751 |
+
"text": "Response B is helpful towards answering the prompt: [Strong disagree, Disagree, Unsure, Agree, Strong Agree]",
|
| 1752 |
+
"bbox": [
|
| 1753 |
+
200,
|
| 1754 |
+
675,
|
| 1755 |
+
684,
|
| 1756 |
+
703
|
| 1757 |
+
],
|
| 1758 |
+
"page_idx": 13
|
| 1759 |
+
},
|
| 1760 |
+
{
|
| 1761 |
+
"type": "text",
|
| 1762 |
+
"text": "Response B is harmful: [Strong disagree, Disagree, Unsure, Agree, Strong Agree]",
|
| 1763 |
+
"bbox": [
|
| 1764 |
+
200,
|
| 1765 |
+
717,
|
| 1766 |
+
684,
|
| 1767 |
+
746
|
| 1768 |
+
],
|
| 1769 |
+
"page_idx": 13
|
| 1770 |
+
},
|
| 1771 |
+
{
|
| 1772 |
+
"type": "text",
|
| 1773 |
+
"text": "Figure 8: Annotator instructions for Helpful and Harmless QA dialogue.",
|
| 1774 |
+
"bbox": [
|
| 1775 |
+
250,
|
| 1776 |
+
813,
|
| 1777 |
+
744,
|
| 1778 |
+
829
|
| 1779 |
+
],
|
| 1780 |
+
"page_idx": 13
|
| 1781 |
+
},
|
| 1782 |
+
{
|
| 1783 |
+
"type": "page_number",
|
| 1784 |
+
"text": "8591",
|
| 1785 |
+
"bbox": [
|
| 1786 |
+
480,
|
| 1787 |
+
928,
|
| 1788 |
+
519,
|
| 1789 |
+
940
|
| 1790 |
+
],
|
| 1791 |
+
"page_idx": 13
|
| 1792 |
+
},
|
| 1793 |
+
{
|
| 1794 |
+
"type": "text",
|
| 1795 |
+
"text": "D Synthetic Reward Modeling",
|
| 1796 |
+
"text_level": 1,
|
| 1797 |
+
"bbox": [
|
| 1798 |
+
112,
|
| 1799 |
+
83,
|
| 1800 |
+
394,
|
| 1801 |
+
99
|
| 1802 |
+
],
|
| 1803 |
+
"page_idx": 14
|
| 1804 |
+
},
|
| 1805 |
+
{
|
| 1806 |
+
"type": "text",
|
| 1807 |
+
"text": "We conducted an additional series of experiments attempting to augment or supplant helpful reward modeling, which uses human preference data collected by Anthropic, with synthetic data generated via LLMs. In particular, we consider two ways of generating synthetic preference data:",
|
| 1808 |
+
"bbox": [
|
| 1809 |
+
112,
|
| 1810 |
+
109,
|
| 1811 |
+
882,
|
| 1812 |
+
156
|
| 1813 |
+
],
|
| 1814 |
+
"page_idx": 14
|
| 1815 |
+
},
|
| 1816 |
+
{
|
| 1817 |
+
"type": "list",
|
| 1818 |
+
"sub_type": "text",
|
| 1819 |
+
"list_items": [
|
| 1820 |
+
"1. Use a strong instruction tuned model (i.e. text-davinci-003) to choose between two candidate responses generated by an LLM.",
|
| 1821 |
+
"2. Assume that larger models supervised fine-tuned to be helpful give more preferable responses compared to smaller models conditioned on the same prompt. This defines a partial ordering on responses via model size."
|
| 1822 |
+
],
|
| 1823 |
+
"bbox": [
|
| 1824 |
+
127,
|
| 1825 |
+
168,
|
| 1826 |
+
880,
|
| 1827 |
+
256
|
| 1828 |
+
],
|
| 1829 |
+
"page_idx": 14
|
| 1830 |
+
},
|
| 1831 |
+
{
|
| 1832 |
+
"type": "text",
|
| 1833 |
+
"text": "To validate the efficacy of the first approach, we evaluate the accuracy of GPT-NeoX-20B (Black et al., 2022), text-davinci-002, and text-davinci-003 as classifiers on the helpful split of the HH test set. The results are reported in table 6. We find even the best available RLHF model, text-davinci-003, gets 0.64 accuracy. This is relatively poor compared to our best 0.71 accuracy GPT-J RM, but still potentially good enough to do synthetic RLHF.",
|
| 1834 |
+
"bbox": [
|
| 1835 |
+
112,
|
| 1836 |
+
269,
|
| 1837 |
+
882,
|
| 1838 |
+
350
|
| 1839 |
+
],
|
| 1840 |
+
"page_idx": 14
|
| 1841 |
+
},
|
| 1842 |
+
{
|
| 1843 |
+
"type": "text",
|
| 1844 |
+
"text": "Constructing a synthetic dialogue dataset To construct our training set, we first few-shot prompt text-davinci-003 with samples from the helpful HH train set to produce a set of over 150k synthetic prompts a human may ask of an AI assistant. We sample at a high temperature (T=1.4) to maximize diversity and filter out low quality responses by prompting GPT-NeoX-20B to judge whether or not the generated request is plausible. We additionally filter out repetitive/semantically similar responses by embedding each response with GPT-J. The result is a dataset of around 60k highly diverse prompts. We then prompt text-davinci-003 to respond to these generated prompts and repeat the process to synthetically generate two full turns of interactive dialogue giving around 120k samples total. See table 7 for a synthetic prompt and model responses across different size models.",
|
| 1845 |
+
"bbox": [
|
| 1846 |
+
112,
|
| 1847 |
+
359,
|
| 1848 |
+
882,
|
| 1849 |
+
502
|
| 1850 |
+
],
|
| 1851 |
+
"page_idx": 14
|
| 1852 |
+
},
|
| 1853 |
+
{
|
| 1854 |
+
"type": "text",
|
| 1855 |
+
"text": "Next, we supervise fine-tune models of sizes 125M-20B on a 20k size subset of our synthetic dialog dataset. The same hyperparameters are used as elsewhere in the paper. We then sample each model for responses on the entire dataset, as well as text-davinci-002. We then use text-davinci-003 to determine preference over responses from text-davinci-002 and text-davinci-003. To avoid any ordering bias, we randomize the order of model responses when presented to text-davinci-003, as done in (Bai et al., 2022b). Surprisingly, we find text-davinci-003 prefers itself only $58\\%$ of the time, suggesting its impartiality as a judge. This defines our first synthetic preference dataset.",
|
| 1856 |
+
"bbox": [
|
| 1857 |
+
112,
|
| 1858 |
+
504,
|
| 1859 |
+
882,
|
| 1860 |
+
615
|
| 1861 |
+
],
|
| 1862 |
+
"page_idx": 14
|
| 1863 |
+
},
|
| 1864 |
+
{
|
| 1865 |
+
"type": "text",
|
| 1866 |
+
"text": "We can construct our second synthetic preference dataset using the second method described above: by ordering model responses according to model size. Specifically, this gives us the preference ordering $125\\mathrm{M} < 1.4\\mathrm{B} < 6.9\\mathrm{B} < 20\\mathrm{B} < \\text{text-davinci-002} < \\text{text-davinci-003}$. We can then train reward models of various sizes using this synthetic dataset. The overall accuracies of these models on a test split as a function of the number of training samples are plotted in fig. 9. Additionally, we plot the accuracy across RM model sizes at predicting preferences between each category of model size comparisons (e.g., choosing the 6.9B response over the 125M response). These results are reported in fig. 10.",
|
| 1867 |
+
"bbox": [
|
| 1868 |
+
112,
|
| 1869 |
+
617,
|
| 1870 |
+
882,
|
| 1871 |
+
728
|
| 1872 |
+
],
|
| 1873 |
+
"page_idx": 14
|
| 1874 |
+
},
|
| 1875 |
+
{
|
| 1876 |
+
"type": "text",
|
| 1877 |
+
"text": "Overall we find the best RM model, 6.9B, does a very good job at correctly picking the more preferable response with over $90\\%$ accuracy. However, it's unclear how well our size-ordered preference modeling assumption translates to a useful RM. To test this, we evaluate the 6.9B RM on the helpful HH test split. The result is a relatively poor score of 0.61. In contrast, we find the best GPT-J HH RM convincingly generalizes to this synthetic dataset with a score of 0.78.",
|
| 1878 |
+
"bbox": [
|
| 1879 |
+
112,
|
| 1880 |
+
728,
|
| 1881 |
+
882,
|
| 1882 |
+
809
|
| 1883 |
+
],
|
| 1884 |
+
"page_idx": 14
|
| 1885 |
+
},
|
| 1886 |
+
{
|
| 1887 |
+
"type": "page_number",
|
| 1888 |
+
"text": "8592",
|
| 1889 |
+
"bbox": [
|
| 1890 |
+
480,
|
| 1891 |
+
928,
|
| 1892 |
+
519,
|
| 1893 |
+
940
|
| 1894 |
+
],
|
| 1895 |
+
"page_idx": 14
|
| 1896 |
+
},
|
| 1897 |
+
{
|
| 1898 |
+
"type": "image",
|
| 1899 |
+
"img_path": "images/58f6cc162470ef8650cc44509beca0686970b6c3904a2e17768247f5f5de0751.jpg",
|
| 1900 |
+
"image_caption": [
|
| 1901 |
+
"Total Synthetic Reward Model Accuracy vs. Number of Comparisons",
|
| 1902 |
+
"Figure 9: Accuracies of size ordered synthetic preference RMs as a function of training data size. We see the 20B is most sample efficient until 120,000 samples, at which point the 6B model does slightly better."
|
| 1903 |
+
],
|
| 1904 |
+
"image_footnote": [],
|
| 1905 |
+
"bbox": [
|
| 1906 |
+
124,
|
| 1907 |
+
112,
|
| 1908 |
+
811,
|
| 1909 |
+
432
|
| 1910 |
+
],
|
| 1911 |
+
"page_idx": 15
|
| 1912 |
+
},
|
| 1913 |
+
{
|
| 1914 |
+
"type": "image",
|
| 1915 |
+
"img_path": "images/89c2be8b593479ab2d477e21fc7321c5c359b81d4ec160f59533147702c8d982.jpg",
|
| 1916 |
+
"image_caption": [
|
| 1917 |
+
"Accuracy vs. Reward Model Size",
|
| 1918 |
+
"Figure 10: Accuracies of size ordered synthetic preference RMs on fine-grained categories of model size comparisons (e.g. choosing a 6.9B vs. a 125M response). We see some comparisons are very easy, e.g. 125M vs. text-davinci-003, whereas others are much harder."
|
| 1919 |
+
],
|
| 1920 |
+
"image_footnote": [],
|
| 1921 |
+
"bbox": [
|
| 1922 |
+
127,
|
| 1923 |
+
525,
|
| 1924 |
+
875,
|
| 1925 |
+
850
|
| 1926 |
+
],
|
| 1927 |
+
"page_idx": 15
|
| 1928 |
+
},
|
| 1929 |
+
{
|
| 1930 |
+
"type": "page_number",
|
| 1931 |
+
"text": "8593",
|
| 1932 |
+
"bbox": [
|
| 1933 |
+
480,
|
| 1934 |
+
927,
|
| 1935 |
+
519,
|
| 1936 |
+
940
|
| 1937 |
+
],
|
| 1938 |
+
"page_idx": 15
|
| 1939 |
+
},
|
| 1940 |
+
{
|
| 1941 |
+
"type": "table",
|
| 1942 |
+
"img_path": "images/52778a0429b9c37b98e604b98a10212ff95993c9368e0061bf8349049dfea325.jpg",
|
| 1943 |
+
"table_caption": [],
|
| 1944 |
+
"table_footnote": [],
|
| 1945 |
+
"table_body": "<table><tr><td></td><td>Zero-shot accuracy</td><td>One-shot accuracy</td></tr><tr><td>GPT-NeoX-20B</td><td>0.55</td><td>0.57</td></tr><tr><td>text-davinci-002</td><td>0.61</td><td>0.63</td></tr><tr><td>text-davinci-003</td><td>0.63</td><td>0.64</td></tr><tr><td>GPT-J RM</td><td>0.71</td><td>N/A</td></tr></table>",
|
| 1946 |
+
"bbox": [
|
| 1947 |
+
253,
|
| 1948 |
+
442,
|
| 1949 |
+
742,
|
| 1950 |
+
533
|
| 1951 |
+
],
|
| 1952 |
+
"page_idx": 16
|
| 1953 |
+
},
|
| 1954 |
+
{
|
| 1955 |
+
"type": "text",
|
| 1956 |
+
"text": "Table 6: Accuracies of LLMs as zero-shot and few-shot helpfulness classifiers on the helpful split of Anthropic's HH dataset.",
|
| 1957 |
+
"bbox": [
|
| 1958 |
+
112,
|
| 1959 |
+
542,
|
| 1960 |
+
882,
|
| 1961 |
+
570
|
| 1962 |
+
],
|
| 1963 |
+
"page_idx": 16
|
| 1964 |
+
},
|
| 1965 |
+
{
|
| 1966 |
+
"type": "page_number",
|
| 1967 |
+
"text": "8594",
|
| 1968 |
+
"bbox": [
|
| 1969 |
+
480,
|
| 1970 |
+
928,
|
| 1971 |
+
519,
|
| 1972 |
+
940
|
| 1973 |
+
],
|
| 1974 |
+
"page_idx": 16
|
| 1975 |
+
},
|
| 1976 |
+
{
|
| 1977 |
+
"type": "table",
|
| 1978 |
+
"img_path": "images/36198835b7519e7dc05ef60361989ba1f78b4aadaf316578b9e70d3b5374cb7c.jpg",
|
| 1979 |
+
"table_caption": [],
|
| 1980 |
+
"table_footnote": [],
|
| 1981 |
+
"table_body": "<table><tr><td>Synthetic dialog</td><td>Human: What is the proper technique for playing the guitar. Assistant: The proper technique for playing the guitar involves mastering proper hand and body posture, learning to hold the pick correctly, learning basic chords, scales, and music theory, and getting in the habit of regularly practicing. Human:I'm feeling a bit overwhelmed-could you break down the steps more? Assistant:</td></tr><tr><td>text-davinci-003 response</td><td>Absolutely. The first step is mastering proper hand and body posture for holding the guitar. It's important to maintain a good posture in order to prevent fatigue and make sure your hand muscles remain nimble and rested. Then, you'll need to work on specific lead exercises, like frailing or 'down-up' picking. This will help your fingers stay strong, fast and dexterous for chords and solos. Additionally, it's essential to learn about basic chords, chords progression, scales and music theory as that knowledge underpins almost every piece of guitar music. Lastly, consistently practicing will build your technique and, of course, strengthen your skills, stay sharp, and improve your tone.</td></tr><tr><td>text-davinici-002 response</td><td>1. Master proper hand and body posture. 2. Learn to hold the pick correctly. 3. Learn basic chords, scales, and music theory. 4. Practice regularly.</td></tr><tr><td>GPT-NeoX-20B response</td><td>Absolutely. First, it's helpful to get comfortable with a pick and know where it should be stored. Next, you should make sure you use the right posture for playing the guitar. This can be achieved through stretching before playing, or using an instructional program after playing. After that's done, you will need to learn some basics of chords. And last but not the least, find ways to regularly get in practice. Does have an easier time understanding what you mean?</td></tr><tr><td>Pythia 6.9B response</td><td>Certainly, I can! 
Step 1: Familiarize yourself with the parts of the guitar. Step 2: Set a realistic goal for the project and step 3: Find the right materials. Step 4: Start building your first album. Step 5: Don't worry if you don't have all the instruments and need the help to get started.</td></tr><tr><td>Pythia 1.4B response</td><td>Absolutely! First, it's important to have a good eye-in-air position during which the end of the pick stands at you, rather than leaning at either shoulder and your back. Then begin by learning a few chord progressions, such as major, minor, and major and minor. In the process, use shifts and fingerings and practice using scales and arpeggios to create a better melodic or harmonic feel. Finally, use practice and repetition to solidify your skills in general chords.</td></tr><tr><td>Pythia 125M response</td><td>Practicing is key, when you can make more on your practice day as well as taking the proper stance. Is there anything I can help you with?</td></tr></table>",
|
| 1982 |
+
"bbox": [
|
| 1983 |
+
144,
|
| 1984 |
+
131,
|
| 1985 |
+
853,
|
| 1986 |
+
869
|
| 1987 |
+
],
|
| 1988 |
+
"page_idx": 17
|
| 1989 |
+
},
|
| 1990 |
+
{
|
| 1991 |
+
"type": "text",
|
| 1992 |
+
"text": "Table 7: An example of a synthetically generated multi-turn dialog and responses from different model sizes.",
|
| 1993 |
+
"bbox": [
|
| 1994 |
+
129,
|
| 1995 |
+
881,
|
| 1996 |
+
863,
|
| 1997 |
+
897
|
| 1998 |
+
],
|
| 1999 |
+
"page_idx": 17
|
| 2000 |
+
},
|
| 2001 |
+
{
|
| 2002 |
+
"type": "page_number",
|
| 2003 |
+
"text": "8595",
|
| 2004 |
+
"bbox": [
|
| 2005 |
+
480,
|
| 2006 |
+
928,
|
| 2007 |
+
519,
|
| 2008 |
+
940
|
| 2009 |
+
],
|
| 2010 |
+
"page_idx": 17
|
| 2011 |
+
}
|
| 2012 |
+
]
|
2023/trlX_ A Framework for Large Scale Reinforcement Learning from Human Feedback/d0cd9fc5-d30b-4d39-a5f4-81d0ab79e251_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2023/trlX_ A Framework for Large Scale Reinforcement Learning from Human Feedback/d0cd9fc5-d30b-4d39-a5f4-81d0ab79e251_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b4ca21136429d17d48a88b028871d74df2b5e54b54654dae17826b66d71335d3
|
| 3 |
+
size 737519
|
2023/trlX_ A Framework for Large Scale Reinforcement Learning from Human Feedback/full.md
ADDED
|
@@ -0,0 +1,360 @@
|
| 1 |
+
# trlX: A Framework for Large Scale Reinforcement Learning from Human Feedback
|
| 2 |
+
|
| 3 |
+
Alexander Havrilla $^{1,2}$ , Maksym Zhuravinskyi $^{1,3}$ , Duy Van Phung $^{1,3}$ , Aman Tiwari $^{4}$ , Jonathan Tow $^{1,3}$ , Shivanshu Purohit $^{5}$ , Stella Biderman $^{5,6}$ , Quentin Gregory Anthony $^{5,7}$ , Ethan Kim $^{8}$ , and Louis Castricato $^{1}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ CarperAI, $^{2}$ Georgia Tech, $^{3}$ Stability AI, $^{4}$ Independent Researcher*, $^{5}$ EleutherAI, $^{6}$ Booz Allen Hamilton, $^{7}$ Ohio State University, $^{8}$ vectorshift.ai
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
Reinforcement learning from human feedback (RLHF) utilizes human feedback to better align large language models with human preferences via online optimization against a learned reward model. Current RLHF paradigms rely on Proximal Policy Optimization (PPO), which quickly becomes a challenge to implement and scale up to large architectures. To address this difficulty we present the trlX library as a feature-complete open-source framework for RLHF fine-tuning of models up to and exceeding 70 billion parameters. We implement support for multiple types of distributed training including distributed data parallel, model sharded, as well as tensor, sequential, and pipeline parallelism.
|
| 10 |
+
|
| 11 |
+
To increase the accessibility of RLHF to researchers, we implement compute- and memory-saving features that give trlX the flexibility to support users with a wide range of compute resources. These include offline RL methods like Implicit Language Q-Learning (ILQL), low-rank adapters, and the Hydra architecture. We find offline fine-tuning offers competitive performance relative to online algorithms while being easier to implement, train, and scale. To evaluate our framework we train RLHF models on two separate well-known tasks using publicly available human preference data. Models trained with trlX achieve preference win-rates over baselines at rates comparable to the original works.

# 1 Introduction
Since 2019, the prevailing training paradigm for large language models (Brown et al., 2020; Raffel et al., 2019; Devlin et al., 2019) has comprised two stages: first, a pre-training phase in which models are trained on a large corpus of text. Pre-training produces a general-purpose model which has learned the syntax and some semantics of natural text but is not easily controllable for any particular task. Fine-tuning is then used to adapt this general-purpose pre-trained model to a smaller, specialized corpus. However, this often comes with a decrease in model performance on other tasks. Further, the resulting model can be difficult to work with, requiring extensive prompt engineering.

Reinforcement learning from human feedback (RLHF) has emerged as a third stage of the language model training pipeline. In this stage, gathered human preference data is used to supervised fine-tune the pre-trained model and then to train a reward model. The reward model assigns scalar values to (prompt, response) pairs that correspond to human preference. The supervised fine-tuned model can then be trained using Proximal Policy Optimization (PPO) (Schulman et al., 2017), an online reinforcement learning algorithm, to optimize against the learned reward model. The resulting models are better aligned with human preferences, achieving win-rates over the base model of up to $80\%$ (Askell et al., 2021; Bai et al., 2022a; Ouyang et al., 2022) and producing more desirable and less harmful text with less prompting.

However, the study of RLHF models in the academic community has thus far been limited by a lack of open-source training frameworks and datasets. Online reinforcement learning via PPO is compute-expensive and challenging to scale, as three copies of the model must be stored in memory during training. To address these issues we present trlX: a library supporting online and offline RLHF fine-tuning for language models of up to and exceeding 70 billion parameters. At smaller scales, we emphasize low-resource accessibility by incorporating features such as Hydra model architectures (Glaese et al., 2022), LoRA adapters (Hu et al., 2021), and DeepSpeed (Rajbhandari et al., 2019), which, when combined, can reduce memory overhead on GPT-J (Wang & Komatsuzaki, 2021) by up to $75\%$ with minimal impact on the achieved reward.

<table><tr><td></td><td colspan="2">RL Algorithms</td><td colspan="3">Parallelization Strategies</td><td colspan="2">Features</td></tr><tr><td></td><td>Online</td><td>Offline</td><td>Tensor</td><td>Pipeline</td><td>Sequence</td><td>LoRA</td><td>Sweeps</td></tr><tr><td>RL4LM</td><td>✓</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>trl</td><td>✓</td><td></td><td>✓</td><td>*</td><td></td><td>✓</td><td></td></tr><tr><td>DS Chat</td><td>✓</td><td></td><td>✓</td><td>✓</td><td></td><td>✓</td><td></td></tr><tr><td>trlX (ours)</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr></table>

Table 1: trlX feature comparison with other libraries. *trl supports naive pipeline parallelism, which allows larger models to be run but is far less efficient.

trlX training at this scale is compatible with most encoder-decoder and decoder-only architectures supported on the popular Hugging Face Hub (Wolf et al., 2019). For large training jobs of 20 billion parameters or more, we implement support for tensor, sequence, and pipeline parallelism via both the GPT-NeoX library (Andonian et al., 2021) and NeMo-Megatron (Kuchaiev et al., 2019).

trlX also supports Implicit Language Q-Learning (ILQL) as an offline alternative to online RL methods. We find fine-tuning via ILQL achieves preference win-rates against baselines close to those of PPO, but at a fraction of the compute cost. Further, ILQL is more robust to reward model overfitting, which online algorithms can suffer from. To evaluate our framework we provide open-source replications of well-known results from the RLHF literature, including learning to summarize from human feedback (Stiennon et al., 2020) and Helpful/Harmless preference learning for a general-purpose language assistant (Bai et al., 2022a). We find models trained with trlX achieve preference win-rates, as judged by human annotators, over baselines at rates comparable to the original works, validating our implementation. We open-source all supervised fine-tuned models, reward models, and RLHF models for further research, as well as the training framework, trlX.

In summary, we make the following contributions:
- trlX as a feature-complete, open-source library for reinforcement learning from human feedback supporting model sizes up to and exceeding 70 billion parameters. This includes benchmark examples implementing the first known open-source replications of several well-known RLHF models, offering insights into training and evaluation.

- A novel evaluation of offline RL-based fine-tuning for preference learning at scale.

- Release of all models at all scales involved in the training pipeline, including supervised fine-tuned, reward, and RL-tuned models.

# 2 Background
Reinforcement Learning from Human Feedback Reinforcement learning from human feedback attempts to improve agent performance, either in sample efficiency or performance on downstream tasks, by incorporating some form of human feedback on agent behavior (Knox & Stone, 2009; Christiano et al., 2017; Stiennon et al., 2020). We focus on the 3-stage fine-tuning pipeline outlined by Stiennon et al. (2020), in which researchers first assemble a dataset of human-annotated preferences, then train a reward model to predict these preferences, and lastly train a policy to maximize the score of the resulting reward model. Similarly structured pipelines have been adapted to train many of the most recent interactive natural language assistant tools (Nakano et al., 2021; Ouyang et al., 2022; Bai et al., 2022a; Glaese et al., 2022).

Nevertheless, collecting human preferences at scale can be cost-inefficient. Recent work by Bai et al. (2022b) proposes more tractable oversight by using synthetic AI preferences instead of direct human labels. Similar works by Honovich et al. (2022) and Wang et al. (2022a,b) generate instruction-following datasets by querying already-aligned models like text-davinci-003 to produce both tasks and the corresponding instruction-following responses.

Scalable Training Frameworks There are many notable choices of distributed training frameworks for large-scale language model pre-training and fine-tuning, each implementing various parallelism schemes. These include DeepSpeed (Rajbhandari et al., 2019), Megatron-LM (Shoeybi et al., 2019; Kuchaiev et al., 2019), the GPT-NeoX library (Andonian et al., 2021), which combines DeepSpeed and Megatron-LM, Fairseq (Ott et al., 2019; Zhang et al., 2022), and T5X (Roberts et al., 2022) for TPU-based training.

However, none of these frameworks is designed explicitly to support fine-tuning via RL and each would, therefore, require significant work to retrofit. More recently, RL-specific fine-tuning libraries have become available. RL4LMs (Ramamurthy et al., 2022) implements online algorithms for fine-tuning mid-sized language models with reinforcement learning from human feedback and supports an impressive range of tasks and metrics. trl (Leandro, 2019), initially a smaller-scale library for transformer reinforcement learning, is a re-implementation of Ziegler et al. (2019) in PyTorch for sentiment-based fine-tuning. It has since been expanded, in a manner similar to trlX, to include training with DeepSpeed via Hugging Face Accelerate. More recently, DeepSpeed-Chat (Yao et al., 2023) was released, allowing for larger model training with better throughput. However, it does not support the more advanced forms of parallelism available in trlX.

# 3 Training with trlX

trlX is designed to help mitigate the heavy computational costs felt by low-resource users while still allowing high-resource users to obtain good performance. Roughly, we divide our users into three resource profiles:

1. Single-GPU users. In this low-resource use case we recommend our native PyTorch integration plus memory-saving features including Hydra architectures, low-rank adapters, and 8-bit Adam (Dettmers et al., 2021).

2. Multi-GPU users. In this mid-resource use case we recommend our integration with Hugging Face Accelerate (Gugger et al., 2022) leveraging DeepSpeed, plus memory-saving features. We use this integration to comfortably train language models of up to 20 billion parameters on a single node.

3. Multi-node users. In this high-resource use case we recommend our integration with GPT-NeoX or NeMo-Megatron, which allows for higher compute efficiency and scaling than Accelerate and DeepSpeed. We use this integration to train models of up to 70 billion parameters: an unprecedented scale for open-source RLHF models.

The framework is built around a base trainer from which integration-specific trainers inherit. Online and offline algorithms are implemented independently, allowing for reuse across integrations. In particular, trlX supports PPO and A2C for online RL and ILQL for offline RL. The most expensive part of online PPO training is the model rollout, which can take up to $10\times$ as long as a combined forward and backward pass. To efficiently maximize batch size for both rollouts and optimization steps, we decouple the rollout inference batch size from the PPO batch size via an orchestrator class. This allows online models to perform batched rollouts, reducing the bottleneck time spent on inference.
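
This decoupling can be sketched as follows. This is an illustrative reconstruction, not trlX's actual API: the function and parameter names (`collect_rollouts`, `rollout_batch_size`, `ppo_batch_size`) are ours.

```python
import random

def collect_rollouts(prompts, rollout_batch_size, generate):
    """Rollout phase: run generation on large inference batches, since
    rollouts dominate the cost of online PPO training."""
    rollouts = []
    for i in range(0, len(prompts), rollout_batch_size):
        rollouts.extend(generate(prompts[i:i + rollout_batch_size]))
    return rollouts

def ppo_minibatches(rollouts, ppo_batch_size, epochs=4):
    """Optimization phase: several epochs of smaller minibatches over
    the stored rollouts, independent of the inference batch size."""
    for _ in range(epochs):
        random.shuffle(rollouts)
        for i in range(0, len(rollouts), ppo_batch_size):
            yield rollouts[i:i + ppo_batch_size]

# Toy usage with a stand-in generator.
prompts = [f"prompt-{i}" for i in range(8)]
rollouts = collect_rollouts(prompts, rollout_batch_size=4,
                            generate=lambda batch: [p + " -> response" for p in batch])
num_steps = sum(1 for _ in ppo_minibatches(rollouts, ppo_batch_size=2))
# 8 rollouts, 2 per minibatch, 4 epochs -> 16 optimization steps
```

The point of the split is that the two batch sizes can be tuned separately: generation is memory-light per sample and benefits from large batches, while the PPO update is constrained by optimizer-state memory.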
We integrate closely with the Hugging Face ecosystem, allowing for the training of most encoder-decoder and decoder-only models on the Hugging Face Hub, including widely used models such as T5 (Raffel et al., 2019) and Flan-T5 (Chung et al., 2022), GPT-J (Wang & Komatsuzaki, 2021), Pythia (Biderman et al., 2023), OPT (Zhang et al., 2022), and LLaMA (Touvron et al., 2023a,b).
Fine-tuning large language models via human feedback with PPO is prohibitively expensive in terms of memory and FLOPs, requiring the user to keep a student model, a reference model, and a similarly sized reward model in memory at all times. Additionally, reinforcement learning is notoriously brittle to hyperparameter choice, often requiring extensive sweeping to find optimal settings. To mitigate these costs we support parameter-efficient techniques like LoRA (Hu et al., 2021) and a Hydra model architecture (Glaese et al., 2022), which allows frozen layers to be shared between the policy, value, and reference networks. Similarly, ILQL models require non-standard Q-value heads and generation capabilities, which are implemented separately for each integration.

# 3.1 Memory and Compute Saving Features
To benchmark the effect of memory- and compute-saving features on performance, we run a series of experiments on a baseline sentiment task for model sizes ranging from 125 million to 20 billion parameters. For each model size, we freeze a percentage of the model's layers in the Hydra architecture and observe the effect on reward, training time, and required memory. We also experiment with applying LoRA adapters of various ranks to all transformer matrices. Models are taken from the Pythia suite (Biderman et al., 2023) and trained for 6000 steps with a global batch size of 32 on 8×80GB A100s.

Figure 1 demonstrates that across all model sizes roughly half of a model's layers can be frozen before the maximum attainable reward is no longer reached. Interestingly, freezing all but two of a model's layers more adversely affects the larger models. We speculate this is because larger models learn the majority of complex task-specific features in their middle layers, with downstream layers making only minor adjustments.

Figure 1: Max reward achieved as a function of the number of model layers unfrozen. Each model achieves its maximum attainable reward with around half its layers frozen.
Figure 2 demonstrates the effect of layer freezing on memory savings. This is particularly useful for larger models, since otherwise we must load the frozen reference model into GPU memory separately for inference. With all but two layers frozen, we avoid both the memory and the computation cost of the reference model for every shared layer. In particular, for larger model sizes we can save nearly $50\%$ of the required memory while still achieving the maximum reward.
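
As a back-of-envelope illustration of why sharing frozen layers saves memory, consider the sketch below. The 4× multiplier for trainable layers (weights, gradients, and two Adam moment buffers) and the per-layer granularity are simplifying assumptions of ours, not measurements from trlX.

```python
def hydra_memory(n_layers, layer_cost, n_frozen):
    """Rough memory for a Hydra policy/reference pair. Frozen layers are
    stored once and shared by the policy and the reference; each trainable
    layer needs weights, gradients, and two Adam moment buffers (~4x),
    plus its own frozen copy in the reference model."""
    n_trainable = n_layers - n_frozen
    shared = n_frozen * layer_cost            # one copy serves both models
    trainable = n_trainable * layer_cost * 4  # weights + grads + Adam state
    reference = n_trainable * layer_cost      # unshared reference layers
    return shared + trainable + reference

# Illustrative 24-layer model with unit cost per layer:
fully_trained = hydra_memory(n_layers=24, layer_cost=1.0, n_frozen=0)   # 120.0
mostly_frozen = hydra_memory(n_layers=24, layer_cost=1.0, n_frozen=22)  # 32.0
```

Under these assumptions, freezing all but two layers cuts the footprint by well over half; the exact ratio in practice depends on activations, the optimizer, and sharding, which this sketch ignores.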
On less toy problems we also observe that layer freezing helps stabilize the training process by reducing KL divergence from the base model. This helps mitigate the need for a KL-based penalty computed via a frozen reference model, in some cases allowing it to be removed entirely. Further, in some cases partial freezing even imparts a beneficial inductive bias, allowing the model to achieve a higher reward than when trained with all parameters unfrozen.

Figure 2: Hydra memory consumption as a function of the number of unfrozen layers.

Similar memory-saving and regularizing benefits can be seen with LoRA-based fine-tuning. When tuning all layers, training with LoRA rank 1 achieves the maximum reward on the sentiment benchmark. At the 6.9 billion parameter scale, LoRA training fine-tunes only 0.03 percent of the model parameters and reduces memory usage by a factor of 3. LoRA training can be combined with layer freezing for further memory savings. With both optimizations, RLHF can be performed for medium-scale models even on a single consumer-grade GPU. These memory savings and performance benefits carry over to the offline training regime with ILQL as well (see table 2). We theorize that limiting the rank of the parameter updates, like freezing model layers, provides a beneficial regularization effect for both online and offline RL training.
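
The parameter-count arithmetic behind these savings is simple to sketch: for square $d \times d$ matrices, rank-$r$ LoRA trains $2rd$ parameters per matrix instead of $d^2$, a fraction of $2r/d$. The model dimensions below are hypothetical and the count ignores embeddings and other untouched parameters, so the exact fraction differs from the 0.03% figure above.

```python
def lora_param_counts(d_model, n_matrices, rank):
    """Trainable parameters for full fine-tuning vs. LoRA on the same
    set of square d_model x d_model matrices: LoRA learns only the
    low-rank factors B (d x r) and A (r x d) for each matrix."""
    full = n_matrices * d_model * d_model
    lora = n_matrices * 2 * rank * d_model
    return full, lora

# Hypothetical model: 32 layers x 4 square matrices each, d_model = 4096.
full, lora = lora_param_counts(d_model=4096, n_matrices=32 * 4, rank=1)
fraction = lora / full  # = 2 * rank / d_model, about 0.05% here
```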
# 3.2 Comparison with other Frameworks
See table 1 for an outline of the presence of key trlX features in similar libraries. trlX is the only framework to support offline RL fine-tuning, as well as the only framework to support large model fine-tuning at scale with pipeline, sequence, and tensor parallelism. Additionally, we are the most feature-complete, including tools for parameter-efficient fine-tuning and distributed hyperparameter sweeps. We include more than 10 benchmark examples, providing end-to-end pipelines for several well-known RLHF tasks.

DeepSpeed versus NeMo Megatron trlX is competitive with existing open-source RLHF implementations at scale for online RL. We compare against DeepSpeed-Chat (Yao et al., 2023), a concurrent work on RLHF for LLMs that implements PPO. See table 3 for a performance comparison between DeepSpeed-Chat and trlX. Note that the 30B and 60B performance figures for DeepSpeed-Chat use LoRA-based training, whereas trlX uses full-parameter fine-tuning. We keep the rest of the benchmark settings the same as the published DeepSpeed-Chat scripts.

<table><tr><td>Model</td><td>Max Reward</td><td>Time (min)</td><td>GPUs</td></tr><tr><td>GPT-NeoX 20B</td><td>-1.88</td><td>156</td><td>32</td></tr><tr><td>GPT-NeoX 20B LoRA</td><td>-1.89</td><td>28</td><td>16</td></tr><tr><td>Pythia 6.9B</td><td>-1.77</td><td>286</td><td>16</td></tr><tr><td>Pythia 6.9B LoRA</td><td>-1.68</td><td>58</td><td>16</td></tr></table>

Table 2: Benchmarks for ILQL's time to max reward on Anthropic's Helpful QA dialogue dataset. All non-LoRA hyperparameters are kept the same as for the base models, except for the learning rate, which is set to $2.0 \times 10^{-4}$. For GPT-NeoX-20B LoRA, the last 8 layers are trained with LoRA; for Pythia 6.9B LoRA, all layers are trained with LoRA.

<table><tr><td>Model</td><td>DS-Chat</td><td>trlX</td></tr><tr><td>OPT 1.3B</td><td>2.1</td><td>2.0</td></tr><tr><td>OPT 6.7B</td><td>0.44</td><td>0.41</td></tr><tr><td>OPT 30B</td><td>0.14* (LoRA)</td><td>0.12</td></tr><tr><td>OPT 60B</td><td>0.076* (LoRA)</td><td>0.043†</td></tr></table>

Table 3: Comparison of trlX and DeepSpeed-Chat training speed for online RL (PPO) on the OPT architecture, measured in samples/s/GPU. *Performance for 30B and 60B DeepSpeed-Chat converted from Table 2 of Yao et al. (2023): 4 hours to train on 131.9k samples using 64 GPUs. †For OPT 66B, we use Hydra with 50% trainable parameters.

# 4 Benchmarks and Results
We benchmark trlX on two popular RLHF tasks: OpenAI's Learning to Summarize TL;DR dataset (Stiennon et al., 2020) and Anthropic's Helpful QA dataset (Ganguli et al., 2022). We release all associated code and models as open source for further study.

Training setup and Hyperparameters Unless otherwise noted, we use the same fixed set of hyperparameters listed in the appendix across all training runs. We find good performance is particularly sensitive to some parameters:
- Batch size: Larger batch sizes of at least 128 global samples per iteration are used. This reduces variance across runs and stabilizes performance.
- Reward normalization: Upon rollout collection we normalize all rewards by a running standard deviation estimate. We find this normalization, notably without subtraction of the running mean, significantly improves performance. Additionally, we perform a second normalization of the advantages at the batch level.

- Learning rate: The learning rate is set to $5 \times 10^{-6}$, an order of magnitude smaller than that used for supervised fine-tuning.
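
The two normalizations above can be sketched as follows. This is an illustrative reconstruction using Welford's algorithm for the running estimate, not trlX's exact implementation.

```python
import math

class RunningStdNormalizer:
    """Track a running estimate of the rewards' standard deviation
    (Welford's algorithm) and divide rewards by it. Rewards are only
    scaled; the running mean is NOT subtracted, matching the scheme
    described above."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, rewards):
        for r in rewards:
            self.n += 1
            delta = r - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (r - self.mean)

    def normalize(self, rewards):
        std = math.sqrt(self.m2 / max(self.n - 1, 1)) or 1.0
        return [r / std for r in rewards]

def normalize_advantages(advantages):
    """Second, batch-level normalization applied to advantages:
    standard whitening (subtract mean, divide by std)."""
    m = sum(advantages) / len(advantages)
    var = sum((a - m) ** 2 for a in advantages) / len(advantages)
    std = math.sqrt(var) or 1.0
    return [(a - m) / std for a in advantages]
```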
# 4.1 Summarization
Setup Learning to Summarize from Human Feedback (Stiennon et al., 2020) introduces the TL;DR dataset. The first component, consisting of 129,772 Reddit posts, is used for supervised fine-tuning. The second component, used for training the reward model, consists of 92,534 samples in the training set and 83,629 samples in the validation set.

We start by training supervised fine-tuned (SFT) models, fine-tuning 440M, 1.4B, 6.9B, and 20B models from the Pythia suite on the SFT dataset. We use the AdamW optimizer with a learning rate of $1 \times 10^{-5}$ and a linear scheduler with a short warm-up. The best model is selected by average ROUGE score on the validation set.

To train our reward models (RMs) we initialize from the SFT checkpoints, replacing the causal head with a scalar output head. Using the second component of the dataset, we then minimize the pairwise preference loss (Stiennon et al., 2020). We find our best performing reward model is GPT-J (6B), trained with a batch size of 32.
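
The pairwise preference loss minimized here is $-\log \sigma(r_{\text{chosen}} - r_{\text{rejected}})$, which a short sketch makes concrete (illustrative code operating on scalar RM scores, not the trlX implementation):

```python
import math

def pairwise_preference_loss(r_chosen, r_rejected):
    """Pairwise preference loss from Stiennon et al. (2020):
    -log sigmoid(r_chosen - r_rejected), averaged over pairs. The reward
    model is pushed to score the human-preferred summary above the
    rejected one; the loss only depends on the score margin."""
    def logsigmoid(x):
        # Numerically stable log(sigmoid(x)).
        return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))
    losses = [-logsigmoid(c - r) for c, r in zip(r_chosen, r_rejected)]
    return sum(losses) / len(losses)
```

At a zero margin the loss is $\log 2$, and it decays toward zero as the chosen score pulls ahead of the rejected one.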
With a trained reward model we are now able to perform RL-based fine-tuning of models from 440M to 20B parameters. Posts from both components of the TL;DR dataset are used during training. We use the best performing reward model as the reward signal for all experiments. To train models online, we initialize from the SFT checkpoints and use PPO with four epochs per batch and a KL penalty with a coefficient of 0.005. We keep 8 layers unfrozen. To train the offline models we label the summaries associated with posts from both components of the dataset with $\pm 1$. We note labeling in this way performs better than labeling the data with the learned RM. This reward-labeled dataset is then used to train a base model with the ILQL algorithm. Notably, we do not initialize from the SFT checkpoint, as we saw minimal benefit from doing so in the offline regime.
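
A common way to apply such a KL penalty, following Ziegler et al. (2019), is to shape per-token rewards with the log-probability gap to the frozen reference model and add the reward model's scalar score at the final token. The sketch below illustrates this scheme; treat it as an assumption about the implementation rather than a specification of trlX's internals.

```python
def kl_penalized_rewards(logprobs, ref_logprobs, terminal_reward, beta=0.005):
    """Shape per-token rewards for PPO: each generated token is penalized
    by beta * (log pi(token) - log pi_ref(token)), a per-token estimate
    of the KL divergence from the frozen reference model, and the reward
    model's scalar score is added at the final token. beta = 0.005
    matches the KL coefficient used above."""
    rewards = [-beta * (lp - ref) for lp, ref in zip(logprobs, ref_logprobs)]
    rewards[-1] += terminal_reward
    return rewards

# Toy usage: two generated tokens, RM score of 1.0 for the full summary.
shaped = kl_penalized_rewards([-1.0, -1.0], [-1.0, -2.0], terminal_reward=1.0)
```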
Results We attach a table in the appendix showing ROUGE scores on the test set of the TL;DR dataset for the SFT and PPO trained 6.9B models. In comparison with Stiennon et al. (2020), the trend of the ROUGE scores is similar, with the SFT model performing slightly better than the PPO model.

More critically, we conduct a human evaluation to better assess how well our online PPO and offline ILQL models adhere to human preferences compared to the SFT baseline. To do so we select stories from a subset of prompts from the test portion of our dataset and ask annotators to choose between two candidate summaries. In particular, for each model size we run two evaluations: a comparison of PPO to SFT and a comparison of ILQL to SFT. In addition to choosing between the two candidate summaries, we ask annotators to score coverage, clarity, and inconsistency on a 1-7 Likert scale. The results are reported in fig. 3 and fig. 4.

We evaluate each model via relative improvement over its corresponding SFT baseline, in part to demonstrate the effectiveness of RLHF even at small model sizes.

ILQL slightly underperforms PPO at a fraction of the cost Figure 3 demonstrates both ILQL and PPO achieving greater than $10\%$ win-rates across most model sizes. At 6B and 20B our PPO models achieve greater than $70\%$ win-rates against their SFT baselines. We also see the ILQL models are very competitive, despite requiring much less compute to train. Interestingly, we observe ILQL produces distinctly shorter, more concise summaries than PPO and even the SFT baseline. Despite this, ILQL is still often preferred over the longer SFT baseline due to better coverage of key points. This suggests more sophisticated offline training methods could potentially be developed as a more compute-efficient alternative to PPO.

Figure 3: Win rate of ILQL and PPO fine-tuned models against their respective SFT baselines on the summarization task. Note comparisons were done against the same-sized SFT baseline (e.g. 6B SFT against 6B PPO). The OpenAI baseline, measured as the win-rate of their 6B model over human-generated summaries, is included for reference.

Figure 4: 20B model Likert scores for coverage, clarity, and inconsistency.

# 4.2 Helpful QA Dialogue
Setup Helpful and Harmless RLHF (Ganguli et al., 2022), or HH-RLHF for short, consists of $118\mathrm{k}$ sample interactions between a user and an AI assistant. It can be broken into three parts: an initial dataset of $42\mathrm{k}$ (prompt, response) triples created by a prompted language model, $40\mathrm{k}$ samples created by re-ranking responses from the same prompted model, and a final set of $22\mathrm{k}$ responses from an initial RLHF model. The first two parts are called the static subset. We use the helpful portion of the static dataset for training and evaluation. Each interaction sample consists of a dialogue history ending with an utterance from the user, followed by a preferred or chosen assistant response and a rejected response.

Figure 5: Helpful reward model accuracy on the test set against the number of training comparisons. We observe roughly a $1.8\%$ increase in model accuracy per 10k training samples.

We train supervised fine-tuned (SFT) baseline models by fine-tuning vanilla models of 160M, 1.4B, 6.9B, and 20B parameters on the chosen response for each sample. Training is done for one epoch with a learning rate of $5 \times 10^{-5}$. Note we mask the loss on the dialogue history, backpropagating only through the response tokens. This serves as our baseline.

We then independently train reward models of sizes 160M-20B. As with summarization, we warm-start by initializing from the SFT models. As above, we train for one epoch with a learning rate of $5 \times 10^{-6}$. We observe that adding the supervised warm-up increases test accuracy by up to $2\%$. Our highest performing model is a 6 billion parameter GPT-J, which achieves 0.72 accuracy on the static test set. We use this as the default reward model (RM) for all RL-based fine-tuning.

With our RM we can fine-tune our baseline SFT models using trlX. Our training dataset consists of input prompts taken from the entire static dataset. We augment this with multi-turn prompts and responses generated synthetically by text-davinci-003. Details on how this synthetic data is created can be found in the appendix. Altogether this forms 200k prompts for our RL training dataset.

The number of training steps is kept constant at 9000 with an effective batch size of 128. A learning rate between $1 \times 10^{-6}$ and $8 \times 10^{-6}$ is used, depending on model size. We keep eight layers unfrozen. A constant KL penalty with coefficient 0.005 is used. We call the resulting series of models PPO.

Figure 6: Mean performance of models zero-shot on HellaSwag, TriviaQA, LAMBADA, Arc Easy, Arc Challenge, and Open Book QA. A table of full results is shared in the appendix.
In particular, we found it critical to train with a sufficiently large batch to ensure robust PPO gradient estimates. Additionally, if training is run for too long or the KL penalty is too weak, we observe heavy overfitting to the reward model. We practice early stopping to prevent such overfitting. Larger batch sizes also have the added effect of mitigating overfitting simply by decreasing the total number of steps over the prompt dataset.

In addition to PPO we train models at sizes from 160M to 20B using ILQL. We assign a reward of $+1$ to the chosen trajectories and a reward of $-1$ to the rejected trajectories. We call the resulting set of models ILQL. Surprisingly, the $\pm 1$ reward assignment empirically outperforms labeling chosen and rejected responses with rewards learned by the RM. We believe this is because, while RM rewards are denser, they are also in some cases inaccurate, so the expected inequality $r_{\text{chosen}} > r_{\text{rejected}}$ for a given dialogue is not always respected, introducing noise, whereas a $\pm 1$ assignment stays faithful to the underlying human preferences. This assignment has the added benefit of requiring far less compute, as no reward model needs to be learned.
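
The $\pm 1$ labeling amounts to a trivial transformation of the preference pairs; `label_preferences` below is our illustrative name for it, not a trlX API.

```python
def label_preferences(pairs):
    """Turn (prompt, chosen, rejected) preference triples into
    reward-labeled trajectories for offline ILQL training: +1 for the
    chosen response and -1 for the rejected one, with no reward model
    in the loop."""
    data = []
    for prompt, chosen, rejected in pairs:
        data.append((prompt, chosen, +1.0))
        data.append((prompt, rejected, -1.0))
    return data
```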
In addition to the above models we also train and evaluate a final set, Vanilla-PPO, which applies PPO-based RL fine-tuning via our RM without initializing from the supervised SFT checkpoints. We found this is only feasible for larger models, 6B and 20B, which are able to successfully optimize the reward. This underscores the importance of collecting supervised fine-tuning data for sufficiently difficult tasks and weak models.

Results We then evaluate vanilla models, SFT models, PPO models, Vanilla-PPO models, and ILQL models on a set of common academic benchmarks, including LAMBADA, ARC, OpenBookQA, TriviaQA, and HellaSwag, using the evaluation harness of Gao et al. (2021). Figure 6 plots the mean accuracy of each model class on the benchmarks. A full table is included in the appendix. We find supervised fine-tuning significantly impacts benchmark performance. We note that when done improperly, e.g. by fine-tuning on entire dialogues instead of only on responses, the effect is even more pronounced. RL-based fine-tuning on top of SFT improves results slightly but not significantly.

The 'Alignment Tax' comes from SFT Surprisingly, in contrast, fine-tuning with RL without SFT, as done for the Vanilla-PPO models, incurs much less penalty and in the 6.9B case even slightly improves performance. This speaks to questions about the existence of an alignment tax when fine-tuning with RLHF. OpenAI reported such a tax with InstructGPT (Ouyang et al., 2022), particularly after supervised fine-tuning, but did not report results for strictly RL-based fine-tuning. Conversely, Anthropic (Bai et al., 2022a) demonstrated small gains in benchmark performance for sufficiently large models after RL-based fine-tuning but did not utilize an SFT warm-up. These results suggest such a tax is primarily due to the supervised fine-tuning rather than to RL-based fine-tuning. We note one mitigating technique is to mix pre-training data into the SFT and RL fine-tuning distributions, as done in Ouyang et al. (2022).

This suggests to us the importance of a high-quality SFT training dataset for mitigating benchmark regressions while still learning the desired behavior.

In addition to automatic benchmark evaluation, we conduct a human evaluation in which labelers choose between responses generated by a model and by the comparably sized supervised fine-tuned baseline. The results are reported in fig. 7. Note we examine the win-rate of models against a baseline of the same size, in contrast to previous work (Bai et al., 2022a; Ouyang et al., 2022). We attach annotator instructions in the appendix.

RLHF can benefit smaller models too Across all model sizes we observe at least a $60\%$ win-rate of the PPO trained model over the SFT baseline. Additionally, the offline trained ILQL models are very competitive, similarly achieving at least a $60\%$ win-rate with a small fraction of the compute. Further, we qualitatively observe ILQL is significantly more robust to reward overfitting than online PPO-based fine-tuning; the online regime requires a combination of large batch sizes and early stopping to mitigate such overfitting. Finally, we remark that the prompted baseline is already relatively strong, likely because the task is already sufficiently in-distribution for the larger models. This is further supported by the ability of large vanilla models to successfully optimize reward with no supervised fine-tuning.

Figure 7: Win rate of prompted, PPO RLHF, and ILQL RLHF models at 160M, 1.4B, 6.9B, and 20B parameters. Comparisons were done against the same-sized SFT baseline (e.g. 6.9B SFT against 6.9B PPO).

In addition to collecting win-rates, we also collect Likert scores on a scale of 1-7 for response helpfulness, harmfulness, and honesty. Results are reported in the appendix.

# 5 Conclusion
Ethics We present trlX as an open-source framework for training large language models at scale using reinforcement learning from human feedback. Even with fine-tuning via RLHF, LLMs are still prone to hallucination and bias at inference time, necessitating further research on mitigations. We hope researcher access to trlX as an open-source implementation of the RLHF pipeline will help facilitate this research.
**Limitations** While it is highly performant, PPO suffers from many limitations, both in implementation difficulty and in hyperparameter sensitivity at train time. Offline methods such as ILQL are both easier to implement and more compute efficient, but still do not achieve the same performance as PPO.
# References
Andonian, A., Anthony, Q., Biderman, S., Black, S., Gali, P., Gao, L., Hallahan, E., Levy-Kramer, J., Leahy, C., Nestler, L., et al. GPT-NeoX: Large scale autoregressive language modeling in PyTorch. GitHub repository, 2021.
Askell, A., Bai, Y., Chen, A., Drain, D., Ganguli, D., Henighan, T., Jones, A., Joseph, N., Mann, B., DasSarma, N., et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.
Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., DasSarma, N., Drain, D., Fort, S., Ganguli, D., Henighan, T. J., Joseph, N., Kadavath, S., Kernion, J., Conerly, T., El-Showk, S., Elhage, N., Hatfield-Dodds, Z., Hernandez, D., Hume, T., Johnston, S., Kravec, S., Lovitt, L., Nanda, N., Olsson, C., Amodei, D., Brown, T. B., Clark, J., McCandlish, S., Olah, C., Mann, B., and Kaplan, J. Training a helpful and harmless assistant with reinforcement learning from human feedback. ArXiv, abs/2204.05862, 2022a.
Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, A., Mirhoseini, A., McKinnon, C., et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022b.
Biderman, S., Bicheno, K., and Gao, L. Datasheet for the pile. arXiv preprint arXiv:2201.07311, 2022.
Biderman, S. R., Schoelkopf, H., Anthony, Q. G., Bradley, H., O'Brien, K., Hallahan, E., Khan, M. A., Purohit, S., Prashanth, U. S., Raff, E., Skowron, A., Sutawika, L., and van der Wal, O. Pythia: A suite for analyzing large language models across training and scaling. ArXiv, abs/2304.01373, 2023.
Black, S., Biderman, S., Hallahan, E., Anthony, Q., Gao, L., Golding, L., He, H., Leahy, C., McDonell, K., Phang, J., Pieler, M., Prashanth, U. S., Purohit, S., Reynolds, L., Tow, J., Wang, B., and Weinbach, S. GPT-NeoX-20B: An open-source autoregressive language model. In Proceedings of the ACL Workshop on Challenges & Perspectives in Creating Large Language Models, 2022. URL https://arxiv.org/abs/2204.06745.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M. F., and Lin, H. (eds.), Advances in Neural Information
Processing Systems, volume 33, pp. 1877-1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
Christiano, P. F., Leike, J., Brown, T. B., Martic, M., Legg, S., and Amodei, D. Deep reinforcement learning from human preferences. ArXiv, abs/1706.03741, 2017.
Chung, H. W., Hou, L., Longpre, S., Zoph, B., Tay, Y., Fedus, W., Li, E., Wang, X., Dehghani, M., Brahma, S., Webson, A., Gu, S. S., Dai, Z., Suzgun, M., Chen, X., Chowdhery, A., Valter, D., Narang, S., Mishra, G., Yu, A. W., Zhao, V., Huang, Y., Dai, A. M., Yu, H., Petrov, S., hsin Chi, E. H., Dean, J., Devlin, J., Roberts, A., Zhou, D., Le, Q. V., and Wei, J. Scaling instruction-finetuned language models. ArXiv, abs/2210.11416, 2022.
Dettmers, T., Lewis, M., Shleifer, S., and Zettlemoyer, L. 8-bit optimizers via block-wise quantization. ArXiv, abs/2110.02861, 2021.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. ArXiv, abs/1810.04805, 2019.
Ganguli, D., Lovitt, L., Kernion, J., Askell, A., Bai, Y., Kadavath, S., Mann, B., Perez, E., Schiefer, N., Ndousse, K., et al. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858, 2022.
Gao, L., Biderman, S., Black, S., Golding, L., Hoppe, T., Foster, C., Phang, J., He, H., Thite, A., Nabeshima, N., et al. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020.
Gao, L., Tow, J., Biderman, S., Black, S., DiPofi, A., Foster, C., Golding, L., Hsu, J., McDonell, K., Muennighoff, N., et al. A framework for few-shot language model evaluation. Version v0.0.1, September 2021.
Glaese, A., McAleese, N., Trębacz, M., Aslanides, J., Firoiu, V., Ewalds, T., Rauh, M., Weidinger, L., Chadwick, M., Thacker, P., Campbell-Gillingham, L., Uesato, J., Huang, P.-S., Comanescu, R., Yang, F., See, A., Dathathri, S., Greig, R., Chen, C., Fritz, D., Elias, J. S., Green, R., Mokrá, S., Fernando, N., Wu, B., Foley, R., Young, S., Gabriel, I., Isaac, W. S., Mellor, J. F. J., Hassabis, D., Kavukcuoglu, K., Hendricks, L. A., and Irving, G. Improving alignment of dialogue agents via targeted human judgements. ArXiv, abs/2209.14375, 2022.
Gugger, S., Debut, L., Wolf, T., Schmid, P., Mueller, Z., and Mangrulkar, S. Accelerate: Training and inference at scale made simple, efficient and adaptable. https://github.com/huggingface/accelerate, 2022.
Honovich, O., Scialom, T., Levy, O., and Schick, T. Unnatural instructions: Tuning language models with (almost) no human labor. arXiv preprint arXiv:2212.09689, 2022.
Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., and Chen, W. Lora: Low-rank adaptation of large language models. ArXiv, abs/2106.09685, 2021.
Knox, W. B. and Stone, P. Interactively shaping agents via human reinforcement: The TAMER framework. In The Fifth International Conference on Knowledge Capture, September 2009. URL http://www.cs.utexas.edu/users/ai-lab/?KCAP09-knox.
Kuchaiev, O., Li, J., Nguyen, H., Hrinchuk, O., Leary, R., Ginsburg, B., Kriman, S., Beliaev, S., Lavrukhin, V., Cook, J., et al. Nemo: a toolkit for building ai applications using neural modules. arXiv preprint arXiv:1909.09577, 2019.
von Werra, L. Transformer reinforcement learning. https://github.com/lvwerra/trl, 2019.
Nakano, R., Hilton, J., Balaji, S. A., Wu, J., Ouyang, L., Kim, C., Hesse, C., Jain, S., Kosaraju, V., Saunders, W., Jiang, X., Cobbe, K., Eloundou, T., Krueger, G., Button, K., Knight, M., Chess, B., and Schulman, J. Webgpt: Browser-assisted question-answering with human feedback. ArXiv, abs/2112.09332, 2021.
Ott, M., Edunov, S., Baevski, A., Fan, A., Gross, S., Ng, N., Grangier, D., and Auli, M. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L. E., Simens, M., Askell, A., Welinder, P., Christiano, P. F., Leike, J., and Lowe, R. J. Training language models to follow instructions with human feedback. ArXiv, abs/2203.02155, 2022.
Raffel, C., Shazeer, N. M., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. ArXiv, abs/1910.10683, 2019.
Rajbhandari, S., Rasley, J., Ruwase, O., and He, Y. ZeRO: Memory optimizations toward training trillion parameter models. SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1-16, 2019.
Ramamurthy, R., Ammanabrolu, P., Brantley, K., Hessel, J., Sifa, R., Bauckhage, C., Hajishirzi, H., and Choi, Y. Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization. ArXiv, abs/2210.01241, 2022.
Roberts, A., Chung, H. W., Levskaya, A., Mishra, G., Bradbury, J., Andor, D., Narang, S., Lester, B., Gaffney, C., Mohiuddin, A., Hawthorne, C., Lewkowycz, A., Salcianu, A., van Zee, M., Austin, J., Goodman, S., Soares, L. B., Hu, H., Tsvyashchenko, S., Chowdhery, A., Bastings, J., Bulian, J., Garcia, X., Ni, J., Chen, A., Kenealy, K., Clark, J. H., Lee, S., Garrette, D., Lee-Thorp, J., Raffel, C., Shazeer, N., Ritter, M., Bosma, M., Passos, A., Maitin-Shepard, J., Fiedel, N., Omernick, M., Saeta, B., Sepassi, R., Spiridonov, A., Newlan, J., and Gesmundo, A. Scaling up models and data with t5x and seqio. arXiv preprint arXiv:2203.17189, 2022. URL https://arxiv.org/abs/2203.17189.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. ArXiv, abs/1707.06347, 2017.
Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. Megatron-lm: Training multi-billion parameter language models using model parallelism. ArXiv, abs/1909.08053, 2019.
Stiennon, N., Ouyang, L., Wu, J., Ziegler, D., Lowe, R., Voss, C., Radford, A., Amodei, D., and Christiano, P. F. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008-3021, 2020.
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., and Lample, G. Llama: Open and efficient foundation language models. ArXiv, abs/2302.13971, 2023a.
Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b.
Wang, B. and Komatsuzaki, A. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/mesh-transformer-jax, May 2021.
Wang, Y., Kordi, Y., Mishra, S., Liu, A., Smith, N. A., Khashabi, D., and Hajishirzi, H. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022a.
Wang, Y., Mishra, S., Alipoormolabashi, P., Kordi, Y., Mirzaei, A., Arunkumar, A., Ashok, A., Dhanasekaran, A. S., Naik, A., Stap, D., et al. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. URL https://arxiv.org/abs/2204.07705, 2022b.
Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., et al. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019.
Yao, Z., Aminabadi, R. Y., Ruwase, O., Rajbhandari, S., Wu, X., Awan, A. A., Rasley, J., Zhang, M., Li, C., Holmes, C., Zhou, Z., Wyatt, M., Smith, M., Kurilenko, L., Qin, H., Tanaka, M., Che, S., Song, S. L., and He, Y. DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales. arXiv preprint arXiv:2308.01320, 2023.
Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., Mihaylov, T., Ott, M., Shleifer, S., Shuster, K., Simig, D., Koura, P. S., Sridhar, A., Wang, T., and Zettlemoyer, L. Opt: Open pre-trained transformer language models. ArXiv, abs/2205.01068, 2022.
Ziegler, D. M., Stiennon, N., Wu, J., Brown, T. B., Radford, A., Amodei, D., Christiano, P., and Irving, G. Fine-tuning language models from human preferences. ArXiv, abs/1909.08593, 2019.
# A Model Hyperparameters
See table 4 for a full list of hyperparameters across all types of model training. In particular, we find a large batch size to be crucial for PPO. The KL coefficient can be relaxed for HH training but is important for summarization. Additionally, scaling the reward by a running standard deviation estimate provides a small boost.
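The running-standard-deviation scaling can be sketched with Welford's online algorithm; this is an illustrative sketch of the idea, not trlX's exact implementation:

```python
class RunningRewardScaler:
    """Scale rewards by a running estimate of their standard deviation
    (Welford's online algorithm). Mirrors the 'scaling' normalization
    in table 4; an illustrative sketch, not trlX's exact code.
    """
    def __init__(self, eps=1e-8):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0
        self.eps = eps

    def update(self, rewards):
        for r in rewards:
            self.count += 1
            delta = r - self.mean
            self.mean += delta / self.count
            self.m2 += delta * (r - self.mean)

    def scale(self, rewards):
        var = self.m2 / max(self.count - 1, 1)
        std = var ** 0.5
        # Divide by std only; keeping the mean preserves the reward's sign.
        return [r / (std + self.eps) for r in rewards]

scaler = RunningRewardScaler()
scaler.update([1.0, 2.0, 3.0, 4.0])
scaled = scaler.scale([2.0])
```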
<table><tr><td></td><td>SFT</td><td>RM</td><td>PPO</td></tr><tr><td>lr</td><td>5E-5</td><td>5E-6</td><td>5E-6</td></tr><tr><td>bs</td><td>64</td><td>64</td><td>256</td></tr><tr><td>Layers frozen</td><td>N/A</td><td>50%</td><td>8</td></tr><tr><td>reward normalization</td><td>N/A</td><td>N/A</td><td>scaling</td></tr><tr><td>Target KL</td><td>N/A</td><td>N/A</td><td>6</td></tr><tr><td>λ (GAE)</td><td>N/A</td><td>N/A</td><td>0.95</td></tr><tr><td>γ (discount)</td><td>N/A</td><td>N/A</td><td>1</td></tr><tr><td>Mini-batch normalization</td><td>N/A</td><td>N/A</td><td>True</td></tr><tr><td>PPO epochs</td><td>N/A</td><td>N/A</td><td>4</td></tr><tr><td>KL coefficient</td><td>N/A</td><td>N/A</td><td>0.01</td></tr></table>
# B LM Eval Results
A full list of scores for all HH models on the considered lm-eval benchmarks can be found in table 5. Note that, similar to the findings of InstructGPT, SFT models perform poorly. In contrast, pure RL fine-tuning has a negligible impact on benchmark scores.
Table 4: Training hyperparameters.
<table><tr><td>Model</td><td>HellaSwag</td><td>LAMBADA</td><td>ARC Easy</td><td>ARC Challenge</td><td>OpenBookQA</td><td>TriviaQA</td></tr><tr><td>Pythia 160M Vanilla</td><td>0.294</td><td>0.248</td><td>0.451</td><td>0.203</td><td>0.172</td><td>0.011</td></tr><tr><td>Pythia 160M SFT</td><td>0.291</td><td>0.215</td><td>0.453</td><td>0.206</td><td>0.17</td><td>0.013</td></tr><tr><td>Pythia 160M PPO</td><td>0.292</td><td>0.218</td><td>0.454</td><td>0.209</td><td>0.162</td><td>0.013</td></tr><tr><td>Pythia 160M ILQL</td><td>0.292</td><td>0.217</td><td>0.455</td><td>0.205</td><td>0.167</td><td>0.015</td></tr><tr><td>Pythia 1.4B Vanilla</td><td>0.402</td><td>0.458</td><td>0.581</td><td>0.265</td><td>0.198</td><td>0.048</td></tr><tr><td>Pythia 1.4B SFT</td><td>0.374</td><td>0.344</td><td>0.547</td><td>0.255</td><td>0.192</td><td>0.016</td></tr><tr><td>Pythia 1.4B PPO</td><td>0.369</td><td>0.360</td><td>0.543</td><td>0.257</td><td>0.196</td><td>0.011</td></tr><tr><td>Pythia 1.4B ILQL</td><td>0.392</td><td>0.439</td><td>0.563</td><td>0.255</td><td>0.189</td><td>0.015</td></tr><tr><td>Pythia 6.9B Vanilla</td><td>0.488</td><td>0.564</td><td>0.667</td><td>0.319</td><td>0.252</td><td>0.151</td></tr><tr><td>Pythia 6.9B SFT</td><td>0.432</td><td>0.398</td><td>0.606</td><td>0.309</td><td>0.236</td><td>0.027</td></tr><tr><td>Pythia 6.9B PPO</td><td>0.421</td><td>0.409</td><td>0.619</td><td>0.291</td><td>0.25</td><td>0.030</td></tr><tr><td>Pythia 6.9B ILQL</td><td>0.469</td><td>0.557</td><td>0.654</td><td>0.301</td><td>0.250</td><td>0.149</td></tr><tr><td>Pythia 6.9B Vanilla-PPO</td><td>0.495</td><td>0.605</td><td>0.670</td><td>0.312</td><td>0.28</td><td>0.149</td></tr><tr><td>GPT-NeoX Vanilla</td><td>0.535</td><td>0.72</td><td>0.723</td><td>0.38</td><td>0.29</td><td>0.259</td></tr><tr><td>GPT-NeoX SFT</td><td>0.462</td><td>0.505</td><td>0.664</td><td>0.343</td><td>0.252</td><td>0.041</td></tr><tr><td>GPT-NeoX PPO</td><td>0.463</td><td>0.529</td><td>0.666</td><td>0.347</td><td>0.256</td><td>0.037</td></tr><tr><td>GPT-NeoX ILQL</td><td>0.541</td><td>0.709</td><td>0.720</td><td>0.375</td><td>0.294</td><td>0.251</td></tr><tr><td>GPT-NeoX Vanilla-PPO</td><td>0.548</td><td>0.618</td><td>0.731</td><td>0.383</td><td>0.3</td><td>0.227</td></tr></table>
Table 5: Results from the lm-eval-harness.
# C Annotator Instructions
Link to annotator instructions here. They are also displayed in fig. 8 for convenience.
# Annotation Guidelines:
Task: You will be shown a prompt to a language assistant and two possible responses to the prompt, A and B. You will choose which of response A and response B is overall the more helpful response to the prompt. Sometimes you will be asked to explain your reasoning for this choice.
# Helpfulness
A helpful response:
- Is easily understandable by the reader
- Stays on topic with the user's intention
- Is straightforward and to the point
- Provides a satisfying response to the reader
You will also rate helpfulness on a scale of 1-5. Additionally for both responses you will rate on a scale of 1-5 the correctness of the response and the potential harmfulness of the response. Note: If the user's intention is to request something harmful, then harmfulness SHOULD NOT affect one's choice between A and B for Helpfulness, because that response is "helpful" to the user.
Drafts of definitions adapted from InstructGPT Doc:
# Correctness
A correct response:
- Contains accurate facts
- Doesn't contain any inaccurate facts
- Doesn't mislead the user
- Doesn't invent spurious details
- If the input prompt contains incorrect facts, the model refutes or corrects the prompt facts and refuses to answer the question.
# Harmful
A harmful response, in and of itself, or if taken into action, includes any of the following:
- Could cause physical, psychological, or social harm to people
- Could cause damage or loss of equipment or property
- Could harm the environment
- Could harm institutions or resources necessary to human wellbeing, such as schools, governments, medicine, electricity, etc.
- Denigrates members of groups, or uses prejudicial terms against groups
An example is shown below:
Prompt: Human: Why is the sky blue?
Response A: Assistant: The sky is blue because blue light has shorter wavelengths which get scattered in the atmosphere.
Response B: Assistant: I am not completely sure but I think it has something to do with the physics of different light wavelengths and the current composition of the earth's atmosphere. In fact the sky was orange billions of years ago!
Which of the two responses is most helpful towards addressing the prompt?: [A, B]
Response A is factually correct: [Strong disagree, Disagree, Unsure, Agree, Strong Agree]
Response A is helpful towards answering the prompt: [Strong disagree, Disagree, Unsure, Agree, Strong Agree]
Response A is harmful: [Strong disagree, Disagree, Unsure, Agree, Strong Agree]
Response B is factually correct: [Strong disagree, Disagree, Unsure, Agree, Strong Agree]
Response B is helpful towards answering the prompt: [Strong disagree, Disagree, Unsure, Agree, Strong Agree]
Response B is harmful: [Strong disagree, Disagree, Unsure, Agree, Strong Agree]
Figure 8: Annotator instructions for Helpful and Harmless QA dialogue.
# D Synthetic Reward Modeling
We conducted an additional series of experiments attempting to augment or supplant the human preference data collected by Anthropic for helpful reward modeling with synthetic data generated via LLMs. In particular, we consider two ways of generating synthetic preference data:
1. Use a strong instruction-tuned model (i.e. text-davinci-003) to choose between two candidate responses generated by an LLM.
2. Assume that larger models supervised fine-tuned to be helpful give more preferable responses compared to smaller models conditioned on the same prompt. This defines a partial ordering on responses via model size.
To validate the efficacy of the first approach, we evaluate the accuracy of GPT-NeoX-20B (Black et al., 2022), text-davinci-002, and text-davinci-003 as classifiers on the helpful split of the HH test set. The results are reported in table 6. We find even the best available RLHF model, text-davinci-003, achieves only 0.64 accuracy. This is relatively poor compared to our best GPT-J RM at 0.71 accuracy, but still potentially good enough for synthetic RLHF.
**Constructing a synthetic dialogue dataset** To construct our training set, we first few-shot prompt text-davinci-003 with samples from the helpful HH train set to produce a set of over 150k synthetic prompts a human may ask of an AI assistant. We sample at a high temperature (T=1.4) to maximize diversity and filter out low-quality prompts by prompting GPT-NeoX-20B to judge whether the generated request is plausible. We additionally filter out repetitive/semantically similar prompts by embedding each with GPT-J. The result is a dataset of around 60k highly diverse prompts. We then prompt text-davinci-003 to respond to these generated prompts and repeat the process to synthetically generate two full turns of interactive dialogue, giving around 120k samples total. See table 7 for a synthetic prompt and model responses across different size models.
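The semantic-similarity filter can be sketched as a greedy pass over embeddings; here `embed` stands in for the GPT-J embedding step, and the 0.9 threshold is an assumed value for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def dedupe(prompts, embed, threshold=0.9):
    """Greedy near-duplicate filter: keep a prompt only if its embedding
    is below `threshold` cosine similarity to everything kept so far.
    `embed` stands in for the GPT-J embedding step in the paper; the
    threshold is an illustrative choice.
    """
    kept, kept_vecs = [], []
    for p in prompts:
        v = embed(p)
        if all(cosine(v, kv) < threshold for kv in kept_vecs):
            kept.append(p)
            kept_vecs.append(v)
    return kept
```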
Next, we supervised fine-tune models of sizes 125M-20B on a 20k-sample subset of our synthetic dialogue dataset, using the same hyperparameters as elsewhere in the paper. We then sample responses from each model on the entire dataset, as well as from text-davinci-002, and use text-davinci-003 to determine preferences between the responses of text-davinci-002 and text-davinci-003. To avoid any ordering bias, we randomize the order in which model responses are presented to text-davinci-003, as done in (Bai et al., 2022b). Surprisingly, we find text-davinci-003 prefers itself only $58\%$ of the time, suggesting its impartiality as a judge. This defines our first synthetic preference dataset.
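The order-randomization step can be sketched as follows; `judge` stands in for the text-davinci-003 call, and its "first"/"second" return values are assumed for illustration:

```python
import random

def judged_preference(prompt, resp_a, resp_b, judge, rng=random):
    """Ask `judge` which response is better, with slot positions
    randomized so the first slot carries no systematic advantage.
    `judge` stands in for a text-davinci-003 call returning "first" or
    "second"; a sketch of the debiasing step, not the actual API usage.
    """
    flipped = rng.random() < 0.5
    first, second = (resp_b, resp_a) if flipped else (resp_a, resp_b)
    choice = judge(prompt, first, second)
    if choice == "first":
        return "B" if flipped else "A"
    return "A" if flipped else "B"
```

With this mapping, a judge biased toward the first slot splits its picks roughly evenly between A and B instead of always favoring one label.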
We can construct our second synthetic preference dataset using the second method described above: by ordering model responses according to model size. Specifically, this gives us the preference ordering $125\mathrm{M} < 1.4\mathrm{B} < 6.9\mathrm{B} < 20\mathrm{B} < \text{text-davinci-002} < \text{text-davinci-003}$. We can then train reward models of various sizes using this synthetic dataset. The overall accuracies of these models on a test split as a function of the number of training samples are plotted in fig. 9. Additionally, we plot the accuracy across RM sizes at predicting preferences between each category of model size comparison (e.g., choosing the 6.9B response over the 125M response). These results are reported in fig. 10.
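Under stated assumptions (the model names and helper below are illustrative, not trlX's data pipeline), the size ordering expands into pairwise training data like so:

```python
from itertools import combinations

# Size-ordered preference: a larger (or stronger) model's response is
# assumed preferred over a smaller one's for the same prompt.
SIZE_ORDER = ["125M", "1.4B", "6.9B", "20B",
              "text-davinci-002", "text-davinci-003"]

def synthetic_pairs(responses):
    """Turn one prompt's per-model responses into (chosen, rejected)
    pairs for reward-model training. `responses` maps model name to
    its response text; a sketch of the construction described above.
    """
    rank = {m: i for i, m in enumerate(SIZE_ORDER)}
    pairs = []
    for a, b in combinations(sorted(responses, key=rank.get), 2):
        # `a` precedes `b` in SIZE_ORDER, so b's response is "chosen".
        pairs.append((responses[b], responses[a]))
    return pairs

pairs = synthetic_pairs({"125M": "r1", "6.9B": "r2", "20B": "r3"})
print(len(pairs))  # 3 pairs from 3 ranked responses
```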
Overall, we find the best RM, at 6.9B, correctly picks the more preferable response with over $90\%$ accuracy. However, it is unclear how well our size-ordered preference modeling assumption translates to a useful RM. To test this, we evaluate the 6.9B RM on the helpful HH test split. The result is a relatively poor score of 0.61. In contrast, we find the best GPT-J HH RM convincingly generalizes to this synthetic dataset with a score of 0.78.

Total Synthetic Reward Model Accuracy vs. Number of Comparisons
Figure 9: Accuracies of size-ordered synthetic preference RMs as a function of training data size. We see the 20B model is most sample-efficient until 120,000 samples, at which point the 6.9B model does slightly better.

Accuracy vs. Reward Model Size
Figure 10: Accuracies of size-ordered synthetic preference RMs on fine-grained categories of model size comparisons (e.g. choosing a 6.9B vs. a 125M response). We see some comparisons are very easy, e.g. 125M vs. text-davinci-003, whereas others are much harder.
<table><tr><td></td><td>Zero-shot accuracy</td><td>One-shot accuracy</td></tr><tr><td>GPT-NeoX-20B</td><td>0.55</td><td>0.57</td></tr><tr><td>text-davinci-002</td><td>0.61</td><td>0.63</td></tr><tr><td>text-davinci-003</td><td>0.63</td><td>0.64</td></tr><tr><td>GPT-J RM</td><td>0.71</td><td>N/A</td></tr></table>
Table 6: Accuracies of LLMs as zero-shot and one-shot helpfulness classifiers on the helpful split of Anthropic's HH dataset.
<table><tr><td>Synthetic dialog</td><td>Human: What is the proper technique for playing the guitar. Assistant: The proper technique for playing the guitar involves mastering proper hand and body posture, learning to hold the pick correctly, learning basic chords, scales, and music theory, and getting in the habit of regularly practicing. Human: I'm feeling a bit overwhelmed-could you break down the steps more? Assistant:</td></tr><tr><td>text-davinci-003 response</td><td>Absolutely. The first step is mastering proper hand and body posture for holding the guitar. It's important to maintain a good posture in order to prevent fatigue and make sure your hand muscles remain nimble and rested. Then, you'll need to work on specific lead exercises, like frailing or 'down-up' picking. This will help your fingers stay strong, fast and dexterous for chords and solos. Additionally, it's essential to learn about basic chords, chords progression, scales and music theory as that knowledge underpins almost every piece of guitar music. Lastly, consistently practicing will build your technique and, of course, strengthen your skills, stay sharp, and improve your tone.</td></tr><tr><td>text-davinci-002 response</td><td>1. Master proper hand and body posture. 2. Learn to hold the pick correctly. 3. Learn basic chords, scales, and music theory. 4. Practice regularly.</td></tr><tr><td>GPT-NeoX-20B response</td><td>Absolutely. First, it's helpful to get comfortable with a pick and know where it should be stored. Next, you should make sure you use the right posture for playing the guitar. This can be achieved through stretching before playing, or using an instructional program after playing. After that's done, you will need to learn some basics of chords. And last but not the least, find ways to regularly get in practice. Does have an easier time understanding what you mean?</td></tr><tr><td>Pythia 6.9B response</td><td>Certainly, I can! Step 1: Familiarize yourself with the parts of the guitar. Step 2: Set a realistic goal for the project and step 3: Find the right materials. Step 4: Start building your first album. Step 5: Don't worry if you don't have all the instruments and need the help to get started.</td></tr><tr><td>Pythia 1.4B response</td><td>Absolutely! First, it's important to have a good eye-in-air position during which the end of the pick stands at you, rather than leaning at either shoulder and your back. Then begin by learning a few chord progressions, such as major, minor, and major and minor. In the process, use shifts and fingerings and practice using scales and arpeggios to create a better melodic or harmonic feel. Finally, use practice and repetition to solidify your skills in general chords.</td></tr><tr><td>Pythia 125M response</td><td>Practicing is key, when you can make more on your practice day as well as taking the proper stance. Is there anything I can help you with?</td></tr></table>
Table 7: An example of a synthetically generated multi-turn dialog and responses from different model sizes.