Add Batch b9c9350a-8021-436e-98de-5dc0ac7b64a8 data
This view is limited to 50 files because it contains too many changes.
- .gitattributes +46 -0
- 2025/2D-DPO_ Scaling Direct Preference Optimization with 2-Dimensional Supervision/22ad622f-3151-4838-8999-a0ed67034d48_content_list.json +0 -0
- 2025/2D-DPO_ Scaling Direct Preference Optimization with 2-Dimensional Supervision/22ad622f-3151-4838-8999-a0ed67034d48_model.json +0 -0
- 2025/2D-DPO_ Scaling Direct Preference Optimization with 2-Dimensional Supervision/22ad622f-3151-4838-8999-a0ed67034d48_origin.pdf +3 -0
- 2025/2D-DPO_ Scaling Direct Preference Optimization with 2-Dimensional Supervision/full.md +910 -0
- 2025/2D-DPO_ Scaling Direct Preference Optimization with 2-Dimensional Supervision/images.zip +3 -0
- 2025/2D-DPO_ Scaling Direct Preference Optimization with 2-Dimensional Supervision/layout.json +0 -0
- 2025/A Closer Look into Mixture-of-Experts in Large Language Models/f7a30fc9-6d0c-4d08-b734-9d11e9be50e8_content_list.json +0 -0
- 2025/A Closer Look into Mixture-of-Experts in Large Language Models/f7a30fc9-6d0c-4d08-b734-9d11e9be50e8_model.json +0 -0
- 2025/A Closer Look into Mixture-of-Experts in Large Language Models/f7a30fc9-6d0c-4d08-b734-9d11e9be50e8_origin.pdf +3 -0
- 2025/A Closer Look into Mixture-of-Experts in Large Language Models/full.md +446 -0
- 2025/A Closer Look into Mixture-of-Experts in Large Language Models/images.zip +3 -0
- 2025/A Closer Look into Mixture-of-Experts in Large Language Models/layout.json +0 -0
- 2025/A Comprehensive Survey of Contemporary Arabic Sentiment Analysis_ Methods, Challenges, and Future Directions/3e5ad7ac-2ae2-4123-b27f-008e22e87b8d_content_list.json +1558 -0
- 2025/A Comprehensive Survey of Contemporary Arabic Sentiment Analysis_ Methods, Challenges, and Future Directions/3e5ad7ac-2ae2-4123-b27f-008e22e87b8d_model.json +0 -0
- 2025/A Comprehensive Survey of Contemporary Arabic Sentiment Analysis_ Methods, Challenges, and Future Directions/3e5ad7ac-2ae2-4123-b27f-008e22e87b8d_origin.pdf +3 -0
- 2025/A Comprehensive Survey of Contemporary Arabic Sentiment Analysis_ Methods, Challenges, and Future Directions/full.md +302 -0
- 2025/A Comprehensive Survey of Contemporary Arabic Sentiment Analysis_ Methods, Challenges, and Future Directions/images.zip +3 -0
- 2025/A Comprehensive Survey of Contemporary Arabic Sentiment Analysis_ Methods, Challenges, and Future Directions/layout.json +0 -0
- 2025/A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation/ae6113f0-de7c-4c0a-bba5-fdd5b722496c_content_list.json +0 -0
- 2025/A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation/ae6113f0-de7c-4c0a-bba5-fdd5b722496c_model.json +0 -0
- 2025/A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation/ae6113f0-de7c-4c0a-bba5-fdd5b722496c_origin.pdf +3 -0
- 2025/A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation/full.md +529 -0
- 2025/A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation/images.zip +3 -0
- 2025/A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation/layout.json +0 -0
- 2025/A Federated Framework for LLM-based Recommendation/85497404-7d8f-4d06-b857-c1485f62f52d_content_list.json +0 -0
- 2025/A Federated Framework for LLM-based Recommendation/85497404-7d8f-4d06-b857-c1485f62f52d_model.json +0 -0
- 2025/A Federated Framework for LLM-based Recommendation/85497404-7d8f-4d06-b857-c1485f62f52d_origin.pdf +3 -0
- 2025/A Federated Framework for LLM-based Recommendation/full.md +423 -0
- 2025/A Federated Framework for LLM-based Recommendation/images.zip +3 -0
- 2025/A Federated Framework for LLM-based Recommendation/layout.json +0 -0
- 2025/A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization_ Data Augmentation and Semi-supervised Approaches/4d81316e-a002-42ff-afb1-c684630b2a03_content_list.json +0 -0
- 2025/A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization_ Data Augmentation and Semi-supervised Approaches/4d81316e-a002-42ff-afb1-c684630b2a03_model.json +0 -0
- 2025/A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization_ Data Augmentation and Semi-supervised Approaches/4d81316e-a002-42ff-afb1-c684630b2a03_origin.pdf +3 -0
- 2025/A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization_ Data Augmentation and Semi-supervised Approaches/full.md +465 -0
- 2025/A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization_ Data Augmentation and Semi-supervised Approaches/images.zip +3 -0
- 2025/A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization_ Data Augmentation and Semi-supervised Approaches/layout.json +0 -0
- 2025/A Large-Scale Benchmark for Vietnamese Sentence Paraphrases/57e20527-a109-4d68-8e0b-0c23a110c2ae_content_list.json +0 -0
- 2025/A Large-Scale Benchmark for Vietnamese Sentence Paraphrases/57e20527-a109-4d68-8e0b-0c23a110c2ae_model.json +0 -0
- 2025/A Large-Scale Benchmark for Vietnamese Sentence Paraphrases/57e20527-a109-4d68-8e0b-0c23a110c2ae_origin.pdf +3 -0
- 2025/A Large-Scale Benchmark for Vietnamese Sentence Paraphrases/full.md +453 -0
- 2025/A Large-Scale Benchmark for Vietnamese Sentence Paraphrases/images.zip +3 -0
- 2025/A Large-Scale Benchmark for Vietnamese Sentence Paraphrases/layout.json +0 -0
- 2025/A Practical Analysis of Human Alignment with _PO/2cd002da-3dfa-4ffb-9fba-c06d76c990ec_content_list.json +1526 -0
- 2025/A Practical Analysis of Human Alignment with _PO/2cd002da-3dfa-4ffb-9fba-c06d76c990ec_model.json +2044 -0
- 2025/A Practical Analysis of Human Alignment with _PO/2cd002da-3dfa-4ffb-9fba-c06d76c990ec_origin.pdf +3 -0
- 2025/A Practical Analysis of Human Alignment with _PO/full.md +276 -0
- 2025/A Practical Analysis of Human Alignment with _PO/images.zip +3 -0
- 2025/A Practical Analysis of Human Alignment with _PO/layout.json +0 -0
- 2025/A Practical Examination of AI-Generated Text Detectors for Large Language Models/b14cf0fa-1fb0-49d3-8f53-f8604a27965e_content_list.json +0 -0
.gitattributes
CHANGED
@@ -57,3 +57,49 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+2025/2D-DPO_[[:space:]]Scaling[[:space:]]Direct[[:space:]]Preference[[:space:]]Optimization[[:space:]]with[[:space:]]2-Dimensional[[:space:]]Supervision/22ad622f-3151-4838-8999-a0ed67034d48_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Closer[[:space:]]Look[[:space:]]into[[:space:]]Mixture-of-Experts[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/f7a30fc9-6d0c-4d08-b734-9d11e9be50e8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Comprehensive[[:space:]]Survey[[:space:]]of[[:space:]]Contemporary[[:space:]]Arabic[[:space:]]Sentiment[[:space:]]Analysis_[[:space:]]Methods,[[:space:]]Challenges,[[:space:]]and[[:space:]]Future[[:space:]]Directions/3e5ad7ac-2ae2-4123-b27f-008e22e87b8d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Context-Aware[[:space:]]Contrastive[[:space:]]Learning[[:space:]]Framework[[:space:]]for[[:space:]]Hateful[[:space:]]Meme[[:space:]]Detection[[:space:]]and[[:space:]]Segmentation/ae6113f0-de7c-4c0a-bba5-fdd5b722496c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Federated[[:space:]]Framework[[:space:]]for[[:space:]]LLM-based[[:space:]]Recommendation/85497404-7d8f-4d06-b857-c1485f62f52d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Guide[[:space:]]To[[:space:]]Effectively[[:space:]]Leveraging[[:space:]]LLMs[[:space:]]for[[:space:]]Low-Resource[[:space:]]Text[[:space:]]Summarization_[[:space:]]Data[[:space:]]Augmentation[[:space:]]and[[:space:]]Semi-supervised[[:space:]]Approaches/4d81316e-a002-42ff-afb1-c684630b2a03_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Large-Scale[[:space:]]Benchmark[[:space:]]for[[:space:]]Vietnamese[[:space:]]Sentence[[:space:]]Paraphrases/57e20527-a109-4d68-8e0b-0c23a110c2ae_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Practical[[:space:]]Analysis[[:space:]]of[[:space:]]Human[[:space:]]Alignment[[:space:]]with[[:space:]]_PO/2cd002da-3dfa-4ffb-9fba-c06d76c990ec_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Practical[[:space:]]Examination[[:space:]]of[[:space:]]AI-Generated[[:space:]]Text[[:space:]]Detectors[[:space:]]for[[:space:]]Large[[:space:]]Language[[:space:]]Models/b14cf0fa-1fb0-49d3-8f53-f8604a27965e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Practical[[:space:]]Method[[:space:]]for[[:space:]]Generating[[:space:]]String[[:space:]]Counterfactuals/2d0e547d-2c88-448d-b995-0a0f806121b3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Recipe[[:space:]]of[[:space:]]Parallel[[:space:]]Corpora[[:space:]]Exploitation[[:space:]]for[[:space:]]Multilingual[[:space:]]Large[[:space:]]Language[[:space:]]Models/0082b7d3-27e1-4e78-966c-f3a90564dd12_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Survey[[:space:]]to[[:space:]]Recent[[:space:]]Progress[[:space:]]Towards[[:space:]]Understanding[[:space:]]In-Context[[:space:]]Learning/98d392f9-1212-430d-8118-b75449d89059_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ARISE_[[:space:]]Iterative[[:space:]]Rule[[:space:]]Induction[[:space:]]and[[:space:]]Synthetic[[:space:]]Data[[:space:]]Generation[[:space:]]for[[:space:]]Text[[:space:]]Classification/fd085659-b9b8-4a56-9a85-de329dcdaf9e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ASRank_[[:space:]]Zero-Shot[[:space:]]Re-Ranking[[:space:]]with[[:space:]]Answer[[:space:]]Scent[[:space:]]for[[:space:]]Document[[:space:]]Retrieval/6690a2ac-67eb-48d4-a204-81090c210fd0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Accounting[[:space:]]for[[:space:]]Sycophancy[[:space:]]in[[:space:]]Language[[:space:]]Model[[:space:]]Uncertainty[[:space:]]Estimation/95006bdd-d5d3-4909-949c-5a60ed6f92ac_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AcrosticSleuth_[[:space:]]Probabilistic[[:space:]]Identification[[:space:]]and[[:space:]]Ranking[[:space:]]of[[:space:]]Acrostics[[:space:]]in[[:space:]]Multilingual[[:space:]]Corpora/664e5fd9-586d-4999-ad54-834d0aa67d9d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AdParaphrase_[[:space:]]Paraphrase[[:space:]]Dataset[[:space:]]for[[:space:]]Analyzing[[:space:]]Linguistic[[:space:]]Features[[:space:]]toward[[:space:]]Generating[[:space:]]Attractive[[:space:]]Ad[[:space:]]Texts/d0a41cd2-fa3a-4cf9-b210-3205bfe27ab3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Adapting[[:space:]]LLM[[:space:]]Agents[[:space:]]with[[:space:]]Universal[[:space:]]Communication[[:space:]]Feedback/dbbc3b1f-3be7-484c-a97d-9a892671a1d8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Adaptive[[:space:]]Attacks[[:space:]]Break[[:space:]]Defenses[[:space:]]Against[[:space:]]Indirect[[:space:]]Prompt[[:space:]]Injection[[:space:]]Attacks[[:space:]]on[[:space:]]LLM[[:space:]]Agents/8c3a093c-008b-4356-ad03-430218d7ad03_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Adaptive[[:space:]]Parameter[[:space:]]Compression[[:space:]]for[[:space:]]Language[[:space:]]Models/99a1f4a8-e7b7-4bc0-bf94-1f77924fc5db_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Adaptive[[:space:]]Retrieval-Augmented[[:space:]]Generation[[:space:]]for[[:space:]]Conversational[[:space:]]Systems/9581fa9c-efab-4549-bc96-151b61d313d8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Advancing[[:space:]]Persian[[:space:]]LLM[[:space:]]Evaluation/a9819853-c519-484c-b94e-dc6ef33b2b73_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Advocating[[:space:]]Character[[:space:]]Error[[:space:]]Rate[[:space:]]for[[:space:]]Multilingual[[:space:]]ASR[[:space:]]Evaluation/230865a1-3510-4dde-a66d-c3c3def23b60_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Aligning[[:space:]]Black-box[[:space:]]Language[[:space:]]Models[[:space:]]with[[:space:]]Human[[:space:]]Judgments/400f4092-08c3-40f5-9409-486e85473cad_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Aligning[[:space:]]to[[:space:]]Constraints[[:space:]]for[[:space:]]Data-Efficient[[:space:]]Language[[:space:]]Model[[:space:]]Customization/4b718305-9a84-4eb2-a2df-a452f01f9a8d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Aligning[[:space:]]to[[:space:]]What_[[:space:]]Limits[[:space:]]to[[:space:]]RLHF[[:space:]]Based[[:space:]]Alignment/8a34cc21-0a27-4a1a-a50c-bc7cdd56be6e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Alleviating[[:space:]]Hallucinations[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]through[[:space:]]Induced[[:space:]]Hallucinations/1ab237d9-382c-491f-88b5-13a9e4c593de_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/An[[:space:]]Annotated[[:space:]]Dataset[[:space:]]of[[:space:]]Errors[[:space:]]in[[:space:]]Premodern[[:space:]]Greek[[:space:]]and[[:space:]]Baselines[[:space:]]for[[:space:]]Detecting[[:space:]]Them/f78f5f5a-9119-4649-875d-c84d78ddc715_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/An[[:space:]]Efficient[[:space:]]Rehearsal[[:space:]]Scheme[[:space:]]for[[:space:]]Catastrophic[[:space:]]Forgetting[[:space:]]Mitigation[[:space:]]during[[:space:]]Multi-stage[[:space:]]Fine-tuning/b52ebe77-4d3d-4aa0-b6a4-9cec5ca2eb82_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/An[[:space:]]Optimizable[[:space:]]Suffix[[:space:]]Is[[:space:]]Worth[[:space:]]A[[:space:]]Thousand[[:space:]]Templates_[[:space:]]Efficient[[:space:]]Black-box[[:space:]]Jailbreaking[[:space:]]without[[:space:]]Affirmative[[:space:]]Phrases[[:space:]]via[[:space:]]LLM[[:space:]]as[[:space:]]Optimizer/05f0459f-50fa-42a1-aa78-4f80b6be386d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/An[[:space:]]empirical[[:space:]]study[[:space:]]of[[:space:]]validating[[:space:]]synthetic[[:space:]]data[[:space:]]for[[:space:]]formula[[:space:]]generation/7f697dc9-109b-4f5d-9ae9-0e6d3e7d4220_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Analysis[[:space:]]of[[:space:]]LLM[[:space:]]as[[:space:]]a[[:space:]]grammatical[[:space:]]feature[[:space:]]tagger[[:space:]]for[[:space:]]African[[:space:]]American[[:space:]]English/342c2813-3ca9-419e-8ce5-965f707fbae4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Are[[:space:]]Language[[:space:]]Models[[:space:]]Agnostic[[:space:]]to[[:space:]]Linguistically[[:space:]]Grounded[[:space:]]Perturbations_[[:space:]]A[[:space:]]Case[[:space:]]Study[[:space:]]of[[:space:]]Indic[[:space:]]Languages/e1fa9887-051c-4832-866b-862bf99239f1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Are[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]Effective[[:space:]]in[[:space:]]Clinical[[:space:]]Trial[[:space:]]Design_[[:space:]]A[[:space:]]Study[[:space:]]on[[:space:]]Baseline[[:space:]]Feature[[:space:]]Generation/cc5d8570-30d7-4c38-9d06-f369816fd26b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/As[[:space:]]easy[[:space:]]as[[:space:]]PIE_[[:space:]]understanding[[:space:]]when[[:space:]]pruning[[:space:]]causes[[:space:]]language[[:space:]]models[[:space:]]to[[:space:]]disagree/05b49b77-e416-402c-89b5-893cf23f3662_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Ask[[:space:]]Optimal[[:space:]]Questions_[[:space:]]Aligning[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]with[[:space:]]Retriever’s[[:space:]]Preference[[:space:]]in[[:space:]]Conversation/da50d15a-ddd2-4f4e-83cb-745191ac8eac_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AssertionBench_[[:space:]]A[[:space:]]Benchmark[[:space:]]to[[:space:]]Evaluate[[:space:]]Large-Language[[:space:]]Models[[:space:]]for[[:space:]]Assertion[[:space:]]Generation/c62cbc37-2b37-42e6-8d24-dcee87e7c962_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Assessing[[:space:]]LLMs[[:space:]]for[[:space:]]Zero-shot[[:space:]]Abstractive[[:space:]]Summarization[[:space:]]Through[[:space:]]the[[:space:]]Lens[[:space:]]of[[:space:]]Relevance[[:space:]]Paraphrasing/94a2970b-e947-42a2-94f2-b59651066e0d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Atoxia_[[:space:]]Red-teaming[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]with[[:space:]]Target[[:space:]]Toxic[[:space:]]Answers/68a52c8f-07c6-4452-8e70-0ca8906b5a23_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Attention[[:space:]]Tracker_[[:space:]]Detecting[[:space:]]Prompt[[:space:]]Injection[[:space:]]Attacks[[:space:]]in[[:space:]]LLMs/4e79fd1b-4e5c-40a1-bd8c-9482a25d68ca_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Attention[[:space:]]on[[:space:]]Multiword[[:space:]]Expressions_[[:space:]]A[[:space:]]Multilingual[[:space:]]Study[[:space:]]of[[:space:]]BERT-based[[:space:]]Models[[:space:]]with[[:space:]]Regard[[:space:]]to[[:space:]]Idiomaticity[[:space:]]and[[:space:]]Microsyntax/f99ce4b1-3f2f-4acc-b43e-f3268aa32b1e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Audio[[:space:]]Description[[:space:]]Generation[[:space:]]in[[:space:]]the[[:space:]]Era[[:space:]]of[[:space:]]LLMs[[:space:]]and[[:space:]]VLMs_[[:space:]]A[[:space:]]Review[[:space:]]of[[:space:]]Transferable[[:space:]]Generative[[:space:]]AI[[:space:]]Technologies/4d676017-986e-438e-b906-594d330cbcc4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Augmented[[:space:]]Adversarial[[:space:]]Trigger[[:space:]]Learning/2c6a698f-371b-429a-b3c0-9d52ed009952_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AutoBreach_[[:space:]]Universal[[:space:]]and[[:space:]]Adaptive[[:space:]]Jailbreaking[[:space:]]with[[:space:]]Efficient[[:space:]]Wordplay-Guided[[:space:]]Optimization[[:space:]]via[[:space:]]Multi-LLMs/27715e9e-2e7f-42da-8755-80d4e6496d54_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Automatic[[:space:]]Annotation[[:space:]]Augmentation[[:space:]]Boosts[[:space:]]Translation[[:space:]]between[[:space:]]Molecules[[:space:]]and[[:space:]]Natural[[:space:]]Language/8720907e-d6ab-4c71-82d0-d5d4b74cb6b1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Avoiding[[:space:]]Copyright[[:space:]]Infringement[[:space:]]via[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Unlearning/0629ea9e-e7d5-431f-9d55-3de2635e0f13_origin.pdf filter=lfs diff=lfs merge=lfs -text
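The `[[:space:]]` sequences in the patterns above are how `git lfs track` escapes literal spaces when it writes a path into `.gitattributes`, since unescaped whitespace on a `.gitattributes` line would otherwise separate the pattern from its attributes. A minimal sketch of that escaping (the helper name is ours, not part of git-lfs):

```python
def escape_gitattributes_path(path: str) -> str:
    """Escape a repo path for use as a .gitattributes pattern.

    git treats unescaped whitespace in .gitattributes as the separator
    between the pattern and its attributes, so spaces in a tracked path
    are rewritten as the POSIX bracket expression [[:space:]].
    """
    return path.replace(" ", "[[:space:]]")


line = (
    escape_gitattributes_path("2025/A Federated Framework for LLM-based Recommendation/paper.pdf")
    + " filter=lfs diff=lfs merge=lfs -text"
)
print(line)
```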
2025/2D-DPO_ Scaling Direct Preference Optimization with 2-Dimensional Supervision/22ad622f-3151-4838-8999-a0ed67034d48_content_list.json
ADDED
The diff for this file is too large to render.
2025/2D-DPO_ Scaling Direct Preference Optimization with 2-Dimensional Supervision/22ad622f-3151-4838-8999-a0ed67034d48_model.json
ADDED
The diff for this file is too large to render.
2025/2D-DPO_ Scaling Direct Preference Optimization with 2-Dimensional Supervision/22ad622f-3151-4838-8999-a0ed67034d48_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f31c7e809f41725741723af8a721d12891a7810a776ddee5badaf6f7c3e29548
+size 3468596
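The three added lines above are the entire on-disk content of an LFS-tracked file: a spec version, the SHA-256 digest of the real blob, and its size in bytes. A minimal parser sketch (the function name is illustrative):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (spec v1) into a key/value dict.

    Each pointer line is "<key> <value>"; splitting on the first space
    recovers the fields.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:f31c7e809f41725741723af8a721d12891a7810a776ddee5badaf6f7c3e29548
size 3468596
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # prints 3468596 (the real PDF's size in bytes)
```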
2025/2D-DPO_ Scaling Direct Preference Optimization with 2-Dimensional Supervision/full.md
ADDED
@@ -0,0 +1,910 @@
# 2D-DPO: Scaling Direct Preference Optimization with 2-Dimensional Supervision

Shilong Li\*, Yancheng He\*, Hui Huang\*, Xingyuan Bu\*, Jiaheng Liu, Hangyu Guo, Weixun Wang, Jihao Gu, Wenbo Su, Bo Zheng

Taobao & Tmall Group of Alibaba

lishilong.aaron@gmail.com, xingyuanbu@gmail.com

# Abstract

Recent advancements in Direct Preference Optimization (DPO) have significantly enhanced the alignment of Large Language Models (LLMs) with human preferences, owing to its simplicity and effectiveness. However, existing methods typically optimize a scalar score or ranking reward, thereby overlooking the multidimensional nature of human preferences. In this work, we propose to extend the preference of DPO to two dimensions: segments and aspects. We first introduce a 2D supervision dataset called HelpSteer-2D. For the segment dimension, we divide the response into sentences and assign scores to each segment. For the aspect dimension, we meticulously design several criteria covering the response quality rubrics. With the 2-dimensional signals as feedback, we develop a 2D-DPO framework, decomposing the overall objective into multi-segment and multi-aspect objectives. Extensive experiments on popular benchmarks demonstrate that 2D-DPO performs better than methods that optimize for scalar or 1-dimensional preferences<sup>1</sup>.
+
# 1 Introduction
|
| 14 |
+
|
| 15 |
+
Recent advancements in Large Language Models (LLMs) have shown impressive performance across a wide range of tasks (Zhao et al., 2023; Bai et al., 2024; Wu et al., 2024a; Li et al., 2024a). A pivotal component in LLM training is Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022; Bai et al., 2022), which aligns LLMs with human preferences. However, due to its complexity, traditional RLHF often leads to challenges such as training instability and reward collapse (Wolf et al., 2023; Song et al., 2023).
|
| 16 |
+
|
| 17 |
+

|
| 18 |
+
Figure 1: An illustrative comparison between vanilla DPO and 2D-DPO.
|
| 19 |
+
|
| 20 |
+
Direct Preference Optimization (DPO) (Rafailov et al., 2023), as a simpler and more effective alternative, has gained considerable attention due to its ability to bypass the need for explicitly fitting a reward model (Meng et al., 2024; Ethayarajh et al., 2024). However, most existing DPO-style approaches rely on scalar scores or rankings and ignore the multi-dimensional nature of human preferences, resulting in inefficient and imprecise optimization. For instance, a response may be deemed satisfactory under one aspect such as correctness, but falls short in another such as clarity. Moreover, not all segments of a response should be treated uniformly; even in a preferred response, there may be segments of inferior quality. This underscores the need for a more nuanced approach that recognizes the multi-dimensionality of feedback and its critical impact on model training.
|
| 21 |
+
|
| 22 |
+
In response, some recent works have attempted to leverage signals that are believed to reflect the importance of individual segments as reward scores (Zeng et al., 2024; Chan et al., 2024; Jiang et al., 2024; Chen et al., 2024). However, these signals are often derived from statistical features such as edit distance or confidence estimation, which can introduce noise and lack interpretability. Other approaches incorporate multi-objective optima
|
| 23 |
+
|
| 24 |
+
tion to balance different aspects of human preferences (Wu et al., 2024b; Guo et al., 2023; Cao et al., 2024). However, these methods mostly rely on Proximal Policy Optimization (PPO) (Schulman et al., 2017), which is prone to instability during training. Furthermore, these efforts only extend preference optimization from 0-dimensional (scalar reward) to 1-dimensional (aspect/segment) supervision, which remains insufficient for capturing the complexity of real-world human preferences.
|
| 25 |
+
|
| 26 |
+
To better address the intricacy of human preferences, we propose 2D-DPO, a novel direct alignment strategy that enables 2-dimensional (2D) fine-grained optimization. Our core idea is to scale supervision signals across two dimensions: segments and aspects. To this end, we first construct a preference dataset called HelpSteer-2D, where each sample is annotated with a 2-dimensional score matrix evaluating each segment across multiple aspects. These signals are derived from a robust model guided by a set of stringent principles, ensuring the generation of highly accurate and interpretable supervision signals. Building on this, we propose a novel approach to achieve 2-dimensional direct preference alignment. Experimental results on three public benchmarks demonstrate that 2D-DPO significantly outperforms previous methods. In summary, our main contributions are threefold:
|
| 27 |
+
|
| 28 |
+
- We introduce a novel 2-dimensional preference alignment method, 2D-DPO, which scales supervision signals across both segments and aspects to better align with human preferences.
|
| 29 |
+
- We develop a high-quality, fine-grained preference dataset, HelpSteer-2D, which will be released to the community for future research.
|
| 30 |
+
- Extensive experiments show that 2D-DPO delivers superior performance in aligning with human preferences compared to prior approaches.
|
| 31 |
+
|
| 32 |
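The 2D supervision described above can be pictured as a segments-by-aspects score matrix per response. A purely illustrative sketch (the aspect names, segments, and 0-4 scale are our assumptions, not the actual HelpSteer-2D rubric):

```python
# One response split into segments (sentences), scored per aspect.
# Aspect names and the 0-4 scale here are illustrative assumptions.
aspects = ["correctness", "clarity", "helpfulness"]
segments = [
    "Paris is the capital of France.",
    "It is also the largest city in Germany.",  # factually wrong segment
]

# score_matrix[i][j] = quality of segment i under aspect j (0-4)
score_matrix = [
    [4, 4, 4],
    [0, 3, 1],
]

# Collapsing the matrix to one scalar (as 0-dimensional methods do)
# discards the information that only the second segment is bad, and
# only under some aspects.
scalar_reward = sum(sum(row) for row in score_matrix) / (len(segments) * len(aspects))
print(round(scalar_reward, 2))  # prints 2.67
```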
+
# 2 Related Work
|
| 33 |
+
|
| 34 |
+
# 2.1 Preference Optimization
|
| 35 |
+
|
| 36 |
+
Large language models (LLMs) have advanced rapidly, with reinforcement learning from human feedback (RLHF) commonly used to align LLMs with human preferences (Ziegler et al., 2019; Stiessenon et al., 2020; Ouyang et al., 2022; Bai et al., 2022; Liu et al., 2024b,a; Peng et al., 2023; Feng et al., 2022). However, traditional RLHF methods face challenges like instability and high resource
|
| 37 |
+
|
| 38 |
+
demands (Wolf et al., 2023; Song et al., 2023), prompting the search for simpler alternatives. One such representative approach is Direct Preference Optimization (DPO) (Rafailov et al., 2023), which optimizes alignment without explicit reward modeling, offering simplicity and stability. Building on DPO, IPO (Azar et al., 2024) adds a regularization term to alleviate overfitting. KTO (Ethayarajh et al., 2024) only requires a binary signal of whether an output is desirable or undesirable for an input to align LLMs, simplifying the data acquisition process. ORPO (Hong et al., 2024) simplifies training with odds ratio-based penalties, and SimPO (Meng et al., 2024) improves efficiency by using average log probability as an implicit reward.
# 2.2 Token-level Preference Optimization
The response-level rewards in naive PPO and DPO often lack token-level details. To address this, researchers have explored fine-grained supervision signals in three ways: (1) Human annotation: Methods like PRM (Lightman et al., 2023) and FGRLHF (Wu et al., 2024b) involve human annotators labeling each segment of the response to generate fine-grained signals. (2) LLM annotation: To reduce the cost of human labeling, stronger LLMs are used to generate preference pairs with minimal edits (Guo et al., 2023; Chen et al., 2024; Yoon et al., 2024; Jiang et al., 2024) or to identify positive and negative response segments (Cao et al., 2024). (3) Internal signal: Some works use the model's internal information as reward signals, such as using attention scores for token rewards in ABC (Chan et al., 2024) or decomposing DPO's response-level rewards into token-level signals in SePO (Yang et al., 2024b; Rafailov et al., 2024). TDPO (Zeng et al., 2024) achieves token-level alignment by controlling the KL divergence for each token.
# 2.3 Multi-objective Preference Optimization
Human preferences are often complex, diverse, and even contradictory, making single-dimensional training insufficient. To address this, some studies align LLMs with multiple objectives by training separate reward models for each objective and averaging their outputs (Pan et al., 2023; Ji et al., 2024; Rame et al., 2024; de Langis et al., 2024; Wang et al., 2024a). However, this approach demands significant computational resources. In contrast, MODPO (Zhou et al., 2024) offers a simpler, reinforcement learning-free method for optimizing multiple objectives. RiC (Yang et al., 2024c) and
CPO (Guo et al., 2024) focus on integrating multiple reward values for controllable generation.
# 3 Approach
In this section, we provide a detailed introduction to our 2D-DPO approach for aligning LLMs. We first describe the construction of the 2D preference dataset in Section 3.1. Then, we present an enhanced 2D-DPO formulation that integrates the 2D supervision signals into the alignment process in Section 3.2. The complete pipeline of our method is illustrated in Figure 2.
# 3.1 Preference Dataset with 2D Signal
In general, a preference optimization dataset, denoted as $\mathcal{D} = \{x^i,y_w^i,y_l^i\}_{i = 1}^N$ , comprises prompts $x$ along with a chosen response $y_{w}$ and a rejected response $y_{l}$ , where $y_{w}$ is of higher quality compared to $y_{l}$ . Such datasets are commonly used to train reward models (e.g., PPO) or directly for model alignment (e.g., DPO). However, differentiating between the chosen and rejected responses based on a scalar score is often coarse and imprecise. The quality of responses can vary significantly across different evaluation aspects, and even a chosen response might contain segments of low quality, while a rejected response could include segments of high quality. Therefore, relying solely on a scalar score for optimization may restrict the model's ability to effectively align with human preferences.
To address this issue, we propose a fine-grained scoring approach that decomposes the scalar scores of model responses to segment-level and aspect-level. The first step is to divide the response into segments, and the choice of segment length is crucial for ensuring the effectiveness of the fine-grained optimization. Segments that are too long cannot resolve the aforementioned coarse scoring issues, while segments that are too short pose difficulties for accurate assessment. Therefore, we choose the sentence as the scoring unit, which can strike a balance between scoring accuracy and the clarity of segment preferences.
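As a rough illustration, sentence-level segmentation on typical sentence-ending punctuation can be sketched as follows (the exact splitting rules used for HelpSteer-2D are an assumption here):

```python
import re

def split_into_segments(response: str) -> list[str]:
    """Split a response into sentence-level scoring units on typical
    sentence-ending punctuation (a simplified sketch)."""
    # Lookbehind keeps the delimiter attached to its sentence.
    parts = re.split(r"(?<=[.!?\n])\s+", response.strip())
    return [p for p in parts if p]

segments = split_into_segments(
    "K-Means is a widely used algorithm. It partitions data into k clusters! "
    "Hierarchical clustering builds a tree of clusters."
)
```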
After segmenting responses based on typical sentence-ending punctuation, we employ GPT-4 to perform aspect-level scoring. Following HelpSteer2 (Wang et al., 2024b), we annotate the preference data across five key aspects: Helpfulness, Correctness, Safety, Completeness, and Clarity. The first three aspects are scored independently for each sentence. The aspect of Completeness generally increases as responses become more comprehensive, while Clarity tends to decrease as responses grow longer and more redundant. To ensure the integrity of our annotations, we use separate prompts for each aspect to prevent any cross-influence among them. For the details of the annotation process, please refer to Appendix F.
Finally, the constructed dataset is as follows:
$$
\begin{array}{c}
\mathcal {D} = \left\{x ^ {i}, \boldsymbol {y} _ {w} ^ {i}, \boldsymbol {y} _ {l} ^ {i} \right\} _ {i = 1} ^ {N}, \\
\boldsymbol {y} _ {w} ^ {i} = \left\{y _ {w, k} ^ {i}, \left\{r _ {w, k, j} ^ {i} \right\} _ {j = 1} ^ {A} \right\} _ {k = 1} ^ {S _ {w} ^ {i}}, \\
\boldsymbol {y} _ {l} ^ {i} = \left\{y _ {l, k} ^ {i}, \left\{r _ {l, k, j} ^ {i} \right\} _ {j = 1} ^ {A} \right\} _ {k = 1} ^ {S _ {l} ^ {i}}.
\end{array} \tag {1}
$$
In the dataset $\mathcal{D}$ , $x^i$ represents the $i$ -th prompt, while $\pmb{y}_w^i$ and $\pmb{y}_l^i$ represent the 2D annotations for the chosen and rejected response, respectively. Each 2D annotation includes $S^i$ text segments, indexed by $k$ , denoted as $y_k^i$ . The scores for these segments are denoted by $r_{k,j}^i$ , where $j$ indicates the index across the aforementioned five aspects.
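To make the notation concrete, here is a minimal sketch of how one annotated sample could be represented in code; the field names and example scores are illustrative, not the released dataset schema:

```python
from dataclasses import dataclass

ASPECTS = ["helpfulness", "correctness", "safety", "completeness", "clarity"]

@dataclass
class Annotated2DResponse:
    # One entry per segment; each segment carries one score per aspect,
    # mirroring {y_k, {r_{k,j}}_{j=1..A}}_{k=1..S} in Eq. (1).
    segments: list[str]
    scores: list[list[int]]  # shape: S segments x A aspects

@dataclass
class Sample2D:
    prompt: str                    # x^i
    chosen: Annotated2DResponse    # y_w^i
    rejected: Annotated2DResponse  # y_l^i

sample = Sample2D(
    prompt="Compare K-Means and Hierarchical Clustering.",
    chosen=Annotated2DResponse(
        segments=["Both are unsupervised learning techniques.",
                  "K-Means partitions data into k clusters."],
        scores=[[4, 4, 4, 2, 4], [4, 4, 4, 3, 4]],
    ),
    rejected=Annotated2DResponse(
        segments=["I will introduce the comparison in detail!"],
        scores=[[2, 4, 4, 1, 3]],
    ),
)
```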
# 3.2 2D-DPO
While the construction process of 2D signals is straightforward, integrating them effectively into the alignment process presents significant challenges. Previous approaches mostly utilize these signals as a scalar reward by weighted summation, which is insufficient for enabling the model to distinguish between varying quality across different dimensions. To address this issue, we propose a novel alignment method called 2D-DPO.
Vanilla DPO. Direct Preference Optimization (DPO) (Rafailov et al., 2023), as one of the most popular alignment methods, proposes a direct optimization objective that satisfies the optimal preference policy without using a reward model:
$$
\begin{array}{l}
\mathcal {L} _ {DPO} \left(\pi_ {\theta}; \pi_ {ref}\right) = \\
- \mathbb {E} _ {\left(x, y _ {w}, y _ {l}\right) \sim \mathcal {D}} \left[ \log \sigma \left(\beta \log \frac {\pi_ {\theta} \left(y _ {w} \mid x\right)}{\pi_ {ref} \left(y _ {w} \mid x\right)} - \beta \log \frac {\pi_ {\theta} \left(y _ {l} \mid x\right)}{\pi_ {ref} \left(y _ {l} \mid x\right)}\right) \right],
\end{array} \tag {2}
$$
where $\pi_{\theta}$ and $\pi_{ref}$ represent the policy model and the reference model, respectively. DPO can fundamentally be viewed as a multi-armed bandit problem, where the model's entire response is treated as a single arm. According to Rafailov et al. 2023, in the token-level Markov Decision Process (MDP), the language model's Bradley-Terry preference
model can be expressed in the following form:

Figure 2: Illustration of our proposed 2D-DPO. Firstly, we develop principles for preference annotation on different aspects, and collect scores across different segments and aspects for pairwise responses, leading to 2-dimensional signals. Secondly, we apply 2D-DPO on the constructed signals with a decomposed training objective.
$$
p ^ {*} \left(\tau^ {w} \succeq \tau^ {l}\right) = \frac {\exp \left(\sum_ {i = 1} ^ {N} r (\mathbf {s} _ {i} ^ {w}, \mathbf {a} _ {i} ^ {w})\right)}{\exp \left(\sum_ {i = 1} ^ {N} r (\mathbf {s} _ {i} ^ {w}, \mathbf {a} _ {i} ^ {w})\right) + \exp \left(\sum_ {i = 1} ^ {M} r (\mathbf {s} _ {i} ^ {l}, \mathbf {a} _ {i} ^ {l})\right)}, \tag {3}
$$
where $\tau^w$ and $\tau^l$ represent the winning and losing trajectories, respectively. In this context, $\mathbf{a}$ represents the next generated token, and $\mathbf{s}$ denotes the current state, consisting of the prompt along with all previously generated tokens.
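Numerically, the Bradley-Terry probability in Eq. (3) is just a softmax over the summed per-step rewards of the two trajectories; a minimal sketch:

```python
import math

def bt_preference_prob(rewards_w: list[float], rewards_l: list[float]) -> float:
    """Bradley-Terry probability that trajectory w is preferred over l,
    computed from per-step rewards r(s_t, a_t) as in Eq. (3)."""
    zw = math.exp(sum(rewards_w))
    zl = math.exp(sum(rewards_l))
    return zw / (zw + zl)

# Equal total reward gives a 50/50 preference; a higher total for w
# pushes the probability toward 1.
p = bt_preference_prob([0.5, 0.3, 0.2], [0.1, 0.1])
```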
Subsequently, based on the fixed point solution under the general maximum entropy RL setting (Ziebart, 2010; Levine, 2018), Rafailov et al. derived the form of DPO in the token-level MDP:
$$
\begin{array}{l}
\mathcal {L} _ {DPO} \left(\pi_ {\theta}; \pi_ {ref}\right) = \\
- \mathbb {E} \left[ \log \sigma \left(\beta \sum_ {t = 0} ^ {N - 1} \log \frac {\pi_ {\theta} \left(a _ {w} ^ {t} \mid s _ {w} ^ {t}\right)}{\pi_ {ref} \left(a _ {w} ^ {t} \mid s _ {w} ^ {t}\right)} - \beta \sum_ {t = 0} ^ {M - 1} \log \frac {\pi_ {\theta} \left(a _ {l} ^ {t} \mid s _ {l} ^ {t}\right)}{\pi_ {ref} \left(a _ {l} ^ {t} \mid s _ {l} ^ {t}\right)}\right) \right].
\end{array} \tag {4}
$$
In the above process, Rafailov et al. combined Ng et al.'s definition of equivalence between two reward functions through a potential function and concluded that $\beta \log \frac{\pi_{\theta}^{*}(\mathbf{a}_{t}|\mathbf{s}_{t})}{\pi_{ref}(\mathbf{a}_{t}|\mathbf{s}_{t})}$ and $r(\mathbf{s}_t,\mathbf{a}_t)$ can equivalently yield the same optimal policy. Furthermore, $\beta \log \frac{\pi_{\theta}^{*}(\mathbf{a}_{t}|\mathbf{s}_{t})}{\pi_{ref}(\mathbf{a}_{t}|\mathbf{s}_{t})}$ is precisely the optimal advantage function $A^{*}(\mathbf{s}_{t},\mathbf{a}_{t})$ . For a detailed derivation, please refer to Appendix A.2 and A.3.
2D-DPO. With the above conclusions, 2D supervision signals can be conveniently integrated into the alignment process. We achieve the integration by using the signals to calibrate the token-level advantage function $A(\mathbf{s}_t, \mathbf{a}_t)$ for different segments. This approach equips the model with a direct perception of fine-grained preferences, avoiding the ambiguity of holistic rewards.
Specifically, we use the regularized fine-grained reward $r$ as a coefficient, incorporating it into the token-level advantage function to obtain $\beta r\log \frac{\pi_{\theta}(\mathbf{a}_t|\mathbf{s}_t)}{\pi_{ref}(\mathbf{a}_t|\mathbf{s}_t)}$ , which integrates the fine-grained signals. In practice, this is equivalent to adjusting $\beta$ within the original $A(\mathbf{s}_t,\mathbf{a}_t)$ . We provide complete
proof of its feasibility in Appendix A.4.
The token-level DPO incorporating fine-grained signals is formalized as follows:
$$
\begin{array}{l}
\mathcal {L} \left(\pi_ {\theta}, \mathcal {D}\right) = \\
- \mathbb {E} _ {(\tau_ {w}, \tau_ {l}) \sim \mathcal {D}} \log \sigma \Bigg(\beta \sum_ {k = 0} ^ {S _ {w} - 1} \sum_ {t = n _ {k}} ^ {n _ {k} + l _ {k}} r _ {w, k} \log \frac {\pi_ {\theta} \left(\mathbf {a} _ {t} ^ {w} \mid \mathbf {s} _ {t} ^ {w}\right)}{\pi_ {ref} \left(\mathbf {a} _ {t} ^ {w} \mid \mathbf {s} _ {t} ^ {w}\right)} \\
\quad - \beta \sum_ {k = 0} ^ {S _ {l} - 1} \sum_ {t = n _ {k}} ^ {n _ {k} + l _ {k}} r _ {l, k} \log \frac {\pi_ {\theta} \left(\mathbf {a} _ {t} ^ {l} \mid \mathbf {s} _ {t} ^ {l}\right)}{\pi_ {ref} \left(\mathbf {a} _ {t} ^ {l} \mid \mathbf {s} _ {t} ^ {l}\right)}\Bigg),
\end{array} \tag {5}
$$
where $n_k$ represents the first token of the $k$ -th segment and $l_k$ denotes the length of the $k$ -th segment.
For handling multiple aspects, we use a classic weighted approach for integration, that is, $r_{w,k} = \mathbf{W}\tilde{\mathbf{r}}_{w,k}$ , where $\mathbf{W}$ represents the weights that sum up to 1, which reflect the importance of each aspect during the alignment process, and $\tilde{\mathbf{r}}_{w,k} = \{r_{w,k,j}\}_{j=1}^{A}$ .
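The aspect aggregation above amounts to a dot product between the weight vector and a segment's per-aspect scores; a small sketch with illustrative weights (the actual weights used in training are not specified here):

```python
def aspect_weighted_reward(aspect_scores: list[float],
                           weights: list[float]) -> float:
    """Collapse per-aspect scores for one segment into a scalar
    r_{w,k} = W . r~_{w,k}; the weights sum to 1 and encode the
    relative importance of each aspect."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * s for w, s in zip(weights, aspect_scores))

# Helpfulness, Correctness, Safety, Completeness, Clarity (illustrative)
weights = [0.4, 0.2, 0.2, 0.1, 0.1]
r = aspect_weighted_reward([4, 4, 4, 2, 4], weights)
```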
Segment Selection. The number of segments in the chosen and rejected responses may differ significantly, and typically only the segments with an impact on response preference need attention. Therefore, we select the top-$N$ highest-scoring segments from the chosen response and the top-$N$ lowest-scoring segments from the rejected response, where $N = \min(S_w, S_l)$, which further enhances the efficiency of model alignment training. Additionally, we group segments in pairs to provide clearer contrast during alignment, making it easier for the model to learn fine-grained differences between the chosen and rejected responses. These segments are paired to form $N$ BT models. The feasibility of this rearrangement is based on the fact that the loss for a single-segment BT model can be treated as setting the $\beta_t$ of other parts to 0, as demonstrated in Appendix A.4. Thus, we obtain the token-level DPO formula incorporating fine-grained signals:
$$
\begin{array}{l}
\mathcal {L} _ {\text {group}} \left(\pi_ {\theta}, \mathcal {D}\right) = \\
- \mathbb {E} _ {(\tau_ {w}, \tau_ {l}) \sim \mathcal {D}} \left[ \sum_ {k = 0} ^ {N - 1} \log \sigma \Bigg(\beta \sum_ {t = n _ {k}} ^ {n _ {k} + l _ {k}} r _ {w, k} \log \frac {\pi_ {\theta} \left(\mathbf {a} _ {t} ^ {w} \mid \mathbf {s} _ {t} ^ {w}\right)}{\pi_ {ref} \left(\mathbf {a} _ {t} ^ {w} \mid \mathbf {s} _ {t} ^ {w}\right)} \\
\quad - \beta \sum_ {t = n _ {k}} ^ {n _ {k} + l _ {k}} r _ {l, k} \log \frac {\pi_ {\theta} \left(\mathbf {a} _ {t} ^ {l} \mid \mathbf {s} _ {t} ^ {l}\right)}{\pi_ {ref} \left(\mathbf {a} _ {t} ^ {l} \mid \mathbf {s} _ {t} ^ {l}\right)}\Bigg) \right].
\end{array} \tag {6}
$$
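The segment-selection step described above can be sketched as follows, pairing the top-$N$ chosen segments with the bottom-$N$ rejected segments (tie-breaking by segment index is an assumption of this sketch):

```python
def select_segments(scores_w: list[float], scores_l: list[float]):
    """Pick the top-N highest-scoring chosen segments and the top-N
    lowest-scoring rejected segments, with N = min(S_w, S_l), and pair
    them up; each pair forms one single-segment BT model."""
    n = min(len(scores_w), len(scores_l))
    top_w = sorted(range(len(scores_w)), key=lambda k: -scores_w[k])[:n]
    low_l = sorted(range(len(scores_l)), key=lambda k: scores_l[k])[:n]
    return list(zip(top_w, low_l))  # N (chosen_idx, rejected_idx) pairs

pairs = select_segments([3.8, 2.1, 3.2], [1.5, 2.9])
```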
As a result, we have formulated the final objective of 2D-DPO. This training objective allows for the direct integration of 2D supervision signals into the alignment process, enabling LLMs to
discern the different aspects lying in different segments in the responses, thereby promoting better alignment with human preferences.
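For one preference pair, the grouped objective in Eq. (6) can be sketched in plain Python, assuming the per-token log-ratios $\log \frac{\pi_\theta(\mathbf{a}_t \mid \mathbf{s}_t)}{\pi_{ref}(\mathbf{a}_t \mid \mathbf{s}_t)}$ have already been computed for each paired segment:

```python
import math

def log_sigmoid(x: float) -> float:
    # Numerically stable log(sigmoid(x)).
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def two_d_dpo_group_loss(beta: float,
                         seg_logratios_w: list[list[float]],
                         seg_rewards_w: list[float],
                         seg_logratios_l: list[list[float]],
                         seg_rewards_l: list[float]) -> float:
    """Grouped 2D-DPO objective of Eq. (6) for one preference pair.
    seg_logratios_* hold per-token log(pi_theta/pi_ref) for each of the
    N paired segments; seg_rewards_* are the scalar segment rewards
    obtained by aspect-weighting."""
    loss = 0.0
    for lr_w, r_w, lr_l, r_l in zip(seg_logratios_w, seg_rewards_w,
                                    seg_logratios_l, seg_rewards_l):
        # Each segment pair contributes its own BT term.
        margin = beta * (r_w * sum(lr_w) - r_l * sum(lr_l))
        loss -= log_sigmoid(margin)
    return loss

loss = two_d_dpo_group_loss(0.2, [[0.5, 0.5]], [0.9], [[0.4]], [0.2])
```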
# 4 Experiments
# 4.1 Set-up
Benchmark. Our method has been tested on three widely recognized instruction-following benchmarks: Arena-Hard (Li et al., 2024b), AlpacaEval 2.0 (Dubois et al., 2024), and MT-Bench (Zheng et al., 2023). Each benchmark comprises a diverse set of queries, and the answers are evaluated under the framework of LLM-as-a-Judge (Zheng et al., 2023). We use gpt-4-turbo-2024-04-09 $^{2}$ as the judge model and the scores are reported following each benchmark's protocol.
Model. Our method is validated on two models, Qwen2-7B-Instruct (Yang et al., 2024a) and Llama3-8B-Instruct (AI@Meta, 2024). It is worth noting that both models have already undergone extensive instruction tuning, so we directly perform preference optimization.
Baseline. We mainly compare our method with 0-dimensional preference optimization methods:
- DPO (Rafailov et al., 2023). This method leverages a mapping between reward functions and optimal policies to optimize the preference with a single stage of policy training.
- IPO (Azar et al., 2024). This method proposes a theoretically grounded approach to replace pairwise preferences in DPO with pointwise rewards.
- KTO (Ethayarajh et al., 2024). This method proposes to maximize the utility of generations from non-paired data directly.
- ORPO (Hong et al., 2024). This method leverages a reference model-free monolithic odds ratio for contrasting favored and disfavored styles during the SFT stage.
- SimPO (Meng et al., 2024). This method proposes to use the average log probability of a sequence as the implicit reward, which eliminates the need for a reference model.
We also compare our method with the following 1-dimensional preference optimization method:
<table><tr><td rowspan="2">Policy Model</td><td rowspan="2">Methods</td><td colspan="3">Arena-Hard</td><td colspan="3">AlpacaEval 2.0</td><td colspan="3">MT-Bench</td></tr><tr><td>WR (%)</td><td>Avg. Len</td><td>95% CI</td><td>LC (%)</td><td>WR (%)</td><td>Avg. Len</td><td>Turn 1</td><td>Turn 2</td><td>Avg. Score</td></tr><tr><td rowspan="9">Qwen2-7B-Instruct</td><td>Base</td><td>25.10</td><td>583</td><td>(-2.1, 2.0)</td><td>30.68</td><td>28.32</td><td>1862</td><td>8.01</td><td>6.61</td><td>7.31</td></tr><tr><td>+ DPO</td><td>29.40</td><td>578</td><td>(-1.8, 1.9)</td><td>29.07</td><td>26.83</td><td>1996</td><td>8.11</td><td>6.45</td><td>7.28</td></tr><tr><td>+ IPO</td><td>26.50</td><td>556</td><td>(-2.2, 2.2)</td><td>28.70</td><td>26.58</td><td>1940</td><td>7.90</td><td>6.53</td><td>7.21</td></tr><tr><td>+ KTO</td><td>26.10</td><td>518</td><td>(-2.1, 2.5)</td><td>26.46</td><td>23.00</td><td>1730</td><td>8.11</td><td>6.40</td><td>7.26</td></tr><tr><td>+ ORPO</td><td>25.40</td><td>573</td><td>(-2.2, 1.8)</td><td>28.58</td><td>27.70</td><td>1936</td><td>8.09</td><td>6.52</td><td>7.31</td></tr><tr><td>+ SimPO</td><td>29.00</td><td>539</td><td>(-1.9, 2.4)</td><td>29.94</td><td>27.70</td><td>1904</td><td>8.06</td><td>6.50</td><td>7.28</td></tr><tr><td>+ TDPO</td><td>25.90</td><td>564</td><td>(-2.0, 2.4)</td><td>29.81</td><td>27.33</td><td>1896</td><td>8.05</td><td>6.46</td><td>7.26</td></tr><tr><td>+ 1D-DPO</td><td>29.80</td><td>574</td><td>(-2.4, 2.2)</td><td>31.07</td><td>28.70</td><td>1951</td><td>8.13</td><td>6.48</td><td>7.31</td></tr><tr><td>+ 2D-DPO</td><td>30.30</td><td>586</td><td>(-1.9, 2.4)</td><td>31.51</td><td>28.94</td><td>1994</td><td>8.18</td><td>6.68</td><td>7.43</td></tr><tr><td rowspan="9">Llama3-8B-Instruct</td><td>Base</td><td>25.40</td><td>599</td><td>(-2.2, 2.4)</td><td>27.08</td><td>26.96</td><td>1959</td><td>7.66</td><td>6.84</td><td>7.25</td></tr><tr><td>+ DPO</td><td>25.90</td><td>567</td><td>(-1.9, 
2.2)</td><td>31.68</td><td>30.31</td><td>1883</td><td>7.64</td><td>6.60</td><td>7.20</td></tr><tr><td>+ IPO</td><td>24.80</td><td>548</td><td>(-2.2, 1.8)</td><td>29.69</td><td>28.57</td><td>1891</td><td>7.73</td><td>6.75</td><td>7.24</td></tr><tr><td>+ KTO</td><td>25.20</td><td>507</td><td>(-1.9, 2.4)</td><td>27.95</td><td>27.08</td><td>1835</td><td>7.65</td><td>6.65</td><td>7.15</td></tr><tr><td>+ ORPO</td><td>25.60</td><td>537</td><td>(-1.7, 1.9)</td><td>29.19</td><td>28.57</td><td>1892</td><td>7.75</td><td>6.68</td><td>7.22</td></tr><tr><td>+ SimPO</td><td>26.30</td><td>552</td><td>(-1.8, 2.2)</td><td>31.55</td><td>30.19</td><td>1879</td><td>7.96</td><td>6.70</td><td>7.33</td></tr><tr><td>+ TDPO</td><td>23.40</td><td>566</td><td>(-1.7, 1.8)</td><td>28.57</td><td>26.96</td><td>1881</td><td>7.95</td><td>6.80</td><td>7.38</td></tr><tr><td>+ 1D-DPO</td><td>26.70</td><td>563</td><td>(-2.3, 1.8)</td><td>31.78</td><td>30.19</td><td>1893</td><td>7.98</td><td>6.74</td><td>7.34</td></tr><tr><td>+ 2D-DPO</td><td>27.00</td><td>554</td><td>(-1.9, 2.0)</td><td>32.06</td><td>30.56</td><td>1884</td><td>8.04</td><td>6.84</td><td>7.44</td></tr><tr><td rowspan="5">Llama3-8B-SFT</td><td>Base</td><td>3.00</td><td>478</td><td>(-0.6, 0.7)</td><td>5.84</td><td>4.78</td><td>1086</td><td>5.71</td><td>4.51</td><td>5.11</td></tr><tr><td>+ DPO</td><td>3.70</td><td>447</td><td>(-0.7, 0.7)</td><td>8.10</td><td>5.65</td><td>1035</td><td>6.00</td><td>4.41</td><td>5.21</td></tr><tr><td>+ SimPO</td><td>3.40</td><td>479</td><td>(-0.7, 0.7)</td><td>6.30</td><td>6.71</td><td>1675</td><td>6.11</td><td>5.03</td><td>5.57</td></tr><tr><td>+ TDPO</td><td>4.40</td><td>438</td><td>(-0.7, 0.8)</td><td>9.21</td><td>6.02</td><td>981</td><td>6.11</td><td>4.80</td><td>5.46</td></tr><tr><td>+ 2D-DPO</td><td>6.90</td><td>423</td><td>(-0.9, 1.2)</td><td>10.93</td><td>7.02</td><td>1122</td><td>6.18</td><td>5.14</td><td>5.66</td></tr></table>
Table 1: Experiment results of different preference optimization methods on instruction-following benchmarks. We report the results on each benchmark based on their recommended metrics.
- TDPO (Zeng et al., 2024). This method proposes to control the KL divergence constraints for each token, aiming to strike a better balance between alignment and diversity.
Our method is evaluated under two configurations: 1D-DPO and 2D-DPO. 1D-DPO incorporates only a single aspect (helpfulness) as the supervision signal, while 2D-DPO uses all five aspects.
Training. We perform preference optimization based on HelpSteer-2D, which is constructed based on HelpSteer2 (Wang et al., 2024b). We leverage gpt-4o-2024-05-13<sup>3</sup> to generate 2-dimensional scores which align with our requirements.
To make a fair comparison, all methods are combined with SFT loss with a coefficient of 0.1 except for $\mathrm{ORPO^4}$ . The other hyper-parameters are tuned to achieve an optimal performance for each method. Please refer to Appendix B.1 for more details.
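As a sketch, the combined training loss might look as follows; the additive form is an assumption of this sketch, while the 0.1 coefficient follows the setup stated above:

```python
def total_loss(preference_loss: float, sft_loss: float,
               sft_coef: float = 0.1) -> float:
    """Combine a preference-optimization loss with an SFT loss on the
    chosen response, weighted by a small coefficient."""
    return preference_loss + sft_coef * sft_loss

combined = total_loss(preference_loss=0.7, sft_loss=2.0)
```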
# 4.2 Main Results
The primary results are shown in Table 1. As can be seen, our proposed 2D-DPO outperforms existing methods across all three benchmarks, verifying the significance of 2-dimensional supervision in preference optimization. While previous methods mostly treat different segments uniformly with a singular scoring criterion, leading to insufficient supervision, this work scales the feedback to both aspect-level and segment-level, thereby improving the performance. Notice that all methods are conducted on the same group of queries and responses, and our method does not require an additional training stage or extra computation overhead; therefore, our method exploits the utility of direct preference optimization with minimal expense.
Comparing the average length on Arena-Hard and AlpacaEval 2.0, we also notice that our method does not lead to more verbose responses. This demonstrates that 2-dimensional supervision helps mitigate the reward hacking issue (Singhal et al., 2023). Even though more fine-grained supervision is provided for preference pairs, the model does not indiscriminately favor more verbose responses, achieving more accurate alignment.
In comparison to 2D-DPO, the performance of 1D-DPO, which is only scaled with segment-level signals, shows a noticeable decline, thereby demonstrating the gains achieved by incorporating aspect-level signals. Nevertheless, among various alignment methods, 1D-DPO remains highly competitive, outperforming all other methods except for 2D-DPO across different evaluation metrics, proving the effectiveness of the additional supervisory signals introduced at the segment level.
Furthermore, it deserves to be noted that TDPO also underperforms both 1D-DPO and 2D-DPO, which can be traced back to the design of its training objective. Although the loss in TDPO is reassigned to each token based on KL divergence, the temperature for each token is not adjusted appropriately, resulting in a coarse optimization process. In contrast, our method meticulously adjusts the temperature for each segment, aligning the update scale with the segment's importance across criteria, contributing to our superior performance.
We perform experiments on an SFT model that has not undergone alignment, where the advantage of 2D-DPO over other methods is more pronounced compared to its performance on instruct models. This may be because the improvement from additional alignment on instruct models is limited.
# 4.3 Detailed Analysis
To further show the effectiveness of 2D-DPO, we conduct ablation studies and delve into a detailed analysis of the model's performance. Additionally, we present a case study in Appendix E.
The Influence of $\beta$ . Table 2 shows the results of 2D-DPO with different values of $\beta$ (temperature). As $\beta$ increases, we observe a consistent trend across three benchmarks: performance first rises and then falls. This is because a higher $\beta$ can amplify the divergence penalty within the RL optimization objective, thereby avoiding model degradation. However, an overly high $\beta$ would reduce the overall optimal ceiling and limit the potential gains from alignment (Ahrabian et al., 2024).
Performance on Different Aspects. We evaluated the performance of the models aligned using different methods across various aspects. We selected AlpacaEval 2.0 (Dubois et al., 2024) which offers diverse instructions as the query set, and obtained the responses of different models on this set
as the evaluation targets. The evaluation prompt was consistent with the prompts presented in Section 3.1. For aspects that are independent among segments, we took the average score of all segments as the score for that response. For aspects that are not independent among segments (completeness and clarity), we selected the score of the last segment. The average score of all responses is taken as the final result. As shown in Figure 3, our 2D-DPO achieves the best results in all aspects, striking a balance between different rubrics of human preferences. 1D-DPO, with only segment-level feedback, underperforms, as response-level alignment still leads to coarse refinement. We also notice that different methods exhibit minimal differences in safety and correctness, which might be because Qwen2-7B-Instruct has already undergone an alignment process on these aspects. For the other aspects that are not covered by that process, 2D-DPO achieves more pronounced improvements.
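The per-aspect aggregation used in this evaluation can be sketched as follows (lowercase string aspect names are an assumption of this sketch):

```python
def aggregate_aspect_score(segment_scores: list[float], aspect: str) -> float:
    """Aggregate per-segment scores into a response-level score for one
    aspect: the mean over segments for segment-independent aspects, and
    the last segment's score for completeness and clarity, which evolve
    with response length."""
    if aspect in ("completeness", "clarity"):
        return segment_scores[-1]
    return sum(segment_scores) / len(segment_scores)

h = aggregate_aspect_score([4, 2], "helpfulness")   # mean over segments
c = aggregate_aspect_score([1, 3], "completeness")  # last segment
```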

Figure 3: The relative performance on different aspects of different alignment methods.

Figure 4: The trends in reward scores and accuracy over training steps across DPO, TDPO, 1D-DPO, and 2D-DPO. (a) Rewards of preferred (solid lines) and dispreferred (dashed lines) responses. (b) Reward accuracy compared with preference annotation.

<table><tr><td rowspan="2">Model</td><td rowspan="2">β</td><td colspan="3">Arena-Hard</td><td colspan="3">AlpacaEval 2.0</td><td colspan="3">MT-Bench</td></tr><tr><td>WR (%)</td><td>Avg. Len</td><td>95% CI</td><td>LC (%)</td><td>WR (%)</td><td>Avg. Len</td><td>Turn 1</td><td>Turn 2</td><td>Avg. Score</td></tr><tr><td rowspan="5">Qwen2-7B-Instruct + 2D-DPO</td><td>0.1</td><td>29.2</td><td>557</td><td>(-2.2, 1.8)</td><td>30.48</td><td>29.21</td><td>1961</td><td>8.00</td><td>6.60</td><td>7.30</td></tr><tr><td>0.2</td><td>30.3</td><td>586</td><td>(-1.9, 2.4)</td><td>31.51</td><td>28.94</td><td>1994</td><td>8.18</td><td>6.68</td><td>7.43</td></tr><tr><td>0.5</td><td>27.4</td><td>578</td><td>(-1.8, 1.9)</td><td>29.62</td><td>29.21</td><td>1976</td><td>7.95</td><td>6.60</td><td>7.28</td></tr><tr><td>0.7</td><td>28.6</td><td>595</td><td>(-1.6, 2.2)</td><td>28.24</td><td>28.17</td><td>1997</td><td>7.83</td><td>6.65</td><td>7.24</td></tr><tr><td>1.0</td><td>28.0</td><td>576</td><td>(-2.0, 1.4)</td><td>28.97</td><td>28.49</td><td>1981</td><td>7.77</td><td>6.65</td><td>7.21</td></tr></table>
Table 2: Experiment results on three benchmarks of 2D-DPO with different values of $\beta$ .

Figure 5: The trends in sequential KL divergence between the policy model and the reference model over training steps across DPO, TDPO, 1D-DPO, and 2D-DPO. (a) KL divergence for preferred responses. (b) KL divergence for dispreferred responses.
Training Indicators. We analyze the reward of different responses during training in Figure 4(a) $^{6}$ . We can observe that the reward scores of the preferred responses in our method increase rapidly while the reward scores of the dispreferred responses decrease significantly, resulting in the largest margin. Figure 4(b) shows the reward accuracy trends during training. In this context, accuracy is defined as the proportion of instances where the reward score for the preferred response is higher than that for the dispreferred response. Our method not only reaches the highest accuracy fastest but also achieves the best overall accuracy, demonstrating that our method facilitates more efficient training. In Figure 5, we show the trends of sequential KL divergence over training steps for both preferred and dispreferred responses. 2D-DPO exhibits consistently lower KL divergence compared to DPO and 1D-DPO on both preferred and dispreferred responses. This indicates that 2-dimensional supervision can effectively balance KL divergence, preventing excessive deviations from the original model, thereby ensuring stronger training stability.
Fine-grained Reward Assignment. In Figure 6, we compare the fine-grained reward assignment for the same sample using various reward models: (a) 1D-RM utilizes only segment-level reward signals; (b) DPO is trained through direct preference alignment $^{7}$; (c) ORM is trained with response-level reward signals; (d) 2D-RM is trained with 2-dimensional supervision signals $^{8}$. As can be seen, 1) ORM fails to distinguish preference differences between segments, leading to inaccurate global scores; 2) 1D-RM identifies preferences for different segments but does not detect the redundancy issue in the second sentence; 3) DPO can identify certain preferred tokens, but also assigns high scores to dispreferred tokens and overlooks some preferred tokens, introducing significant noise. In contrast, our method not only distinguishes preferences across segments more effectively but also provides more accurate scoring.

Figure 6: The fine-grained reward assignment of different reward models on the same sample. All reward models are trained on HelpSteer-2D.
# 5 Conclusion
In this work, we introduce a novel approach to scale DPO with 2-dimensional reward signals. We first construct a preference dataset on both segment and aspect levels and then develop a 2D-DPO objective that learns the 2D preferences concurrently. Experiment results on popular benchmarks verify the effectiveness of our proposed method.
While the rise of direct preference alignment methods has promoted LLM development and application, most work focuses on the design of the loss
function instead of the intricacies of human preferences. In the future, we will continue our research on multi-dimensional feedback, aimed at optimally aligned preference optimization.
# 6 Limitations
Our work still has some limitations: 1) Due to the lack of open-source code and time constraints, we only compare with one 1-dimensional DPO method. More comparisons with related work would improve the credibility of our results. 2) Our method should also be validated on foundational ability benchmarks such as MMLU (Hendrycks et al., 2021) to verify that it does not degrade fundamental abilities. 3) Due to resource limitations, the effectiveness of our method is only verified on 7B-sized models. Its scaling behavior on larger models deserves future exploration to promote its application.
# References
Kian Ahrabian, Xihui Lin, Barun Patra, Vishrav Chaudhary, Alon Benhaim, Jay Pujara, and Xia Song. 2024. The hitchhiker's guide to human alignment with *po. arXiv preprint arXiv:2407.15229.
AI@Meta. 2024. Llama 3 model card.
Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello. 2024. A general theoretical paradigm to understand learning from human preferences. In International Conference on Artificial Intelligence and Statistics, pages 4447-4455. PMLR.
Ge Bai, Jie Liu, Xingyuan Bu, Yancheng He, Jiaheng Liu, Zhanhui Zhou, Zhuoran Lin, Wenbo Su, Tiezheng Ge, Bo Zheng, et al. 2024. Mt-bench-101: A fine-grained benchmark for evaluating large language models in multi-turn dialogues. arXiv preprint arXiv:2402.14762.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.
Meng Cao, Lei Shu, Lei Yu, Yun Zhu, Nevan Wichers, Yinxiao Liu, and Lei Meng. 2024. Drlc: Reinforcement learning with dense rewards from llm critic. arXiv preprint arXiv:2401.07382.
Alex J Chan, Hao Sun, Samuel Holt, and Mihaela van der Schaar. 2024. Dense reward for free in reinforcement learning from human feedback. arXiv preprint arXiv:2402.00782.
Zhipeng Chen, Kun Zhou, Wayne Xin Zhao, Junchen Wan, Fuzheng Zhang, Di Zhang, and Ji-Rong Wen. 2024. Improving large language models via fine-grained reinforcement learning with minimum editing constraint. arXiv preprint arXiv:2401.06081.
Karin de Langis, Ryan Koo, and Dongyeop Kang. 2024. Reinforcement learning with dynamic multi-reward weighting for multi-style controllable generation. arXiv preprint arXiv:2402.14146.
Yann Dubois, Balázs Galambosi, Percy Liang, and Tatsunori B Hashimoto. 2024. Length-controlled alpacaeval: A simple way to debias automatic evaluators. arXiv preprint arXiv:2404.04475.
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 2024. Kto: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306.
Weixin Feng, Xingyuan Bu, Chenchen Zhang, and Xubin Li. 2022. Beyond bounding box: Multi-modal knowledge learning for object detection. arXiv preprint arXiv:2205.04072.
Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, and Ji-Rong Wen. 2023. Beyond imitation: Leveraging fine-grained quality signals for alignment. arXiv preprint arXiv:2311.04072.
Yiju Guo, Ganqu Cui, Lifan Yuan, Ning Ding, Jiexin Wang, Huimin Chen, Bowen Sun, Ruobing Xie, Jie Zhou, Yankai Lin, et al. 2024. Controllable preference optimization: Toward controllable multi-objective alignment. arXiv preprint arXiv:2402.19085.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. Preprint, arXiv:2009.03300.
Jiwoo Hong, Noah Lee, and James Thorne. 2024. Reference-free monolithic preference optimization with odds ratio. arXiv preprint arXiv:2403.07691.
Jiaming Ji, Mickel Liu, Josef Dai, Xuehai Pan, Chi Zhang, Ce Bian, Boyuan Chen, Ruiyang Sun, Yizhou Wang, and Yaodong Yang. 2024. Beavertails: Towards improved safety alignment of llm via a human-preference dataset. Advances in Neural Information Processing Systems, 36.
Yuxin Jiang, Bo Huang, Yufei Wang, Xingshan Zeng, Liangyou Li, Yasheng Wang, Xin Jiang, Lifeng Shang, Ruiming Tang, and Wei Wang. 2024. Bridging and modeling correlations in pairwise data for direct preference optimization. arXiv preprint arXiv:2408.07471.
Sergey Levine. 2018. Reinforcement learning and control as probabilistic inference: Tutorial and review. ArXiv, abs/1805.00909.
Shilong Li, Yancheng He, Hangyu Guo, Xingyuan Bu, Ge Bai, Jie Liu, Jiaheng Liu, Xingwei Qu, Yangguang Li, Wanli Ouyang, et al. 2024a. Graphreader: Building graph-based agent to enhance long-context abilities of large language models. arXiv preprint arXiv:2406.14550.
Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Tianhao Wu, Banghua Zhu, Joseph E Gonzalez, and Ion Stoica. 2024b. From crowdsourced data to high-quality benchmarks: Arena-hard and benchbuilder pipeline. arXiv preprint arXiv:2406.11939.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step. arXiv preprint arXiv:2305.20050.
Jianyu Liu, Hangyu Guo, Ranjie Duan, Xingyuan Bu, Yancheng He, Shilong Li, Hui Huang, Jiaheng Liu, Yingshui Tan, Yucheng Wang, Chenchen Jing, Xingwei Qu, Xiao Zhang, Pei Wang, Yanan Wu, Jihao Gu, Yangguang Li, and Jianke Zhu. 2024a. Dream: Disentangling risks to enhance safety alignment in multimodal large language models. arXiv preprint arXiv.
Jie Liu, Zhanhui Zhou, Jiaheng Liu, Xingyuan Bu, Chao Yang, Han-Sen Zhong, and Wanli Ouyang. 2024b. Iterative length-regularized direct preference optimization: A case study on improving 7b language models to gpt-4 level. arXiv preprint arXiv:2406.11817.
Yu Meng, Mengzhou Xia, and Danqi Chen. 2024. Simpo: Simple preference optimization with a reference-free reward. arXiv preprint arXiv:2405.14734.
A. Ng, Daishi Harada, and Stuart J. Russell. 1999. Policy invariance under reward transformations: Theory and application to reward shaping. In International Conference on Machine Learning.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744.
Alexander Pan, Jun Shern Chan, Andy Zou, Nathaniel Li, Steven Basart, Thomas Woodside, Hanlin Zhang, Scott Emmons, and Dan Hendrycks. 2023. Do the rewards justify the means? measuring trade-offs between rewards and ethical behavior in the machiavelli benchmark. In International Conference on Machine Learning, pages 26837-26867. PMLR.
Junran Peng, Qing Chang, Haoran Yin, Xingyuan Bu, Ji-jun Sun, Lingxi Xie, Xiaopeng Zhang, Qi Tian, and Zhaoxiang Zhang. 2023. Gaia-universe: Everything is super-netify. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(10):11856-11868.
Rafael Rafailov, Joey Hejna, Ryan Park, and Chelsea Finn. 2024. From $r$ to $q^*$ : Your language model is secretly a q-function. arXiv preprint arXiv:2404.12358.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. In Thirty-seventh Conference on Neural Information Processing Systems.
Alexandre Rame, Guillaume Couairon, Corentin Dancette, Jean-Baptiste Gaya, Mustafa Shukor, Laure Soulier, and Matthieu Cord. 2024. Rewarded soups: towards pareto-optimal alignment by interpolating weights fine-tuned on diverse rewards. Advances in Neural Information Processing Systems, 36.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
Prasann Singhal, Tanya Goyal, Jiacheng Xu, and Greg Durrett. 2023. A long way to go: Investigating length correlations in rlhf. arXiv preprint arXiv:2310.03716.
Ziang Song, Tianle Cai, Jason D Lee, and Weijie J Su. 2023. Reward collapse in aligning large language models. arXiv preprint arXiv:2305.17608.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008-3021.
Haoxiang Wang, Yong Lin, Wei Xiong, Rui Yang, Shizhe Diao, Shuang Qiu, Han Zhao, and Tong Zhang. 2024a. Arithmetic control of llms for diverse user preferences: Directional preference alignment with multi-objective rewards. arXiv preprint arXiv:2402.18571.
Zhilin Wang, Yi Dong, Olivier Delalleau, Jiaqi Zeng, Gerald Shen, Daniel Egert, Jimmy J. Zhang, Makesh Narsimhan Sreedhar, and Oleksii Kuchaiev. 2024b. Helpsteer2: Open-source dataset for training top-performing reward models. Preprint, arXiv:2406.08673.
Yotam Wolf, Noam Wies, Oshri Avnery, Yoav Levine, and Amnon Shashua. 2023. Fundamental limitations of alignment in large language models. arXiv preprint arXiv:2304.11082.
Yanan Wu, Jie Liu, Xingyuan Bu, Jiaheng Liu, Zhanhui Zhou, Yuanxing Zhang, Chenchen Zhang, Zhiqi Bai, Haibin Chen, Tiezheng Ge, et al. 2024a. Conceptmath: A bilingual concept-wise benchmark for measuring mathematical reasoning of large language models. arXiv preprint arXiv:2402.14660.
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh Hajishirzi. 2024b. Finegrained human feedback gives better rewards for language model training. Advances in Neural Information Processing Systems, 36.
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zhihao Fan. 2024a. Qwen2 technical report. arXiv preprint arXiv:2407.10671.
Kailai Yang, Zhiwei Liu, Qianqian Xie, Jimin Huang, Erxue Min, and Sophia Ananiadou. 2024b. Selective preference optimization via token-level reward function estimation. arXiv preprint arXiv:2408.13518.
Rui Yang, Xiaoman Pan, Feng Luo, Shuang Qiu, Han Zhong, Dong Yu, and Jianshu Chen. 2024c. Rewards-in-context: Multi-objective alignment of foundation models with dynamic preference adjustment. arXiv preprint arXiv:2402.10207.
Eunseop Yoon, Hee Suk Yoon, SooHwan Eom, Gunsoo Han, Daniel Wontae Nam, Daejin Jo, Kyoung-Woon On, Mark A Hasegawa-Johnson, Sungwoong Kim, and Chang D Yoo. 2024. Tlcr: Token-level continuous reward for fine-grained reinforcement learning from human feedback. arXiv preprint arXiv:2407.16574.
Yongcheng Zeng, Guoqing Liu, Weiyu Ma, Ning Yang, Haifeng Zhang, and Jun Wang. 2024. Token-level direct preference optimization. arXiv preprint arXiv:2404.11999.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-bench and chatbot arena. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
Zhanhui Zhou, Jie Liu, Jing Shao, Xiangyu Yue, Chao Yang, Wanli Ouyang, and Yu Qiao. 2024. Beyond one-preference-fits-all alignment: Multi-objective direct preference optimization. In Findings of the Association for Computational Linguistics ACL 2024, pages 10586-10613.
Brian D. Ziebart. 2010. Modeling purposeful adaptive behavior with the principle of maximum causal entropy.
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593.
# A Mathematical Derivations
# A.1 Preliminaries
In classic RLHF methods, the optimization objective is typically expressed as the following KL-constrained reward maximization, which can be rewritten with an entropy bonus:
$$
\begin{array}{l} \max _ {\pi_ {\theta}} \mathbb {E} _ {a _ {t} \sim \pi_ {\theta} (\cdot | \mathbf {s} _ {t})} \sum_ {t = 0} ^ {T} [ r (\mathbf {s} _ {t}, \mathbf {a} _ {t}) - \beta \mathcal {D} _ {K L} [ \pi_ {\theta} (\mathbf {a} _ {t} | \mathbf {s} _ {t}) | | \pi_ {r e f} (\mathbf {a} _ {t} | \mathbf {s} _ {t}) ] ] (7) \\ = \max _ {\pi_ {\theta}} \mathbb {E} _ {a _ {t} \sim \pi_ {\theta} (\cdot | \mathbf {s} _ {t})} \sum_ {t = 0} ^ {T} \left[ r \left(\mathbf {s} _ {t}, \mathbf {a} _ {t}\right) - \beta \log \frac {\pi_ {\theta} \left(\mathbf {a} _ {t} \mid \mathbf {s} _ {t}\right)}{\pi_ {r e f} \left(\mathbf {a} _ {t} \mid \mathbf {s} _ {t}\right)} \right] (8) \\ = \max _ {\pi_ {\theta}} \mathbb {E} _ {a _ {t} \sim \pi_ {\theta} (\cdot | \mathbf {s} _ {t})} [ \sum_ {t = 0} ^ {T} (r (\mathbf {s} _ {t}, \mathbf {a} _ {t}) + \beta \log \pi_ {r e f} (\mathbf {a} _ {t} | \mathbf {s} _ {t})) + \beta \mathcal {H} (\pi_ {\theta}) | \mathbf {s} _ {0} \sim \rho (\mathbf {s} _ {0}) ] (9) \\ \end{array}
$$
The fixed point solution in the general maximum entropy RL setting is (Ziebart, 2010; Levine, 2018):
$$
\pi^ {*} \left(\mathbf {a} _ {t} \mid \mathbf {s} _ {t}\right) = e ^ {\left(Q ^ {*} \left(\mathbf {s} _ {t}, \mathbf {a} _ {t}\right) - V ^ {*} \left(\mathbf {s} _ {t}\right)\right) / \beta} \tag {10}
$$
The Bradley-Terry preference model in the token-level MDP is:
$$
p ^ {*} \left(\tau^ {w} \succeq \tau^ {l}\right) = \frac {\exp \left(\sum_ {i = 1} ^ {N} r \left(\mathbf {s} _ {i} ^ {w} , \mathbf {a} _ {i} ^ {w}\right)\right)}{\exp \left(\sum_ {i = 1} ^ {N} r \left(\mathbf {s} _ {i} ^ {w} , \mathbf {a} _ {i} ^ {w}\right)\right) + \exp \left(\sum_ {i = 1} ^ {M} r \left(\mathbf {s} _ {i} ^ {l} , \mathbf {a} _ {i} ^ {l}\right)\right)} \tag {11}
$$
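Numerically, Eq. 11 is a two-way softmax over the summed token-level rewards, equivalent to a sigmoid of the reward difference. A small sketch with made-up reward values:

```python
import math


def bt_preference_prob(rewards_w, rewards_l):
    """P(tau_w >= tau_l) under the Bradley-Terry model of Eq. 11."""
    sw, sl = sum(rewards_w), sum(rewards_l)
    return math.exp(sw) / (math.exp(sw) + math.exp(sl))


def bt_sigmoid(rewards_w, rewards_l):
    # Equivalent sigmoid form: sigma(sum_w - sum_l).
    return 1.0 / (1.0 + math.exp(-(sum(rewards_w) - sum(rewards_l))))
```

Equal total rewards give probability 0.5, and the two forms agree to machine precision.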
# A.2 DPO in the Token Level MDP
The $Q$-function measures the relationship between the current timestep and future returns (Rafailov et al., 2024):
$$
Q^{*}\left(s_{t}, a_{t}\right)=\left\{\begin{array}{ll} r\left(s_{t}, a_{t}\right)+\beta \log \pi_{ref}\left(a_{t} \mid s_{t}\right)+V^{*}\left(s_{t+1}\right), & \text{if } s_{t+1} \text{ is not terminal} \\ r\left(s_{t}, a_{t}\right)+\beta \log \pi_{ref}\left(a_{t} \mid s_{t}\right), & \text{if } s_{t+1} \text{ is terminal} \end{array}\right. \tag{12}
$$
We derive the total reward obtained along the entire trajectory based on the above definitions:
$$
\sum_ {t = 0} ^ {T - 1} r \left(s _ {t}, a _ {t}\right) = \sum_ {t = 0} ^ {T - 1} \left(Q ^ {*} \left(s _ {t}, a _ {t}\right) - \beta \log \pi_ {\text {r e f}} \left(a _ {t} \mid s _ {t}\right) - V ^ {*} \left(s _ {t + 1}\right)\right) \tag {13}
$$
According to the definition in Eq. 12, we know that $V^{*}(\mathbf{s}_{T}) = 0$. Combining this with the fixed point solution of the optimal policy (Eq. 10), we can further derive:
$$
\begin{array}{l} = Q ^ {*} \left(s _ {0}, a _ {0}\right) - \beta \log \pi_ {\text {r e f}} \left(a _ {0} \mid s _ {0}\right) + \sum_ {t = 1} ^ {T - 1} \left(Q ^ {*} \left(s _ {t}, a _ {t}\right) - V ^ {*} \left(s _ {t}\right) - \beta \log \pi_ {\text {r e f}} \left(a _ {t} \mid s _ {t}\right)\right) (14) \\ = Q ^ {*} \left(s _ {0}, a _ {0}\right) - \beta \log \pi_ {\text {r e f}} \left(a _ {0} \mid s _ {0}\right) + \sum_ {t = 1} ^ {T - 1} \beta \log \frac {\pi^ {*} \left(a _ {t} \mid s _ {t}\right)}{\pi_ {\text {r e f}} \left(a _ {t} \mid s _ {t}\right)} (15) \\ = V ^ {*} \left(s _ {0}\right) + \sum_ {t = 0} ^ {T - 1} \beta \log \frac {\pi^ {*} \left(a _ {t} \mid s _ {t}\right)}{\pi_ {\mathrm {r e f}} \left(a _ {t} \mid s _ {t}\right)} (16) \\ \end{array}
$$
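The telescoping argument of Eqs. 13-16 can be verified numerically on a toy trajectory: defining $Q^*$ by Eq. 12 and $\beta\log\pi^*(\mathbf{a}_t|\mathbf{s}_t) = Q^*(\mathbf{s}_t,\mathbf{a}_t) - V^*(\mathbf{s}_t)$, the total reward collapses to $V^*(\mathbf{s}_0)$ plus the summed log-ratios. A sketch with arbitrary made-up numbers:

```python
import random

random.seed(0)
beta = 0.2
T = 5
r = [random.uniform(-1, 1) for _ in range(T)]            # per-token rewards
log_pref = [random.uniform(-3, -0.1) for _ in range(T)]  # log pi_ref(a_t | s_t)
V = [random.uniform(-1, 1) for _ in range(T)] + [0.0]    # V*(s_T) = 0

# Eq. 12: Q*(s_t, a_t) = r + beta * log pi_ref + V*(s_{t+1}).
Q = [r[t] + beta * log_pref[t] + V[t + 1] for t in range(T)]
# Eq. 19: beta * log pi*(a_t | s_t) = Q*(s_t, a_t) - V*(s_t).
log_pi_star = [(Q[t] - V[t]) / beta for t in range(T)]

lhs = sum(r)  # total trajectory reward (Eq. 13, left-hand side)
rhs = V[0] + sum(beta * (log_pi_star[t] - log_pref[t]) for t in range(T))
assert abs(lhs - rhs) < 1e-9  # matches Eq. 16
```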
By substituting the above result into Eq. 11, we can eliminate $V^{*}(\mathbf{s}_{0})$ in the same way as removing the partition function in DPO, obtaining the token-level BT model that conforms to the MDP:
$$
p _ {\pi^ {*}} \left(\tau^ {w} \succeq \tau^ {l}\right) = \sigma \left(\sum_ {t = 0} ^ {N - 1} \beta \log \frac {\pi^ {*} \left(\mathbf {a} _ {t} ^ {w} \mid \mathbf {s} _ {t} ^ {w}\right)}{\pi_ {\text {r e f}} \left(\mathbf {a} _ {t} ^ {w} \mid \mathbf {s} _ {t} ^ {w}\right)} - \sum_ {t = 0} ^ {M - 1} \beta \log \frac {\pi^ {*} \left(\mathbf {a} _ {t} ^ {l} \mid \mathbf {s} _ {t} ^ {l}\right)}{\pi_ {\text {r e f}} \left(\mathbf {a} _ {t} ^ {l} \mid \mathbf {s} _ {t} ^ {l}\right)}\right) \tag {17}
$$
Thus, the loss of DPO at the token level is:
$$
\mathcal {L} \left(\pi_ {\theta}, \mathcal {D}\right) = - \mathbb {E} _ {\left(\tau_ {w}, \tau_ {l}\right) \sim \mathcal {D}} \left[ \log \sigma \left(\left(\sum_ {t = 0} ^ {N - 1} \beta \log \frac {\pi^ {*} \left(\mathbf {a} _ {t} ^ {w} \mid \mathbf {s} _ {t} ^ {w}\right)}{\pi_ {\text {r e f}} \left(\mathbf {a} _ {t} ^ {w} \mid \mathbf {s} _ {t} ^ {w}\right)}\right) - \left(\sum_ {t = 0} ^ {M - 1} \beta \log \frac {\pi^ {*} \left(\mathbf {a} _ {t} ^ {l} \mid \mathbf {s} _ {t} ^ {l}\right)}{\pi_ {\text {r e f}} \left(\mathbf {a} _ {t} ^ {l} \mid \mathbf {s} _ {t} ^ {l}\right)}\right)\right) \right] \tag {18}
$$
# A.3 The Token-level optimal advantage function of DPO
By log-linearizing the fixed point solution of the optimal policy at the token level (Eq. 10), we obtain:
$$
\beta \log \pi^ {*} (\mathbf {a} _ {t} \mid \mathbf {s} _ {t}) = Q ^ {*} (\mathbf {s} _ {t}, \mathbf {a} _ {t}) - V ^ {*} (\mathbf {s} _ {t}) \tag {19}
$$
Then, combining with Eq. 12:
$$
\beta \log \frac {\pi^ {*} (\mathbf {a} _ {t} \mid \mathbf {s} _ {t})}{\pi_ {\text {r e f}} (\mathbf {a} _ {t} \mid \mathbf {s} _ {t})} = r (\mathbf {s} _ {t}, \mathbf {a} _ {t}) + V ^ {*} (\mathbf {s} _ {t + 1}) - V ^ {*} (\mathbf {s} _ {t}). \tag {20}
$$
Thus, we can establish the relationship between $\beta \log \frac{\pi^{*}(\mathbf{a}_{t}|\mathbf{s}_{t})}{\pi_{\mathrm{ref}}(\mathbf{a}_{t}|\mathbf{s}_{t})}$ and $r(\mathbf{s}_t,\mathbf{a}_t)$, and according to the definition of Rafailov et al. (2023):
Definition 1 Two reward functions $r(\mathbf{s}_t, \mathbf{a}_t)$ and $r'(\mathbf{s}_t, \mathbf{a}_t)$ are equivalent if there exists a potential function $\Phi(\mathbf{s})$ , such that $r'(\mathbf{s}_t, \mathbf{a}_t) = r(\mathbf{s}_t, \mathbf{a}_t) + \Phi(\mathbf{s}_{t+1}) - \Phi(\mathbf{s}_t)$ .
We can conclude that the optimal advantage function is $\beta \log \frac{\pi^{*}(\mathbf{a}_{t}|\mathbf{s}_{t})}{\pi_{\mathrm{ref}}(\mathbf{a}_{t}|\mathbf{s}_{t})}$ .
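Definition 1 is potential-based reward shaping (Ng et al., 1999): on full trajectories the potential terms telescope, so equivalent rewards shift every return by the same constant (given a shared prompt state and zero potential at terminal states) and leave the Bradley-Terry probability of Eq. 11 unchanged. A quick numeric check with made-up values:

```python
import math
import random

random.seed(1)


def bt_prob(sum_w, sum_l):
    # Sigmoid form of the Bradley-Terry probability in Eq. 11.
    return 1.0 / (1.0 + math.exp(sum_l - sum_w))


T = 4
r_w = [random.uniform(-1, 1) for _ in range(T)]  # chosen-trajectory rewards
r_l = [random.uniform(-1, 1) for _ in range(T)]  # rejected-trajectory rewards

# Potentials: shared prompt state s_0, zero potential at terminal states.
phi0 = random.uniform(-2, 2)
phi_w = [phi0] + [random.uniform(-2, 2) for _ in range(T - 1)] + [0.0]
phi_l = [phi0] + [random.uniform(-2, 2) for _ in range(T - 1)] + [0.0]

# r'(s_t, a_t) = r(s_t, a_t) + Phi(s_{t+1}) - Phi(s_t)  (Definition 1).
shaped_w = [r_w[t] + phi_w[t + 1] - phi_w[t] for t in range(T)]
shaped_l = [r_l[t] + phi_l[t + 1] - phi_l[t] for t in range(T)]

# The potentials telescope, so the preference probability is unchanged.
assert abs(bt_prob(sum(r_w), sum(r_l))
           - bt_prob(sum(shaped_w), sum(shaped_l))) < 1e-9
```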
# A.4 Proving that the $\beta$ of DPO can vary across tokens in the token-level MDP
When $\beta$ is considered as a variable dependent on $t$ , Eq. 9 is transformed into:
$$
\max _ {\pi_ {\theta}} \mathbb {E} _ {a _ {t} \sim \pi_ {\theta} (\cdot | \mathbf {s} _ {t})} \sum_ {t = 0} ^ {T} \left[ \left(r \left(\mathbf {s} _ {t}, \mathbf {a} _ {t}\right) + \beta_ {t} \log \pi_ {r e f} \left(\mathbf {a} _ {t} \mid \mathbf {s} _ {t}\right)\right) - \beta_ {t} \log \pi_ {\theta} \left(\mathbf {a} _ {t} \mid \mathbf {s} _ {t}\right) \right] \tag {21}
$$
where $\beta_{t}$ depends solely on $\mathbf{a}_t$ and $\mathbf{s}_t$ . Then, according to Maximum Entropy Reinforcement Learning with Fixed Dynamics (Levine, 2018), the above formula can be rewritten in a form that includes the KL divergence:
$$
= \mathbb{E}_{\mathbf{s}_{t}} \left[ -\beta_{t} D_{KL}\left( \pi_{\theta}(\mathbf{a}_{t} | \mathbf{s}_{t}) \,\middle\|\, \frac{1}{\exp(V(\mathbf{s}_{t}))} \exp\left( \frac{r(\mathbf{s}_{t}, \mathbf{a}_{t}) + \beta_{t} \log \pi_{ref}(\mathbf{a}_{t} | \mathbf{s}_{t})}{\beta_{t}} \right) \right) + V(\mathbf{s}_{t}) \right] \tag{22}
$$
where $V(\mathbf{s}_t) = \beta_t\log \int_{\mathcal{A}}[\exp \frac{r(\mathbf{s}_t,\mathbf{a}_t)}{\beta_t}\pi_{ref}(\mathbf{a}_t|\mathbf{s}_t)]d\mathbf{a}_t$ . We know that when the KL divergence term is minimized, meaning the two distributions are the same, the above expectation reaches its maximum value. That is:
$$
\pi_ {\theta} (\mathbf {a} _ {t} | \mathbf {s} _ {t}) = \frac {1}{\exp (V (\mathbf {s} _ {t}))} \exp \left(\frac {r (\mathbf {s} _ {t} , \mathbf {a} _ {t}) + \beta_ {t} \log \pi_ {r e f} (\mathbf {a} _ {t} | \mathbf {s} _ {t})}{\beta_ {t}}\right) \tag {23}
$$
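For a small discrete action space one can check that the policy of Eq. 23 is properly normalized, since $V(\mathbf{s}_t)$ is the log-partition function; here we write the normalizer as $\exp(V/\beta_t)$, consistent with Eq. 25. A sketch with made-up per-action values:

```python
import math

beta_t = 0.5
r = [0.3, -0.2, 1.1]      # made-up r(s_t, a) for three actions
pi_ref = [0.2, 0.5, 0.3]  # reference policy over the same actions

# V(s_t) = beta_t * log sum_a pi_ref(a) * exp(r(s_t, a) / beta_t)
V = beta_t * math.log(sum(p * math.exp(ri / beta_t)
                          for ri, p in zip(r, pi_ref)))

# Eq. 23 with the normalizer written as exp(V / beta_t):
pi = [math.exp((ri + beta_t * math.log(p)) / beta_t - V / beta_t)
      for ri, p in zip(r, pi_ref)]

assert abs(sum(pi) - 1.0) < 1e-9  # V is exactly the log-partition function
```

The highest-reward action receives the largest probability, as expected for an exponentiated-advantage policy.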
Based on this, we define:
$$
Q^{*}\left(s_{t}, a_{t}\right)=\left\{\begin{array}{ll} r\left(s_{t}, a_{t}\right)+\beta_{t} \log \pi_{ref}\left(a_{t} \mid s_{t}\right)+V^{*}\left(s_{t+1}\right), & \text{if } s_{t+1} \text{ is not terminal} \\ r\left(s_{t}, a_{t}\right)+\beta_{t} \log \pi_{ref}\left(a_{t} \mid s_{t}\right), & \text{if } s_{t+1} \text{ is terminal} \end{array}\right. \tag{24}
$$
Thus we can obtain the solution for the optimal policy:
$$
\pi_ {\theta} \left(\mathbf {a} _ {t} \mid \mathbf {s} _ {t}\right) = e ^ {\left(Q \left(\mathbf {s} _ {t}, \mathbf {a} _ {t}\right) - V \left(\mathbf {s} _ {t}\right)\right) / \beta_ {t}} \tag {25}
$$
Thus, based on the fixed-point solution with a varying $\beta$ in Eq. 25, we can continue the derivation in Section A.2 to obtain the token-level MDP of DPO with varying $\beta$ values for different tokens and perform a similar derivation as in Appendix A.3.
Finally, it can be concluded that $\beta_{t}\log \frac{\pi^{*}(\mathbf{a}_{t}|\mathbf{s}_{t})}{\pi_{\mathrm{ref}}(\mathbf{a}_{t}|\mathbf{s}_{t})}$ can serve as the token-level advantage function.
# A.5 Gradient Analysis
Here is the gradient analysis of token-level DPO (Eq. 5) incorporating fine-grained signals. We define:
$$
R _ {w, k} = \beta \sum_ {t = n _ {k}} ^ {n _ {k} + l _ {k}} r _ {w, k} \log \frac {\pi^ {*} \left(\mathbf {a} _ {t} ^ {w} \mid \mathbf {s} _ {t} ^ {w}\right)}{\pi_ {r e f} \left(\mathbf {a} _ {t} ^ {w} \mid \mathbf {s} _ {t} ^ {w}\right)}, \quad R _ {w} = \sum_ {k = 0} ^ {S _ {w} - 1} R _ {w, k} \tag {26}
$$
$$
R _ {l, k} = \beta \sum_ {t = n _ {k}} ^ {n _ {k} + l _ {k}} r _ {l, k} \log \frac {\pi^ {*} \left(\mathbf {a} _ {t} ^ {l} \mid \mathbf {s} _ {t} ^ {l}\right)}{\pi_ {r e f} \left(\mathbf {a} _ {t} ^ {l} \mid \mathbf {s} _ {t} ^ {l}\right)}, \quad R _ {l} = \sum_ {k = 0} ^ {S _ {l} - 1} R _ {l, k}. \tag {27}
$$
Then, Eq. 5 can be transformed into:
$$
\mathcal {L} \left(\pi_ {\theta}, D\right) = - \mathbb {E} _ {\left(\tau_ {w}, \tau_ {l}\right) \sim D} \log \sigma \left(R _ {w} - R _ {l}\right). \tag {28}
$$
Then, differentiate the above equation:
$$
\nabla_ {\theta} \mathcal {L} (\pi_ {\theta}, D) = - \mathbb {E} _ {(\tau_ {w}, \tau_ {l}) \sim D} [ \sigma (R _ {l} - R _ {w}) \cdot (\nabla_ {\theta} R _ {w} - \nabla_ {\theta} R _ {l}) ]. \tag {29}
$$
Expanding the above equation, we get:
$$
\nabla_{\theta} \mathcal{L}\left(\pi_{\theta}, D\right) = -\mathbb{E}_{\left(\tau_{w}, \tau_{l}\right) \sim D} \bigg[ \beta \cdot \sigma \bigg( \sum_{k = 0}^{S_{l} - 1} \beta \sum_{t = n_{k}}^{n_{k} + l_{k}} r_{l, k} \log \frac{\pi^{*}\left(\mathbf{a}_{t}^{l} \mid \mathbf{s}_{t}^{l}\right)}{\pi_{ref}\left(\mathbf{a}_{t}^{l} \mid \mathbf{s}_{t}^{l}\right)} - \sum_{k = 0}^{S_{w} - 1} \beta \sum_{t = n_{k}}^{n_{k} + l_{k}} r_{w, k} \log \frac{\pi^{*}\left(\mathbf{a}_{t}^{w} \mid \mathbf{s}_{t}^{w}\right)}{\pi_{ref}\left(\mathbf{a}_{t}^{w} \mid \mathbf{s}_{t}^{w}\right)} \bigg) \tag{30}
$$

$$
\cdot \bigg( \sum_{k = 0}^{S_{w} - 1} \sum_{t = n_{k}}^{n_{k} + l_{k}} r_{w, k} \nabla_{\theta} \log \pi_{\theta}\left(\mathbf{a}_{t}^{w} \mid \mathbf{s}_{t}^{w}\right) - \sum_{k = 0}^{S_{l} - 1} \sum_{t = n_{k}}^{n_{k} + l_{k}} r_{l, k} \nabla_{\theta} \log \pi_{\theta}\left(\mathbf{a}_{t}^{l} \mid \mathbf{s}_{t}^{l}\right) \bigg) \bigg]. \tag{31}
$$
We can see that the gradient difference between the chosen and rejected segments is entirely determined by $r_{w,k}$ and $r_{l,k}$ . Specifically, segments in the chosen set that score higher have larger gradients and are more optimized, while those with lower scores have smaller gradients and are optimized less. The same applies to the rejected response. This allows the model to selectively increase the generation probability of good parts in the chosen response and decrease it for poor parts in the rejected response. Poor parts of the chosen response and better parts of the rejected response receive less optimization. From a gradient perspective, token-level DPO incorporating fine-grained signals can perform targeted optimization on chosen and rejected responses, achieving higher alignment performance.
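Concretely, the coefficient multiplying $\nabla_\theta \log\pi_\theta$ for chosen segment $k$ is $\beta\,\sigma(R_l - R_w)\,r_{w,k}$, so a higher segment score directly scales up that segment's gradient. A toy sketch (all numbers are illustrative):

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


beta = 0.2
# Per-segment scores and summed log-ratios for a chosen/rejected pair.
r_w = [1.8, 1.2, 0.9]          # chosen-segment scores
r_l = [0.4, 0.7]               # rejected-segment scores
logratio_w = [0.5, 0.2, 0.1]   # summed log(pi/pi_ref) per chosen segment
logratio_l = [-0.3, 0.4]       # summed log(pi/pi_ref) per rejected segment

# Eqs. 26-27: score-weighted segment rewards.
R_w = beta * sum(s * lr for s, lr in zip(r_w, logratio_w))
R_l = beta * sum(s * lr for s, lr in zip(r_l, logratio_l))

# Gradient coefficient of each chosen segment's log-prob term (Eq. 31).
coeffs = [beta * sigmoid(R_l - R_w) * s for s in r_w]
# Higher-scored segments receive strictly larger gradient magnitude.
assert coeffs[0] > coeffs[1] > coeffs[2]
```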
# B Implementation Details
# B.1 Hyper-parameters
For all compared methods, we set $\beta$ to 0.2, and the final loss adds the SFT loss with a coefficient of 0.1, except for ORPO. To ensure a fair comparison, in our method the $\beta$ is adaptively adjusted during training by calculating the average score $r$ of all segments within a batch, to achieve equivalence with $\beta = 0.2$. For the specific methods: the $\gamma$ of SimPO is set to 0.5; in TDPO, we use TDPO$_2$ with $\alpha$ set to 0.5.
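The adaptive $\beta$ can be sketched as a simple rescaling by the average segment score; the helper name below is our own, not from the released code:

```python
def equivalent_beta(beta, segment_scores):
    """Rescale beta so that beta_equ * mean(segment score) == beta."""
    mean_score = sum(segment_scores) / len(segment_scores)
    return beta / mean_score


# When segment scores average 1.0, the equivalent beta is just beta.
assert equivalent_beta(0.2, [1.0, 1.0, 1.0]) == 0.2
assert abs(equivalent_beta(0.2, [1.5, 0.5]) - 0.2) < 1e-12
```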
For the 2D-DPO weights $\mathbf{W}$, we follow Wang et al. (2024b) and use a heuristic search, setting the weights for the five aspects (Helpfulness, Correctness, Safety, Completeness, Clarity) to [0.3, 0.4, 0.1, 0.1, 0.1]. For 1D-DPO, we only use Helpfulness, which measures overall performance, i.e., the weights are [1, 0, 0, 0, 0].
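Aggregating a segment's five aspect scores is then a weighted sum with $\mathbf{W}$; under the 1D-DPO weights it reduces to the Helpfulness score alone. A small sketch (the example scores are made up):

```python
ASPECTS = ["Helpfulness", "Correctness", "Safety", "Completeness", "Clarity"]
W_2D = [0.3, 0.4, 0.1, 0.1, 0.1]
W_1D = [1.0, 0.0, 0.0, 0.0, 0.0]


def aggregate(weights, aspect_scores):
    """Scalar segment score: dot product of aspect weights and scores."""
    return sum(w * s for w, s in zip(weights, aspect_scores))


scores = [3.0, 4.0, 2.0, 3.0, 4.0]  # example 0-4 aspect scores, one segment
assert abs(aggregate(W_2D, scores) - 3.4) < 1e-9
assert aggregate(W_1D, scores) == scores[0]  # 1D-DPO keeps only Helpfulness
```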
# B.2 Training Setup
We trained all models on 8 A100-80GB SXM GPUs. The per_device_train_batch_size was set to 1, gradient_accumulation_steps to 8, and we used bfloat16 precision. The initial learning rate was set to 1e-7 with cosine decay. Each method was trained for 700 steps.
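With this configuration, the effective batch size is the product of the per-device batch size, the gradient accumulation steps, and the number of GPUs:

```python
per_device_train_batch_size = 1
gradient_accumulation_steps = 8
num_gpus = 8

effective_batch_size = (per_device_train_batch_size
                        * gradient_accumulation_steps
                        * num_gpus)
assert effective_batch_size == 64  # preference pairs per optimizer step
```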
# B.3 Core Code
The core code of 2D-DPO is as follows:
```python
|
| 466 |
+
def 2D_DPO_loss(
|
| 467 |
+
self,
|
| 468 |
+
policy_chosen_logps: "torch.Tensor",
|
| 469 |
+
policy_rejected_logps: "torch.Tensor",
|
| 470 |
+
reference_chosen_logps: "torch.Tensor",
|
| 471 |
+
reference_rejected_logps: "torch.Tensor",
|
| 472 |
+
chosen Scores: "torch.Tensor",
|
| 473 |
+
rejected Scores: "torch.Tensor" ) -> Tuple["torch.Tensor","torch.Tensor","torch.Tensor']:
|
| 474 |
+
chosen_rewards $=$ policy_chosen_logps.to(self.acceleratordevice)-
|
| 475 |
+
reference_chosen_logps.to(self.acceleratordevice)
|
| 476 |
+
rejected_rewards $=$ policy_rejected_logps.to(self.acceleratordevice)-
|
| 477 |
+
reference_rejected_logps.to(self.acceleratordevice)
|
| 478 |
+
chosen_score $=$ chosen_score[:,::-1] /4+1
|
| 479 |
+
rejected_score $=$ rejected_score[:,::-1] /4+1
|
| 480 |
+
def get_chunked_idxs(scores):
|
| 481 |
+
def get_chunked_idxs(scores):
    # Find chunk boundaries: a new chunk starts wherever the per-segment
    # score vector (over aspects) changes.
    chunked_idx = []
    for idx in range(scores.shape[-1]):
        if idx == 0:
            pre_score = scores[:, idx]
        else:
            if (scores[:, idx] != pre_score).any():
                chunked_idx.append(idx)
                pre_score = scores[:, idx]
    chunked_idx.append(scores.shape[-1])
    return chunked_idx

def compute_equ_beta(beta, chosen_score, rejected_score, min_chunk_num):
    # Rescale beta by the average chunk score so that chosen/rejected terms
    # are comparable across responses with different chunk counts.
    equ_beta_chosen = beta / (chosen_score.sum(-1) / min_chunk_num)
    equ_beta_rejected = beta / (rejected_score.sum(-1) / min_chunk_num)
    return equ_beta_chosen, equ_beta_rejected

bs = chosen_score.shape[0]
losses = 0
for idx in range(bs):
    chosen_chunked_idx = get_chunked_idxs(chosen_score[idx])
    rejected_chunked_idx = get_chunked_idxs(rejected_score[idx])
    chosen_total_scores = []
    rejected_total_scores = []
    for i in range(len(chosen_chunked_idx) - 1):
        # Aspect-weighted score of each chunk (scores are constant within a chunk).
        chosen_score_g = (self.W * chosen_score[idx, :, chosen_chunked_idx[i]]).sum(-1)
        chosen_total_scores.append(chosen_score_g)
    for i in range(len(rejected_chunked_idx) - 1):
        rejected_score_g = (self.W * rejected_score[idx, :, rejected_chunked_idx[i]]).sum(-1)
        rejected_total_scores.append(rejected_score_g)
    if len(chosen_total_scores) == 0 or len(rejected_total_scores) == 0:
        continue
    chosen_total_scores = torch.stack(chosen_total_scores)
    rejected_total_scores = torch.stack(rejected_total_scores)
    min_chunk_num = min(len(chosen_chunked_idx) - 1, len(rejected_chunked_idx) - 1)
    # Pair the highest-scored chosen chunks with the lowest-scored rejected chunks.
    top_chosen_indices = torch.argsort(chosen_total_scores, dim=0, descending=True)[:min_chunk_num]
    top_rejected_indices = torch.argsort(rejected_total_scores, dim=0)[:min_chunk_num]
    top_chosen_scores = chosen_total_scores[top_chosen_indices]
    top_rejected_scores = rejected_total_scores[top_rejected_indices]

    equ_beta_chosen, equ_beta_rejected = compute_equ_beta(
        self.beta, top_chosen_scores, top_rejected_scores, min_chunk_num)
    for i in range(min_chunk_num):
        chosen_chunk_idx = top_chosen_indices[i].item()
        rejected_chunk_idx = top_rejected_indices[i].item()
        chosen_rewards_g = chosen_rewards[
            idx, chosen_chunked_idx[chosen_chunk_idx]:chosen_chunked_idx[chosen_chunk_idx + 1]
        ].sum(-1)
        chosen_scores_g = chosen_total_scores[chosen_chunk_idx]
        rejected_rewards_g = rejected_rewards[
            idx, rejected_chunked_idx[rejected_chunk_idx]:rejected_chunked_idx[rejected_chunk_idx + 1]
        ].sum(-1)
        rejected_scores_g = rejected_total_scores[rejected_chunk_idx]
        logits = (equ_beta_chosen * chosen_scores_g * chosen_rewards_g
                  - equ_beta_rejected * rejected_scores_g * rejected_rewards_g)
        losses += -F.logsigmoid(logits)
losses = losses / bs
return losses
```
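The chunking step above merges consecutive segments whose score vectors are identical. A simplified pure-Python variant (a flat list with one score per segment, rather than the per-aspect tensors used above) illustrates the boundary logic:

```python
def get_chunked_idxs(scores):
    # Chunk boundaries fall wherever the score changes; the final index
    # closes the last chunk, so consecutive equal scores share a chunk.
    chunked_idx = []
    pre_score = None
    for i, score in enumerate(scores):
        if i == 0:
            pre_score = score
        elif score != pre_score:
            chunked_idx.append(i)
            pre_score = score
    chunked_idx.append(len(scores))
    return chunked_idx

print(get_chunked_idxs([4, 4, 3, 3, 3, 2]))  # [2, 5, 6]
```

Here segments 0-1, 2-4, and 5 form three chunks; slicing between consecutive boundary indices recovers each chunk's segment range, as the reward-summing loop does.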
|
| 504 |
+
|
| 505 |
+
# C HelpSteer-2D Data Distribution and Statistics
|
| 506 |
+
|
| 507 |
+
# C.1 Data Annotation
|
| 508 |
+
|
| 509 |
+
Question: How can I customize it to send my jump count to other players that also have the Weak Aura
|
| 510 |
+
|
| 511 |
+
Response: To customize the code to send your jump count to other players that also have the Weak Aura, you will need to modify the code to store the jump count in a shared data store. # segment 1 One option could be to use a database or a server-side scripting language to store and manage the data. # segment 2 Here is one example of how you could modify the code to send the jump count to other players: # segment 3
|
| 512 |
+
|
| 513 |
+
```javascript
|
| 514 |
+
1. First, update the onJump function to store the jump count in a variable that is accessible to other players: # segment 4
|
| 515 |
+
var jumpCount = 0; function onJump() { jumpCount++; } # segment 5
|
| 516 |
+
```
|
| 517 |
+
|
| 518 |
+
```javascript
|
| 519 |
+
2. Next, create a server-side script that can retrieve the jump count from the database or other data store: # segment 6
|
| 520 |
+
```
|
| 521 |
+
Assuming you are using Node.js or another server-side language
|
| 522 |
+
var http = require('http');
|
| 523 |
+
var queryString = require('query-string');
|
| 524 |
+
# segment 7
|
| 525 |
+
Connect to the database
|
| 526 |
+
var db = new mysql('localhost', 'username', 'password', 'database_name');
|
| 527 |
+
# segment 8
|
| 528 |
+
Retrieve the jump count from the database
|
| 529 |
+
http.get('http://yourdomain.com/getJumpCount', function (response, body) { var params =
|
| 530 |
+
queryString.count(body); var jumpCount = params.count; # segment 9
|
| 531 |
+
```
|
| 532 |
+
|
| 533 |
+
```javascript
|
| 534 |
+
Update the jump count in the database or other data store db.query("UPDATE players SET jumpCount = jumpCount + " + jumpCount + " WHERE username = 'yourusername'", function (error, results, fields) { if (error) throw error; }); }); # segment 10
|
| 535 |
+
```
|
| 536 |
+
|
| 537 |
+
3. Finally, modify the HTML to include a form that allows players to submit their jump count: # segment 11
|
| 538 |
+
|
| 539 |
+
```txt
|
| 540 |
+
```
|
| 541 |
+
<form action="http://yourdomain.com/sendJumpCount" method="POST">
|
| 542 |
+
<label for="jumpCount">Jump count:</label>
|
| 543 |
+
<input type="number" name="jumpCount" id="jumpCount">
|
| 544 |
+
<input type="submit" value="Send">
|
| 545 |
+
</form>
|
| 546 |
+
}
|
| 547 |
+
# segment 12
|
| 548 |
+
```
|
| 549 |
+
|
| 550 |
+
This code uses a server-side script to retrieve the jump count from the database or other data store and update the jump count for the player with the Weak Aura. # segment 13
|
| 551 |
+
|
| 552 |
+
```txt
|
| 553 |
+
You can modify the database query to update the jump count for specific players or to store the jump count in a different data store. # segment 14
|
| 554 |
+
```
|
| 555 |
+
|
| 556 |
+
Figure 7: An example of splitting the model's response.
|
| 557 |
+
|
| 558 |
+
<table><tr><td>-</td><td>Helpfulness(%)</td><td>Correctness(%)</td><td>Safety(%)</td><td>Completeness(%)</td><td>Clarity(%)</td></tr><tr><td>Accuracy</td><td>87.3</td><td>94.9</td><td>99.4</td><td>84.7</td><td>91.1</td></tr></table>
|
| 559 |
+
|
| 560 |
+
Table 3: Annotation accuracy of GPT-4 across different aspects.
|
| 561 |
+
|
| 562 |
+
<table><tr><td>-</td><td>Helpfulness(%)</td><td>Correctness(%)</td><td>Safety(%)</td><td>Completeness(%)</td><td>Clarity(%)</td></tr><tr><td>Accuracy</td><td>74.5</td><td>87.0</td><td>99.6</td><td>81.5</td><td>71.5</td></tr></table>
|
| 563 |
+
|
| 564 |
+
Table 4: Accuracy of the reward model in annotating 2D reward signals.
|
| 565 |
+
|
| 566 |
+
As discussed in Section 3.1, we adopt a fine-grained scoring approach that decomposes the scalar scores of model responses in the HelpSteer2 dataset into segment-level and aspect-level scores. For each data point, we annotated reward scores for each segment across multiple aspects, producing a two-dimensional reward score array. First, we used Python regular expressions to segment model responses; special rules were applied for specific data types, such as code and tables, to ensure segmentation aligned with human reading patterns. An example of the segmented data is shown in Figure 7. Second, we employed GPT-4 to evaluate each segment across multiple aspects. Each aspect can be briefly described as follows:
|
| 567 |
+
|
| 568 |
+
1. Helpfulness: Assesses whether the model understands the user's query and provides a useful response.
|
| 569 |
+
2. Correctness: Evaluates whether the response contains factual inaccuracies or hallucinations, either contradicting prior conversation context or real-world facts.
|
| 570 |
+
3. Safety: Measures the presence of harmful content, including hate speech, bullying, harassment, or inappropriate material.
|
| 571 |
+
4. Completeness: Reflects the degree to which the user's intent is fulfilled. For example, if a user asks three sub-questions, the completeness score should increase incrementally as each sub-question is answered.
|
| 572 |
+
5. Clarity: Assesses the response's clarity and conciseness. Ambiguous, confusing, or overly repetitive responses receive lower scores.
|
| 573 |
+
|
| 574 |
+
All aspects were rated on a 5-point Likert scale, except for safety, which used a binary pass/fail scale (4 - Safe; 0 - Unsafe). The 5-point Likert scale was defined as follows: 0 - Strongly Disagree; 1 - Disagree; 2 - Neutral; 3 - Agree; 4 - Strongly Agree. Each score level corresponds to specific evaluation criteria detailed in the annotation prompt.
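The regex-based splitting described at the start of this appendix can be sketched as follows. The pattern and helper name are illustrative assumptions, not the paper's actual implementation; only the special-casing of fenced code blocks is shown:

```python
import re

CODE_FENCE = "`" * 3  # triple backtick, built programmatically to keep this snippet fence-safe

def split_response(text):
    # Keep fenced code blocks as single segments; split prose on sentence ends.
    pattern = "(%s.*?%s)" % (CODE_FENCE, CODE_FENCE)
    segments = []
    for part in re.split(pattern, text, flags=re.S):
        if part.startswith(CODE_FENCE):
            segments.append(part)  # a whole code block is one segment
        else:
            segments.extend(s for s in re.split(r"(?<=[.!?])\s+", part) if s.strip())
    return segments
```

A response mixing prose and one code block thus yields one segment per sentence plus one for the block, mirroring the segmentation in Figure 7.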
|
| 575 |
+
|
| 576 |
+
# C.2 Annotation Consistency with Human Labels
|
| 577 |
+
|
| 578 |
+
To assess the consistency between GPT-4 annotations and human labels, we randomly sampled 500 data points, each annotated by five human evaluators. The correctness of each annotation was determined by majority vote: if more than two annotators considered GPT-4's annotation incorrect, it was marked as incorrect; otherwise, it was deemed correct. The accuracy results, shown in Table 3, indicate that all aspects surpass $80\%$ accuracy, with the Safety aspect reaching as high as $99.4\%$. These results suggest that GPT-4's annotation reliability is generally high.
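A minimal sketch of this majority-voting rule, assuming five boolean judgments per annotation (True meaning the annotator flagged GPT-4's label as incorrect):

```python
def majority_correct(incorrect_votes):
    # incorrect_votes: five booleans, True = annotator judged GPT-4's label wrong.
    # The annotation counts as incorrect only if more than two annotators flag it.
    return sum(incorrect_votes) <= 2

labels = [
    [False, False, False, False, False],  # unanimous agreement
    [True, True, False, False, False],    # two objections: still correct
    [True, True, True, False, False],     # three objections: incorrect
]
accuracy = sum(majority_correct(v) for v in labels) / len(labels)
```

Aggregating `majority_correct` over all sampled points gives the per-aspect accuracies reported in Table 3.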
|
| 579 |
+
|
| 580 |
+
# C.3 Reward Distribution
|
| 581 |
+
|
| 582 |
+
In Figure 8, we present the distribution of reward scores for both preferred and dispreferred responses. The results show a clear distinction between the two distributions, with preferred responses predominantly receiving higher scores, particularly in the maximum range (4). In contrast, dispreferred responses tend to be more frequent in the lower score ranges compared to preferred responses. Figure 9 further illustrates the disparity between preferred and dispreferred responses by showcasing the distribution of minimum and maximum scores for each aspect.
|
| 583 |
+
|
| 584 |
+
# C.4 Reward Model Performance
|
| 585 |
+
|
| 586 |
+
We trained a reward model that annotates 2D preference signals, allowing us to label more data at low cost and conduct iterative DPO training. Specifically, we use Qwen2-72B as the backbone. The hidden state of
|
| 587 |
+
|
| 588 |
+

|
| 589 |
+
|
| 590 |
+

|
| 591 |
+
|
| 592 |
+

|
| 593 |
+
|
| 594 |
+

|
| 595 |
+
|
| 596 |
+

|
| 597 |
+
|
| 598 |
+

|
| 599 |
+
Figure 8: Statistics of reward distribution on HelpSteer-2D dataset.
|
| 600 |
+
|
| 601 |
+

|
| 602 |
+
|
| 603 |
+

|
| 604 |
+
|
| 605 |
+

|
| 606 |
+
|
| 607 |
+

|
| 608 |
+
|
| 609 |
+

|
| 610 |
+
|
| 611 |
+

|
| 612 |
+
|
| 613 |
+

|
| 614 |
+
|
| 615 |
+

|
| 616 |
+
|
| 617 |
+

|
| 618 |
+
|
| 619 |
+

|
| 620 |
+
|
| 621 |
+

|
| 622 |
+
|
| 623 |
+

|
| 624 |
+
|
| 625 |
+

|
| 626 |
+
|
| 627 |
+

|
| 628 |
+
|
| 629 |
+

|
| 630 |
+
|
| 631 |
+

|
| 632 |
+
|
| 633 |
+

|
| 634 |
+
|
| 635 |
+

|
| 636 |
+
|
| 637 |
+

|
| 638 |
+
|
| 639 |
+

|
| 640 |
+
Figure 9: Statistics of the distribution of the highest/lowest values of each response on HelpSteer-2D dataset.
|
| 641 |
+
|
| 642 |
+

|
| 643 |
+
|
| 644 |
+

|
| 645 |
+
|
| 646 |
+

|
| 647 |
+
|
| 648 |
+

|
| 649 |
+
|
| 650 |
+
the last layer for each segment's final token is passed through a linear layer to generate a prediction score, and the model is trained with an MSE loss. In Table 4, we report the accuracy of the reward model, determined by rounding the scores assigned by the reward model and comparing them with the true labels. The high level of performance suggests that the reward model is effective in annotating 2D signals, indicating its potential as a partial substitute for manual annotation. This work lays the groundwork for
|
| 651 |
+
|
| 652 |
+
further research in online learning.
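The reward head described above (final-token hidden state -> linear layer -> scalar score, trained with MSE) can be sketched with NumPy; the sizes and data here are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_size, num_segments = 16, 3
W = rng.normal(size=hidden_size) * 0.01   # linear head weights (toy init)
b = 0.0

def predict(h):
    # h: (num_segments, hidden_size) final-token hidden states, one per segment
    return h @ W + b

h = rng.normal(size=(num_segments, hidden_size))
target = np.array([4.0, 3.0, 2.0])        # annotated per-segment scores

pred = predict(h)
mse = np.mean((pred - target) ** 2)

# One MSE gradient step on the head weights reduces the loss.
grad_W = 2.0 * h.T @ (pred - target) / num_segments
pred2 = h @ (W - 0.01 * grad_W) + b
mse2 = np.mean((pred2 - target) ** 2)
```

In the actual model the hidden states come from the Qwen2-72B backbone rather than random vectors, and optimization is done with a standard deep-learning optimizer.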
|
| 653 |
+
|
| 654 |
+
# D Performance of Iterative 2D-DPO
|
| 655 |
+
|
| 656 |
+
<table><tr><td rowspan="2">Model</td><td rowspan="2">Iteration</td><td colspan="3">Arena-Hard</td><td colspan="3">AlpacaEval 2.0</td><td colspan="3">MT-Bench</td></tr><tr><td>WR (%)</td><td>Avg. len</td><td>95% CI</td><td>LC (%)</td><td>WR (%)</td><td>Avg. len</td><td>Turn 1</td><td>Turn 2</td><td>Avg. Score</td></tr><tr><td rowspan="4">Qwen2-7B-Instruct +2D-DPO</td><td>0</td><td>25.10</td><td>583</td><td>(-2.1, 2.0)</td><td>30.68</td><td>28.32</td><td>1862</td><td>8.01</td><td>6.61</td><td>7.31</td></tr><tr><td>1</td><td>28.30</td><td>572</td><td>(-2.0, 2.3)</td><td>31.06</td><td>29.32</td><td>1983</td><td>8.06</td><td>6.58</td><td>7.32</td></tr><tr><td>2</td><td>29.80</td><td>585</td><td>(-1.8, 2.2)</td><td>31.93</td><td>29.94</td><td>1972</td><td>8.13</td><td>6.54</td><td>7.34</td></tr><tr><td>3</td><td>30.70</td><td>592</td><td>(-1.7, 1.9)</td><td>32.55</td><td>30.43</td><td>1992</td><td>8.20</td><td>6.72</td><td>7.46</td></tr></table>
|
| 657 |
+
|
| 658 |
+
Table 5: Experimental results of Iterative 2D-DPO using a 2D-reward model to generate scores.
|
| 659 |
+
|
| 660 |
+
In addition to analyzing the accuracy of the 2D reward model, we also used it to generate 2D scores for iterative training. We randomly selected 1000 instructions from the 2D-aligned dataset as the instruction set and sampled from the model with a temperature of 0.7 and top_p of 0.8, drawing 4 responses per instruction. When selecting the chosen and rejected responses, we used a weighted representative score over the 5 aspects. Specifically, for Helpfulness and Correctness, we took the average score across all segments; for Safety, we took the minimum; for Completeness and Clarity, we used the score of the last segment. The weights remained the same as before, [0.3, 0.4, 0.1, 0.1, 0.1]. The response with the highest weighted score was selected as the chosen response and the one with the lowest as the rejected response, and 2D-DPO training was then performed with the same hyperparameters as in previous experiments.
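This aggregation and selection rule can be sketched as follows (the data layout and names are illustrative: each candidate is a list of per-segment `[helpfulness, correctness, safety, completeness, clarity]` scores):

```python
W = [0.3, 0.4, 0.1, 0.1, 0.1]  # helpfulness, correctness, safety, completeness, clarity

def response_score(seg_scores):
    n = len(seg_scores)
    aspects = [
        sum(s[0] for s in seg_scores) / n,  # helpfulness: mean over segments
        sum(s[1] for s in seg_scores) / n,  # correctness: mean over segments
        min(s[2] for s in seg_scores),      # safety: minimum over segments
        seg_scores[-1][3],                  # completeness: last segment
        seg_scores[-1][4],                  # clarity: last segment
    ]
    return sum(w * a for w, a in zip(W, aspects))

def pick_pair(candidates):
    # candidates: the 4 sampled responses' segment-score arrays for one prompt
    ranked = sorted(candidates, key=response_score)
    return ranked[-1], ranked[0]  # (chosen, rejected)
```

The resulting (chosen, rejected) pairs then feed the next 2D-DPO iteration.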
|
| 661 |
+
|
| 662 |
+
The experimental results are shown in Table 5: as the number of iterations increases, the model's performance tends to improve across the different benchmarks. This demonstrates the potential of the 2D reward model for iterative and online training.
|
| 663 |
+
|
| 664 |
+
# E Examples of Controllable Training
|
| 665 |
+
|
| 666 |
+
Our method can produce different generation styles by setting different aspect weights. In Figure 10, we show the answers of two models trained with different aspect weights. Response A is produced by a model trained with the weights Helpfulness: 0.3, Correctness: 0.3, Safety: 0.1, Completeness: 0.1, Clarity: 0.2, while Response B is produced with the weights Helpfulness: 0.3, Correctness: 0.3, Safety: 0.1, Completeness: 0.2, Clarity: 0.1. Model A, with a higher weight on clarity (0.2) and a lower weight on completeness (0.1), produces relatively more concise responses, whereas Model B produces relatively more detailed responses. This flexibility in adjusting the weight parameters allows the model's outputs to be fine-tuned toward specific alignment goals.
|
| 667 |
+
|
| 668 |
+
# F Prompt Used for Annotating 2D Fine-Grained Signals
|
| 669 |
+
|
| 670 |
+
<table><tr><td>prompt</td><td>What are the practical gift ideas for Valentine's Day gift for girlfriend?</td></tr><tr><td>Response A</td><td>Here are some practical Valentine's Day gift ideas: 1. Personalized jewelry (necklace, bracelet); 2. A spa day or massage voucher; 3. A bouquet of her favorite flowers; 4. Perfume she loves or has been eyeing; 5. A high-quality makeup set or skincare products; 6. A romantic dinner at her favorite restaurant; 7. A weekend getaway to a cozy bed and breakfast; 8. A heartfelt love letter or poetry book; 9. A stylish handbag or wallet; 10. A subscription box tailored to her interests (books, beauty, etc.).</td></tr><tr><td>Response B</td><td>Practical gifts for Valentine's Day can be just as thoughtful and romantic as more traditional options, especially if they cater to your girlfriend's daily needs or hobbies. Here are some practical yet heartfelt gift ideas: 1. Tech Accessories: Noise-canceling headphones for work, travel, or relaxation. A high-quality laptop bag or tablet case. A smartwatch or fitness tracker if she's into health and wellness. 2. Clothing and Accessories: A warm winter coat or a versatile jacket suitable for her climate. Stylish and comfortable shoes for everyday wear. Quality undergarments from a brand she loves. 3. Health and Wellness: A gym membership or personal training sessions if she enjoys fitness. A yoga mat or equipment for her home workouts. A water bottle with a built-in infuser for flavored water. ... When choosing a practical gift, consider what will make her life easier, more enjoyable, or more fulfilling. Practical gifts can show that you pay attention to her needs and support her daily activities and passions.</td></tr><tr><td>prompt</td><td>How does one determine their self worth?</td></tr><tr><td>Response A</td><td>Determining self-worth involves recognizing your inherent value as a person. It can be cultivated through self-compassion, personal growth, and focusing on internal qualities rather than external measures. Self-worth comes from within and is about valuing yourself for who you are, not what you do or have.</td></tr><tr><td>Response B</td><td>Determining your self-worth is fundamentally about recognizing your inherent value as a person. It involves a few key steps: Firstly, self-awareness is crucial. Spend time reflecting on your personal values, your strengths, and your passions. Secondly, challenge the negative narratives you might tell yourself. Often, we are our own harshest critics. When you catch yourself thinking negatively about your abilities or worth, pause and reframe those thoughts into something more constructive and positive. Thirdly, seek out positive influences. Surround yourself with people who uplift you and recognize your worth. Their encouragement can reinforce your own belief in yourself. By focusing on these aspects, you can build a solid foundation of self-worth that supports your mental health and overall well-being.</td></tr></table>
|
| 671 |
+
|
| 672 |
+
Figure 10: Examples of different style generation by the models trained by different aspect weights.
|
| 673 |
+
|
| 674 |
+
Now you are an intelligent annotator, and your task is to evaluate the quality of the responses of two intelligent assistants, and evaluate them sentence by sentence on the dimension of helpfulness/understanding.
|
| 675 |
+
|
| 676 |
+
# Requirements:
|
| 677 |
+
|
| 678 |
+
1. You should evaluate the model's responses on a per-sentence basis using a 5-point Likert scale according to the already segmented sentences. The response tags the segmented sentences in the format of <ID>sentence content</ID>, where ID is the sentence's sequence number.
|
| 679 |
+
2. You need to refer to the content of the previous rounds if it's a multi-round conversation.
|
| 680 |
+
|
| 681 |
+
Scores and corresponding evaluation criteria:
|
| 682 |
+
|
| 683 |
+
4 - The response is extremely helpful and completely aligned with the spirit of what the prompt was asking for.
|
| 684 |
+
|
| 685 |
+
3 - The response is mostly helpful and mainly aligned with what the user was looking for, but there is still some room for improvement.
|
| 686 |
+
2 - The response is partially helpful but misses the overall goal of the user's query/input in some way. The response did not fully satisfy what the user was looking for.
|
| 687 |
+
|
| 688 |
+
1 - The response is borderline unhelpful and mostly does not capture what the user was looking for, but it is still usable and helpful in a small way.
|
| 689 |
+
0 - The response is not useful or helpful at all. The response completely missed the essence of what the user wanted.
|
| 690 |
+
|
| 691 |
+
For example:
|
| 692 |
+
|
| 693 |
+
user:
|
| 694 |
+
|
| 695 |
+
Conversation History: {history}
|
| 696 |
+
|
| 697 |
+
Response 1 to be evaluated: {response1}
|
| 698 |
+
|
| 699 |
+
Response 2 to be evaluated: {response2}
|
| 700 |
+
|
| 701 |
+
assistant:
|
| 702 |
+
|
| 703 |
+
Response 1 Evaluation: ...
|
| 704 |
+
|
| 705 |
+
<1> Evaluation: Completely aligned with the user's query, introducing the topic effectively.
|
| 706 |
+
Helpfulness/Understanding Rating: 4
|
| 707 |
+
<2> Evaluation: Straightforward lead into the listing of tools, setting up the context perfectly.
|
| 708 |
+
|
| 709 |
+
Helpfulness/Understanding Rating: 4
|
| 710 |
+
|
| 711 |
+
<3> ...
|
| 712 |
+
|
| 713 |
+
Response 2 Evaluation:
|
| 714 |
+
|
| 715 |
+
<1> Evaluation: Sets a clear and focused introduction to listing tools.
|
| 716 |
+
Helpfulness/Understanding Rating: 4
|
| 717 |
+
<2> Evaluation: Accurate and introduces Python effectively.
|
| 718 |
+
Helpfulness/Understanding Rating: 4
|
| 719 |
+
<3> ...
|
| 720 |
+
|
| 721 |
+
In the above example, the numbers correspond to the numbers of the sentences being evaluated.
|
| 722 |
+
|
| 723 |
+
Please follow the response format for the above example. Let's get started!
|
| 724 |
+
|
| 725 |
+
Figure 11: The prompt for annotating helpfulness scores.
|
| 726 |
+
|
| 727 |
+
Now you are an intelligent annotator, and your task is to evaluate the quality of the responses of two intelligent assistants, and evaluate them sentence by sentence on the dimension of correctness.
|
| 728 |
+
|
| 729 |
+
# Requirements:
|
| 730 |
+
|
| 731 |
+
1. You should evaluate the model's responses on a per-sentence basis using a 5-point Likert scale according to the already segmented sentences. The response tags the segmented sentences in the format of <ID> sentence content</ID>, where ID is the sentence's sequence number.
|
| 732 |
+
2. You need to refer to the content of the previous rounds if it's a multi-round conversation.
|
| 733 |
+
|
| 734 |
+
Scores and corresponding evaluation criteria:
|
| 735 |
+
|
| 736 |
+
4 - The response is completely correct and accurately meets the requirements of the prompt, with no necessary details missing and no false, misleading, or hallucinated information.
|
| 737 |
+
3 - The response is mostly accurate, with only a small amount of missing information. It does not contain misleading information or hallucinations.
|
| 738 |
+
2 - The response contains both correct and incorrect information. It may miss some details, contain misleading information or slight hallucinations, but is more or less aligned with what the prompt asks for.
|
| 739 |
+
1 - The response has some correct elements but is mostly wrong or incomplete. The response may contain multiple hallucinations, false information, misleading information, or irrelevant information.
|
| 740 |
+
0 - The response is completely incorrect. All information provided is wrong, false, or hallucinated.
|
| 741 |
+
|
| 742 |
+
For example:
|
| 743 |
+
|
| 744 |
+
user:
|
| 745 |
+
|
| 746 |
+
Conversation History: {history}
|
| 747 |
+
|
| 748 |
+
Response 1 to be evaluated: {response1}
|
| 749 |
+
|
| 750 |
+
Response 2 to be evaluated: {response2}
|
| 751 |
+
|
| 752 |
+
assistant:
|
| 753 |
+
|
| 754 |
+
Response 1 Evaluation:
|
| 755 |
+
|
| 756 |
+
<1> Evaluation: The artists are correctly identified as Wiz Khalifa and Charlie Puth.
|
| 757 |
+
|
| 758 |
+
Correctness Rating: 4
|
| 759 |
+
|
| 760 |
+
<2> ...
|
| 761 |
+
|
| 762 |
+
Response 2 Evaluation:
|
| 763 |
+
|
| 764 |
+
<1> Evaluation: This sentence is incorrect; Taylor Swift is not involved in the song.
|
| 765 |
+
|
| 766 |
+
Correctness Rating: 0
|
| 767 |
+
|
| 768 |
+
<2> ...
|
| 769 |
+
|
| 770 |
+
In the above example, the numbers correspond to the numbers of the sentences being evaluated.
|
| 771 |
+
|
| 772 |
+
Please follow the response format for the above example. Let's get started!
|
| 773 |
+
|
| 774 |
+
Figure 12: The prompt for annotating correctness scores.
|
| 775 |
+
|
| 776 |
+
Now you are an intelligent annotator, and your task is to evaluate the quality of the responses of two intelligent assistants, and evaluate them sentence by sentence on the dimension of safety.

# Requirements:
|
| 777 |
+
|
| 778 |
+
1. You should evaluate the model's responses on a per-sentence basis using a 2-point Likert scale according to the already segmented sentences. The response tags the segmented sentences in the format of <ID>content</ID>, where ID is the sentence's sequence number.
|
| 779 |
+
2. For the safety assessment of each sentence, there are only two options: pass(safe) and fail(unsafe), with 4 points and 0 points respectively.
|
| 780 |
+
3. You need to refer to the content of the previous rounds if it's a multi-round conversation.
|
| 781 |
+
|
| 782 |
+
Scores and corresponding evaluation criteria:
|
| 783 |
+
|
| 784 |
+
4 - Pass (Safe) - The response does not include any of the unsafe content described below.
|
| 785 |
+
0 - Fail (Unsafe) - Harmful Content (response includes hate speech, bullying, harassment, sexually explicit content, or violence), illegal activities (response promotes any illegal activities or harmful actions to individuals' or groups' mental, physical, or emotional health), profanity (response contains any curse words), bias and stereotyping (response promotes and/or perpetuate biases and/or stereotypical responses on select societal groups)
|
| 786 |
+
|
| 787 |
+
For example:
|
| 788 |
+
|
| 789 |
+
user:
|
| 790 |
+
|
| 791 |
+
Conversation History: {history}
|
| 792 |
+
|
| 793 |
+
Response 1 to be evaluated: {response1}
|
| 794 |
+
|
| 795 |
+
Response 2 to be evaluated: {response2}
|
| 796 |
+
|
| 797 |
+
assistant:
|
| 798 |
+
|
| 799 |
+
Response 1 Evaluation:
|
| 800 |
+
|
| 801 |
+
<1> Evaluation: Neutral and safe.
|
| 802 |
+
|
| 803 |
+
Safety Rating: 4
|
| 804 |
+
|
| 805 |
+
<2> ...
|
| 806 |
+
|
| 807 |
+
Response 2 Evaluation:
|
| 808 |
+
|
| 809 |
+
<1> Evaluation: Neutral and safe response.
|
| 810 |
+
|
| 811 |
+
Safety Rating: 4
|
| 812 |
+
|
| 813 |
+
<2> ...
|
| 814 |
+
|
| 815 |
+
In the above example, the numbers correspond to the numbers of the sentences being evaluated.
|
| 816 |
+
|
| 817 |
+
Please follow the response format for the above example. Let's get started!
|
| 818 |
+
|
| 819 |
+
Figure 13: The prompt for annotating safety scores.
|
| 820 |
+
|
| 821 |
+
Now you are an intelligent annotator, and your task is to evaluate the quality of the responses of two intelligent assistants, and evaluate them sentence by sentence on the dimension of completeness.
|
| 822 |
+
|
| 823 |
+
# Requirements:
|
| 824 |
+
|
| 825 |
+
1. You should evaluate the model's responses on a per-sentence basis using a 5-point Likert scale according to the already segmented sentences. The response tags the segmented sentences in the format of <ID>content</ID>, where ID is the sentence's sequence number.
|
| 826 |
+
2. The completeness rating of a response should reflect the progress made in task completion or content fulfillment. This means that as a response becomes more complete (by sentence), its completeness rating should either increase or stay the same, never decrease.
|
| 827 |
+
3. You need to refer to the content of the previous rounds if it's a multi-round conversation.
|
| 828 |
+
|
| 829 |
+
Scores and corresponding evaluation criteria:
|
| 830 |
+
|
| 831 |
+
4 - Up to this sentence, the provided response is very complete, without missing any necessary details. If the prompt asked the assistant to perform a task, the task has been fully completed and resolved in the response.
|
| 832 |
+
3 - Up to this sentence, the provided response is nearly complete. If the prompt asked the assistant to perform a task, the task has been mostly successfully completed.
|
| 833 |
+
2 - Up to this sentence, the provided response contains about half of the content but may still lack certain details. If the prompt asked the assistant to perform a task, the task has been attempted with moderate success but still has significant room for improvement.
|
| 834 |
+
1 - Up to this sentence, the provided response contains only a small amount of relevant content and is mostly incomplete. If the prompt asked the assistant to perform a task, the task has been attempted with low success.
|
| 835 |
+
0 - Up to this sentence, the content of the response is completely unrelated to the prompt. If the prompt asked the assistant to perform a task, the task was either not attempted at all, or an incorrect task was attempted in the response.
|
| 836 |
+
|
| 837 |
+
For example:
|
| 838 |
+
|
| 839 |
+
user:
|
| 840 |
+
|
| 841 |
+
Conversation History: {history}
|
| 842 |
+
|
| 843 |
+
Response 1 to be evaluated: {response1}
|
| 844 |
+
|
| 845 |
+
Response 2 to be evaluated: {response2}
|
| 846 |
+
|
| 847 |
+
assistant:
|
| 848 |
+
|
| 849 |
+
Response 1 Evaluation:
|
| 850 |
+
|
| 851 |
+
<1> Evaluation: Provides a comprehensive definition of machine learning.
|
| 852 |
+
|
| 853 |
+
Completeness Rating: 1
|
| 854 |
+
|
| 855 |
+
<2> ...
|
| 856 |
+
|
| 857 |
+
In the above example, the numbers correspond to the numbers of the sentences being evaluated.
|
| 858 |
+
|
| 859 |
+
Please follow the response format for the above example. Let's get started!
|
| 860 |
+
|
| 861 |
+
Figure 14: The prompt for annotating completeness scores.
|
| 862 |
+
|
| 863 |
+
Now you are an intelligent annotator, and your task is to evaluate the quality of the responses of two intelligent assistants, and evaluate them sentence by sentence on the dimension of clarity/conciseness.
|
| 864 |
+
|
| 865 |
+
# Requirements:
|
| 866 |
+
|
| 867 |
+
1. You should evaluate the model's responses on a per-sentence basis using a 5-point Likert scale according to the already segmented sentences. The response tags the segmented sentences in the format of <ID>content</ID>, where ID is the sentence's sequence number.
|
| 868 |
+
2. You need to refer to the content of the previous rounds if it's a multi-round conversation.
|
| 869 |
+
|
| 870 |
+
Scores and corresponding evaluation criteria:
|
| 871 |
+
|
| 872 |
+
4 - Very Clear and Concise: The response is completely clear, unambiguous, and succinct, with no redundant information, repetition or self-contradiction.
|
| 873 |
+
3 - Clear and Concise: The response is mostly clear and easy to understand. There might be slight ambiguities or minor redundancy but overall, it is succinct.
|
| 874 |
+
2 - Moderately Clear or Slightly Redundant: The response is basically clear but requires extra explanation/thought, or contains some unnecessary length or repetition, or contains minor contradictions.
|
| 875 |
+
1 - Unclear or Redundant: The response is insufficiently clear, with obvious ambiguities, frequently requiring rephrasing, or contains considerable redundancy or repetition.
|
| 876 |
+
0 - Very Unclear or Very Redundant: The response is extremely vague and difficult to understand, filled with ambiguities, or excessively long with a lot of unnecessary information or repetition, or has serious/numerous contradictions.
|
| 877 |
+
|
| 878 |
+
For example:
|
| 879 |
+
|
| 880 |
+
user:
|
| 881 |
+
|
| 882 |
+
Conversation History: {history}
|
| 883 |
+
|
| 884 |
+
Response 1 to be evaluated: {response1}
|
| 885 |
+
|
| 886 |
+
Response 2 to be evaluated: {response2}
|
| 887 |
+
|
| 888 |
+
assistant:
|
| 889 |
+
|
| 890 |
+
Response 1 Evaluation:
|
| 891 |
+
|
| 892 |
+
<1> Evaluation: Clear introduction, succinct.
|
| 893 |
+
|
| 894 |
+
Clarity/Conciseness Rating: 4
|
| 895 |
+
|
| 896 |
+
<2> ...
|
| 897 |
+
|
| 898 |
+
Response 2 Evaluation:
|
| 899 |
+
|
| 900 |
+
<1> Evaluation: Clear but slightly vague.
|
| 901 |
+
|
| 902 |
+
Clarity/Conciseness Rating: 3
|
| 903 |
+
|
| 904 |
+
<2> ...
|
| 905 |
+
|
| 906 |
+
In the above example, the numbers correspond to the numbers of the sentences being evaluated.
|
| 907 |
+
|
| 908 |
+
Please follow the response format for the above example. Let's get started!
|
| 909 |
+
|
| 910 |
+
Figure 15: The prompt for annotating clarity scores.
|
2025/2D-DPO_ Scaling Direct Preference Optimization with 2-Dimensional Supervision/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:26b1e846acd00c1fade66bfc4a29a227241780e71ffa5da0d68db9efce0bc129
|
| 3 |
+
size 1484371
|
2025/2D-DPO_ Scaling Direct Preference Optimization with 2-Dimensional Supervision/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/A Closer Look into Mixture-of-Experts in Large Language Models/f7a30fc9-6d0c-4d08-b734-9d11e9be50e8_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/A Closer Look into Mixture-of-Experts in Large Language Models/f7a30fc9-6d0c-4d08-b734-9d11e9be50e8_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/A Closer Look into Mixture-of-Experts in Large Language Models/f7a30fc9-6d0c-4d08-b734-9d11e9be50e8_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:82a5e6724805bb5d85d573d5eea0564425669a5ee6a2f5376cdc914d91d034a3
|
| 3 |
+
size 3169358
|
2025/A Closer Look into Mixture-of-Experts in Large Language Models/full.md
ADDED
|
@@ -0,0 +1,446 @@
|
# A Closer Look into Mixture-of-Experts in Large Language Models

Ka Man Lo* (University of Macau), Zeyu Huang* (University of Edinburgh), Zihan Qiu* (Tsinghua University), Zili Wang (INF Technology), Jie Fu (Shanghai AI Lab)

# Abstract

Mixture-of-experts (MoE) is gaining increasing attention due to its unique properties and remarkable performance, especially for language tasks. By sparsely activating a subset of parameters for each token, the MoE architecture can increase the model size without sacrificing computational efficiency, achieving a better trade-off between performance and training costs. However, the underlying mechanism of MoE still lacks further exploration, and its modularization degree remains questionable. In this paper, we make an initial attempt to understand the inner workings of MoE-based large language models. Concretely, we comprehensively study the parametric and behavioral features of four popular MoE-based models and reveal some intriguing observations, including 1) Neurons act like fine-grained experts; 2) The router of MoE usually selects experts with larger output norms; 3) The expert diversity increases as the layer increases, while the last layer is an outlier, which is further validated by an initial experiment. Based on the observations, we also provide suggestions for a broad spectrum of MoE practitioners, such as router design and expert allocation. We hope this work could shed light on future research on the MoE framework and other modular architectures. Code is available at https://github.com/kamanphoebe/Look-into-MoEs.

# 1 Introduction
The advent of Large Language Models (LLMs) revolutionized the field of Natural Language Processing. LLM researchers are continually pushing the boundaries of Language Models by scaling up both model size and the volume of training data, significantly enhancing the capabilities of these models. This escalation in training cost and complexity necessitates innovative solutions to better balance pre-training efficiency and model performance. One emerging solution to this end is the Mixture-of-Experts (MoE) (Shazeer et al., 2017) architecture. The MoE framework facilitates the computational efficiency of the model by dynamically routing inputs to a subset of experts, allowing for substantial model scaling while maintaining training costs, and has led to numerous influential advancements in the field (Reid et al., 2024; Jiang et al., 2024; Dai et al., 2024; Team, 2024).

Beyond efficiency, another attractive trait of the MoE architecture is its modular design and learning paradigm. This modularization allows for flexible and potentially more generalizable handling of diverse data and tasks within a single model by assigning them to specialized experts. Despite its widespread adoption, it remains an open question whether current MoE-based LLMs truly leverage this modularity in knowledge distribution and expert behaviors. In other words, is MoE a simple ensemble of homogeneous experts or a modular combination of heterogeneous experts? Answering this question comprehensively is non-trivial. Therefore, in this paper, we take the first step by investigating four popular MoE-based LLMs (Mixtral 8x7B (Jiang et al., 2024), Mixtral 8x22B, DeepSeekMoE (Dai et al., 2024), and Grok-1$^1$) from two critical perspectives: model parameters and model behaviors. We aim to explore common and distinct features and behaviors among different experts, further shedding light on the inner mechanisms of MoE-based models.

Specifically, we examine the correlation between experts' parameters, gates, and their output features given text inputs. Before diving into deeper analyses, we briefly summarize some of our empirical conclusions (detailed in § 6) and observations:
- Neurons in the Feed-Forward Network (FFN) layer are fine-grained experts. Both the gate embedding matrix and the expert projection matrix $W_{\mathrm{act}}$ perform a choosing operation: the former determines the expert selection while the latter controls the neuron activation. We observe that their similarity heat maps exhibit correlations, suggesting that, from the perspective of $W_{\mathrm{act}}$, the expert neurons can be considered as "tiny" experts, each represented by a single neuron.
- Increasing the number of experts in deeper layers while reducing it in the last layer. This is validated by the experiment in Fig. 5. Our observations indicate that the similarities between the parameters and outputs of the experts consistently decrease with increasing layer number, followed by a sudden increase in the last layer.
- Using the norm as the routing mechanism is a reasonable choice. For both Mixtral 8x7B and DeepSeekMoE, we observe that the gate typically selects experts with larger output norms.
- When analyzing the correlation between experts, measuring the similarities between weight matrices is, to some extent, equivalent to assessing the average similarities of expert outputs.
- Training MoE from scratch promotes greater expert diversity than specific initialization schemes. This stems from the observation that Mixtral experts exhibit stronger correlations (e.g., higher similarities) between parameters and behaviors. In contrast, DeepSeekMoE and Grok-1, which are trained from scratch, do not show these correlations.
# 2 Preliminary: Mixture-of-Experts

Mixture-of-Experts models enhance transformers by replacing the original FFNs with $N$ parallel FFNs combined with a router. These $N$ FFNs are called experts and denoted as $E_{n}$ for $n\in [1,N]$. The gate $g(\cdot ;W_g,k)$, parameterized by $W_{g}$ and an integer $k$, maps the input $x$ to a score distribution over the experts, $g(x;W_g,k)\in \mathbb{R}^N$. Typically, the gate $g$ consists of a simple linear layer followed by a softmax and a Top-k function.

Given $x \in \mathbb{R}^{d_{\mathrm{hid}}}$, the output $y \in \mathbb{R}^{d_{\mathrm{hid}}}$ is the weighted sum of the outputs from all experts:

$$
y = \sum_{n=1}^{N} g_{n}(x; W_{g}, k) E_{n}(x)
$$

When $k$ for Top-k is smaller than $N$, only a subset of experts is involved in the computation. This is known as Sparse Mixture-of-Experts (SMoE).

The experts $E_{n}$ of the models investigated in this paper follow the style in LLaMA (Touvron et al., 2023), which consists of three linear layers and operates as (the subscript $n$ is omitted for brevity):

$$
E(x) = W_{\mathrm{down}}\left(W_{\mathrm{up}} x \odot \sigma\left(W_{\mathrm{act}} x\right)\right) \tag{1}
$$

where $\odot$ represents element-wise multiplication and $\sigma$ represents the activation function. Given the three projection matrices $W_{\mathrm{up}}, W_{\mathrm{act}} \in \mathbb{R}^{d_{\mathrm{mid}} \times d_{\mathrm{hid}}}$ and $W_{\mathrm{down}} \in \mathbb{R}^{d_{\mathrm{hid}} \times d_{\mathrm{mid}}}$, we define a neuron as the combination of the row vectors $W_{\mathrm{up}}[i, :]$ and $W_{\mathrm{act}}[i, :]$, along with the column vector $W_{\mathrm{down}}[:, i]$. Thus, each expert contains $d_{\mathrm{mid}}$ neurons, each with size $d_{\mathrm{hid}}$.
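Putting the gate and Eq. (1) together, a single MoE layer can be sketched as follows. This is a minimal NumPy illustration with toy dimensions; the SiLU activation and the renormalization of the Top-k scores follow the common Mixtral-style design, not the exact implementation of any of the studied checkpoints:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def silu(v):
    # SiLU/Swish activation, a common choice for sigma in Eq. (1)
    return v / (1.0 + np.exp(-v))

def expert_forward(x, W_up, W_act, W_down):
    # E(x) = W_down (W_up x ⊙ σ(W_act x)), Eq. (1)
    return W_down @ ((W_up @ x) * silu(W_act @ x))

def moe_forward(x, W_g, experts, k):
    # Gate: linear layer + softmax + Top-k; the k winning scores are
    # renormalized so they sum to one (Mixtral-style routing).
    scores = softmax(W_g @ x)
    topk = np.argsort(scores)[-k:]
    weights = scores[topk] / scores[topk].sum()
    return sum(w * expert_forward(x, *experts[n]) for n, w in zip(topk, weights))

# Toy configuration: N = 4 experts, d_hid = 8, d_mid = 16, Top-2 routing.
rng = np.random.default_rng(0)
d_hid, d_mid, N, k = 8, 16, 4, 2
W_g = rng.normal(size=(N, d_hid))
experts = [(rng.normal(size=(d_mid, d_hid)),   # W_up
            rng.normal(size=(d_mid, d_hid)),   # W_act
            rng.normal(size=(d_hid, d_mid)))   # W_down
           for _ in range(N)]
y = moe_forward(rng.normal(size=d_hid), W_g, experts, k)
```

Setting k = N in this sketch recovers the dense weighted sum of the equation above; k < N gives the sparse (SMoE) case.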
# 3 Overview

Our experiments are conducted on several open-source MoE models, namely Mixtral 8x7B, Mixtral 8x22B$^2$, DeepSeekMoE, and Grok-1. We choose these models due to their widespread use and impressive performance across various domains. Additionally, they exhibit complementary characteristics across several key attributes, enabling a robust comparative analysis using control variables. Details are discussed in Appendix A. To further study the similarities and differences between a standard transformer and a MoE model, we include Mistral 7B (Jiang et al., 2023) as one of our investigated models. Basic information about these models, along with the abbreviations used throughout our paper, is summarized in Tab. 1 and Tab. 4. The analysis is divided into two sections: one focusing on model parameters (static) and the other on model behaviors in response to text input (dynamic).

Unless otherwise stated (§ 5.1), cosine similarity is employed for all experiments involving similarity measurements. While we acknowledge the existence of other metrics, we primarily use cosine similarity as it is a widely adopted approach (Sun et al., 2024; Zhang et al., 2021).
# 4 Analysis of Static Parameters

From a high-level perspective, a model's knowledge is encoded in its parameters, making the investigation of weight matrices a natural approach. In this section, we study the correlation between the parameters of: i) MoE experts (and FFNs for Mistral), and ii) gate embeddings, which are two vital components of the MoE architecture.

<table><tr><td>Model</td><td>Abbreviation</td><td># MoE layers</td><td># experts</td><td>Top-k</td><td>Hidden size (dhid)</td><td>Intermediate size (dmid)</td></tr><tr><td>Mixtral 8x7B</td><td>Mixtral</td><td>32</td><td>8</td><td>2</td><td>4096</td><td>14336</td></tr><tr><td>Mixtral 8x22B</td><td>Mixtral-22</td><td>56</td><td>8</td><td>2</td><td>6144</td><td>16384</td></tr><tr><td>Mistral 7B</td><td>Mistral</td><td>32</td><td>N/A</td><td>N/A</td><td>4096</td><td>14336</td></tr><tr><td>DeepSeekMoE</td><td>DeepSeek</td><td>27</td><td>64 routed + 2 shared</td><td>6</td><td>2048</td><td>1408</td></tr><tr><td>Grok-1</td><td>Grok</td><td>64</td><td>8</td><td>2</td><td>6144</td><td>32768</td></tr></table>

Table 1: Basic information of the models used for analysis. The abbreviations are used throughout our paper.

# 4.1 Weight Matrices of Experts

MoE models replace FFNs in standard transformers with experts. Following Geva et al. (2020) and Qiu et al. (2024b), the projection matrices of the experts can be regarded as keys and values: the column vectors of $W_{\mathrm{down}}$ represent potential outputs; the row vectors of $W_{\mathrm{up}}$ produce weights for each possible output; the row vectors of $W_{\mathrm{act}}$ determine whether to activate the corresponding neurons. Thus, examining the weight matrices provides a straightforward way to understand the expert behaviors. We analyze both the matrix and neuron levels to gain insights from different perspectives.

# 4.1.1 Matrix-level

In this part, we explore the similarity of the three projection matrices $W_{\mathrm{up}}$, $W_{\mathrm{act}}$, and $W_{\mathrm{down}}$ among all experts in each layer. The similarity is calculated based on the flattened matrices and is illustrated in Fig. 1. We denote "F" as the Mistral FFN and "SE" as the DeepSeek shared expert. Note that the figures for different models do not share the same color bar.

Common$^{3}$. The heat maps of the three matrices exhibit similar patterns. Directly flattening the large weight matrices leads to high-dimensional vectors, so we use principal component analysis (PCA) to reduce these vectors to two-dimensional space. The resulting figures also show that, for Mixtral and DeepSeek, the expert distribution across the three weight matrices is generally comparable. Details on the PCA results are presented in Appendix C.1.
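The matrix-level measurement can be sketched as follows; this is illustrative NumPy code operating on randomly generated stand-in weights, not the actual model checkpoints:

```python
import numpy as np

def expert_similarity_heatmap(weights):
    """Pairwise cosine similarity between flattened per-expert weight matrices."""
    flat = np.stack([w.ravel() for w in weights])       # one row per expert
    flat /= np.linalg.norm(flat, axis=1, keepdims=True) # unit-normalize rows
    return flat @ flat.T                                 # N x N similarity matrix

# Stand-in for one projection matrix (e.g. W_act) of 8 experts.
rng = np.random.default_rng(0)
experts = [rng.normal(size=(16, 8)) for _ in range(8)]
heat = expert_similarity_heatmap(experts)
```

The resulting matrix is symmetric with a unit diagonal; for independent random weights the off-diagonal values sit near zero, which is the baseline against which the reported 0.2-0.6 similarities should be read.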
Mixtrals and Mistral. The cosine similarities between Mixtral experts ($S_{\mathrm{ee}}$) primarily range from 0.2 to 0.4, while the similarities between the experts and the Mistral FFN ($S_{\mathrm{ef}}$) are about 0.6. Yet the values tend to be lower in the deeper layers ($22^{\mathrm{nd}}-30^{\mathrm{th}}$ for Mixtral and $35^{\mathrm{th}}-50^{\mathrm{th}}$ for Mixtral-22). A "dark cross" can be observed in some layers and corresponds to outliers in the 2D space projected by PCA, indicating that the associated expert is relatively distinct from the others. Interestingly, this cross appears most frequently in Expert 3 for Mixtral, suggesting that this expert may have learned some unique attributes. It is noteworthy that the cross usually extends across the entire heat map, including the last row of the FFN. Thus, when a Mixtral expert differs from the other experts, it is also less similar to the Mistral FFN.

DeepSeek and Grok. The shared experts of DeepSeek are implemented as a single MLP block with a larger hidden size than the routed experts, preventing direct comparison of their flattened vectors; thus, we omit them from this experiment. Fig. 1 demonstrates that the similarities between the DeepSeek routed experts and the Grok experts are close to zero. While the Mixtrals' training method remains undisclosed, it is known that DeepSeek and Grok are trained from scratch. This suggests that the Mixtrals may have been trained using special schemes, resulting in less diverse experts compared to those trained from scratch (Wu et al., 2022).

# 4.1.2 Neuron-level

In § 4.1.1, we measure the parameter similarity between experts at the matrix level. However, the calculation of cosine similarity is position-dependent. If the neurons of two experts are similar but appear in different orders, the similarity of their weight matrices will be significantly lower than expected. To address this, we propose two approaches to investigate the correlation at the neuron level: averaging and reordering. Averaging simply averages the rows (for $W_{\mathrm{up}}$ and $W_{\mathrm{act}}$) or the columns (for $W_{\mathrm{down}}$) of the weight matrices, and then calculates the cosine similarity of the resulting vectors across experts. For reordering, we apply the Jonker-Volgenant algorithm (Jonker and Volgenant, 1988), which is typically used for solving linear assignment problems, to find the optimal order of neurons so that the cosine similarity between two experts is maximized.
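The reordering step can be sketched with SciPy's linear-assignment solver (itself a modified Jonker-Volgenant implementation); the expert weights below are random stand-ins for the real matrices:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import kendalltau

def reorder_neurons(A, B):
    """Reorder the neurons (rows) of expert B so that the total cosine
    similarity with the rows of expert A is maximized; also return
    Kendall's tau between the original and reordered index sequences."""
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    Bn = B / np.linalg.norm(B, axis=1, keepdims=True)
    sim = An @ Bn.T                                   # pairwise row cosine similarities
    _, order = linear_sum_assignment(sim, maximize=True)
    tau, _ = kendalltau(np.arange(len(order)), order)
    return order, tau

# Sanity check: a row-permuted copy of A should be matched back exactly.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))
B = A[rng.permutation(6)]
order, tau = reorder_neurons(A, B)
```

A tau near 1 (as for Mixtral) means the optimal matching barely changes the neuron order; a tau near 0 (as for DeepSeek and Grok) means the matched orders are essentially uncorrelated with the original ones.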


Figure 1: Matrix-level similarity heat maps of expert weight matrices. Each layer contains three heat maps, corresponding to $W_{\mathrm{up}}$, $W_{\mathrm{act}}$, and $W_{\mathrm{down}}$, respectively. The tick numbers refer to the expert indices. "F" denotes the Mistral FFN.

We describe the results of the reordering method below and provide the details of the averaging approach in Appendix D. Additionally, the projection of neurons into low-dimensional spaces using PCA can be found in Appendix C.2. Due to the heavy computation, we only select several layers for the reordering calculation. Note that the matrices are reordered separately. We measure Kendall's $\tau$ coefficient between the index sequences before and after reordering, whose value increases when the two sequences exhibit strong agreement. Tab. 2 depicts the common order of similarity growth after reordering and the average Kendall's coefficient $\bar{\tau}$ over the selected layers. The order of Mixtral neurons changes little (resulting in a large $\bar{\tau}$), and hence the similarities remain nearly unchanged. Despite the substantial similarity increase for DeepSeek and Grok after reordering, their overall values remain around 1e-2.

<table><tr><td>Model</td><td>Order of Growth</td><td>τ̄</td></tr><tr><td>Mixtral</td><td>1e-3</td><td>0.75</td></tr><tr><td>DeepSeek</td><td>100</td><td>-0.0002</td></tr><tr><td>Grok</td><td>100</td><td>-0.0003</td></tr></table>

Table 2: Reordering results of expert neurons.
# 4.2 Gate Embedding

The gate of our investigated MoE models is implemented as a linear layer $W_{g} \in \mathbb{R}^{N \times d_{\mathrm{hid}}}$, where $N$ is the number of experts. The gate serves as a crucial component of MoE, making it essential to study its attributes to understand MoE functionality better. In addition, since each row vector in the gate embedding determines expert selection, some correspondence may exist between $W_{g}$ and the expert weights.

To investigate this, we measure the similarities between the gate embedding vectors $W_{g}[n,:]$ for $n \in [1,N]$. For computational simplicity, we compare them with the neuron-level averaging (instead of the reordering) heat maps of experts presented in Appendix D, with qualitative analyses detailed in Appendix E. Specifically, we found that, for all four MoE models, the patterns in the heat maps of the gate vectors $W_{g}[n,:]$ and of the expert neurons $W_{\mathrm{act}}[i,:]$ are partially alike in some layers (i.e., the same coordinates in both heat maps exhibit relatively higher or lower values simultaneously).

Therefore, we further conduct a quantitative analysis of their similarity values. In particular, we perform linear regression on the paired similarity dataset $(X,Y)$, where $X$ denotes the similarities of $W_{g}[n,:]$, and $Y$ denotes the neuron-level similarities of $W_{\mathrm{up}}$, $W_{\mathrm{act}}$, or $W_{\mathrm{down}}$. Tab. 3 reports the average square of the Pearson correlation coefficient over all layers $(R_{\mathrm{avg}}^{2})$, while Tab. 5 lists the Pearson correlation coefficient $(R)$ for each layer. As shown in Tab. 3, the correlation between the similarities of the gate vectors and those of $W_{\mathrm{act}}$ is significantly stronger than that with $W_{\mathrm{up}}$ and $W_{\mathrm{down}}$. For the $(X,Y_{\mathrm{act}})$ pair, although Mixtral and DeepSeek have similar $R_{\mathrm{avg}}^{2}$ values, the $R^{2}$ of the Mixtrals fluctuates between 0.1 and 0.7, while the $R^{2}$ of DeepSeek remains close to 0.4. Furthermore, we can see from Tab. 5 that $(X,Y_{\mathrm{act}})$ for both Mixtral and DeepSeek show positive correlations, whereas $(X,Y_{\mathrm{act}})$ for Grok turns negative starting from the intermediate (after the $25^{\mathrm{th}}$) layers. We note that the functions of $W_{g}$ and $W_{\mathrm{act}}$ are analogous: the former determines expert selection while the latter is responsible for choosing which neurons to activate. Therefore, they may learn similar knowledge to effectively perform the choosing operation, which explains the observed correlation.
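The per-layer quantitative comparison amounts to correlating the off-diagonal entries of two pairwise-similarity matrices. A minimal sketch with synthetic similarity matrices standing in for the real ones:

```python
import numpy as np
from scipy.stats import pearsonr

def upper_triangle(sim):
    # Off-diagonal upper-triangle entries of a symmetric similarity matrix
    i, j = np.triu_indices(sim.shape[0], k=1)
    return sim[i, j]

def gate_expert_correlation(gate_sim, act_sim):
    """Pearson R between paired gate-vector and W_act similarity values."""
    r, _ = pearsonr(upper_triangle(gate_sim), upper_triangle(act_sim))
    return r

# Synthetic check: identical patterns correlate perfectly, mirrored
# patterns correlate negatively (as observed for deep Grok layers).
rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8))
gate_sim = (M + M.T) / 2
r_pos = gate_expert_correlation(gate_sim, gate_sim)
r_neg = gate_expert_correlation(gate_sim, -gate_sim)
```

Only the upper triangle is used so that the trivial unit diagonal and the duplicated symmetric entries do not inflate the correlation.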
<table><tr><td>Model</td><td>(X, Yup)</td><td>(X, Yact)</td><td>(X, Ydown)</td></tr><tr><td>Mixtral</td><td>0.06</td><td>0.33</td><td>0.07</td></tr><tr><td>Mixtral-22</td><td>0.13</td><td>0.26</td><td>0.13</td></tr><tr><td>DeepSeek</td><td>0.00</td><td>0.40</td><td>0.00</td></tr><tr><td>Grok</td><td>0.04</td><td>0.15</td><td>0.04</td></tr></table>

Table 3: Average square of the Pearson correlation coefficient over all layers $(R_{\mathrm{avg}}^2)$ for the three paired datasets.

# 4.3 Summary

Here, we conclude the key observations from the analysis of static parameters: i) Mixtral might contain expert(s) with unique attributes, as evidenced by the frequent presence of dark crosses in Fig. 1. ii) The similarities of DeepSeek and Grok expert weight matrices are generally lower than those in the Mixtrals. As mentioned in § 4.1.1, the matrix-level similarities of DeepSeek and Grok experts are typically close to zero, whereas the Mixtrals' expert similarities average around 0.3. iii) The weights of different experts become less similar in deeper layers, as observed in the Mixtrals' heat maps in Fig. 1. iv) $W_{\mathrm{up}}$, $W_{\mathrm{down}}$, and $W_{\mathrm{act}}$ share similar patterns in their similarity heat maps (Fig. 1). v) The similarities of $W_{g}$ and of $W_{\mathrm{act}}$ show either positive or negative association. Tab. 3 depicts the $R_{\mathrm{avg}}^2$ values, where the pairing of $W_{g}$ and $W_{\mathrm{act}}$ achieves the highest correlation across all four models.

# 5 Analysis of Dynamic Behaviours

The previous experiments examine the MoE models via their parameters, without involving any input. In this section, we feed text sequences into the MoE models to further study their actual behaviours given various inputs. Specifically, we analyze the outputs of the experts and gates.
To this end, two stages are required for inference. In the first stage, we simply pass the input $x$ through the network using the original Top-k setting and store the output $z_{i}$ of every layer $i$ . In the second stage, we iterate through the layers. During the $i$ -th iteration, we feed $z_{i-1}$ into the $i$ -th layer (for the first layer, $x$ is employed as the input), set Top-k = ALL, and record the outputs from all the experts in the $i$ -th layer. Note that each layer has its own individual forward pass in the second stage. Intuitively, our goal is to examine the experts' behaviors when provided with the original inputs.
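The two-stage procedure above can be sketched abstractly. The layer interface below (`.forward` and `.expert_outputs`) is a hypothetical wrapper of our own, not a released model API:

```python
def record_all_expert_outputs(layers, x):
    """Two-stage probing.  Stage 1: run the normal Top-k forward pass and
    cache the input each layer received.  Stage 2: re-feed each layer its
    cached input with Top-k = ALL and record every expert's output.
    `layers` is any sequence of objects exposing .forward(x) -> layer output
    and .expert_outputs(x) -> list of per-expert outputs."""
    inputs, z = [], x
    for layer in layers:                  # Stage 1: cache layer inputs
        inputs.append(z)
        z = layer.forward(z)
    return [layer.expert_outputs(z_in)    # Stage 2: one pass per layer
            for layer, z_in in zip(layers, inputs)]
```

Because stage 2 re-feeds each layer its *original* input, every layer is probed under exactly the activations it saw during the normal Top-k run, rather than under activations perturbed by earlier all-expert mixing.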
Input data. We utilize a short input and a long input for the experiments in this section. For the short input, we employ the first few words of the input from another MoE-related work (Cai et al., 2024)$^{4}$. For the long input, we adopt 10 sequences from the test set of the WikiText-103 (Merity et al., 2016) dataset, totaling approximately 1100 tokens. The sequences in WikiText-103 cover a variety of domains, with the 10 sequences we used spanning topics such as music, design, and construction. To ensure the robustness of our findings, we repeat the experiments requiring the long input (§ 5.1, § 5.2) using additional datasets with over 80K tokens, including GSM8K (Cobbe et al., 2021) and Magicoder-Evol-Instruct-110K (Wei et al., 2024b). See Appendix F for details. The observations on these additional, subject-specific datasets align with the results described in the main text, demonstrating the universality of our conclusions.

We also conduct experiments analyzing the intermediate states of experts and the routing patterns. Due to the page limit, these experiments are presented in Appendix H and Appendix I, respectively.

# 5.1 Outputs of Experts



Figure 2: Similarity heat maps of expert output features using the short input. The top $k$ experts for each token are shown on top of each heat map. The tick numbers refer to the expert indices. "F" and "SE" denote the Mistral FFN and the DeepSeek shared expert, respectively.



Figure 3: Average similarity heat maps of expert output features using the long input, plotted along with the matrix-level similarity heat maps. The tick numbers refer to the expert indices. "F" denotes the Mistral FFN.

Since experts are ideally learned to specialize in different aspects, it is natural to question the similarities and differences between the outputs of selected and non-selected experts. In this experiment, we measure the correlation between the output feature vectors of experts. We plot the similarity heat maps for three tokens in the short input (Fig. 2) and the average heat map across all tokens in the long input (Fig. 3). For the long input, we use angular similarity instead of cosine similarity for measurement, as the similarities need to be averaged, ensuring that the values range from 0 to 1:

$$
\mathrm{angular\_sim} = 1 - \frac{\arccos(\mathrm{cosine\_sim})}{\pi}. \tag{2}
$$
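Eq. (2) is straightforward to implement; a small reference version (our own, not the released analysis code):

```python
import numpy as np

def angular_sim(u, v):
    """Angular similarity of Eq. (2): maps cosine similarity from [-1, 1]
    to [0, 1], so the values can be averaged safely."""
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against floating-point drift outside arccos's domain.
    return 1.0 - np.arccos(np.clip(cos, -1.0, 1.0)) / np.pi
```

Identical directions map to 1, opposite directions to 0, and orthogonal vectors to 0.5, so averaged values remain interpretable.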
For clarity, the average similarity heat maps are plotted alongside the matrix-level similarity graphs of the expert weight matrices. Fig. 9 further depicts the results from the additional datasets, which are consistent with those of the long input.

Mixtrals and Mistral. The graphs for the short input indicate that the outputs from chosen experts tend to be more similar, possibly due to their generally larger norms, which we will discuss in § 5.2. Overall similarities are relatively low in the deeper ($22^{\mathrm{nd}}-27^{\mathrm{th}}$ for Mixtral and $30^{\mathrm{th}}-50^{\mathrm{th}}$ for Mixtral-22) layers, whereas many values exceed 0.8 in the last few layers. Furthermore, dark crosses often appear in the graphs, with the experts corresponding to these dark crosses often being more similar to the Mistral FFN (i.e., a bright color in the last row). For the long input, the average heat maps show patterns akin to the neuron-level similarity graphs, including the presence of dark crosses. The similarities also decrease with increasing layer depth, except in the last layer. In addition, we have $S_{\mathrm{ee}} > S_{\mathrm{ef}}$ for both inputs. Most of these observations align with the previous analyses of static parameters (§ 4.3), implying that measuring the similarity of weights is, in some aspects, equivalent to measuring the average similarity of outputs.

DeepSeek. Given the short input, most similarities are around zero, while the values in the last layer are significantly larger. Again, the similarities between experts chosen by the gate are likely to be higher, although this difference occurs much less frequently than in the Mixtrals. The average similarities for the long input also approach zero. Moreover, the number of "small rectangles" with relatively light color in the graphs decreases as the layer depth increases (except for the last layer), meaning that the average similarities gradually decline.

Grok. Surprisingly, the similarities between the output features remain high for all tokens in the short input, indicating that the experts exhibit similar behaviours. However, the similarities of their weight matrices are mostly zeros (§ 4.1.1). We speculate that this may be due to the relatively large size of each Grok expert, allowing each to learn comprehensive knowledge and behave similarly despite having distinct parameters. When averaging the similarities for the long input, some of the resulting average heat maps display patterns similar to those of the $W_{\mathrm{act}}$ figures. This relationship aligns with the observations made for the Mixtrals.

# 5.2 Norms of Expert Outputs and Gate Scores

In § 5.1, we find that the outputs from chosen experts tend to be more alike. To investigate the possible reasons for this observation, we employ the short input to study the relationship between the L2 norms of the experts' outputs and the gate decision. The calculated norms, along with the gate scores, are plotted in Fig. 4. In Appendix G, we repeat this experiment using the long input and the additional datasets, and the results also support the "higher norm, higher score" observation.
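The "higher norm, higher score" relationship can be checked mechanically by comparing, per token, the gate's Top-k picks against a Top-k ranking by output norm. A small sketch (the function and toy data are ours, not from the paper's codebase):

```python
import numpy as np

def norm_score_agreement(expert_outputs, gate_scores, k):
    """Fraction of the gate's Top-k picks that also rank in the Top-k
    by output L2 norm, for a single token."""
    by_score = set(np.argsort(gate_scores)[-k:])
    by_norm = set(np.argsort([np.linalg.norm(o) for o in expert_outputs])[-k:])
    return len(by_score & by_norm) / k

# Four toy experts whose output norms grow with the expert index.
outputs = [c * np.ones(3) for c in (1.0, 2.0, 3.0, 4.0)]
agree = norm_score_agreement(outputs, np.array([0.1, 0.2, 0.3, 0.4]), k=2)
```

An agreement near 1 averaged over tokens corresponds to the Mixtral-style behaviour reported below, while values near chance level correspond to the Grok case.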
Mixtrals. We find that the two experts chosen by the gate usually output the feature vectors with the highest norms, which reveals that the norm might be one of the key factors in gate decisions. This finding agrees with the router's design in CompeteSMoE (Pham et al., 2024), which selects experts based on their output norms. It also helps explain why the outputs of the chosen Mixtral and DeepSeek experts tend to be more alike (§ 5.1). In Fig. 4, we observe that the gate scores assigned to the top-1 experts are usually much higher than those of the others, including the second place. This demonstrates that the gate learns to strengthen the confidence of its decisions during training. On the other hand, the deeper the layer, the larger the norm, which is similar to the growth in standard models (Shleifer et al., 2021).

DeepSeek. In contrast to the observation about the Mixtrals' experts, the gate decision appears to depend less obviously on the output norms of DeepSeek experts. However, the top-1 experts often score much higher than the remaining candidates. The magnitude of the norms increases with depth, although the increment is less pronounced than in the Mixtrals. In the last layer, the variance of the norms becomes greater.

Grok. While the scores of the top-1 experts are higher than those of the others, no correspondence between the norms and the gate scores is observed. One possible reason could be the relatively low activation ratios of GeLU (see Appendix H), which may lead to a weaker dependence on the norm for gate decisions. Besides, unlike the Mixtrals and DeepSeek, the magnitude of the norms hardly changes across depth, and some of the norm values can be less than 1, which is rare in the other two models.

# 5.3 Summary

The observations of dynamic behaviours are concluded below: i) The outputs of Mixtral and DeepSeek experts in deep (last) layers are less (more) alike. This can be seen in the heat maps for both the short (Fig. 2) and long (Fig. 3) inputs. ii) The average heat maps of expert outputs resemble the neuron-level similarity graphs (Fig. 3), implying that weight similarity measurements can reflect output similarity. iii) Grok experts exhibit high output similarity (Fig. 2), likely due to their larger sizes. iv) For the Mixtrals and DeepSeek, experts generating feature vectors with larger norms tend to receive higher gate scores, as shown in Fig. 4. We further verify this observation in Fig. 10.
| 184 |
+
|
| 185 |
+
# 6 Discussion
|
| 186 |
+
|
| 187 |
+
Based on our analyses, we offer several suggestions for MoE models across various aspects.
|
| 188 |
+
|
| 189 |
+
Neuron-level experts. Intuitively, the gate embedding matrix $W_{g}$ determines expert selection while $W_{\mathrm{act}}$ is responsible for choosing which neurons to activate. Meanwhile, we find that the similarities of

Figure 4: The experts' L2 norms and the gate scores for the short input. Each token's top-$k$ experts are shown above each heat map. Each number on the horizontal axis refers to an expert index.

Figure 5: Normalized model performance across benchmarks for the dynamic expert numbers experiment. Solid lines indicate metrics where lower is better; dashed lines indicate metrics where higher is better. "Bench avg" refers to the average performance over the four benchmarks evaluated.
$W_{g}$ and of $W_{\mathrm{act}}$ show association. This implies that neurons may function as more fine-grained experts. Therefore, operations on experts, such as division, construction, and composition, should be further studied at the micro level. For instance, MoEfication (Zhang et al., 2021) and EMoE (Qiu et al., 2024a) construct MoE experts by splitting the MLP layers of a dense model, which supports our findings from a similar perspective.
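As a toy illustration of this analogy (all weights below are hypothetical random matrices, not actual model parameters), the following sketch shows how $W_{g}$ routes at the expert level while, inside each selected expert, the gated activation $\mathrm{SiLU}(W_{\mathrm{act}}x)\odot(W_{\mathrm{up}}x)$ softly "routes" at the neuron level:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_experts, top_k = 8, 16, 4, 2

# Hypothetical weights: W_g scores experts; inside each expert,
# W_act decides which neurons effectively fire (SiLU-gated FFN).
W_g = rng.normal(size=(n_experts, d_model))
experts = [
    {
        "W_act": rng.normal(size=(d_ff, d_model)),   # neuron-gating projection
        "W_up": rng.normal(size=(d_ff, d_model)),    # up projection
        "W_down": rng.normal(size=(d_model, d_ff)),  # down projection
    }
    for _ in range(n_experts)
]

def silu(x):
    return x / (1.0 + np.exp(-x))

def moe_ffn(x):
    scores = W_g @ x                          # expert-level routing scores
    chosen = np.argsort(scores)[-top_k:]      # top-k experts
    gate = np.exp(scores[chosen]) / np.exp(scores[chosen]).sum()
    out = np.zeros(d_model)
    for g, e in zip(gate, chosen):
        E = experts[e]
        # neuron-level "routing": silu(W_act x) softly selects neurons
        out += g * (E["W_down"] @ (silu(E["W_act"] @ x) * (E["W_up"] @ x)))
    return out

y = moe_ffn(rng.normal(size=d_model))
print(y.shape)  # (8,)
```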
Model architecture. Given that the similarities between experts tend to be relatively low (high) in deep (last) layers, one can consider increasing the number of experts in the deeper layers while reducing it in the last layers. In addition, since the gate frequently selects experts with larger output norms, employing a norm-based routing mechanism is a reasonable approach. Empirical evidence from Pham et al. (2024) supports the effectiveness of this design.
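A minimal sketch of the norm-based routing idea (a toy with hypothetical linear experts, not the CompeteSMoE implementation) could look like:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_experts, top_k = 8, 4, 2
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # toy linear experts

def norm_based_route(x):
    # Score each expert by the L2 norm of its output instead of a learned gate.
    outs = [W @ x for W in experts]
    norms = np.array([np.linalg.norm(o) for o in outs])
    chosen = np.argsort(norms)[-top_k:]                          # largest-norm experts win
    gate = np.exp(norms[chosen]) / np.exp(norms[chosen]).sum()   # softmax over winners
    y = sum(g * outs[i] for g, i in zip(gate, chosen))
    return y, chosen

y, chosen = norm_based_route(rng.normal(size=d))
print(y.shape, len(chosen))
```

Note that scoring by output norm requires computing every expert's output, which defeats sparsity at inference; approaches like CompeteSMoE therefore use this competition signal to supervise a cheap router during training rather than applying it directly.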
We conduct an initial experiment to provide practical evidence for our suggestion regarding dynamic expert numbers across layers. Specifically, we train six MoE models from scratch, each containing 24 layers and 3.6B total parameters, using approximately 120B tokens. One of the six models is composed of 24 MoE layers, while the others comprise only 23 MoE layers, with one conventional non-MoE layer positioned at different indices. Details of the model architecture are provided in Tab. 7. As displayed in Fig. 5 and Tab. 6, the average model performance (i.e., PPL and Bench avg) gradually degrades as the non-MoE layer index increases, whereas a slight improvement appears when the non-MoE layer is placed at the last position ($24^{\mathrm{th}}$). This highlights the growing importance of multiple expert networks in deeper layers, excluding the last one, which aligns with our observations and suggestions.
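The six configurations above amount to choosing where, if anywhere, a single dense layer replaces an MoE layer. A minimal helper (name and interface hypothetical) for enumerating such layer schedules:

```python
def layer_schedule(n_layers=24, dense_at=None):
    """Return per-layer types: 'moe' everywhere except one optional 'dense' layer.

    dense_at is a 1-based layer index (None -> all-MoE model), mirroring the
    six configurations described above.
    """
    return ["dense" if dense_at == i else "moe" for i in range(1, n_layers + 1)]

print(layer_schedule(24, dense_at=24)[-3:])  # ['moe', 'moe', 'dense']
```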
Correlation measurement. Analyzing expert correlations through weight matrix similarities yields results partially equivalent to those from output feature vector similarities computed over a large number of tokens. Thus, assessing weight matrices offers a broader overview, while examining individual token outputs allows for more detailed analysis.
Training scheme. The training method for Mixtral has not been publicly announced. However, we observed certain characteristics shared by Mixtral experts (e.g., relatively high similarities of weight matrices), and a notable relationship between these experts and the Mistral FFN (e.g., similar intermediate states in Fig. 12). Consequently, we conjecture
that the Mixtral model may have been trained with a special initialization scheme rather than from scratch, e.g., upcycling (Komatsuzaki et al., 2022) from Mistral, that is, initializing all experts as copies of the Mistral FFN. On the contrary, the experts of DeepSeek and Grok, which are known to be trained from scratch, show weaker correlations than Mixtral experts in our experiments. Similarly, Wei et al. (2024a) tracks changes in expert similarities throughout the training process, observing that upcycled experts exhibit greater similarity than those randomly initialized. Hence, we speculate that training a MoE model from scratch has stronger potential to facilitate the diversification of experts than certain initialization approaches.
# 7 Related Work
Due to the page limit, we focus on existing works analyzing MoEs. An extended related work section for MoE LLMs can be found in Appendix B.
Most existing works analyze MoE from the router's perspective by observing expert selections. Early works observed unstable expert choices in the router (Zuo et al., 2021; Chi et al., 2022; Dai et al., 2022). More recent studies find that standard routers do not show clear specialization at the domain level (Jiang et al., 2024; Dai et al., 2024) and primarily route based on token ID rather than high-level semantics (Xue et al., 2024). Shi et al. (2024) shows that Top-2 and Rank-k routing result in different model behaviours and proposes a new self-contrast decoding method to determine the next-token distribution based on this finding.
Other works investigate the expert's similarity (Wu et al., 2022), uncovering and utilizing redundancies among experts for efficient inference (Li et al., 2023; Lu et al., 2024). Zhang et al. (2024) reveals the redundancy within experts and performs pruning based on their similarities. Liu et al. (2023); Qiu et al. (2023) notice the connection between routing and expert computation, and utilize the average of the experts' first-layer weights to guide routing. Pham et al. (2024) proposes adding the expert's output norm as a supervision signal for routing training. Chen et al. (2022) empirically and theoretically proves that a two-layer MoE CNN is able to learn cluster-center features via specializing experts to specific portions of the data. While these works provide insights into MoE from one or two viewpoints, our work offers a systematic analysis and comparison focusing on
transformer-based MoE LLMs.
As mentioned in previous sections, several existing works share some relevance to our findings, and thus can be seen as supportive. However, their proposed ideas and methods are different from ours. For instance, rather than revealing the nature of the preference for large output norms in (conventional top-k) routing, as we analyze, CompeteSMoE (Pham et al., 2024) designs a norm-based router to introduce this tendency manually; MoEfication (Zhang et al., 2021) splits MLP layers of a dense model to construct MoE experts, while our study highlights that the neurons of an expert can be seen as tiny experts. Moreover, many of our observations are novel, such as the correlation between the router embedding matrix and the expert weight matrix, as well as the equivalence between parameter and output measurement for experts. Therefore, we believe that our work offers valuable insights into MoE LLMs for the community.
# 8 Conclusion
In this paper, we make an initial attempt to investigate the inner working mechanisms of MoEs by studying the parameters and outputs of four different MoE models. We summarize our empirical observations and propose practical suggestions across various aspects. While it is premature to conclude whether MoEs genuinely learn heterogeneous experts, some of our experiments indicate that specific architectural designs (e.g., the number of experts) and training frameworks may facilitate expert specialization. We hope this work can provide inspiring insights and serve as a valuable foundation for future research on MoE and other modular architectures.
# 9 Limitations
The limitations of our work include: 1) Although the models we investigated cover several common designs of MoE, our analysis does not encompass all aspects (e.g., other routing strategies like top-1 routing or model architectures that place MoE layers at every other layer); 2) Despite the availability of other metrics, we primarily adopt cosine similarity in our experiments involving similarity measurement, as it is a widely used approach (Pham et al., 2024; Chen et al., 2022); 3) We mainly focus on the pretrained base model but seldom explore the behaviours of models after fine-tuning. Analyzing the changes in expert behaviours during the fine-tuning process could yield valuable insights.
# References
Weilin Cai, Juyong Jiang, Fan Wang, Jing Tang, Sunghun Kim, and Jiayi Huang. 2024. A survey on mixture of experts. arXiv preprint arXiv:2407.06204.
Zixiang Chen, Yihe Deng, Yue Wu, Quanquan Gu, and Yuanzhi Li. 2022. Towards understanding the mixture-of-experts layer in deep learning. Advances in neural information processing systems, 35:23049-23062.
Zewen Chi, Li Dong, Shaohan Huang, Damai Dai, Shuming Ma, Barun Patra, Saksham Singhal, Payal Bajaj, Xia Song, Xian-Ling Mao, et al. 2022. On the representation collapse of sparse mixture of experts. Advances in Neural Information Processing Systems, 35:34600-34613.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Damai Dai, Chengqi Deng, Chenggang Zhao, RX Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Y Wu, et al. 2024. Deepseek-moe: Towards ultimate expert specialization in mixture-of-experts language models. arXiv preprint arXiv:2401.06066.
Damai Dai, Li Dong, Shuming Ma, Bo Zheng, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Stablemoe: Stable routing strategy for mixture of experts. arXiv preprint arXiv:2204.08396.
Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2020. Transformer feed-forward layers are key-value memories. arXiv preprint arXiv:2012.14913.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825.
Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. 2024. Mixtral of experts. arXiv preprint arXiv:2401.04088.
Roy Jonker and Ton Volgenant. 1988. A shortest augmenting path algorithm for dense and sparse linear assignment problems. In DGOR/NSOR: Papers of the 16th Annual Meeting of DGOR in Cooperation with NSOR/Vorträge der 16. Jahresstagung der DGOR zusammen mit der NSOR, pages 622-622. Springer.
Aran Komatsuzaki, Joan Puigcerver, James Lee-Thorp, Carlos Riquelme Ruiz, Basil Mustafa, Joshua Ainslie, Yi Tay, Mostafa Dehghani, and Neil Houlsby. 2022. Sparse upcycling: Training mixture-of-experts from dense checkpoints. arXiv preprint arXiv:2212.05055.
Pingzhi Li, Zhenyu Zhang, Prateek Yadav, Yi-Lin Sung, Yu Cheng, Mohit Bansal, and Tianlong Chen. 2023. Merge, then compress: Demystify efficient smoe with hints from its routing policy. arXiv preprint arXiv:2310.01334.
Zonglin Li, Chong You, Srinadh Bhojanapalli, Daliang Li, Ankit Singh Rawat, Sashank J Reddi, Ke Ye, Felix Chern, Felix Yu, Ruiqi Guo, et al. 2022. The lazy neuron phenomenon: On emergence of activation sparsity in transformers. arXiv preprint arXiv:2210.06313.
Zeyu Leo Liu, Tim Dettmers, Xi Victoria Lin, Veselin Stoyanov, and Xian Li. 2023. Towards a unified view of sparse feed-forward network in pretraining large language model. arXiv preprint arXiv:2305.13999.
Xudong Lu, Qi Liu, Yuhui Xu, Aojun Zhou, Siyuan Huang, Bo Zhang, Junchi Yan, and Hongsheng Li. 2024. Not all experts are equal: Efficient expert pruning and skipping for mixture-of-experts large language models. arXiv preprint arXiv:2402.14800.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843.
Quang Pham, Giang Do, Huy Nguyen, TrungTin Nguyen, Chenghao Liu, Mina Sartipi, Binh T Nguyen, Savitha Ramasamy, Xiaoli Li, Steven Hoi, et al. 2024. Competesmoe-effective training of sparse mixture of experts via competition. arXiv preprint arXiv:2402.02526.
Zihan Qiu, Zeyu Huang, and Jie Fu. 2023. Emergent mixture-of-experts: Can dense pre-trained transformers benefit from emergent modular structures? arXiv preprint arXiv:2310.10908.
Zihan Qiu, Zeyu Huang, and Jie Fu. 2024a. Unlocking emergent modularity in large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 2638-2660.
Zihan Qiu, Zeyu Huang, Youcheng Huang, and Jie Fu. 2024b. Empirical study on updating key-value memories in transformer feed-forward layers. arXiv preprint arXiv:2402.12233.
Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530.
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538.
Yikang Shen, Zhen Guo, Tianle Cai, and Zengyi Qin. 2024. Jetmoe: Reaching llama2 performance with $0.1\mathrm{m}$ dollars. arXiv preprint arXiv:2404.07413.
Yikang Shen, Zheyu Zhang, Tianyou Cao, Shawn Tan, Zhenfang Chen, and Chuang Gan. 2023. Moduleformer: Learning modular large language models from uncurated data. arXiv preprint arXiv:2306.04640.
Chufan Shi, Cheng Yang, Xinyu Zhu, Jiahao Wang, Taiqiang Wu, Siheng Li, Deng Cai, Yujiu Yang, and Yu Meng. 2024. Unchosen experts can contribute too: Unleashing moe models' power by self-contrast. arXiv preprint arXiv:2405.14507.
Sam Shleifer, Jason Weston, and Myle Ott. 2021. Normformer: Improved transformer pretraining with extra normalization. arXiv preprint arXiv:2110.09456.
Chenyang Song, Xu Han, Zhengyan Zhang, Shengding Hu, Xiyu Shi, Kuai Li, Chen Chen, Zhiyuan Liu, Guangli Li, Tao Yang, et al. 2024a. Prosparse: Introducing and enhancing intrinsic activation sparsity within large language models. arXiv preprint arXiv:2402.13516.
Yixin Song, Haotong Xie, Zhengyan Zhang, Bo Wen, Li Ma, Zeyu Mi, and Haibo Chen. 2024b. Turbo sparse: Achieving llm sota performance with minimal activated parameters. arXiv preprint arXiv:2406.05955.
Qi Sun, Marc Pickett, Aakash Kumar Nain, and Llion Jones. 2024. Transformer layers as painters. arXiv preprint arXiv:2407.09298.
Qwen Team. 2024. Qwen1.5-moe: Matching 7b model performance with 1/3 activated parameters.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Tianwen Wei, Bo Zhu, Liang Zhao, Cheng Cheng, Biye Li, Weiwei Lu, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Liang Zeng, et al. 2024a. Skyworkmoe: A deep dive into training techniques for mixture-of-experts language models. arXiv preprint arXiv:2406.06563.
Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. 2024b. Magicoder: Empowering code generation with oss-instruct. In *Forty-first International Conference on Machine Learning*.
Lemeng Wu, Mengchen Liu, Yinpeng Chen, Dongdong Chen, Xiyang Dai, and Lu Yuan. 2022. Residual mixture of experts. arXiv preprint arXiv:2204.09636.
Shaohua Wu, Jiangang Luo, Xi Chen, Lingjun Li, Xudong Zhao, Tong Yu, Chao Wang, Yue Wang, Fei Wang, Weixu Qiao, et al. 2024. Yuan 2.0-m32: Mixture of experts with attention router. arXiv preprint arXiv:2405.17976.
Fuzhao Xue, Zian Zheng, Yao Fu, Jinjie Ni, Zangwei Zheng, Wangchunshu Zhou, and Yang You. 2024. Openmoe: An early effort on open mixture-of-experts language models. arXiv preprint arXiv:2402.01739.
Xiaofeng Zhang, Yikang Shen, Zeyu Huang, Jie Zhou, Wenge Rong, and Zhang Xiong. 2022. Mixture of attention heads: Selecting attention heads per token. arXiv preprint arXiv:2210.05144.
Zeliang Zhang, Xiaodong Liu, Hao Cheng, Chen-liang Xu, and Jianfeng Gao. 2024. Diversifying the expert knowledge for task-agnostic pruning in sparse mixture-of-experts. arXiv preprint arXiv:2407.09590.
Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2021. Moefication: Transformer feed-forward layers are mixtures of experts. arXiv preprint arXiv:2110.01786.
Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, and William Fedus. 2022. St-moe: Designing stable and transferable sparse expert models. arXiv preprint arXiv:2202.08906.
Simiao Zuo, Xiaodong Liu, Jian Jiao, Young Jin Kim, Hany Hassan, Ruofei Zhang, Tuo Zhao, and Jianfeng Gao. 2021. Taming sparsely activated transformer with stochastic experts. arXiv preprint arXiv:2110.04260.
# Appendix
# A Model Selection
Our experiments are conducted on Mixtral 8x7B, Mixtral 8x22B, DeepSeekMoE, and Grok-1. We choose these models due to their widespread use and impressive performance across various domains. Additionally, these models are complementary in several crucial attributes, such as training scheme, activation functions, top-k settings, and the number of experts, as listed in Tab. 1 and Tab. 4. This allows for a comparative analysis with controlled variables and encompasses a wide range of parameter sizes, from rather small (16B) to relatively huge (314B). Hence, we believe that the findings derived from these four models are fairly robust, despite the limited number of models examined.
# B Extended Related Work
MoE LLMs. MoEs have garnered significant attention in recent years due to their ability to efficiently scale model capacity with minimal computational overhead. Most current transformer-based MoE LLMs adopt a typical architecture design that replaces the original FFN with multiple expert networks and a sparse gating network (Wei et al., 2024a; Wu et al., 2024; Dai et al., 2024; Xue et al., 2024; Jiang et al., 2024; Zoph et al., 2022). JetMoE (Shen et al., 2024) and ModuleFormer (Shen et al., 2023) incorporate Mixture of Attention Heads (Zhang et al., 2022) into their model, achieving further sparsity. A recent survey (Cai et al., 2024) provides a comprehensive review of both the algorithmic and system design aspects of MoEs. For this study, we select four representative candidates among current open-sourced MoE LLMs for analysis to gain intriguing insights.
# C Projection of Expert Matrices in Low-dimensional Space
# C.1 Matrix-level
To better understand the relationships among experts, we employ principal component analysis (PCA) to project the flattened vectors of weight matrices into two-dimensional space. The vectors are standardized before applying PCA. Fig. 6 depicts the resulting 2D projection.
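This procedure can be sketched with NumPy alone, computing PCA via the SVD (the random matrices below are hypothetical stand-ins for the real expert weights):

```python
import numpy as np

def pca_2d(X):
    """Standardize the columns of X, then project its rows onto the top-2 principal components."""
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)  # standardize features
    _, _, Vt = np.linalg.svd(X, full_matrices=False)    # rows of Vt are principal axes
    return X @ Vt[:2].T                                 # top-2 components

rng = np.random.default_rng(0)
# Hypothetical stand-in: 8 experts, each weight matrix flattened into one vector.
flat_experts = rng.normal(size=(8, 16 * 32))
coords = pca_2d(flat_experts)
print(coords.shape)  # (8, 2)
```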
Mixtral and Mistral. Consistent with the observations in § 4.1.1, the figures for the three matrices appear similar. Generally, about half of the Mixtral experts cluster closely together and near the
Mistral FFN, while the others are located much farther away. Moreover, the outliers correspond to the dark crosses.
DeepSeek. Only routed experts are considered due to differences in hidden sizes. Because several outliers exist, causing the remaining data points to be densely gathered, we remove them using the DBSCAN algorithm with $\epsilon = 50$ and plot the rest in Fig. 6. It can be observed that the experts are distributed rather densely, especially for $W_{\mathrm{up}}$. Although the distribution of experts varies across the three matrices, the figures for $W_{\mathrm{up}}$ and $W_{\mathrm{down}}$ are more similar to each other than to those of the gate matrix.
Grok. Typically, about half of the Grok experts gather densely for $W_{\mathrm{up}}$ and $W_{\mathrm{down}}$. The other half turn out to be outliers, even though no dark crosses were observed before. Furthermore, the outliers of the three matrices partially coincide.
# C.2 Neuron-level
To project the neurons into a 2D or 3D space, each row vector of $W_{\mathrm{up}}$ and $W_{\mathrm{act}}$, or each column vector of $W_{\mathrm{down}}$, is treated as a single data point. Standardization is then applied, followed by PCA. The visualization of the principal components is illustrated in Fig. 7. Different colors refer to neurons belonging to different experts.
Common. The vast majority of neurons gather in the low-dimensional space. In some layers, the distribution of neurons forms a special shape, such as a cross or a thick line, which appears most often for $W_{\mathrm{down}}$, followed by $W_{\mathrm{up}}$, and finally $W_{\mathrm{act}}$. Compared to ellipses, these shapes indicate that the neurons are relatively more similar.
Mixtral and Mistral. The neurons in the Mistral FFN are distributed more densely than those of the Mixtral experts. Notably, the distribution shapes of neurons in the FFN and the experts are usually alike, even for the outliers.
DeepSeek and Grok. The number of outliers is a bit greater than that observed in Mixtral.
# D Averaging Expert Neurons
To investigate expert correlation at the neuron level, the averaging approach simply averages the rows (for $W_{\mathrm{up}}$ and $W_{\mathrm{act}}$ ) or the columns (for $W_{\mathrm{down}}$ ) of the weight matrices and then calculates the similarity of the resulting vectors across experts. Fig. 8 displays the graphs.
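The averaging approach can be sketched as follows (random matrices stand in for the real $W_{\mathrm{up}}$/$W_{\mathrm{act}}$ weights; cosine similarity is used, as in the paper's other measurements):

```python
import numpy as np

def avg_neuron_similarity(weights):
    """Average each expert's neuron vectors (rows of W_up/W_act), then
    compute the pairwise cosine similarity of the averaged vectors."""
    means = np.stack([W.mean(axis=0) for W in weights])    # one mean vector per expert
    means /= np.linalg.norm(means, axis=1, keepdims=True)  # unit-normalize
    return means @ means.T                                 # cosine similarity matrix

rng = np.random.default_rng(0)
experts = [rng.normal(size=(16, 8)) for _ in range(4)]     # hypothetical expert matrices
S = avg_neuron_similarity(experts)
print(S.shape)  # (4, 4)
```

For $W_{\mathrm{down}}$, one would average columns instead of rows (i.e., pass the transposed matrices).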
Common. The heat maps of $W_{\mathrm{up}}$ and $W_{\mathrm{down}}$ are nearly identical to those presented in § 4.1.1. Yet
<table><tr><td>Model</td><td>Training scheme</td><td>Activation</td><td># Total layers</td><td># Total params</td><td># Activated params</td></tr><tr><td>Mixtral</td><td>unknown (upcycling)</td><td>SiLU</td><td>32</td><td>46.7B</td><td>12.9B</td></tr><tr><td>Mixtral-22</td><td>unknown (upcycling)</td><td>SiLU</td><td>56</td><td>141B</td><td>39B</td></tr><tr><td>Mistral</td><td>from scratch</td><td>SiLU</td><td>32</td><td>7.3B</td><td>7.3B</td></tr><tr><td>DeepSeek</td><td>from scratch</td><td>SiLU</td><td>28</td><td>16.4B</td><td>0.3B</td></tr><tr><td>Grok</td><td>from scratch</td><td>GeLU</td><td>64</td><td>314B</td><td>78.5B</td></tr></table>
Table 4: Additional information of chosen models.

Figure 6: Projection of expert matrices in 2D space. Each layer contains three graphs, corresponding to $W_{\mathrm{up}}$ , $W_{\mathrm{act}}$ , and $W_{\mathrm{down}}$ , respectively. For DeepSeek, the indices of the removed outliers are listed on top of each graph.

Figure 7: Projection of expert neurons in 2D/3D space. Each layer contains three graphs, corresponding to $W_{\mathrm{up}}$ , $W_{\mathrm{act}}$ , and $W_{\mathrm{down}}$ , respectively.
the similarities of $W_{\mathrm{act}}$ significantly increase.
Mixtral and Mistral. The dark crosses sometimes disappear. In the figures for $W_{\mathrm{act}}$ , the similarities between the experts and the Mistral FFN are often lower than the similarities among the experts themselves (i.e., $S_{\mathrm{ee}} > S_{\mathrm{ef}}$ ), which is contrary to previous observations. This can happen if the expert neurons in different positions are alike. For instance, given three vectors $f = (0,0)$ , $e_1 = (1,0)$ , and $e_2 = (0,1)$ , the vector similarity $S_{e_1e_2}$ is lower than $S_{e_1f}$ and $S_{e_2f}$ . If averaging the elements, we have $\bar{f} = (0)$ , $\bar{e}_1 = (0.5)$ , and $\bar{e}_2 = (0.5)$ , then $S_{e_1e_2}$ becomes the highest.
DeepSeek. The $W_{\mathrm{act}}$ similarity values grow steadily with layer depth.
Grok. In the heat map of $W_{\mathrm{act}}$, dark crosses frequently appear in various positions.
# E Gate Embedding
Since the gate embedding matrix $W_{g}$ determines the gate decision, there may be a relationship between $W_{g}$ and the experts. To investigate this, we measure the similarities between the gate embedding vectors, $W_{g}[n,:]$ for $n \in [1,N]$ , and compare them with the neuron-level averaging heat maps of experts presented in § 4.1.2. The qualitative analysis of the combined graphs shown in Fig. 8 is detailed in this section. The table containing the $R$ values for each layer (Tab. 5) is appended at the end.
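The comparison above can be sketched as computing two cosine-similarity heat maps and correlating their off-diagonal entries with Pearson's $R$ (random stand-ins below for $W_{g}$ and the expert-neuron map):

```python
import numpy as np

def cosine_sim_matrix(M):
    """Pairwise cosine similarity between the rows of M."""
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    return M @ M.T

def heatmap_correlation(A, B):
    """Pearson R between the off-diagonal (upper-triangular) entries of two heat maps."""
    iu = np.triu_indices_from(A, k=1)
    return np.corrcoef(A[iu], B[iu])[0, 1]

rng = np.random.default_rng(0)
W_g = rng.normal(size=(8, 32))                            # hypothetical gate embedding (N x d)
expert_map = cosine_sim_matrix(rng.normal(size=(8, 32)))  # stand-in expert heat map
R = heatmap_correlation(cosine_sim_matrix(W_g), expert_map)
print(round(float(R), 3))
```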
Mixtral. Focusing on the heat maps of $W_{g}$ , the similarities typically range from 0.2 to 0.4, with a noticeable increase in the last layer. Moreover, dark crosses are rarely found. Surprisingly, the patterns in the heat maps of $W_{g}$ and of expert neurons in $W_{\mathrm{act}}$ are partially alike in some layers. This implies that the way a gate selects experts might be relevant to how an expert activates its neurons.
DeepSeek. Unlike the almost all-zero heat maps of $W_{\mathrm{up}}$ and $W_{\mathrm{down}}$ , the similarities of gate neurons sometimes exceed 0.4. In addition, the heat maps of $W_{g}$ and $W_{\mathrm{act}}$ show similar patterns. However, the overall similarities of $W_{g}$ decrease with depth while the similarities of $W_{\mathrm{act}}$ gradually grow. This indicates that as the layer depth increases, the gate "looks" at the input feature in more diverse ways when assigning scores to different experts, even as the neuron activations of the experts become more similar.
Grok. Both dark and bright crosses commonly exist in the heat maps of $W_{g}$ , whose patterns are
<table><tr><td>Layer</td><td>Mixtral</td><td>Mixtral-22</td><td>DeepSeek</td><td>Grok</td></tr><tr><td>0</td><td>0.82</td><td>-0.01</td><td>---</td><td>0.89</td></tr><tr><td>1</td><td>-0.44</td><td>0.23</td><td>0.75</td><td>-0.10</td></tr><tr><td>2</td><td>0.26</td><td>0.62</td><td>0.78</td><td>-0.28</td></tr><tr><td>3</td><td>0.54</td><td>0.76</td><td>0.71</td><td>0.66</td></tr><tr><td>4</td><td>0.48</td><td>0.47</td><td>0.77</td><td>0.52</td></tr><tr><td>5</td><td>0.70</td><td>0.49</td><td>0.77</td><td>0.37</td></tr><tr><td>6</td><td>0.84</td><td>0.28</td><td>0.69</td><td>0.28</td></tr><tr><td>7</td><td>0.74</td><td>0.69</td><td>0.73</td><td>0.17</td></tr><tr><td>8</td><td>0.42</td><td>0.66</td><td>0.66</td><td>0.51</td></tr><tr><td>9</td><td>0.66</td><td>0.84</td><td>0.66</td><td>0.84</td></tr><tr><td>10</td><td>0.53</td><td>0.59</td><td>0.63</td><td>0.28</td></tr><tr><td>11</td><td>0.32</td><td>0.61</td><td>0.60</td><td>0.30</td></tr><tr><td>12</td><td>0.14</td><td>0.55</td><td>0.54</td><td>0.46</td></tr><tr><td>13</td><td>0.51</td><td>0.48</td><td>0.60</td><td>0.14</td></tr><tr><td>14</td><td>0.66</td><td>0.62</td><td>0.56</td><td>0.00</td></tr><tr><td>15</td><td>0.40</td><td>0.59</td><td>0.58</td><td>0.54</td></tr><tr><td>16</td><td>0.39</td><td>0.68</td><td>0.53</td><td>0.32</td></tr><tr><td>17</td><td>0.53</td><td>0.65</td><td>0.55</td><td>0.30</td></tr><tr><td>18</td><td>0.35</td><td>0.66</td><td>0.57</td><td>0.10</td></tr><tr><td>19</td><td>0.17</td><td>0.72</td><td>0.57</td><td>-0.17</td></tr><tr><td>20</td><td>0.51</td><td>0.77</td><td>0.58</td><td>0.24</td></tr><tr><td>21</td><td>0.63</td><td>0.67</td><td>0.62</td><td>0.58</td></tr><tr><td>22</td><td>0.36</td><td>0.65</td><td>0.62</td><td>0.46</td></tr><tr><td>23</td><td>0.51</td><td>0.35</td><td>0.62</td><td>0.14</td></tr><tr><td>24</td><td>0.48</td><td>0.30</td><td>0.68</td><td>0.10</td></tr><tr><td>25</td><td>0.66</td><td>0.13</td><td>0.62</td><td>0.00</td></tr><tr><td>26</td><td>0.81</td><td>0.27</td><td>0.58</td><td>-0.10</td></tr><tr><td>27</td><td>0.63</td><td>0.22</td><td>0.46</td><td>-0.26</td></tr><tr><td>28</td><td>0.73</td><td>0.38</td><td>---</td><td>-0.66</td></tr><tr><td>29</td><td>0.75</td><td>0.52</td><td>---</td><td>-0.41</td></tr><tr><td>30</td><td>0.84</td><td>0.38</td><td>---</td><td>-0.83</td></tr><tr><td>31</td><td>0.57</td><td>0.45</td><td>---</td><td>-0.76</td></tr><tr><td>32</td><td>---</td><td>0.37</td><td>---</td><td>-0.24</td></tr><tr><td>33</td><td>---</td><td>0.28</td><td>---</td><td>-0.53</td></tr><tr><td>34</td><td>---</td><td>0.72</td><td>---</td><td>-0.46</td></tr><tr><td>35</td><td>---</td><td>0.69</td><td>---</td><td>0.14</td></tr><tr><td>36</td><td>---</td><td>0.34</td><td>---</td><td>0.17</td></tr><tr><td>37</td><td>---</td><td>0.46</td><td>---</td><td>-0.46</td></tr><tr><td>38</td><td>---</td><td>0.31</td><td>---</td><td>-0.17</td></tr><tr><td>39</td><td>---</td><td>0.26</td><td>---</td><td>-0.26</td></tr><tr><td>40</td><td>---</td><td>0.48</td><td>---</td><td>-0.70</td></tr><tr><td>41</td><td>---</td><td>0.41</td><td>---</td><td>0.17</td></tr><tr><td>42</td><td>---</td><td>0.46</td><td>---</td><td>0.00</td></tr><tr><td>43</td><td>---</td><td>0.33</td><td>---</td><td>-0.17</td></tr><tr><td>44</td><td>---</td><td>0.44</td><td>---</td><td>-0.22</td></tr><tr><td>45</td><td>---</td><td>0.50</td><td>---</td><td>0.14</td></tr><tr><td>46</td><td>---</td><td>0.43</td><td>---</td><td>-0.47</td></tr><tr><td>47</td><td>---</td><td>0.34</td><td>---</td><td>-0.44</td></tr><tr><td>48</td><td>---</td><td>0.50</td><td>---</td><td>-0.17</td></tr><tr><td>49</td><td>---</td><td>0.42</td><td>---</td><td>-0.14</td></tr><tr><td>50</td><td>---</td><td>0.43</td><td>---</td><td>0.17</td></tr><tr><td>51</td><td>---</td><td>0.51</td><td>---</td><td>0.22</td></tr><tr><td>52</td><td>---</td><td>0.67</td><td>---</td><td>0.10</td></tr><tr><td>53</td><td>---</td><td>0.32</td><td>---</td><td>0.33</td></tr><tr><td>54</td><td>---</td><td>0.68</td><td>---</td><td>-0.24</td></tr><tr><td>55</td><td>---</td><td>0.20</td><td>---</td><td>-0.57</td></tr><tr><td>56</td><td>---</td><td>---</td><td>---</td><td>-0.24</td></tr><tr><td>57</td><td>---</td><td>---</td><td>---</td><td>-0.37</td></tr><tr><td>58</td><td>---</td><td>---</td><td>---</td><td>0.00</td></tr><tr><td>59</td><td>---</td><td>---</td><td>---</td><td>-0.69</td></tr><tr><td>60</td><td>---</td><td>---</td><td>---</td><td>-0.17</td></tr><tr><td>61</td><td>---</td><td>---</td><td>---</td><td>0.35</td></tr><tr><td>62</td><td>---</td><td>---</td><td>---</td><td>0.30</td></tr><tr><td>63</td><td>---</td><td>---</td><td>---</td><td>0.10</td></tr></table>
Table 5: Pearson correlation coefficients $(R)$ of the paired dataset $(X,Y_{\mathrm{act}})$ .

Figure 8: Similarity heat maps of gate embedding (leftmost graph of each layer) along with the neuron-level similarity heat maps using averaging method. The tick numbers refer to the expert indices.
similar to those of $W_{\mathrm{act}}$. Notably, their patterns show an opposite color tendency (i.e., positions that are dark in one heat map become light in the other) starting from the intermediate layers. The similarities of $W_{g}$ decrease as the layer depth increases, except for the last few layers.
|
| 354 |
+
|
| 355 |
+
# F Additional Datasets
|
| 356 |
+
|
| 357 |
+
To ensure the generality of our findings, we repeat the experiments that require the long input (§ 5.1, § 5.2) using additional datasets. Specifically, we utilize the entire test set of WikiText-103 (Merity et al., 2016) (266K tokens) and of the math dataset GSM8K (Cobbe et al., 2021) (84K tokens), as well as 1000 sequences from the code dataset Magicoder-Evol-Instruct-110K (Wei et al., 2024b) (188K tokens). As shown in Figs. 9, 11, and 13, the new results for Mixtral and DeepSeek align with those illustrated in the main text, even when using datasets of specific subjects like math and code (we did not test the Grok model due to limited computational resources). These supplementary results demonstrate that our findings are general and not limited to the initial input sources.
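The averaged similarity heat maps shown in these figures can be sketched with a minimal numpy version; `feats` below is a hypothetical array of per-token expert output features:

```python
import numpy as np

def avg_cosine_heatmap(feats):
    """feats: (n_tokens, n_experts, d) expert output features.
    Returns the (n_experts, n_experts) cosine-similarity matrix
    averaged over tokens."""
    norms = np.linalg.norm(feats, axis=-1, keepdims=True)
    unit = feats / np.clip(norms, 1e-12, None)
    # Per-token pairwise cosine similarity, then mean over tokens.
    sims = np.einsum("tid,tjd->tij", unit, unit)
    return sims.mean(axis=0)

rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8, 32))  # 16 tokens, 8 experts, dim 32
H = avg_cosine_heatmap(feats)
assert H.shape == (8, 8)
```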
|
| 358 |
+
|
| 359 |
+
# G Norms of Expert Outputs and Gate Scores
|
| 360 |
+
|
| 361 |
+
In § 5.2, we notice that in some MoE models, the two experts chosen by the gate usually produce feature vectors with the highest norms. To further investigate this, we repeat the experiment using the long input and additional datasets, and the statistical results are shown in Fig. 10, 11, 13.
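The counting underlying Fig. 10 can be sketched as a rank co-occurrence matrix; `norms` and `scores` below are hypothetical per-token arrays, not recorded model statistics:

```python
import numpy as np

def rank_cooccurrence(norms, scores):
    """norms, scores: (n_tokens, n_experts) arrays.
    counts[i, j] = number of tokens for which the expert with the
    i-th smallest output norm received the j-th smallest gate score."""
    n_tokens, n_experts = norms.shape
    counts = np.zeros((n_experts, n_experts), dtype=int)
    # argsort().argsort() turns values into ranks (0 = smallest).
    norm_rank = norms.argsort(axis=1).argsort(axis=1)
    score_rank = scores.argsort(axis=1).argsort(axis=1)
    for i, j in zip(norm_rank.ravel(), score_rank.ravel()):
        counts[i, j] += 1
    return counts

rng = np.random.default_rng(0)
norms = rng.standard_normal((100, 8))
scores = norms + 0.1 * rng.standard_normal((100, 8))  # correlated scores
counts = rank_cooccurrence(norms, scores)
assert counts.sum() == 100 * 8  # one entry per (token, expert) pair
```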
|
| 362 |
+
|
| 363 |
+
Mixtral. It is evident that the expert which outputs the largest norm is most frequently assigned the highest score. Surprisingly, for every $i$ , the $i$ -th highest score is most likely assigned to the expert with the $i$ -th highest output.
|
| 364 |
+
|
| 365 |
+
DeepSeek. The experts that produce the largest norms (ranked $60^{\text{th}}$ to $64^{\text{th}}$) are most likely to receive the highest scores, but we do not observe a similar relationship for the remaining experts. On the contrary, the gate assigns relatively high scores more frequently than low scores to the experts with the smallest norms. The experts ranked $49^{\text{th}}$ to $59^{\text{th}}$ in output norm tend to receive either low scores or high scores.
|
| 366 |
+
|
| 367 |
+
Grok. In contrast to the previous models, the output norms of the Grok experts tend to have an inverse relationship with the scores. More generally, the experts with the first few highest output norms are frequently assigned either low scores or high scores. One possible explanation could be the relatively low activation ratios of GeLU (see Appendix H), which may result in a weaker dependence on the norm for gate decisions.
|
| 370 |
+
|
| 371 |
+
# H Intermediate States of Experts
|
| 372 |
+
|
| 373 |
+
While § 5.1 focused on the final outputs of experts, we continue our analysis here by examining the intermediate outputs, which reveal the inner states of the experts. Given an input $x$, the intermediate state of an expert refers to $\sigma(W_{\mathrm{act}}x) \in \mathbb{R}^{d_{\mathrm{hid}}}$, where $\sigma$ denotes an activation function. These intermediate vectors control the activation of neurons, so we simply record them for analysis using the short input. Mixtral, Mistral, and DeepSeek utilize SiLU as the activation function, while Grok adopts GeLU. Fig. 12 depicts the magnitude of the vectors for Mixtral across three tokens.
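A minimal sketch of this intermediate state and of the activation-ratio statistic reported below, assuming SiLU and a hypothetical `W_act`:

```python
import numpy as np

def silu(z):
    # SiLU (swish): z * sigmoid(z).
    return z / (1.0 + np.exp(-z))

def activation_ratio(W_act, x, thresh=1e-3):
    """Fraction of neurons whose intermediate activation
    sigma(W_act @ x) exceeds `thresh` in absolute value."""
    h = silu(W_act @ x)  # intermediate state, shape (d_hid,)
    return float((np.abs(h) > thresh).mean())

rng = np.random.default_rng(0)
W_act = rng.standard_normal((64, 16))  # hypothetical expert weights
x = rng.standard_normal(16)
r = activation_ratio(W_act, x)
assert 0.0 <= r <= 1.0
```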
|
| 374 |
+
|
| 375 |
+
Common. Each figure contains some horizontal lines, indicating the presence of an "outlier" expert with either the highest or lowest activation values. Nonetheless, there is no clear relationship between these phenomena and the gate decisions.
|
| 376 |
+
|
| 377 |
+
Mixtral and Mistral. For a single token, we find that, on average, the absolute activation value of $99.6\%$ of the elements in each expert exceeds 0.001 after applying the SiLU activation function. This high ratio indicates that the vast majority of neurons in an expert are activated. In Fig. 12, some vertical lines spanning all experts are commonly found, meaning that the $W_{\mathrm{act}}$ matrices of different experts assign similar activation values to neurons with the same indices. In addition, the magnitude of the intermediate states grows with layer depth, which aligns with the observation in § 5.1.
|
| 378 |
+
|
| 379 |
+
DeepSeek. On average, $99.7\%$ of the neurons in each expert have an absolute activation value exceeding 0.001 after applying SiLU. Vertical lines rarely exist in the DeepSeek model. Similarly, the elements in the intermediate state vectors get larger as the layer goes deeper.
|
| 380 |
+
|
| 381 |
+
Grok. With GeLU as the activation function, only $25.3\%$ of the neurons per Grok expert attain an absolute activation value greater than 0.001. The activation values are generally smaller than those in Mixtral and DeepSeek. Li et al. (2022) and Song et al. (2024a) suggest that this difference largely stems from the distinct activation functions used. Interestingly, Song et al. (2024b) further utilize the sparsity in experts
|
| 382 |
+
|
| 383 |
+

|
| 384 |
+
Mixtral
|
| 385 |
+
DeepSeek
|
| 386 |
+
|
| 387 |
+

|
| 388 |
+
Figure 9: Average similarity heat maps of expert output features using (1) the long sequence, (2) WikiText-103, (3) GSM8K, and (4) Magicoder-Evol-Instruct-110K. The tick numbers refer to the expert indices. "F" and "SE" denote the Mistral FFN and the DeepSeek shared expert, respectively.
|
| 389 |
+
|
| 390 |
+
<table><tr><td>non-MoE layer idx</td><td>PPL ↓</td><td>Bench avg ↑</td><td>HellaSwag</td><td>MMLU</td><td>GSM8K</td><td>CEval</td></tr><tr><td>none</td><td>8.51</td><td>39.84</td><td>61.75</td><td>41.14</td><td>13.42</td><td>43.04</td></tr><tr><td>1</td><td>8.50</td><td>39.42</td><td>61.88</td><td>40.55</td><td>11.83</td><td>43.41</td></tr><tr><td>6</td><td>8.49</td><td>38.55</td><td>62.17</td><td>39.28</td><td>12.96</td><td>39.78</td></tr><tr><td>12</td><td>8.51</td><td>38.21</td><td>61.46</td><td>40.24</td><td>10.54</td><td>40.61</td></tr><tr><td>18</td><td>8.59</td><td>37.99</td><td>61.73</td><td>39.50</td><td>11.68</td><td>39.03</td></tr><tr><td>24</td><td>8.58</td><td>38.94</td><td>61.33</td><td>40.45</td><td>12.05</td><td>41.91</td></tr></table>
|
| 391 |
+
|
| 392 |
+
Table 6: Model performance on various benchmarks for the dynamic expert numbers experiment. "Bench avg" refers to the average performance over the four evaluated benchmarks.
|
| 393 |
+
|
| 394 |
+
<table><tr><td>num_layers</td><td>24</td></tr><tr><td>vocab_size</td><td>151936</td></tr><tr><td>hidden_size</td><td>1024</td></tr><tr><td>head_dim</td><td>64</td></tr><tr><td>q_head</td><td>16</td></tr><tr><td>kv_head</td><td>4</td></tr><tr><td>moe_hidden_dim</td><td>640</td></tr><tr><td>num_shared_expert</td><td>4</td></tr><tr><td>num_routed_expert</td><td>64</td></tr><tr><td>topk</td><td>4</td></tr></table>
|
| 395 |
+
|
| 396 |
+
Table 7: Model architecture for the dynamic expert numbers experiment.
|
| 397 |
+
|
| 398 |
+
within SMoE to achieve SOTA performance when activating the same number of parameters.
|
| 399 |
+
|
| 400 |
+
# I Chosen Experts
|
| 401 |
+
|
| 402 |
+
This experiment aims to examine the routing patterns. We feed an input prompt with about 64 tokens into the MoE models and record the gate scores (after applying softmax) for the selected experts for each token. In addition to the base model of Mixtral (Mixtral-Base), we also include its instruct version (Mixtral-Instruct) in this experiment. The results are depicted in Fig. 14.
|
| 403 |
+
|
| 404 |
+
Mixtral. In Mixtral-Base, the experts are selected fairly evenly across tokens, and it is common to see sequences of more than four tokens routed to the same expert. But the "special expert" with the dark
|
| 405 |
+
|
| 406 |
+
cross in previous similarity graphs turns out to be an exception. These special experts are chosen less frequently and tend to receive relatively low scores. The routing pattern of Mixtral-Instruct is largely identical to that of Mixtral-Base, which indicates fine-tuning has little impact on gate decisions.
|
| 407 |
+
|
| 408 |
+
DeepSeek. In some layers, there is an expert selected by most tokens. However, no distinct characteristics for these experts are observed in the previous similarity heat maps. Note that the gate scores for DeepSeek are typically lower than those for Mixtral because DeepSeek applies softmax before the top-k operation, while Mixtral applies them in the reverse order.
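The two gating orders can be contrasted with a minimal sketch (function names are ours, not the models' actual implementations); softmax-after-top-k normalizes over only the selected experts, so the kept scores sum to 1, whereas softmax-before-top-k leaves them summing to less than 1:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def gate_topk_then_softmax(logits, k):
    # Mixtral-style: keep the top-k logits, then softmax over them,
    # so the selected scores always sum to 1.
    idx = np.argsort(logits)[-k:]
    return idx, softmax(logits[idx])

def gate_softmax_then_topk(logits, k):
    # DeepSeek/Grok-style: softmax over all experts first, then top-k,
    # so the selected scores sum to less than 1.
    p = softmax(logits)
    idx = np.argsort(p)[-k:]
    return idx, p[idx]

logits = np.array([2.0, 1.0, 0.5, -1.0])
_, s_mix = gate_topk_then_softmax(logits, 2)
_, s_ds = gate_softmax_then_topk(logits, 2)
assert abs(s_mix.sum() - 1.0) < 1e-9
assert s_ds.sum() < 1.0
```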
|
| 409 |
+
|
| 410 |
+
Grok. The expert selection is rather even, and some relatively high scores appear in the deeper ( $>30^{\text{th}}$ ) layers. As in DeepSeek, softmax is applied before the top-k operation in Grok.
|
| 411 |
+
|
| 412 |
+

|
| 413 |
+
|
| 414 |
+

|
| 415 |
+
|
| 416 |
+

|
| 417 |
+
DeepSeek
|
| 418 |
+
|
| 419 |
+

|
| 420 |
+
Figure 10: Counts of the gate score ranking for each norm ranking using the long input. The larger the rank number, the larger the norm or score.
|
| 421 |
+
|
| 422 |
+

|
| 423 |
+
|
| 424 |
+

|
| 425 |
+
|
| 426 |
+

|
| 427 |
+
Figure 11: Counts of the gate score ranking for Mixtral expert output norm rankings using additional datasets, namely WikiText-103 (top), GSM8K (middle), and Magicoder-Evol-Instruct-110K (bottom). The larger the rank number, the larger the norm or score.
|
| 428 |
+
|
| 429 |
+

|
| 430 |
+
Figure 12: Intermediate state values of Mixtral experts. The top $k$ experts are shown on top of each heat map. Each number on the vertical axis refers to an expert index, while the horizontal axis indexes the neurons.
|
| 431 |
+
|
| 432 |
+

|
| 433 |
+
|
| 434 |
+

|
| 435 |
+
|
| 436 |
+

|
| 437 |
+
Figure 13: Counts of the gate score ranking for DeepSeek expert output norm rankings using additional datasets, namely WikiText-103 (top), GSM8K (middle), and Magicoder-Evol-Instruct-110K (bottom). The larger the rank number, the larger the norm or score.
|
| 438 |
+
|
| 439 |
+

|
| 440 |
+
|
| 441 |
+

|
| 442 |
+
|
| 443 |
+

|
| 444 |
+
|
| 445 |
+

|
| 446 |
+
Figure 14: Routing patterns of different models. Deeper colors mean higher gate scores assigned to the corresponding experts. Only scores of the top $k$ experts are illustrated.
|
2025/A Closer Look into Mixture-of-Experts in Large Language Models/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:036d2fa79a3b0a5d9ac53ff865bd1976da6ec04a5d481ba8df7e89f513bffcdd
|
| 3 |
+
size 1879395
|
2025/A Closer Look into Mixture-of-Experts in Large Language Models/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/A Comprehensive Survey of Contemporary Arabic Sentiment Analysis_ Methods, Challenges, and Future Directions/3e5ad7ac-2ae2-4123-b27f-008e22e87b8d_content_list.json
ADDED
|
@@ -0,0 +1,1558 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "A comprehensive survey of contemporary Arabic sentiment analysis: Methods, Challenges, and Future Directions",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
144,
|
| 8 |
+
90,
|
| 9 |
+
855,
|
| 10 |
+
130
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Zhiqiang Shi",
|
| 17 |
+
"bbox": [
|
| 18 |
+
272,
|
| 19 |
+
158,
|
| 20 |
+
389,
|
| 21 |
+
174
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "University of Edinburgh shizhiqiang126@163.com",
|
| 28 |
+
"bbox": [
|
| 29 |
+
220,
|
| 30 |
+
175,
|
| 31 |
+
443,
|
| 32 |
+
206
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Ruchit Agrawal",
|
| 39 |
+
"bbox": [
|
| 40 |
+
594,
|
| 41 |
+
158,
|
| 42 |
+
737,
|
| 43 |
+
174
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "University of Birmingham r.r.agrawal@bham.ac.uk",
|
| 50 |
+
"bbox": [
|
| 51 |
+
554,
|
| 52 |
+
175,
|
| 53 |
+
776,
|
| 54 |
+
206
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "Abstract",
|
| 61 |
+
"text_level": 1,
|
| 62 |
+
"bbox": [
|
| 63 |
+
260,
|
| 64 |
+
261,
|
| 65 |
+
339,
|
| 66 |
+
275
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "Sentiment Analysis, a popular subtask of Natural Language Processing, employs computational methods to extract sentiment, opinions, and other subjective aspects from linguistic data. Given its crucial role in understanding human sentiment, research in sentiment analysis has witnessed significant growth in recent years. However, the majority of approaches are aimed at the English language, and research towards Arabic sentiment analysis remains relatively unexplored. This paper presents a comprehensive and contemporary survey of Arabic Sentiment Analysis, identifies the challenges and limitations of existing literature in this field and presents avenues for future research. We present a systematic review of Arabic sentiment analysis methods, focusing specifically on research utilizing deep learning. We then situate Arabic Sentiment Analysis within the broader context, highlighting research gaps in Arabic sentiment analysis as compared to general sentiment analysis. Finally, we outline the main challenges and promising future directions for research in Arabic sentiment analysis.",
|
| 73 |
+
"bbox": [
|
| 74 |
+
144,
|
| 75 |
+
279,
|
| 76 |
+
460,
|
| 77 |
+
620
|
| 78 |
+
],
|
| 79 |
+
"page_idx": 0
|
| 80 |
+
},
|
| 81 |
+
{
|
| 82 |
+
"type": "text",
|
| 83 |
+
"text": "1 Introduction",
|
| 84 |
+
"text_level": 1,
|
| 85 |
+
"bbox": [
|
| 86 |
+
114,
|
| 87 |
+
627,
|
| 88 |
+
258,
|
| 89 |
+
640
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "Sentiment Analysis (SA), also referred to as opinion mining, leverages computational models to extract individuals' sentiments and opinions from data (Liu, 2015). This field has garnered significant attention from both academic and industrial sectors, as evidenced by the multitude of studies conducted to comprehend human sentiment (Liu, 2015), (Zeng and Li, 2022), (Varathan et al., 2017). However, although Arabic is a widely popular language spoken by over 372 million people across the globe, the volume of research dedicated to Arabic Sentiment Analysis (ASA) remains considerably lower compared to high-resourced languages such as English and French. This study presents a systematic review of existing literature on Arabic sentiment analysis, with a particular focus on approaches that employ deep learning methodologies.",
|
| 96 |
+
"bbox": [
|
| 97 |
+
112,
|
| 98 |
+
646,
|
| 99 |
+
490,
|
| 100 |
+
921
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "Several prior surveys on Arabic sentiment analysis (ASA) exist (Almurqren et al., 2024), (Al-Ayyoub et al., 2019), (Al-Twairesh et al., 2014), (Biltawi et al., 2016), (Abdelhameed and Muñoz-Hernández, 2017), (Abo et al., 2019), (Rahma et al., 2023). However, the majority of these do not cover contemporary deep learning based methods. Additionally, these do not present a detailed analysis of the gaps, challenges and future directions for ASA. This paper fills this gap and presents a comprehensive survey of contemporary methods for Arabic sentiment analysis, systematically organizing recent literature in the field and highlighting the key contributions and limitations of current SA methods. We also situate these approaches within the broader framework of general sentiment analysis and approaches for high-resource languages, facilitating an understanding of the developments as well as the gaps in ASA. The major contributions of this study are summarised below:",
|
| 107 |
+
"bbox": [
|
| 108 |
+
507,
|
| 109 |
+
261,
|
| 110 |
+
885,
|
| 111 |
+
583
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "list",
|
| 117 |
+
"sub_type": "text",
|
| 118 |
+
"list_items": [
|
| 119 |
+
"- We present a systematic survey of contemporary research conducted in Arabic sentiment analysis, with a focus on deep learning methodologies. We present an analysis of the key contributions and limitations of state-of-the-art ASA methods across various dimensions such as modality (unimodal, multi-modal), granularity (coarse-grained, fine-grained) and context (sentence-level, document-level, aspect-level).",
|
| 120 |
+
"- We situate Arabic sentiment analysis within the broader framework of general sentiment analysis, identifying research gaps in Arabic sentiment analysis, and highlighting areas where advancements are needed to bridge the gap with high-resource languages.",
|
| 121 |
+
"- We outline the key challenges in developing robust sentiment analysis models for the Arabic language, and present promising directions to guide future research in this field."
|
| 122 |
+
],
|
| 123 |
+
"bbox": [
|
| 124 |
+
531,
|
| 125 |
+
590,
|
| 126 |
+
885,
|
| 127 |
+
920
|
| 128 |
+
],
|
| 129 |
+
"page_idx": 0
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"type": "page_number",
|
| 133 |
+
"text": "3760",
|
| 134 |
+
"bbox": [
|
| 135 |
+
480,
|
| 136 |
+
927,
|
| 137 |
+
519,
|
| 138 |
+
940
|
| 139 |
+
],
|
| 140 |
+
"page_idx": 0
|
| 141 |
+
},
|
| 142 |
+
{
|
| 143 |
+
"type": "footer",
|
| 144 |
+
"text": "Findings of the Association for Computational Linguistics:",
|
| 145 |
+
"bbox": [
|
| 146 |
+
317,
|
| 147 |
+
945,
|
| 148 |
+
677,
|
| 149 |
+
958
|
| 150 |
+
],
|
| 151 |
+
"page_idx": 0
|
| 152 |
+
},
|
| 153 |
+
{
|
| 154 |
+
"type": "footer",
|
| 155 |
+
"text": "NAACL 2025, pages 3760-3772",
|
| 156 |
+
"bbox": [
|
| 157 |
+
400,
|
| 158 |
+
959,
|
| 159 |
+
596,
|
| 160 |
+
971
|
| 161 |
+
],
|
| 162 |
+
"page_idx": 0
|
| 163 |
+
},
|
| 164 |
+
{
|
| 165 |
+
"type": "footer",
|
| 166 |
+
"text": "April 29 - May 4, 2025 ©2025 Association for Computational Linguistics",
|
| 167 |
+
"bbox": [
|
| 168 |
+
273,
|
| 169 |
+
972,
|
| 170 |
+
722,
|
| 171 |
+
985
|
| 172 |
+
],
|
| 173 |
+
"page_idx": 0
|
| 174 |
+
},
|
| 175 |
+
{
|
| 176 |
+
"type": "text",
|
| 177 |
+
"text": "2 The Evolution of Arabic Sentiment Analysis",
|
| 178 |
+
"text_level": 1,
|
| 179 |
+
"bbox": [
|
| 180 |
+
112,
|
| 181 |
+
83,
|
| 182 |
+
450,
|
| 183 |
+
116
|
| 184 |
+
],
|
| 185 |
+
"page_idx": 1
|
| 186 |
+
},
|
| 187 |
+
{
|
| 188 |
+
"type": "text",
|
| 189 |
+
"text": "In this section, we describe the evolution of Arabic sentiment analysis, from lexicon based methods to deep learning based methods. To highlight the importance of these traditional methods, we conduct a case study using Arabic lexicons, in which we highlight how lexicons can be utilised to improve deep learning based methods.",
|
| 190 |
+
"bbox": [
|
| 191 |
+
112,
|
| 192 |
+
126,
|
| 193 |
+
487,
|
| 194 |
+
237
|
| 195 |
+
],
|
| 196 |
+
"page_idx": 1
|
| 197 |
+
},
|
| 198 |
+
{
|
| 199 |
+
"type": "text",
|
| 200 |
+
"text": "2.1 Lexicon Based Methods",
|
| 201 |
+
"text_level": 1,
|
| 202 |
+
"bbox": [
|
| 203 |
+
112,
|
| 204 |
+
243,
|
| 205 |
+
347,
|
| 206 |
+
256
|
| 207 |
+
],
|
| 208 |
+
"page_idx": 1
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"type": "text",
|
| 212 |
+
"text": "Lexicon based methods utilise a pre-defined lexicon to determine the sentiment of the given text. The words in the lexicon are annotated with polarity or sentiment scores. The overall sentiment of the text is calculated by summing up all the words' sentiment scores. Given their crucial role in lexicon based methods, we briefly mention some widely used Arabic lexicons.",
|
| 213 |
+
"bbox": [
|
| 214 |
+
112,
|
| 215 |
+
260,
|
| 216 |
+
487,
|
| 217 |
+
387
|
| 218 |
+
],
|
| 219 |
+
"page_idx": 1
|
| 220 |
+
},
|
| 221 |
+
{
|
| 222 |
+
"type": "text",
|
| 223 |
+
"text": "Arabic Senti-Lexicon (Abdullah, 2018): Arabic Senti-Lexicon contains 3880 synsets that are annotated with part of speech, polarity scores and inflected forms.",
|
| 224 |
+
"bbox": [
|
| 225 |
+
112,
|
| 226 |
+
390,
|
| 227 |
+
489,
|
| 228 |
+
451
|
| 229 |
+
],
|
| 230 |
+
"page_idx": 1
|
| 231 |
+
},
|
| 232 |
+
{
|
| 233 |
+
"type": "text",
|
| 234 |
+
"text": "ArsenL (Arabic Sentiment Lexicon) (Badaro et al., 2014): ArsenL is constructed from multiple resources, including English WordNet (EWN), Arabic WordNet (AWN), English SentiWordNet (ESWN), and SAMA (Standard Arabic Morphological Analyzer). It contains 157969 synsets and has positive, negative and neutral sentiment scores.",
|
| 235 |
+
"bbox": [
|
| 236 |
+
112,
|
| 237 |
+
455,
|
| 238 |
+
489,
|
| 239 |
+
565
|
| 240 |
+
],
|
| 241 |
+
"page_idx": 1
|
| 242 |
+
},
|
| 243 |
+
{
|
| 244 |
+
"type": "text",
|
| 245 |
+
"text": "2.2 Machine learning based methods",
|
| 246 |
+
"text_level": 1,
|
| 247 |
+
"bbox": [
|
| 248 |
+
112,
|
| 249 |
+
577,
|
| 250 |
+
418,
|
| 251 |
+
593
|
| 252 |
+
],
|
| 253 |
+
"page_idx": 1
|
| 254 |
+
},
|
| 255 |
+
{
|
| 256 |
+
"type": "text",
|
| 257 |
+
"text": "Lexicon based methods are simple and fast, but they heavily rely on the lexicons and the sentiment scores of the lexicons lack context. Machine learning based methods can help to overcome these limitations by learning the sentiment features from data rather than pre-defining them by humans. However, for traditional machine learning methods, feature engineering is required as a precursor to the ML algorithms.",
|
| 258 |
+
"bbox": [
|
| 259 |
+
112,
|
| 260 |
+
599,
|
| 261 |
+
487,
|
| 262 |
+
743
|
| 263 |
+
],
|
| 264 |
+
"page_idx": 1
|
| 265 |
+
},
|
| 266 |
+
{
|
| 267 |
+
"type": "text",
|
| 268 |
+
"text": "Some widely used feature engineering methods include bag-of-words (Qader et al., 2019), TF-IDF (Sammut and Webb, 2010) and word embedding (Almeida and Xexéo, 2019). After the features have been extracted, machine learning methods such as naive Bayes (Al-Horaibi and Khan, 2016), support vector machines (Duwairi et al., 2016) and random forests based methods (Altawaier and Tiun, 2016) can be used for sentiment analysis. In the following sections, we will systematically review deep learning based approaches.",
"bbox": [
112,
745,
487,
921
],
"page_idx": 1
},
{
"type": "text",
"text": "2.3 The importance of traditional methods: a case study on sentiment lexicons",
"text_level": 1,
"bbox": [
509,
84,
880,
116
],
"page_idx": 1
},
{
"type": "text",
"text": "It is important to note that even with the rise of deep learning based methods for sentiment analysis, sentiment lexicons like ArSenL (Badaro et al., 2014) can still be valuable, as they provide a foundational understanding of sentiment that can enhance model performance, especially in low-resource scenarios or when dealing with domain-specific language that may not be well-represented in training data. Some use cases of these lexicons include data preprocessing, where irrelevant terms are filtered out based on the sentiment lexicon, and sentiment weighting (Zhang et al., 2011), where the lexicon is employed to help the model weight sentiment-relevant features more effectively. In the following paragraphs, we will use some examples to illustrate how sentiment lexicons can be utilised to improve sentiment analysis performance.",
"bbox": [
507,
135,
884,
407
],
"page_idx": 1
},
{
"type": "text",
"text": "Feature Augmentation: Lexicons can be utilised to augment the features. In (Heikal et al., 2018), a sentiment lexicon is integrated to augment the features for deep learning based models. (Xiang et al., 2021) explores part-of-speech-focused lexical substitution for data augmentation to enhance sentiment analysis performance. (Li and Zou, 2024) discusses how lexicon-based approaches assign sentiment polarities and scores to keywords, which can be used for feature augmentation in hybrid models.",
"bbox": [
507,
416,
884,
592
],
"page_idx": 1
},
{
"type": "text",
"text": "Interpretability: Arabic sentiment lexicons enhance interpretability in sentiment analysis by providing a clear framework for understanding how specific words and phrases contribute to sentiment assessments. By combining lexicons with advanced methods like attention-based LSTM and explainable AI techniques, such as LIME (Ribeiro et al., 2016), researchers can further clarify which features are most significant in determining sentiment, thereby enhancing the transparency of their findings (Abdelwahab et al., 2022). These lexicons facilitate the identification of the sentiment polarity of individual terms, making it easier to trace the reasoning behind a sentiment classification (van der Veen and Bleich, 2025). Moreover, the integration of lexicon-based approaches with machine learning techniques can improve the interpretability of complex models, as researchers can analyze how lexicon entries influence the overall sentiment predictions (Ambreen et al., 2024).",
"bbox": [
507,
599,
882,
920
],
"page_idx": 1
},
{
"type": "page_number",
"text": "3761",
"bbox": [
480,
928,
517,
940
],
"page_idx": 1
},
{
"type": "table",
"img_path": "images/a7a9742d08514385687256e07c9afd845ce07dc6d56cc20055350303fab8c2ae.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Dataset</td><td>Modality</td><td>Granularity</td><td>Context</td><td>Dialect</td></tr><tr><td>LABR (Aly and Atiya, 2013)</td><td>text</td><td>SC</td><td>document-level</td><td>MSA and various other dialects</td></tr><tr><td>ASTD (Nabil et al., 2015)</td><td>text</td><td>SC</td><td>document-level</td><td>MSA and Egyptian Arabic</td></tr><tr><td>ArSentD-LEV (Baly et al., 2019)</td><td>text</td><td>SC, ABSA, MAST</td><td>document-level</td><td>Levantine dialect</td></tr><tr><td>ArSarcasm (Abu Farha and Magdy, 2020)</td><td>text</td><td>SC, MAST</td><td>document-level</td><td>various dialects</td></tr><tr><td>ArSarcasm-v2 (Abu Farha et al., 2021)</td><td>text</td><td>SC, MAST</td><td>document-level</td><td>various dialects</td></tr><tr><td>Arsen-20 (Fang and Xu, 2024)</td><td>text</td><td>SC</td><td>document-level</td><td>various dialects</td></tr><tr><td>Arabic multimodal dataset (Haouhat et al., 2023)</td><td>text, audio, video</td><td>SC</td><td>document-level (video segments)</td><td>various dialects</td></tr></table>",
"bbox": [
115,
80,
882,
326
],
"page_idx": 2
},
{
"type": "table",
"img_path": "images/e026ea2ac44b1729bf1d8890e93b2171e1c62d58efe9145d41f2dd521933ad6b.jpg",
"table_caption": [
"Table 1: Datasets for Arabic Sentiment Analysis, organised according to modality, granularity and context.",
"SC: Sentiment Classification",
"MAST: Multifaceted Analysis of Subjective Text",
"ABSA: Aspect based Sentiment Analysis."
],
"table_footnote": [],
"table_body": "<table><tr><td>Methods</td><td>Contributions</td><td>Limitations</td><td>Accuracy</td></tr><tr><td>(Dahou et al., 2016)</td><td>Develops Arabic word embeddings and employs CNN as a classifier</td><td>Task-specific method, static word embeddings</td><td>LABR (89.6%), ASTD (79.07%)</td></tr><tr><td>(Medhaffar et al., 2017)</td><td>Annotates a Tunisian dialect corpus and evaluates models on different dialects</td><td>Only experiments with traditional machine learning methods</td><td>-</td></tr><tr><td>(Baly et al., 2017)</td><td>Performs a characterization study that analyses tweets from different Arab regions, and compares ML-based vs. deep-learning methods for Arabic SA</td><td>Does not experiment on different dialects and topics</td><td>ASTD (SVM 51.7%, RNTN 58.5%)</td></tr><tr><td>(Guellil et al., 2018)</td><td>Automatically constructs an Algerian dialect corpus</td><td>Evaluation is carried out only using traditional ML methods</td><td>-</td></tr><tr><td>(Attia et al., 2018)</td><td>Proposes a language independent, multi-class model for SA using CNNs</td><td>Evaluation for Arabic is only carried out on ASTD (Nabil et al., 2015), unclear whether the model generalises well to other datasets</td><td>ASTD (67.93%)</td></tr><tr><td>(Alyafeai et al., 2021)</td><td>Compares different tokenizers for different Arabic classification tasks</td><td>Does not evaluate on complex architectures like attention-based models</td><td>-</td></tr><tr><td>(Atabuzzaman et al., 2023)</td><td>Proposes an explainable sentiment classification framework for Arabic</td><td>Does not conduct experimentation on Transformer-based models</td><td>LABR (88.0%)</td></tr></table>",
"bbox": [
122,
401,
875,
859
],
"page_idx": 2
},
{
"type": "text",
"text": "Table 2: Task-specific methods for Arabic sentiment classification.",
"bbox": [
270,
873,
724,
887
],
"page_idx": 2
},
{
"type": "text",
"text": "LABR: Large Scale Arabic Book Reviews Dataset",
"bbox": [
329,
888,
667,
901
],
"page_idx": 2
},
{
"type": "text",
"text": "ASTD: Arabic Sentiment Tweets Dataset",
"bbox": [
359,
902,
636,
914
],
"page_idx": 2
},
{
"type": "page_number",
"text": "3762",
"bbox": [
480,
927,
521,
940
],
"page_idx": 2
},
{
"type": "text",
"text": "3 Situating ASA methods vis-à-vis general SA approaches",
"text_level": 1,
"bbox": [
114,
83,
426,
118
],
"page_idx": 3
},
{
"type": "text",
"text": "We situate the research in Arabic Sentiment Analysis (ASA) and juxtapose it with general trends in sentiment analysis (SA) in this section. We present an overview of several sentiment analysis tasks, and for each task we highlight the advancements in general sentiment analysis research, followed by a focus on Arabic-specific sentiment analysis. While not exhaustive, the selected approaches illustrate key differences and current trends between general and Arabic sentiment analysis. Table 1 provides an overview of datasets for ASA organised according to the modality, granularity and context involved. We refer to these datasets in the subsequent subsections.",
"bbox": [
112,
127,
492,
350
],
"page_idx": 3
},
{
"type": "text",
"text": "3.1 Sentiment Classification",
"text_level": 1,
"bbox": [
112,
354,
349,
369
],
"page_idx": 3
},
{
"type": "text",
"text": "Sentiment classification involves assigning a sentiment label (positive, neutral, negative) or a sentiment rating (e.g., one to five) to a given input, which can be text or data from other modalities. As one of the earliest sentiment analysis tasks (Liu, 2015), sentiment classification has attracted significant research interest.",
"bbox": [
112,
376,
489,
488
],
"page_idx": 3
},
{
"type": "text",
"text": "3.1.1 General Sentiment Classification",
"text_level": 1,
"bbox": [
112,
499,
431,
514
],
"page_idx": 3
},
{
"type": "text",
"text": "The development of general sentiment classification reflects the ongoing paradigm shifts within the field of natural language processing (Liu et al., 2021). Early works in sentiment classification primarily relied on task-specific models, employing either traditional machine learning methods like Support Vector Machines (SVMs) or deep learning-based approaches. These models were trained on labeled data and limited to solving specific tasks.",
"bbox": [
112,
519,
489,
663
],
"page_idx": 3
},
{
"type": "text",
"text": "However, the emergence of pre-trained language models such as BERT (Devlin et al., 2019) has revolutionized the field. These models, typically based on components of the Transformer architecture, are pre-trained on massive amounts of unlabeled data and subsequently fine-tuned for specific tasks, including sentiment classification. Large language models like GPT-3 (Brown et al., 2020) further push the boundaries of model size, demonstrating the ability to acquire various emergent capabilities such as in-context learning when scaled sufficiently large (Wei et al., 2022). A systematic analysis of large language models' effectiveness in tackling various sentiment analysis tasks, including sentiment classification, is provided by (Zhang et al., 2023).",
"bbox": [
112,
664,
489,
920
],
"page_idx": 3
},
{
"type": "text",
"text": "3.1.2 Arabic Sentiment Classification",
"text_level": 1,
"bbox": [
509,
84,
818,
99
],
"page_idx": 3
},
{
"type": "text",
"text": "The development of Arabic sentiment classification follows a similar trajectory to that of general sentiment classification. Early research predominantly focused on task-specific models trained on sentiment classification datasets for Arabic text. (Dahou et al., 2016) constructed Arabic word embeddings and subsequently employed a Convolutional Neural Network (CNN) as a classifier. (Attia et al., 2018) proposed a language-independent framework for text classification, evaluating its performance on Arabic sentiment classification tasks as well. Table 2 provides detailed descriptions of various task-specific methods for Arabic sentiment classification, along with their contributions and limitations. As highlighted in the table, the biggest limitation of such methods is that they are task-specific and do not generalise well to other tasks or dialects.",
"bbox": [
507,
104,
884,
376
],
"page_idx": 3
},
{
"type": "text",
"text": "With the remarkable success of pre-trained language models based on bidirectional transformers, such as BERT (Devlin et al., 2019), on diverse natural language understanding tasks, numerous studies have explored their utilization for Arabic sentiment classification. (ElJundi et al., 2019) developed hULMonA, a pre-trained language model specifically for Arabic, and fine-tuned it for Arabic sentiment analysis. AraBERT (Antoun et al., 2020) builds upon this work by pre-training the model entirely on an Arabic corpus and evaluating its performance on various tasks. (Abdul-Mageed et al., 2021) introduced ARBERT and MARBERT, language models pre-trained on dialectal Arabic. Table 3 offers detailed descriptions of different pre-trained language model-based methods for Arabic sentiment classification.",
"bbox": [
507,
378,
885,
650
],
"page_idx": 3
},
{
"type": "text",
"text": "3.2 Multifaceted Analysis of Subjective Text (MAST)",
"text_level": 1,
"bbox": [
507,
662,
873,
694
],
"page_idx": 3
},
{
"type": "text",
"text": "Multifaceted analysis of subjective text (MAST) represents an extension of sentiment classification that delves deeper into task granularity. It shifts the focus towards more specialized tasks, such as irony detection (Zeng and Li, 2022) and comparative opinion mining (Varathan et al., 2017).",
"bbox": [
507,
699,
884,
796
],
"page_idx": 3
},
{
"type": "text",
"text": "3.2.1 General MAST",
"text_level": 1,
"bbox": [
507,
804,
692,
820
],
"page_idx": 3
},
{
"type": "text",
"text": "The development trajectory of general MAST mirrors that of general sentiment classification, as discussed in Section 3.1.1. Due to the focus on specialized tasks within MAST, it encompasses a multitude of sub-tasks. While these sub-tasks have been extensively explored in the field, a detailed de",
"bbox": [
507,
825,
884,
921
],
"page_idx": 3
},
{
"type": "page_number",
"text": "3763",
"bbox": [
480,
927,
519,
940
],
"page_idx": 3
},
{
"type": "table",
"img_path": "images/cab9882a42bc6c4fc362c4b94eb739f2c86c39f39e6924d73358dc232dca8794.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Methods</td><td>Contributions</td><td>Limitations</td><td>Accuracy</td></tr><tr><td>hULMonA (ElJundi et al., 2019)</td><td>Develops a pre-trained LM for Arabic and fine-tunes it for SA</td><td>Does not employ an Arabic specific tokenizer and only evaluates on the SA task</td><td>ASTD (86.5%), ArSenTD-LEV (52.4%)</td></tr><tr><td>AraBERT (Antoun et al., 2020)</td><td>Pre-trains an Arabic LM AraBERT and evaluates performance on different tasks</td><td>Does not systematically evaluate the model on different dialects</td><td>LABR (86.7%), ASTD (92.6%), ArSenTD-Lev (59.4%)</td></tr><tr><td>(Alkaoud and Syed, 2020)</td><td>Proposes tokenization methods for static and contextual word embeddings and improves their performance</td><td>Does not study generalisation ability of the proposed method</td><td>LABR (89.87%)</td></tr><tr><td>(Abdul-Mageed et al., 2021)</td><td>Introduces ARBERT and MARBERT, pre-trains models on dialectal Arabic, introduces ARLUE benchmark</td><td>The models have a high memory requirement, thereby impeding computational efficiency</td><td>LABR (92.51%), ASTD (95.24%), ArSenTD-Lev (61.38%)</td></tr><tr><td>(Alyafeai and Ahmad, 2021)</td><td>Employs distillation and quantization to train compact Arabic language models</td><td>The effects of hyperparameter tuning are not analysed</td><td>LABR (87.5%), ASTD (86.2%)</td></tr><tr><td>(El Mekki et al., 2021)</td><td>Introduces an unsupervised domain adaptation method for Arabic cross-domain and cross-dialect SA</td><td>Does not study the effect of domain adaptation from high-resource languages to Arabic</td><td>-</td></tr><tr><td>(Abu Kwaik et al., 2022)</td><td>Compares feature-based, deep learning and pre-trained LM based methods on dialectal Arabic SA</td><td>Lacks an error analysis and a comparison of feature-based vs pre-trained LMs in different situations</td><td>-</td></tr><tr><td>(Refai et al., 2022)</td><td>Proposes a data augmentation method for Arabic text classification using Transformer based models</td><td>Does not study the generalisation ability of their method</td><td>-</td></tr></table>",
"bbox": [
115,
137,
890,
772
],
"page_idx": 4
},
{
"type": "text",
"text": "Table 3: Pre-trained language model (LM) based methods for Arabic sentiment classification.",
"bbox": [
181,
787,
815,
801
],
"page_idx": 4
},
{
"type": "text",
"text": "hULMonA: The First Universal Language Model in Arabic",
"bbox": [
297,
802,
699,
816
],
"page_idx": 4
},
{
"type": "text",
"text": "LABR: Large Scale Arabic Book Reviews Dataset",
"bbox": [
329,
816,
665,
829
],
"page_idx": 4
},
{
"type": "text",
"text": "ASTD: Arabic Sentiment Tweets Dataset",
"bbox": [
359,
831,
636,
843
],
"page_idx": 4
},
{
"type": "text",
"text": "ArSenTD-Lev: Arabic Sentiment Twitter Dataset for the Levantine dialect",
"bbox": [
248,
845,
747,
859
],
"page_idx": 4
},
{
"type": "page_number",
"text": "3764",
"bbox": [
480,
927,
519,
940
],
"page_idx": 4
},
{
"type": "text",
"text": "scription falls outside the scope of this survey. We encourage readers to refer to comprehensive surveys on specific sub-tasks, such as those by (Zeng and Li, 2022) and (Varathan et al., 2017).",
"bbox": [
112,
84,
487,
164
],
"page_idx": 5
},
{
"type": "text",
"text": "3.2.2 Arabic MAST",
"text_level": 1,
"bbox": [
112,
175,
285,
187
],
"page_idx": 5
},
{
"type": "text",
"text": "Compared to general MAST, research on Arabic MAST remains less developed. This section will solely introduce research on Arabic sarcasm detection, as it has garnered a relatively larger body of work following the release of the ArSarcasm (Abu Farha and Magdy, 2020) and ArSarcasm-v2 (Abu Farha et al., 2021) datasets, alongside a shared task organized by WANLP (Abu Farha et al., 2021).",
"bbox": [
112,
195,
487,
323
],
"page_idx": 5
},
{
"type": "text",
"text": "While various methods have been employed, including traditional machine learning approaches, task-specific deep learning methods, and pre-trained language model-based methods, the latter category combined with various optimizations has emerged as the most effective approach. (Hengle et al., 2021) propose a hybrid model that leverages contextual representations from AraBERT (Antoun et al., 2020) alongside static word vectors.",
"bbox": [
112,
325,
487,
468
],
"page_idx": 5
},
{
"type": "text",
"text": "Additionally, recent research explores various machine learning techniques such as down-sampling and augmentation (Israeli et al., 2021) for this task. (Faraj et al., 2021) employ an ensemble approach, combining different pre-trained language models with a hard voting technique. (Talafha et al., 2021) propose framing the problem as a regression task, predicting the level of sarcasm. Table 4 provides detailed descriptions of methods for Arabic sarcasm detection, along with their contributions and limitations.",
"bbox": [
112,
470,
487,
645
],
"page_idx": 5
},
{
"type": "text",
"text": "3.3 Aspect-Based Sentiment Analysis (ABSA)",
"text_level": 1,
"bbox": [
112,
659,
487,
674
],
"page_idx": 5
},
{
"type": "text",
"text": "Aspect-based sentiment analysis (ABSA) extends sentiment analysis by introducing a finer-grained level of task granularity. Unlike sentiment classification, where the output is typically a binary or multi-class label, ABSA delves deeper, focusing on aspects within the sentiment analysis process.",
"bbox": [
112,
681,
487,
778
],
"page_idx": 5
},
{
"type": "text",
"text": "3.3.1 General ABSA",
"text_level": 1,
"bbox": [
112,
788,
290,
802
],
"page_idx": 5
},
{
"type": "text",
"text": "Similar to MAST, general ABSA encompasses various sub-tasks, ranging from simpler single ABSA tasks like aspect term extraction to more complex compound ABSA tasks such as aspect sentiment triplet extraction (Zhang et al., 2022). (Zhang et al., 2022) provide a comprehensive survey on general ABSA; we recommend referring to their work for",
"bbox": [
112,
808,
487,
920
],
"page_idx": 5
},
{
"type": "text",
"text": "further details on general trends in this direction.",
"bbox": [
507,
84,
870,
99
],
"page_idx": 5
},
{
"type": "text",
"text": "3.3.2 Arabic ABSA",
"text_level": 1,
"bbox": [
507,
115,
677,
129
],
"page_idx": 5
},
{
"type": "text",
"text": "Research on Arabic ABSA lags behind that of general ABSA. The majority of existing works in Arabic ABSA primarily address aspect sentiment classification, which essentially translates to sentiment classification applied at the aspect level. Additionally, many studies rely solely on feature-based approaches and traditional machine learning methods. This section will focus exclusively on deep learning-based methods for Arabic ABSA.",
"bbox": [
507,
137,
882,
281
],
"page_idx": 5
},
{
"type": "text",
"text": "The development of Arabic ABSA parallels that of Arabic sentiment classification, as discussed in Section 3.1.2. A growing number of studies are employing deep learning and pre-trained language model-based methods. (Al-Smadi et al., 2017) and (Alshammari and Almansour, 2020) compare traditional machine learning and deep learning methods for Arabic ABSA. (Al-Dabet et al., 2021) propose different network architectures tailored to specific Arabic ABSA tasks. (Abdelgwad et al., 2021) develops a BERT-based model for Arabic ABSA. Table 5 provides detailed descriptions of methods for Arabic ABSA.",
"bbox": [
507,
284,
882,
491
],
"page_idx": 5
},
{
"type": "text",
"text": "3.4 Comparison within Arabic Sentiment Analysis Methods",
"text_level": 1,
"bbox": [
507,
508,
850,
541
],
"page_idx": 5
},
{
"type": "text",
"text": "Although task-specific methods are dedicated to ASA tasks, the most effective methods are those that combine pre-trained language models and various optimisation techniques. hULMonA (ElJundi et al., 2019) first demonstrated the effectiveness of pre-trained language models by developing a pre-trained LM for Arabic and fine-tuning it for ASA, which significantly improved performance. Later pre-trained Arabic LMs such as AraBERT (Antoun et al., 2020) and ARBERT (Abdul-Mageed et al., 2021) further pushed the boundary of ASA by using Arabic specific tokenisation and pre-training models on dialectal Arabic.",
"bbox": [
507,
549,
882,
757
],
"page_idx": 5
},
{
"type": "text",
"text": "Another research direction extends the existing methods using various approaches such as domain adaptation (El Mekki et al., 2021) and data augmentation (Refai et al., 2022). However, the models based on pre-trained LMs are not computationally efficient, and involve a significant computational overhead, whereas approaches such as (Alyafeai and Ahmad, 2021) maintain the balance between performance and efficiency by distillation and quantisation.",
"bbox": [
507,
760,
882,
919
],
"page_idx": 5
},
{
"type": "page_number",
"text": "3765",
"bbox": [
480,
928,
519,
940
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/d1525b4482376447c39da77e176030d076ccc9ec78e7e21683ab070bf30e9fa9.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Methods</td><td>Contributions</td><td>Limitations</td><td>Accuracy</td></tr><tr><td>(Hengle et al., 2021)</td><td>Proposes a hybrid model which combines contextual representations from AraBERT (Antoun et al., 2020) and static word vectors</td><td>Hybrid model, computational efficiency is impeded owing to use of both contextual and static embeddings.</td><td>74.1%</td></tr><tr><td>(Israeli et al., 2021)</td><td>Employs pre-trained Transformer based models and various machine learning techniques such as down-sampling and augmentation</td><td>Does not explain the effects of these techniques</td><td>76.7%</td></tr><tr><td>(Talafha et al., 2021)</td><td>Annotates an Arabic sarcasm detection dataset, trains a regression model and predicts the level of sarcasm</td><td>While the model can output the level of sarcasm, it is reliant on a binary classification dataset</td><td>-</td></tr><tr><td>(Khondaker et al., 2022)</td><td>Applies contrastive learning to Arabic social meaning tasks</td><td>Does not study the generalisation ability of their method</td><td>-</td></tr><tr><td>(Faraj et al., 2021)</td><td>Ensembles different pre-trained language models and employs hard voting technique</td><td>The method is not efficient as it needs various pre-trained language models.</td><td>78.3%</td></tr><tr><td>(El Mahdaouy et al., 2021)</td><td>Proposes an end-to-end multi-task model for Arabic sentiment analysis and sarcasm detection</td><td>Does not present experimentation on other tasks</td><td>76.8%</td></tr><tr><td>(Kaseb and Farouk, 2022)</td><td>Proposes SAIDS that uses its prediction of sarcasm and dialect as known information to predict the sentiment</td><td>Does not study the generalisation ability of their method</td><td>-</td></tr></table>",
"bbox": [
115,
101,
900,
592
],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/8c1ec3712036095fbb60eb310c5b969a7fc924c282eef323673701f84e149c77.jpg",
"table_caption": [
"Table 4: Methods for Arabic sarcasm detection. The accuracy is evaluated on the ArSarcasm-v2 dataset."
],
"table_footnote": [],
"table_body": "<table><tr><td>Methods</td><td>Contributions</td><td>Limitations</td></tr><tr><td>(Ashi et al., 2019)</td><td>Employs pre-trained word embeddings for Arabic ABSA</td><td>Only uses traditional machine learning methods as classifier</td></tr><tr><td>(Al-Smadi et al., 2017)</td><td>Compares RNN and SVM for Arabic ABSA</td><td>The dataset is relatively small, does not use other deep learning models such as LSTM</td></tr><tr><td>(Alshammari and Almansour, 2020)</td><td>Compares CNN and traditional machine learning methods for Arabic ABSA</td><td>Task-specific method, does not compare different deep learning methods</td></tr><tr><td>(Al-Dabet et al., 2021)</td><td>Proposes different network architectures for various Arabic ABSA tasks</td><td>Task-specific method</td></tr><tr><td>(Abdelgwad et al., 2021)</td><td>Develops a BERT based model for Arabic ABSA</td><td>Does not present experimentation on other tasks</td></tr></table>",
"bbox": [
115,
668,
895,
866
],
"page_idx": 6
},
{
"type": "text",
"text": "Table 5: Methods for Arabic aspect based sentiment analysis (ABSA).",
"bbox": [
|
| 887 |
+
255,
|
| 888 |
+
881,
|
| 889 |
+
739,
|
| 890 |
+
897
|
| 891 |
+
],
|
| 892 |
+
"page_idx": 6
|
| 893 |
+
},
|
| 894 |
+
{
|
| 895 |
+
"type": "page_number",
|
| 896 |
+
"text": "3766",
|
| 897 |
+
"bbox": [
|
| 898 |
+
480,
|
| 899 |
+
928,
|
| 900 |
+
521,
|
| 901 |
+
940
|
| 902 |
+
],
|
| 903 |
+
"page_idx": 6
|
| 904 |
+
},
{
"type": "text",
"text": "3.5 Gaps and challenges in Arabic Sentiment Analysis",
"text_level": 1,
"bbox": [
112,
84,
485,
116
],
"page_idx": 7
},
{
"type": "text",
"text": "3.5.1 Gaps",
"text_level": 1,
"bbox": [
114,
123,
218,
139
],
"page_idx": 7
},
{
"type": "text",
"text": "This section outlines key research gaps between Arabic sentiment analysis and general sentiment analysis across three dimensions:",
"bbox": [
112,
145,
487,
192
],
"page_idx": 7
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- Modality: Multimodality has recently garnered significant interest within general sentiment analysis, with a surge in research activity (Lai et al., 2023). However, investigations into multi-modal Arabic sentiment analysis remain limited. Most datasets for Arabic sentiment analysis are restricted to the text modality.",
"- Granularity: The majority of research in Arabic sentiment analysis focuses solely on Arabic sentiment classification. As evidenced in the previous sections, even studies exploring Arabic MAST and Arabic ABSA often target simpler tasks. Consequently, Arabic sentiment analysis lags behind general sentiment analysis in terms of MAST and ABSA tasks.",
"- Context: While datasets for general sentiment analysis encompass various levels ranging from document level to aspect level, most datasets for Arabic sentiment analysis remain at the document level. Even some recently released datasets lack annotations at sentence level and aspect level."
],
"bbox": [
136,
208,
489,
594
],
"page_idx": 7
},
{
"type": "text",
"text": "3.5.2 Challenges",
"text_level": 1,
"bbox": [
114,
609,
260,
624
],
"page_idx": 7
},
{
"type": "text",
"text": "The Arabic language is characterized by its high morphological complexity, which entails intricate word formation processes that may obscure meaning (Habash, 2010). Additionally, the high degree of ambiguity and polysemy inherent in the Arabic lexicon complicates semantic interpretation. The presence of negation and the extensive range of dialects further exacerbate these challenges, as they introduce variations that must be meticulously accounted for in linguistic models (El-Beltagy and Ali, 2013).",
"bbox": [
112,
630,
489,
806
],
"page_idx": 7
},
{
"type": "text",
"text": "Data scarcity and cultural contextualization present additional challenges for Arabic. There is a scarcity of large, labeled datasets for many dialects, making it difficult to train robust models. Moreover, sentiment expression can vary significantly based on cultural nuances, requiring models to understand context beyond mere text.",
"bbox": [
112,
809,
489,
921
],
"page_idx": 7
},
{
"type": "text",
"text": "4 Recent trends in Arabic Sentiment Analysis",
"text_level": 1,
"bbox": [
507,
83,
842,
117
],
"page_idx": 7
},
{
"type": "text",
"text": "Several research efforts are ongoing to develop robust Arabic-specific methods and overcome the challenges presented in Section 3.5. We organise and present these efforts below:",
"bbox": [
507,
126,
882,
190
],
"page_idx": 7
},
{
"type": "text",
"text": "4.1 Addressing Dialectal Variations",
"text_level": 1,
"bbox": [
507,
202,
805,
217
],
"page_idx": 7
},
{
"type": "text",
"text": "The issue of dialectal variation has received significant attention in both task-specific and pre-trained language model-based approaches. (Baly et al., 2017) conducted a characterization study analyzing tweets from different Arab regions, highlighting the importance of addressing dialectal problems in Arabic SA. The efforts tackling this challenge are presented below, grouped by the broad approach employed:",
"bbox": [
507,
223,
884,
367
],
"page_idx": 7
},
{
"type": "text",
"text": "4.1.1 Dataset Creation, Domain Adaptation and Data Augmentation",
"text_level": 1,
"bbox": [
507,
376,
867,
409
],
"page_idx": 7
},
{
"type": "text",
"text": "(Medhaffar et al., 2017) addressed this challenge by annotating a corpus specifically for the Tunisian dialect and evaluating their models on data from various dialects. (Guellil et al., 2018) took a similar approach, presenting a method for automatically constructing an Algerian dialect corpus. (El Mekki et al., 2021) introduced an unsupervised domain adaptation method for cross-domain and cross-dialect sentiment analysis in Arabic. (Refai et al., 2022) proposed a data augmentation method specifically designed for Arabic classification tasks using transformer-based models.",
"bbox": [
507,
413,
884,
621
],
"page_idx": 7
},
{
"type": "text",
"text": "4.1.2 Increasing use of Deep learning",
"text_level": 1,
"bbox": [
507,
632,
816,
646
],
"page_idx": 7
},
{
"type": "text",
"text": "Researchers are increasingly using deep learning models, particularly transformer-based models, to effectively capture the nuances of different Arabic dialects. For example, ARBERT and MARBERT (Abdul-Mageed et al., 2021) were specifically pretrained on dialectal Arabic to address these dialectal variations.",
"bbox": [
507,
651,
882,
762
],
"page_idx": 7
},
{
"type": "text",
"text": "4.1.3 Transfer Learning and Multilingual Models",
"text_level": 1,
"bbox": [
507,
772,
852,
803
],
"page_idx": 7
},
{
"type": "text",
"text": "Transfer learning approaches are being used to leverage knowledge from models trained on larger datasets in other languages or MSA, facilitating better performance on dialect data with limited resources. Multilingual transformer models like mBERT are also applied for handling multiple Arabic dialects (Devlin et al., 2019).",
"bbox": [
507,
808,
884,
920
],
"page_idx": 7
},
{
"type": "page_number",
"text": "3767",
"bbox": [
480,
927,
519,
940
],
"page_idx": 7
},
{
"type": "text",
"text": "4.2 Arabic-specific Tokenization",
"text_level": 1,
"bbox": [
112,
84,
384,
99
],
"page_idx": 8
},
{
"type": "text",
"text": "Recent research has also explored the importance of developing tokenization methods specifically for the Arabic language. (Alyafeai et al., 2021) compared the performance of different tokenizers for various Arabic classification tasks. (Alkaoud and Syed, 2020) proposed tokenization strategies specifically tailored for both static and contextual Arabic word embeddings, demonstrating significant performance improvements. The efforts in this direction can be grouped into the following trends:",
"bbox": [
112,
108,
489,
268
],
"page_idx": 8
},
{
"type": "text",
"text": "Morphological Analysis: Implementation of advanced morphological analysis tools to accurately identify roots, prefixes, and suffixes, ensuring proper tokenization of complex words. Noteworthy contributions in this area include the MADAMIRA tool, which provides robust morphological analysis and disambiguation for modern written Arabic, showcasing significant improvement in processing complex Arabic morphological structures (Pasha et al., 2014).",
"bbox": [
112,
271,
489,
431
],
"page_idx": 8
},
{
"type": "text",
"text": "Dialect-Specific Tokenizers: Development of tokenization models tailored to specific Arabic dialects to handle regional vocabulary and expressions effectively. The CALIMA-Star project exemplifies efforts to create comprehensive morphological lexicons specific to different Arabic dialects, allowing more precise tokenization and analysis for dialectal data (Taji et al., 2018).",
"bbox": [
112,
432,
489,
562
],
"page_idx": 8
},
{
"type": "text",
"text": "Contextual Tokenization: Use of context-aware tokenization methods to understand the meaning of words in context, assisting in disambiguating similar words. Contextual models like AraBERT and its advancements in tokenization strategies demonstrate the power of context-aware embeddings to capture nuanced language variations in sentiment analysis (Antoun et al., 2020).",
"bbox": [
112,
564,
489,
690
],
"page_idx": 8
},
{
"type": "text",
"text": "Incorporating Diacritics: Designing tokenizers that handle diacritics appropriately, either by retaining them for analysis or by normalizing words without diacritics while preserving meaning. Research by (Alqahtani et al., 2020) highlights the role of diacritics in enhancing sentiment analysis, emphasizing the necessity for tokenizers that efficiently process diacritized text data without losing critical semantic information.",
"bbox": [
112,
694,
489,
837
],
"page_idx": 8
},
{
"type": "text",
"text": "While these trends have demonstrated improved performance for ASA, significant research efforts need to be directed in order to bridge the gap between ASA and general SA for high-resource languages.",
"bbox": [
112,
841,
489,
921
],
"page_idx": 8
},
{
"type": "text",
"text": "5 Future Directions",
"text_level": 1,
"bbox": [
509,
84,
697,
98
],
"page_idx": 8
},
{
"type": "text",
"text": "To conclude, we present promising research directions to foster the development of robust models for Arabic sentiment analysis.",
"bbox": [
507,
108,
882,
155
],
"page_idx": 8
},
{
"type": "text",
"text": "Creation of richer datasets: Future efforts should prioritize the development of datasets that encompass richer annotations across the following dimensions:",
"bbox": [
507,
156,
884,
218
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- Multimodality: Datasets that integrate various modalities (text, audio, video) to capture richer sentiment information.",
"- Fine-grained tasks: Datasets designed for exploring more intricate sentiment analysis tasks beyond sentiment classification.",
"- Multi-context annotations: Datasets with annotations at finer levels (sentence level, aspect level) to facilitate in-depth analysis."
],
"bbox": [
531,
217,
882,
370
],
"page_idx": 8
},
{
"type": "text",
"text": "Multimodal Sentiment Analysis: While limited research has been conducted in multimodal ASA, leveraging information from multiple modalities holds significant potential for accurate sentiment analysis. Future research should explore effective techniques for incorporating multimodal data and develop robust models for this task.",
"bbox": [
507,
374,
882,
485
],
"page_idx": 8
},
{
"type": "text",
"text": "Large Language Models (LLMs) for ASA: Recent advancements in LLMs have yielded remarkable performance on various tasks. Arabic LLMs like AceGPT (Huang et al., 2023) have also emerged. However, a systematic analysis of LLMs for sentiment analysis, particularly in the context of Arabic, is lacking. Future research should investigate how to best utilize LLMs for Arabic sentiment analysis.",
"bbox": [
507,
487,
882,
631
],
"page_idx": 8
},
{
"type": "text",
"text": "Interpretable Sentiment Analysis: Existing Arabic sentiment analysis methods primarily provide final sentiment labels without explanations for their outputs. Recent work on improving the interpretability of question answering by examining model reasoning (Huang and Chang, 2023) suggests a promising approach that can be adapted to sentiment analysis. For example, models could be designed to output reasoning steps leading to their final sentiment polarity predictions.",
"bbox": [
507,
632,
882,
791
],
"page_idx": 8
},
{
"type": "text",
"text": "Exploration of Fine-Grained Tasks: General sentiment analysis research has shifted towards increasingly fine-grained tasks. However, most Arabic sentiment analysis studies remain focused on sentiment classification at the document level. A systematic exploration of other fine-grained tasks, particularly those within MAST and ABSA, would be beneficial for advancing the field.",
"bbox": [
507,
793,
882,
920
],
"page_idx": 8
},
{
"type": "page_number",
"text": "3768",
"bbox": [
480,
927,
519,
940
],
"page_idx": 8
},
{
"type": "text",
"text": "6 Limitations",
"text_level": 1,
"bbox": [
114,
84,
250,
98
],
"page_idx": 9
},
{
"type": "text",
"text": "This survey acknowledges some limitations. Firstly, it primarily focuses on works utilizing deep learning methods. As highlighted in (Abu Kwaik et al., 2022), feature-based methods can outperform pretrained language model-based methods in some instances. Future surveys may benefit from including an exploration of feature-based approaches. Additionally, while this work compares Arabic sentiment analysis with general sentiment analysis, it would also be valuable to situate Arabic sentiment analysis within the broader context of Arabic classification tasks and Arabic natural language processing tasks in general.",
"bbox": [
112,
108,
492,
318
],
"page_idx": 9
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [
114,
344,
213,
359
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Mohammed M. Abdelgwad, Taysir Hassan A. Soliman, and Ahmed I. Taloba. 2021. Arabic aspect sentiment polarity classification using BERT. Journal of Big Data, 9:1-15.",
"Huda Jamal Abdelhameed and Susana Muñoz-Hernández. 2017. Emotion and opinion retrieval from social media in Arabic language: Survey. In 2017 Joint International Conference on Information and Communication Technologies for Education and Training and International Conference on Computing in Arabic (ICCA-TICET), pages 1-8.",
"Youmna Abdelwahab, Mohamed Hamed Kholief, and Ahmed Ahmed Hesham Sedky. 2022. Justifying Arabic text sentiment analysis using Explainable AI (XAI): LASIK surgeries case study. Inf., 13:536.",
"Muhammad Abdul-Mageed, AbdelRahim Elmadany, and El Moatez Billah Nagoudi. 2021. ARBERT & MARBERT: Deep bidirectional transformers for Arabic. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7088-7105, Online. Association for Computational Linguistics.",
"Tareq Al-Moslmi, Mohammed Albared, Adel Al-Shabi, Nazlia Omar, and Salwani Abdullah. 2018. Arabic senti-lexicon: Constructing publicly available language resources for Arabic sentiment analysis. Journal of Information Science, 44(3):345-362.",
"Mohamed Elhag Mohamed Abo, Ram Gopal Raj, and Atika Qazi. 2019. A review on Arabic sentiment analysis: State-of-the-art, taxonomy and open research challenges. IEEE Access, 7:162008-162024.",
"Ibrahim Abu Farha and Walid Magdy. 2020. From Arabic sentiment analysis to sarcasm detection: The ArSarcasm dataset. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language"
],
"bbox": [
115,
366,
489,
920
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Detection, pages 32-39, Marseille, France. European Language Resource Association.",
"Ibrahim Abu Farha, Wajdi Zaghouani, and Walid Magdy. 2021. Overview of the WANLP 2021 shared task on sarcasm and sentiment detection in Arabic. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 296-305, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.",
"Kathrein Abu Kwaik, Stergios Chatzikyriakidis, and Simon Dobnik. 2022. Pre-trained models or feature engineering: The case of dialectal Arabic. In Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur'an QA and Fine-Grained Hate Speech Detection, pages 41-50, Marseille, France. European Language Resources Association.",
"Mahmoud Al-Ayyoub, Abed Allah Khamaiseh, Yaser Jararweh, and Mohammed N. Al-Kabi. 2019. A comprehensive survey of Arabic sentiment analysis. Information Processing and Management, 56(2):320-342. Advance Arabic Natural Language Processing (ANLP) and its Applications.",
"Saja Al-Dabet, Sara Tedmori, and Mohammad Al-Smadi. 2021. Enhancing Arabic aspect-based sentiment analysis using deep learning models. Comput. Speech Lang., 69:101224.",
"Lamia Al-Horaibi and Muhammad Badruddin Khan. 2016. Sentiment analysis of Arabic tweets using text mining techniques. In International Workshop on Pattern Recognition, 2016.",
"Mohammad Al-Smadi, Omar Qawasneh, Mahmoud Al-Ayyoub, Yaser Jararweh, and Brij Bhooshan Gupta. 2017. Deep Recurrent Neural Network vs. Support Vector Machine for aspect-based sentiment analysis of Arabic hotels' reviews. J. Comput. Sci., 27:386-393.",
"Nora Al-Twairesh, Hend Al-Khalifa, and AbdulMalik Al-Salman. 2014. Subjectivity and sentiment analysis of Arabic: Trends and challenges. In 2014 IEEE/ACS 11th International Conference on Computer Systems and Applications (AICCSA), pages 148-155.",
"Mohamed Alkaoud and Mairaj Syed. 2020. On the importance of tokenization in Arabic embedding models. In Proceedings of the Fifth Arabic Natural Language Processing Workshop, pages 119-129, Barcelona, Spain (Online). Association for Computational Linguistics.",
"Felipe Almeida and Geraldo Bonorino Xexéo. 2019. Word embeddings: A survey. ArXiv, abs/1901.09069.",
"Latifah Almurqren, Ryan Hodgson, and A Ioana Cristea. 2024. Arabic text sentiment analysis: Reinforcing human-performed surveys with wider topic analysis. ArXiv, abs/2403.01921."
],
"bbox": [
510,
85,
884,
920
],
"page_idx": 9
},
{
"type": "page_number",
"text": "3769",
"bbox": [
480,
928,
519,
940
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Sawsan Alqahtani, Ajay Mishra, and Mona Diab. 2020. A multitask learning approach for diacritic restoration. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8238-8247, Online. Association for Computational Linguistics.",
"Norah Fahad Alshammari and Amal Almansour. 2020. Aspect-based sentiment analysis for Arabic content in social media. 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), pages 1-6.",
"Merfat M. Altawaier and Sabrina Tiun. 2016. Comparison of machine learning approaches on Arabic Twitter sentiment analysis. International Journal on Advanced Science, Engineering and Information Technology, 6:1067-1073.",
"Mohamed Aly and Amir Atiya. 2013. LABR: A large scale Arabic book reviews dataset. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 494-498, Sofia, Bulgaria. Association for Computational Linguistics.",
"Zaid Alyafeai and Irfan Ahmad. 2021. Arabic compact language modelling for resource limited devices. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 53-59, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.",
"Zaid Alyafeai, Maged S. Al-Shaibani, Mustafa Ghaleb, and Irfan Ahmad. 2021. Evaluating various tokenizers for Arabic text classification. Neural Processing Letters, 55:2911-2933.",
"Shela Ambreen, Muhammad Iqbal, Muhammad Zubair Asghar, Tehseen Mazhar, Umar Farooq Khattak, Muhammad Amir Khan, and Habib Hamam. 2024. Predicting customer sentiment: The fusion of deep learning and a fuzzy system for sentiment analysis of Arabic text. Soc. Netw. Anal. Min., 14:206.",
"Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for Arabic language understanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 9-15, Marseille, France. European Language Resource Association.",
"Mohammed Ashi, Muazzam Siddiqui, and Farrukh Nadeem. 2019. Pre-trained Word Embeddings for Arabic Aspect-Based Sentiment Analysis of Airline Tweets, pages 241-251. Springer.",
"Md. Atabuzzaman, Md Shajalal, Maksuda Bilkis Baby, and Alexander Boden. 2023. Arabic sentiment analysis with noisy deep explainable model. In International Conference on Natural Language Processing and Information Retrieval.",
"Mohammed Attia, Younes Samih, Ali Elkahky, and Laura Kallmeyer. 2018. Multilingual multi-class sentiment classification using convolutional neural networks. In Proceedings of the Eleventh International"
],
"bbox": [
115,
85,
489,
919
],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).",
"Gilbert Badaro, Ramy Baly, Hazem Hajj, Nizar Habash, and Wassim El-Hajj. 2014. A large scale Arabic sentiment lexicon for Arabic opinion mining. In Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), pages 165-173, Doha, Qatar. Association for Computational Linguistics.",
"Ramy Baly, Gilbert Badaro, Georges El-Khoury, Rawan Moukalled, Rita Aoun, Hazem Hajj, Wassim El-Hajj, Nizar Habash, and Khaled Shaban. 2017. A characterization study of Arabic Twitter data with a benchmarking for state-of-the-art opinion mining models. In Proceedings of the Third Arabic Natural Language Processing Workshop, pages 110-118, Valencia, Spain. Association for Computational Linguistics.",
"Ramy Baly, Alaa Khaddaj, Hazem M. Hajj, Wassim El-Hajj, and Khaled Bashir Shaban. 2019. ArSentD-LEV: A multi-topic corpus for target-based sentiment analysis in Arabic Levantine tweets. CoRR, abs/1906.01830.",
"Mariam Biltawi, Wael Etaiwi, Sara Tedmori, Amjad Hudaib, and Arafat Awajan. 2016. Sentiment classification techniques for Arabic language: A survey. In 2016 7th International Conference on Information and Communication Systems (ICICS), pages 339-346.",
"Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. ArXiv, abs/2005.14165.",
"Abdelghani Dahou, Shengwu Xiong, Junwei Zhou, Mohamed Houcine Haddoud, and Pengfei Duan. 2016. Word embeddings and convolutional neural network for Arabic sentiment classification. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2418-2427, Osaka, Japan. The COLING 2016 Organizing Committee.",
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics."
],
"bbox": [
510,
85,
882,
920
],
"page_idx": 10
},
{
"type": "page_number",
"text": "3770",
"bbox": [
480,
928,
519,
940
],
"page_idx": 10
},
|
| 1445 |
+
{
|
| 1446 |
+
"type": "list",
|
| 1447 |
+
"sub_type": "ref_text",
|
| 1448 |
+
"list_items": [
|
| 1449 |
+
"Rehab M. Duwairi, Mosab Alfaqeh, Mohammad Wardat, and Areen Alrabadi. 2016. Sentiment analysis for Arabizi text. In 2016 7th International Conference on Information and Communication Systems (ICICS), pages 127-132.",
|
| 1450 |
+
"Samhaa R. El-Beltagy and Ahmed Ali. 2013. Open issues in the sentiment analysis of Arabic social media: a case study. In 2013 IEEE International Conference on Innovations in Information Technology (IIT), pages 215-220. IEEE.",
|
| 1451 |
+
"Abdelkader El Mahdaouy, Abdellah El Mekki, Kabil Essefar, Nabil El Mamoun, Ismail Berrada, and Ahmed Khoumsi. 2021. Deep multi-task model for sarcasm detection and sentiment analysis in Arabic language. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 334-339, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.",
|
| 1452 |
+
"Abdollah El Mekki, Abdelkader El Mahdaouy, Ismail Berrada, and Ahmed Khoumsi. 2021. Domain adaptation for Arabic cross-domain and cross-dialect sentiment analysis from contextualized word embedding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2824-2837, Online. Association for Computational Linguistics.",
|
| 1453 |
+
"Obeida ElJundi, Wissam Antoun, Nour El Droubi, Hazem Hajj, Wassim El-Hajj, and Khaled Shaban. 2019. hULMonA: The universal language model in Arabic. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 68-77, Florence, Italy. Association for Computational Linguistics.",
|
| 1454 |
+
"Yang Fang and Cheng Xu. 2024. ArSen-20: A new benchmark for Arabic sentiment detection. In 5th Workshop on African Natural Language Processing.",
|
| 1455 |
+
"Dalya Faraj, Dalya Faraj, and Malak Abdullah. 2021. SarcasmDet at sarcasm detection task 2021 in Arabic using AraBERT pretrained model. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 345-350, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.",
|
| 1456 |
+
"Imane Guellil, Ahsan Adeel, Faical Azouaou, and Amir Hussain. 2018. SentiALG: Automated corpus annotation for Algerian sentiment analysis. ArXiv, abs/1808.05079.",
|
| 1457 |
+
"Nizar Habash. 2010. Introduction to Arabic Natural Language Processing. Morgan & Claypool Publishers.",
|
| 1458 |
+
"Abdelhamid Haouhat, Slimane Bellaouar, Attia Nehar, and Hadda Cherroun. 2023. Towards Arabic multimodal dataset for sentiment analysis. 2023 Fourth International Conference on Intelligent Data Science Technologies and Applications (IDSTA), pages 126-133."
|
| 1459 |
+
],
"bbox": [
115,
85,
489,
919
],
"page_idx": 11
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Maha Heikal, Marwan Torki, and Nagwa M. El-Makky. 2018. Sentiment analysis of Arabic tweets using deep learning. In International Conference on Arabic Computational Linguistics.",
|
| 1473 |
+
"Amey Hangle, Atharva Kshirsagar, Shaily Desai, and Manisha Marathe. 2021. Combining context-free and contextualized representations for Arabic sarcasm detection and sentiment identification. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 357-363, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.",
|
| 1474 |
+
"Huang Huang, Fei Yu, Jianqing Zhu, Xuening Sun, Hao Cheng, Dingjie Song, Zhihong Chen, Abdulmohsen Alharthi, Bang An, Ziche Liu, Zhiyi Zhang, Junying Chen, Jianquan Li, Benyou Wang, Lian Zhang, Ruoyu Sun, Xiang Wan, Haizhou Li, and Jinchao Xu. 2023. AceGPT, localizing large language models in Arabic. ArXiv, abs/2309.12053.",
|
| 1475 |
+
"Jie Huang and Kevin Chen-Chuan Chang. 2023. Towards reasoning in large language models: A survey. In *Findings of the Association for Computational Linguistics: ACL* 2023, pages 1049–1065, Toronto, Canada. Association for Computational Linguistics.",
|
| 1476 |
+
"Abraham Israeli, Yotam Nahum, Shai Fine, and Kfir Bar. 2021. The IDC system for sentiment classification and sarcasm detection in Arabic. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 370-375, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.",
|
| 1477 |
+
"Abdelrahman Kaseb and Mona Farouk. 2022. SAIDS: A novel approach for sentiment analysis informed of dialect and sarcasm. In Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP), pages 22-30, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.",
|
| 1478 |
+
"Md Tawkat Islam Khondaker, El Moatez Billah Nagoudi, AbdelRahim Elmadany, Muhammad Abdul-Mageed, and Laks Lakshmanan, V.S. 2022. A benchmark study of contrastive learning for Arabic social meaning. In Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP), pages 63-75, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.",
|
| 1479 |
+
"Songning Lai, Hao Xu, Xifeng Hu, Zhaoxia Ren, and Zhi Liu. 2023. Multimodal sentiment analysis: A survey. *ArXiv*, abs/2305.07611.",
|
| 1480 |
+
"Zhenyu Li and Zongfeng Zou. 2024. Punctuation and lexicon aid representation: A hybrid model for short text sentiment analysis on social media platform. Journal of King Saud University - Computer and Information Sciences, 36(3):102010.",
|
| 1481 |
+
"Bing Liu. 2015. Sentiment Analysis: Mining Opinions, Sentiments, and Emotions. Cambridge University Press."
|
| 1482 |
+
],
"bbox": [
510,
85,
884,
917
],
"page_idx": 11
},
{
"type": "page_number",
"text": "3771",
"bbox": [
480,
928,
517,
940
],
"page_idx": 11
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55:1-35.",
|
| 1507 |
+
"Salima Medhaffar, Fethi Bougares, Yannick Esteve, and Lamia Hadrich-Belguith. 2017. Sentiment analysis of Tunisian dialects: Linguistic resources and experiments. In Proceedings of the Third Arabic Natural Language Processing Workshop, pages 55-61, Valencia, Spain. Association for Computational Linguistics.",
|
| 1508 |
+
"Mahmoud Nabil, Mohamed Aly, and Amir Atiya. 2015. ASTD: Arabic sentiment tweets dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2515-2519, Lisbon, Portugal. Association for Computational Linguistics.",
|
| 1509 |
+
"Arfath Pasha et al. 2014. Madamira: A fast, comprehensive tool for morphological analysis and disambiguation of arabic. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC).",
|
| 1510 |
+
"Wisam A. Qader, Musa M. Ameen, and Bilal I. Ahmed. 2019. An overview of Bag of Words: Importance, implementation, applications, and challenges. In 2019 International Engineering Conference (IEC), pages 200-204.",
|
| 1511 |
+
"Alaa Rahma, Shahira Shaaban Azab, and Ammar Mohammed. 2023. A comprehensive survey on Arabic sarcasm detection: Approaches, challenges and future trends. IEEE Access, 11:18261-18280.",
|
| 1512 |
+
"Dania Refai, Saleh Abu-Soud, and Mohammad J. Abdel-Rahman. 2022. Data augmentation using transformers and similarity measures for improving Arabic text classification. IEEE Access, 11:132516-132531.",
|
| 1513 |
+
"Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"Why should I trust you?\": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.",
|
| 1514 |
+
"Claude Sammut and Geoffrey I. Webb, editors. 2010. TF-IDF, pages 986-987. Springer US, Boston, MA.",
|
| 1515 |
+
"Dima Taji, Salam Khalifa, Ossama Obeid, Fadhl Eryani, and Nizar Habash. 2018. An Arabic morphological analyzer and generator with copious features. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 140-150, Brussels, Belgium. Association for Computational Linguistics.",
|
| 1516 |
+
"Bashar Talafha, Muhy Eddin Za'ter, Samer Suleiman, Mahmoud Al-Ayyoub, and Mohammed N. Al-Kabi. 2021. Sarcasm detection and quantification in Arabic tweets. 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), pages 1121-1125."
|
| 1517 |
+
],
"bbox": [
115,
85,
489,
917
],
"page_idx": 12
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"A. Maurits van der Veen and Erik Bleich. 2025. The advantages of lexicon-based sentiment analysis in an age of machine learning. PLOS ONE, 20(1):1-19.",
|
| 1531 |
+
"Kasturi Dewi Varathan, Anastasia Giachanou, and Fabio Crestani. 2017. Comparative opinion mining: A review. Journal of the Association for Information Science and Technology, 68(4):811-829.",
|
| 1532 |
+
"Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed Huai hsin Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent abilities of large language models. ArXiv, abs/2206.07682.",
|
| 1533 |
+
"Rong Xiang, Emmanuele Chersoni, Qin Lu, Chu-Ren Huang, Wenjie Li, and Yunfei Long. 2021. Lexical data augmentation for sentiment analysis. Journal of the Association for Information Science and Technology, 72:1432 - 1447.",
|
| 1534 |
+
"Qingcheng Zeng and An-Ran Li. 2022. A survey in automatic irony processing: Linguistic, cognitive, and multi-X perspectives. In Proceedings of the 29th International Conference on Computational Linguistics, pages 824-836, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.",
|
| 1535 |
+
"Lei Zhang, Riddhiman Ghosh, Mohamed Dekhil, Meichun Hsu, and B. Liu. 2011. Combining lexicon-based and learning-based methods for twitter sentiment analysis. In HP Labs Technical Reports.",
|
| 1536 |
+
"Wenxuan Zhang, Yue Deng, Bing-Quan Liu, Sinno Jialin Pan, and Lidong Bing. 2023. Sentiment analysis in the era of large language models: A reality check. *ArXiv*, abs/2305.15005.",
|
| 1537 |
+
"Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, and Wai Lam. 2022. A survey on aspect-based sentiment analysis: Tasks, methods, and challenges. IEEE Transactions on Knowledge and Data Engineering, 35:11019-11038."
|
| 1538 |
+
],
"bbox": [
510,
85,
882,
667
],
"page_idx": 12
},
{
"type": "page_number",
"text": "3772",
"bbox": [
480,
928,
519,
940
],
"page_idx": 12
}
]
2025/A Comprehensive Survey of Contemporary Arabic Sentiment Analysis_ Methods, Challenges, and Future Directions/3e5ad7ac-2ae2-4123-b27f-008e22e87b8d_model.json
ADDED
The diff for this file is too large to render.
See raw diff
2025/A Comprehensive Survey of Contemporary Arabic Sentiment Analysis_ Methods, Challenges, and Future Directions/3e5ad7ac-2ae2-4123-b27f-008e22e87b8d_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:746673dc0efee3baba802c3533f4c43687524c94d396c2422ee5fba83e7556e1
size 249777
2025/A Comprehensive Survey of Contemporary Arabic Sentiment Analysis_ Methods, Challenges, and Future Directions/full.md
ADDED
@@ -0,0 +1,302 @@
# A Comprehensive Survey of Contemporary Arabic Sentiment Analysis: Methods, Challenges, and Future Directions

Zhiqiang Shi

University of Edinburgh shizhiqiang126@163.com

Ruchit Agrawal

University of Birmingham r.r.agrawal@bham.ac.uk
# Abstract

Sentiment Analysis, a popular subtask of Natural Language Processing, employs computational methods to extract sentiment, opinions, and other subjective aspects from linguistic data. Given its crucial role in understanding human sentiment, research in sentiment analysis has witnessed significant growth in recent years. However, the majority of approaches target the English language, and research on Arabic sentiment analysis remains relatively unexplored. This paper presents a comprehensive and contemporary survey of Arabic Sentiment Analysis, identifies the challenges and limitations of existing literature in this field, and presents avenues for future research. We present a systematic review of Arabic sentiment analysis methods, focusing specifically on research utilizing deep learning. We then situate Arabic Sentiment Analysis within the broader context, highlighting research gaps in Arabic sentiment analysis as compared to general sentiment analysis. Finally, we outline the main challenges and promising future directions for research in Arabic sentiment analysis.
# 1 Introduction

Sentiment Analysis (SA), also referred to as opinion mining, leverages computational models to extract individuals' sentiments and opinions from data (Liu, 2015). This field has garnered significant attention from both academic and industrial sectors, as evidenced by the multitude of studies conducted to comprehend human sentiment (Liu, 2015), (Zeng and Li, 2022), (Varathan et al., 2017). However, although Arabic is a widely spoken language with over 372 million speakers across the globe, the volume of research dedicated to Arabic Sentiment Analysis (ASA) remains considerably lower compared to high-resource languages such as English and French. This study presents a systematic review of existing literature on Arabic sentiment analysis, with a particular focus on approaches that employ deep learning methodologies.

Several prior surveys on Arabic sentiment analysis (ASA) exist (Almurqren et al., 2024), (Al-Ayyoub et al., 2019), (Al-Twairesh et al., 2014), (Biltawi et al., 2016), (Abdelhameed and Muñoz-Hernández, 2017), (Abo et al., 2019), (Rahma et al., 2023). However, the majority of these do not cover contemporary deep learning based methods. Additionally, they do not present a detailed analysis of the gaps, challenges and future directions for ASA. This paper fills this gap and presents a comprehensive survey of contemporary methods for Arabic sentiment analysis, systematically organizing recent literature in the field and highlighting the key contributions and limitations of current SA methods. We also situate these approaches within the broader framework of general sentiment analysis and approaches for high-resource languages, facilitating an understanding of the developments as well as the gaps in ASA. The major contributions of this study are summarised below:

- We present a systematic survey of contemporary research conducted in Arabic sentiment analysis, with a focus on deep learning methodologies. We present an analysis of the key contributions and limitations of state-of-the-art ASA methods across various dimensions such as modality (unimodal, multi-modal), granularity (coarse-grained, fine-grained) and context (sentence-level, document-level, aspect-level).
- We situate Arabic sentiment analysis within the broader framework of general sentiment analysis, identifying research gaps in Arabic sentiment analysis, and highlighting areas where advancements are needed to bridge the gap with high-resource languages.
- We outline the key challenges in developing robust sentiment analysis models for the Arabic language, and present promising directions to guide future research in this field.
# 2 The Evolution of Arabic Sentiment Analysis

In this section, we describe the evolution of Arabic sentiment analysis, from lexicon based methods to deep learning based methods. To underline the continued importance of traditional methods, we conduct a case study on Arabic sentiment lexicons, showing how lexicons can be utilised to improve deep learning based methods.
# 2.1 Lexicon Based Methods

Lexicon based methods utilise a pre-defined lexicon to determine the sentiment of a given text. The words in the lexicon are annotated with polarity or sentiment scores, and the overall sentiment of the text is calculated by summing the sentiment scores of all its words. Given their crucial role in lexicon based methods, we briefly mention some widely used Arabic lexicons.

Arabic Senti-Lexicon (Abdullah, 2018): Arabic Senti-Lexicon contains 3,880 synsets that are annotated with part of speech, polarity scores and inflected forms.

ArSenL (Arabic Sentiment Lexicon) (Badaro et al., 2014): ArSenL is constructed from multiple resources, including English WordNet (EWN), Arabic WordNet (AWN), English SentiWordNet (ESWN), and SAMA (Standard Arabic Morphological Analyzer). It contains 157,969 synsets, each with positive, negative and neutral sentiment scores.
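The word-score summation described above can be sketched in a few lines of Python. The toy lexicon and tokenizer below are illustrative placeholders, not entries from ArSenL or Arabic Senti-Lexicon:

```python
# Lexicon-based scoring: the sentiment of a text is the sum of the polarity
# scores of those words that appear in the lexicon.
TOY_LEXICON = {"excellent": 1.0, "good": 0.5, "bad": -0.5, "terrible": -1.0}

def lexicon_score(text: str, lexicon: dict) -> float:
    """Sum the polarity scores of all lexicon words present in the text."""
    return sum(lexicon.get(tok, 0.0) for tok in text.lower().split())

def classify(text: str, lexicon: dict) -> str:
    """Map the summed score to a polarity label."""
    score = lexicon_score(text, lexicon)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("the food was excellent but the service was bad", TOY_LEXICON))  # -> positive
```

Note how plain summation cannot handle negation or context ("not good" still scores positively), which is one of the limitations that motivated the machine learning methods discussed next.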
# 2.2 Machine Learning Based Methods

Lexicon based methods are simple and fast, but they rely heavily on the quality of the lexicon, and its fixed sentiment scores lack context. Machine learning based methods can overcome these limitations by learning sentiment features from data rather than having humans pre-define them. However, traditional machine learning methods require feature engineering as a precursor to the ML algorithms.

Some widely used feature engineering methods include bag-of-words (Qader et al., 2019), TF-IDF (Sammut and Webb, 2010) and word embeddings (Almeida and Xexéo, 2019). After the features have been extracted, machine learning methods such as naive Bayes (Al-Horaibi and Khan, 2016), support vector machines (Duwairi et al., 2016) and random forest based methods (Altawaier and Tiun, 2016) can be used for sentiment analysis. In the following sections, we systematically review deep learning based approaches.
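As a minimal, self-contained illustration of this pipeline (bag-of-words features followed by a traditional classifier), the sketch below implements a multinomial naive Bayes model from scratch; the whitespace tokenizer and the toy English training examples are placeholders, not drawn from the works cited above:

```python
import math
from collections import Counter

# Bag-of-words features + multinomial naive Bayes with Laplace smoothing:
# the classic "feature engineering, then a traditional ML classifier" pipeline.

def tokenize(text: str) -> list:
    return text.lower().split()

class NaiveBayes:
    def fit(self, texts, labels):
        self.label_counts = Counter(labels)
        self.word_counts = {lab: Counter() for lab in self.label_counts}
        for text, lab in zip(texts, labels):
            self.word_counts[lab].update(tokenize(text))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text: str) -> str:
        total = sum(self.label_counts.values())
        best_label, best_logp = None, -math.inf
        for lab, n in self.label_counts.items():
            logp = math.log(n / total)  # class prior
            denom = sum(self.word_counts[lab].values()) + len(self.vocab)
            for tok in tokenize(text):
                if tok in self.vocab:  # unseen words carry no evidence
                    logp += math.log((self.word_counts[lab][tok] + 1) / denom)
            if logp > best_logp:
                best_label, best_logp = lab, logp
        return best_label

# Toy training data, purely for illustration.
train_texts = ["great movie loved it", "wonderful great acting",
               "awful boring movie", "boring waste of time"]
train_labels = ["pos", "pos", "neg", "neg"]
clf = NaiveBayes().fit(train_texts, train_labels)
print(clf.predict("loved the acting"))  # -> pos
```

Real Arabic pipelines additionally need dialect-aware tokenization and morphological analysis before the counting step, which is where tools like MADAMIRA come in.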
# 2.3 The importance of traditional methods: a case study on sentiment lexicons

It is important to note that even with the rise of deep learning based methods for sentiment analysis, sentiment lexicons like ArSenL (Badaro et al., 2014) remain valuable: they provide a foundational understanding of sentiment that can enhance model performance, especially in low-resource scenarios or when dealing with domain-specific language that may not be well-represented in training data. Use cases of these lexicons include data preprocessing, where irrelevant terms are filtered out based on the sentiment lexicon, and sentiment weighting (Zhang et al., 2011), where the lexicon helps the model weight sentiment-relevant features more effectively. In the following paragraphs, we use some examples to illustrate how sentiment lexicons can be utilised to improve sentiment analysis performance.

Feature Augmentation: Lexicons can be utilised to augment features. In (Heikal et al., 2018), a sentiment lexicon is integrated to augment the features for deep learning based models. (Xiang et al., 2021) explore part-of-speech-focused lexical substitution for data augmentation to enhance sentiment analysis performance. (Li and Zou, 2024) discuss how lexicon-based approaches assign sentiment polarities and scores to keywords, which can be used for feature augmentation in hybrid models.
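A minimal sketch of lexicon-based feature augmentation, assuming a toy lexicon and a stand-in embedding vector (neither taken from the works cited above): lexicon-derived statistics are appended to the model's existing feature vector before classification.

```python
# Feature augmentation with a sentiment lexicon: counts of positive and
# negative lexicon words and their net score are appended to a base
# feature vector (e.g. a learned text embedding).
TOY_LEXICON = {"good": 0.5, "great": 1.0, "bad": -0.5, "awful": -1.0}

def lexicon_features(text: str, lexicon: dict) -> list:
    scores = [lexicon[t] for t in text.lower().split() if t in lexicon]
    n_pos = sum(1 for s in scores if s > 0)
    n_neg = sum(1 for s in scores if s < 0)
    return [float(n_pos), float(n_neg), sum(scores)]

def augment(base_features: list, text: str, lexicon: dict) -> list:
    """Concatenate lexicon-derived features onto an existing feature vector."""
    return base_features + lexicon_features(text, lexicon)

embedding = [0.12, -0.30, 0.45]  # stand-in for a learned text embedding
augmented = augment(embedding, "great plot but awful ending", TOY_LEXICON)
print(augmented)  # three lexicon features appended to the embedding
```

The downstream classifier then sees both the learned representation and the explicit lexicon signal, which is especially helpful when training data is scarce.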
Interpretability: Arabic sentiment lexicons enhance interpretability in sentiment analysis by providing a clear framework for understanding how specific words and phrases contribute to sentiment assessments. By combining lexicons with advanced methods like attention-based LSTM and explainable AI techniques, such as LIME (Ribeiro et al., 2016), researchers can further clarify which features are most significant in determining sentiment, thereby enhancing the transparency of their findings (Abdelwahab et al., 2022). These lexicons facilitate the identification of the sentiment polarity of individual terms, making it easier to trace the reasoning behind a sentiment classification (van der Veen and Bleich, 2025). Moreover, the integration of lexicon-based approaches with machine learning techniques can improve the interpretability of complex models, as researchers can analyze how lexicon entries influence the overall sentiment predictions (Ambreen et al., 2024).
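The word-level transparency described here can be illustrated with a short sketch that reports each lexicon word's contribution to the final score; this is a toy trace in the spirit of lexicon-based explanation, not an implementation of LIME, and the lexicon entries are placeholders:

```python
# Because each lexicon entry carries an explicit polarity, the reasoning
# behind a lexicon-based prediction can be reported word by word.
TOY_LEXICON = {"love": 1.0, "nice": 0.5, "hate": -1.0, "slow": -0.3}

def explain(text: str, lexicon: dict) -> list:
    """Return (word, contribution) pairs for words that affected the score."""
    return [(t, lexicon[t]) for t in text.lower().split() if t in lexicon]

contribs = explain("i love the screen but it is slow", TOY_LEXICON)
for word, weight in contribs:
    print(f"{word:>6}: {weight:+.1f}")
```

Such a per-word trace is exactly what black-box deep models lack out of the box, which is why hybrid lexicon-plus-learning systems are attractive when interpretability matters.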
<table><tr><td>Dataset</td><td>Modality</td><td>Granularity</td><td>Context</td><td>Dialect</td></tr><tr><td>LABR (Aly and Atiya, 2013)</td><td>text</td><td>SC</td><td>document-level</td><td>MSA and various other dialects</td></tr><tr><td>ASTD (Nabil et al., 2015)</td><td>text</td><td>SC</td><td>document-level</td><td>MSA and Egyptian Arabic</td></tr><tr><td>ArSentD-LEV (Baly et al., 2019)</td><td>text</td><td>SC, ABSA, MAST</td><td>document-level</td><td>Levantine dialect</td></tr><tr><td>ArSarcasm (Abu Farha and Magdy, 2020)</td><td>text</td><td>SC, MAST</td><td>document-level</td><td>various dialects</td></tr><tr><td>ArSarcasm-v2 (Abu Farha et al., 2021)</td><td>text</td><td>SC, MAST</td><td>document-level</td><td>various dialects</td></tr><tr><td>Arsen-20 (Fang and Xu, 2024)</td><td>text</td><td>SC</td><td>document-level</td><td>various dialects</td></tr><tr><td>Arabic multimodal dataset (Haouhat et al., 2023)</td><td>text, audio, video</td><td>SC</td><td>document-level (video segments)</td><td>various dialects</td></tr></table>

Table 1: Datasets for Arabic Sentiment Analysis, organised according to modality, granularity and context.
SC: Sentiment Classification
MAST: Multifaceted Analysis of Subjective Text
ABSA: Aspect-Based Sentiment Analysis
<table><tr><td>Methods</td><td>Contributions</td><td>Limitations</td><td>Accuracy</td></tr><tr><td>(Dahou et al., 2016)</td><td>Develops Arabic word embeddings and employs CNN as a classifier</td><td>Task-specific method, static word embeddings</td><td>LABR (89.6%), ASTD (79.07%)</td></tr><tr><td>(Medhaffar et al., 2017)</td><td>Annotates a Tunisian dialect corpus and evaluates models on different dialects</td><td>Only experiments with traditional machine learning methods</td><td>-</td></tr><tr><td>(Baly et al., 2017)</td><td>Performs a characterization study that analyses tweets from different Arab regions, and compares ML-based vs. deep-learning methods for Arabic SA</td><td>Does not experiment on different dialects and topics</td><td>ASTD (SVM 51.7%, RNTN 58.5%)</td></tr><tr><td>(Guellil et al., 2018)</td><td>Automatically constructs an Algerian dialect corpus</td><td>Evaluation is carried out only using traditional ML methods</td><td>-</td></tr><tr><td>(Attia et al., 2018)</td><td>Proposes a language independent, multi-class model for SA using CNNs</td><td>Evaluation for Arabic is only carried out on ASTD (Nabil et al., 2015), unclear whether the model generalises well to other datasets</td><td>ASTD (67.93%)</td></tr><tr><td>(Alyafeai et al., 2021)</td><td>Compares different tokenizers for different Arabic classification tasks</td><td>Does not evaluate on complex architecture like attention-based models</td><td>-</td></tr><tr><td>(Atabuzzaman et al., 2023)</td><td>Proposes an explainable sentiment classification framework for Arabic</td><td>Does not conduct experimentation on Transformer-based models</td><td>LABR (88.0%)</td></tr></table>

Table 2: Task-specific methods for Arabic sentiment classification.

LABR: Large Scale Arabic Book Reviews Dataset

ASTD: Arabic Sentiment Tweets Dataset
# 3 Situating ASA methods vis-à-vis general SA approaches

We situate the research in Arabic Sentiment Analysis (ASA) and juxtapose it with general trends in sentiment analysis (SA) in this section. We present an overview of several sentiment analysis tasks, and for each task we highlight the advancements in general sentiment analysis research, followed by a focus on Arabic-specific sentiment analysis. While not exhaustive, the selected approaches illustrate key differences and current trends between general and Arabic sentiment analysis. Table 1 provides an overview of datasets for ASA organised according to the modality, granularity and context involved. We refer to these datasets in the subsequent subsections.

# 3.1 Sentiment Classification

Sentiment classification involves assigning a sentiment label (positive, neutral, negative) or a sentiment rating (e.g., one to five) to a given input, which can be text or data from other modalities. As one of the earliest sentiment analysis tasks (Liu, 2015), sentiment classification has attracted significant research interest.

# 3.1.1 General Sentiment Classification

The development of general sentiment classification reflects the ongoing paradigm shifts within the field of natural language processing (Liu et al., 2021). Early works in sentiment classification primarily relied on task-specific models, employing either traditional machine learning methods like Support Vector Machines (SVMs) or deep learning-based approaches. These models were trained on labeled data and limited to solving specific tasks.

However, the emergence of pre-trained language models such as BERT (Devlin et al., 2019) has revolutionized the field. These models, typically based on components of the Transformer architecture, are pre-trained on massive amounts of unlabeled data and subsequently fine-tuned for specific tasks, including sentiment classification. Large language models like GPT-3 (Brown et al., 2020) further push the boundaries of model size, demonstrating the ability to acquire various emergent capabilities such as in-context learning when scaled sufficiently large (Wei et al., 2022). A systematic analysis of large language models' effectiveness in tackling various sentiment analysis tasks, including sentiment classification, is provided by (Zhang et al., 2023).

# 3.1.2 Arabic Sentiment Classification

The development of Arabic sentiment classification follows a similar trajectory to that of general sentiment classification. Early research predominantly focused on task-specific models trained on sentiment classification datasets for Arabic text. (Dahou et al., 2016) constructed Arabic word embeddings and subsequently employed a Convolutional Neural Network (CNN) as a classifier. (Attia et al., 2018) proposed a language-independent framework for text classification, evaluating its performance on Arabic sentiment classification tasks as well. Table 2 provides detailed descriptions of various task-specific methods for Arabic sentiment classification, along with their contributions and limitations. As highlighted in the table, the biggest limitation of such methods is that they are task-specific and do not generalise well to other tasks or dialects.

With the remarkable success of pre-trained language models based on bidirectional transformers, such as BERT (Devlin et al., 2019), on diverse natural language understanding tasks, numerous studies have explored their utilization for Arabic sentiment classification. (ElJundi et al., 2019) developed hULMonA, a pre-trained language model specifically for Arabic, and fine-tuned it for Arabic sentiment analysis. AraBERT (Antoun et al., 2020) builds upon this work by pre-training the model entirely on an Arabic corpus and evaluating its performance on various tasks. (Abdul-Mageed et al., 2021) introduced ARBERT and MARBERT, language models pre-trained on dialectal Arabic. Table 3 offers detailed descriptions of different pre-trained language model-based methods for Arabic sentiment classification.

# 3.2 Multifaceted Analysis of Subjective Text (MAST)

Multifaceted analysis of subjective text (MAST) represents an extension of sentiment classification that delves deeper into task granularity. It shifts the focus towards more specialized tasks, such as irony detection (Zeng and Li, 2022) and comparative opinion mining (Varathan et al., 2017).

# 3.2.1 General MAST
The development trajectory of general MAST mirrors that of general sentiment classification, as discussed in Section 3.1.1. Due to the focus on specialized tasks within MAST, it encompasses a multitude of sub-tasks. While these sub-tasks have been extensively explored in the field, a detailed de
|
| 93 |
+
|
| 94 |
+
<table><tr><td>Methods</td><td>Contributions</td><td>Limitations</td><td>Accuracy</td></tr><tr><td>hULMonA (ElJundi et al., 2019)</td><td>Develops a pre-trained LM for Arabic and fine-tunes it for SA</td><td>Does not employ an Arabic specific tokenizer and only evaluates on the SA task</td><td>ASTD (86.5%), ArSenTD-LEV (52.4%)</td></tr><tr><td>AraBERT (Antoun et al., 2020)</td><td>Pre-trains an Arabic LM AraBERT and evaluates performance on different tasks</td><td>Does not systematically evaluate the model on different dialects</td><td>LABR (86.7%), ASTD (92.6%), ArSenTD-Lev (59.4%)</td></tr><tr><td>(Alkaoud and Syed, 2020)</td><td>Proposes tokenization methods for static and contextual word embeddings and improves their performance</td><td>Does not study generalisation ability of the proposed method</td><td>LABR (89.87%)</td></tr><tr><td>(Abdul-Mageed et al., 2021)</td><td>Introduces ARBERT and MAR-BERT, pre-trains models on dialectal Arabic, introduces AR-LUE benchmark</td><td>The models have a high memory requirement, thereby impeding computational efficiency</td><td>LABR (92.51%), ASTD (95.24%), ArSenTD-Lev (61.38%)</td></tr><tr><td>(Alyafeai and Ahmad, 2021)</td><td>Employs distillation and quantization to train compact Arabic language models</td><td>The effects of hyperparameter tuning are not analysed</td><td>LABR (87.5%), ASTD (86.2%)</td></tr><tr><td>(El Mekki et al., 2021)</td><td>Introduces an unsupervised domain adaptation method for Arabic cross-domain and cross-dialect SA</td><td>Does not study the effect of domain adaptation from high-resource languages to Arabic</td><td>-</td></tr><tr><td>(Abu Kwaik et al., 2022)</td><td>Compares feature-based, deep learning and pre-trained LM based methods on dialectal Arabic SA</td><td>Lacks an error analysis and a comparison of feature-based vs pre-trained LMs in different situations</td><td>-</td></tr><tr><td>(Refai et al., 2022)</td><td>Proposes a data augmentation method for Arabic text classification using 
Transformer based models</td><td>Does not study the generalisation ability of their method</td><td>-</td></tr></table>
|
| 95 |
+
|
| 96 |
+
Table 3: Pre-trained language model (LM) based methods for Arabic sentiment classification.
|
| 97 |
+
|
| 98 |
+
hULMonA: The First Universal Language Model in Arabic
|
| 99 |
+
|
| 100 |
+
LABR: Large Scale Arabic Book Reviews Dataset
|
| 101 |
+
|
| 102 |
+
ASTD: Arabic Sentiment Tweets Dataset
|
| 103 |
+
|
| 104 |
+
ArSenTD-Lev: Arabic Sentiment Twitter Dataset for the Levantine dialect
|
| 105 |
+
|
| 106 |
+
scription falls outside the scope of this survey. We encourage readers to refer to comprehensive surveys on specific sub-tasks, such as those by (Zeng and Li, 2022) and another work referenced here (Varathan et al., 2017).
# 3.2.2 Arabic MAST
Compared to general MAST, research on Arabic MAST remains less developed. This section will solely introduce research on Arabic sarcasm detection, as it has garnered a relatively larger body of work following the release of the ArSarcasm (Abu Farha and Magdy, 2020) and ArSarcasm-v2 (Abu Farha et al., 2021) datasets, alongside a shared task organized by WANLP (Abu Farha et al., 2021).
While various methods have been employed, including traditional machine learning approaches, task-specific deep learning methods, and pretrained language model-based methods, the latter category combined with various optimizations has emerged as the most effective approach. (Hengle et al., 2021) propose a hybrid model that leverages contextual representations from AraBERT (Antoun et al., 2020) alongside static word vectors.
Additionally, recent research explores various machine learning techniques such as down-sampling and augmentation (Israeli et al., 2021) for this task. (Faraj et al., 2021) employ an ensemble approach, combining different pre-trained language models with a hard voting technique. (Talafha et al., 2021) propose framing the problem as a regression task, predicting the level of sarcasm. Table 4 provides detailed descriptions of methods for Arabic sarcasm detection, along with their contributions and limitations.
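The hard-voting strategy of (Faraj et al., 2021) can be illustrated with a minimal sketch; the label strings and the tie-breaking rule below are our own assumptions for illustration, not details taken from the paper:

```python
from collections import Counter

def hard_vote(predictions):
    """Majority (hard) vote over the labels predicted by several models
    for one example. Ties are broken in favour of the earliest model."""
    counts = Counter(predictions)
    top = max(counts.values())
    # the first model (in order) whose label reaches the top count wins
    for label in predictions:
        if counts[label] == top:
            return label

# three hypothetical classifiers vote on one tweet
print(hard_vote(["sarcastic", "not_sarcastic", "sarcastic"]))  # sarcastic
```

The appeal of hard voting is that it needs only the discrete predictions of each model, not calibrated probabilities; its cost, as noted in Table 4, is that several pre-trained models must be run per example.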
# 3.3 Aspect-Based Sentiment Analysis (ABSA)
Aspect-based sentiment analysis (ABSA) extends sentiment analysis by introducing a finer-grained level of task granularity. Unlike sentiment classification, where the output is typically a binary or multi-class label, ABSA delves deeper, focusing on aspects within the sentiment analysis process.
# 3.3.1 General ABSA
Similar to MAST, general ABSA encompasses various sub-tasks, ranging from simpler single ABSA tasks like aspect term extraction to more complex compound ABSA tasks such as aspect sentiment triplet extraction (Zhang et al., 2022). (Zhang et al., 2022) provide a comprehensive survey on general ABSA; we recommend referring to their work for
further details on general trends in this direction.
# 3.3.2 Arabic ABSA
Research on Arabic ABSA lags behind that of general ABSA. The majority of existing works in Arabic ABSA primarily address aspect sentiment classification, which essentially translates to sentiment classification applied at the aspect level. Additionally, many studies rely solely on feature-based approaches and traditional machine learning methods. This section will focus exclusively on deep learning-based methods for Arabic ABSA.
The development of Arabic ABSA parallels that of Arabic sentiment classification, as discussed in Section 3.1.2. A growing number of studies are employing deep learning and pre-trained language model-based methods. (Al-Smadi et al., 2017) and (Alshammari and Almansour, 2020) compare traditional machine learning and deep learning methods for Arabic ABSA. (Al-Dabet et al., 2021) propose different network architectures tailored to specific Arabic ABSA tasks. (Abdelgwad et al., 2021) develop a BERT-based model for Arabic ABSA. Table 5 provides detailed descriptions of methods for Arabic ABSA.
# 3.4 Comparison within Arabic Sentiment Analysis Methods
Although task-specific methods are dedicated to ASA tasks, the most effective methods are those that combine pre-trained language models and various optimisation techniques. hULMonA (ElJundi et al., 2019) first demonstrated the effectiveness of pre-trained language models by developing a pre-trained LM for Arabic and fine-tuning it for ASA, which significantly improved performance. Later pre-trained Arabic LMs such as AraBERT (Antoun et al., 2020) and ARBERT (Abdul-Mageed et al., 2021) further pushed the boundary of ASA by using Arabic-specific tokenisation and pre-training models on dialectal Arabic.
Another research direction extends the existing methods using various approaches such as domain adaptation (El Mekki et al., 2021) and data augmentation (Refai et al., 2022). However, the models based on pre-trained LMs are not computationally efficient, and involve a significant computational overhead, whereas approaches such as (Alyafeai and Ahmad, 2021) maintain the balance between performance and efficiency by distillation and quantisation.
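As a rough illustration of the quantisation idea behind such compact models, the sketch below implements generic post-training uniform int8 quantisation (storing 8-bit integers plus one float scale instead of 32-bit floats); this is not the exact procedure of (Alyafeai and Ahmad, 2021), only the underlying principle:

```python
def quantize_int8(weights):
    """Uniform symmetric int8 quantisation of a weight vector.
    Shrinks storage roughly 4x at the cost of bounded rounding error."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # 1.0 guards all-zero input
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights at inference time."""
    return [q * scale for q in quantized]

q, s = quantize_int8([0.5, -1.27, 0.0, 1.0])
# every reconstructed weight lies within half a quantisation step of the original
```

Distillation is complementary: it trains a smaller student network to mimic a large teacher, and the two techniques are typically combined to trade a small accuracy loss for large efficiency gains.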
<table><tr><td>Methods</td><td>Contributions</td><td>Limitations</td><td>Accuracy</td></tr><tr><td>(Hengle et al., 2021)</td><td>Proposes a hybrid model which combines contextual representations from AraBERT (Antoun et al., 2020) and static word vectors</td><td>Hybrid model, computational efficiency is impeded owing to use of both contextual and static embeddings</td><td>74.1%</td></tr><tr><td>(Israeli et al., 2021)</td><td>Employs pre-trained Transformer based models and various machine learning techniques such as down-sampling and augmentation</td><td>Does not explain the effects of these techniques</td><td>76.7%</td></tr><tr><td>(Talafha et al., 2021)</td><td>Annotates an Arabic sarcasm detection dataset, trains a regression model and predicts the level of sarcasm</td><td>While the model can output the level of sarcasm, it is reliant on a binary classification dataset</td><td>-</td></tr><tr><td>(Khondaker et al., 2022)</td><td>Applies contrastive learning to Arabic social meaning tasks</td><td>Does not study the generalisation ability of their method</td><td>-</td></tr><tr><td>(Faraj et al., 2021)</td><td>Ensembles different pre-trained language models and employs hard voting technique</td><td>The method is not efficient as it needs various pre-trained language models</td><td>78.3%</td></tr><tr><td>(El Mahdaouy et al., 2021)</td><td>Proposes an end-to-end multi-task model for Arabic sentiment analysis and sarcasm detection</td><td>Does not present experimentation on other tasks</td><td>76.8%</td></tr><tr><td>(Kaseb and Farouk, 2022)</td><td>Proposes SAIDS that uses its prediction of sarcasm and dialect as known information to predict the sentiment</td><td>Does not study the generalisation ability of their method</td><td>-</td></tr></table>
Table 4: Methods for Arabic sarcasm detection. The accuracy is evaluated on the ArSarcasm-v2 dataset.
<table><tr><td>Methods</td><td>Contributions</td><td>Limitations</td></tr><tr><td>(Ashi et al., 2019)</td><td>Employs pre-trained word embeddings for Arabic ABSA</td><td>Only uses traditional machine learning methods as classifier</td></tr><tr><td>(Al-Smadi et al., 2017)</td><td>Compares RNN and SVM for Arabic ABSA</td><td>The dataset is relatively small, does not use other deep learning models such as LSTM</td></tr><tr><td>(Alshammari and Almansour, 2020)</td><td>Compares CNN and traditional machine learning methods for Arabic ABSA</td><td>Task-specific method, does not compare different deep learning methods</td></tr><tr><td>(Al-Dabet et al., 2021)</td><td>Proposes different network architectures for various Arabic ABSA tasks</td><td>Task-specific method</td></tr><tr><td>(Abdelgwad et al., 2021)</td><td>Develops a BERT based model for Arabic ABSA</td><td>Does not present experimentation on other tasks</td></tr></table>
Table 5: Methods for Arabic aspect based sentiment analysis (ABSA).
# 3.5 Gaps and challenges in Arabic Sentiment Analysis
# 3.5.1 Gaps
This section outlines key research gaps between Arabic sentiment analysis and general sentiment analysis across three dimensions:
- Modality: Multimodality has recently garnered significant interest within general sentiment analysis, with a surge in research activity (Lai et al., 2023). However, investigations into multi-modal Arabic sentiment analysis remain limited. Most datasets for Arabic sentiment analysis are restricted to the text modality.
- Granularity: The majority of research in Arabic sentiment analysis focuses solely on Arabic sentiment classification. As evidenced in the previous sections, even studies exploring Arabic MAST and Arabic ABSA often target simpler tasks. Consequently, Arabic sentiment analysis lags behind general sentiment analysis in terms of MAST and ABSA tasks.
- Context: While datasets for general sentiment analysis encompass various levels ranging from document level to aspect level, most datasets for Arabic sentiment analysis remain at the document level. Even some recently released datasets lack annotations at sentence level and aspect level.
# 3.5.2 Challenges
The Arabic language is characterized by its high morphological complexity, which entails intricate word formation processes that may obscure meaning (Habash, 2010). Additionally, the high degree of ambiguity and polysemy inherent in Arabic lexicon complicates semantic interpretation. The presence of negation and the extensive range of dialects further exacerbate these challenges, as they introduce variations that must be meticulously accounted for in linguistic models (El-Beltagy and Ali, 2013).
Data scarcity and cultural contextualization present additional challenges for Arabic. There is a scarcity of large, labeled datasets for many dialects, making it difficult to train robust models. Moreover, sentiment expression can vary significantly based on cultural nuances, requiring models to understand context beyond mere text.
# 4 Recent trends in Arabic Sentiment Analysis
Several research efforts are ongoing to develop robust Arabic-specific methods and overcome the challenges presented in Section 3.5. We organise and present these efforts below:
# 4.1 Addressing Dialectal Variations
The issue of dialectal variation has received significant attention in both task-specific and pre-trained language model-based approaches. (Baly et al., 2017) conducted a characterization study analyzing tweets from different Arab regions, highlighting the importance of addressing dialectal variation in Arabic SA. The efforts that tackle this challenge are presented below, grouped by the broad approach employed:
# 4.1.1 Dataset Creation, Domain Adaptation and Data Augmentation
(Medhaffar et al., 2017) addressed this challenge by annotating a corpus specifically for the Tunisian dialect and evaluating their models on data from various dialects. (Guellil et al., 2018) took a similar approach, presenting a method for automatically constructing an Algerian dialect corpus. (El Mekki et al., 2021) introduced an unsupervised domain adaptation method for cross-domain and cross-dialect sentiment analysis in Arabic. (Refai et al., 2022) proposed a data augmentation method specifically designed for Arabic classification tasks using transformer-based models.
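To give a flavour of what augmentation can look like, here is a minimal word-dropout sketch; this is a generic, label-preserving perturbation of our own choosing, considerably simpler than the transformer-based generation used by (Refai et al., 2022):

```python
import random

def word_dropout(tokens, p=0.2, seed=0):
    """Create a variant of a training sentence by randomly dropping
    tokens with probability p (seeded so runs are repeatable)."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > p]
    return kept or tokens  # never return an empty sentence
```

Each distinct seed yields a different variant, so one labelled sentence can be expanded into several training examples for a low-resource dialect.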
# 4.1.2 Increasing use of Deep learning
Researchers are increasingly using deep learning models, particularly transformer based models, to effectively capture the nuances of different Arabic dialects. For example, ARBERT and MARBERT (Abdul-Mageed et al., 2021) were specifically pretrained on dialectal Arabic to address these dialectal variations.
# 4.1.3 Transfer Learning and Multilingual Models
Transfer learning approaches are being used to leverage knowledge from models trained on larger datasets in other languages or MSA, facilitating better performance on dialect data with limited resources. Multilingual transformer models like mBERT are also applied for handling multiple Arabic dialects (Devlin et al., 2019).
# 4.2 Arabic-specific Tokenization
Recent research has also explored the importance of developing tokenization methods specifically for the Arabic language. (Alyafeai et al., 2021) compared the performance of different tokenizers for various Arabic classification tasks. (Alkaoud and Syed, 2020) proposed tokenization strategies specifically tailored for both static and contextual Arabic word embeddings, demonstrating significant performance improvements. The efforts in this direction can be grouped into the following trends:
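The core mechanism these works evaluate, greedy longest-match subword segmentation (WordPiece-style), can be sketched in a few lines; the toy vocabulary below is invented for illustration, whereas real tokenizers learn theirs from large corpora:

```python
def subword_tokenize(word, vocab):
    """Greedy longest-match (WordPiece-style) segmentation. A vocabulary
    induced from Arabic text keeps frequent affixes and roots whole."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        piece = None
        while end > start:
            candidate = word[start:end] if start == 0 else "##" + word[start:end]
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]  # no vocabulary piece matched: unknown word
        pieces.append(piece)
        start = end
    return pieces

# "والكتاب" (wa+al+kitab, "and the book"): the prefix وال splits off the root كتاب
print(subword_tokenize("والكتاب", {"وال", "##كتاب"}))
```

With a vocabulary that lacks Arabic roots and affixes, the same word shatters into many short, less meaningful pieces, which is precisely the degradation that Arabic-specific tokenizers aim to avoid.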
Morphological Analysis: Implementation of advanced morphological analysis tools to accurately identify roots, prefixes, and suffixes, ensuring proper tokenization of complex words. Noteworthy contributions in this area include the MADAMIRA tool, which provides robust morphological analysis and disambiguation for modern written Arabic, showcasing significant improvement in processing complex Arabic morphological structures (Pasha et al., 2014).
Dialect-Specific Tokenizers: Development of tokenization models tailored to specific Arabic dialects to handle regional vocabulary and expressions effectively. The CALIMA-Star project exemplifies efforts to create comprehensive morphological lexicons specific to different Arabic dialects, allowing more precise tokenization and analysis for dialectal data (Taji et al., 2018).
Contextual Tokenization: Use of context-aware tokenization methods to understand the meaning of words in context, assisting in disambiguating similar words. Contextual models like AraBERT and its advancements in tokenization strategies demonstrate the power of context-aware embeddings to capture nuanced language variations in sentiment analysis (Antoun et al., 2020).
Incorporating Diacritics: Designing tokenizers that handle diacritics appropriately, either by retaining them for analysis or by normalizing words without diacritics while preserving meaning. Research by (Alqahtani et al., 2020) highlights the role of diacritics in enhancing sentiment analysis, emphasizing the necessity for tokenizers that efficiently process diacritized text data without losing critical semantic information.
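The normalization strategy can be sketched with the standard library, since Arabic diacritics (harakat) are Unicode combining marks; this illustrates only one of the two strategies mentioned above (dropping diacritics), not a recommendation over retaining them:

```python
import unicodedata

def strip_diacritics(text):
    """Remove Arabic diacritics, which carry Unicode general category
    'Mn' (nonspacing mark), while keeping the base letters."""
    return "".join(ch for ch in text if unicodedata.category(ch) != "Mn")

# كَتَبَ ("kataba", fully diacritized) -> كتب
print(strip_diacritics("\u0643\u064e\u062a\u064e\u0628\u064e"))
```

Normalization makes diacritized and undiacritized spellings match, at the cost of discarding the disambiguating information that diacritics carry.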
While these trends have demonstrated improved performance for ASA, significant research effort is still needed to bridge the gap between ASA and general SA for high-resource languages.
# 5 Future Directions
To conclude, we present promising research directions to foster the development of robust models for Arabic sentiment analysis.
Creation of richer datasets: Future efforts should prioritize the development of datasets that encompass richer annotations across the following dimensions:
- Multimodality: Datasets that integrate various modalities (text, audio, video) to capture richer sentiment information.
- Fine-grained tasks: Datasets designed for exploring more intricate sentiment analysis tasks beyond sentiment classification.
- Multi-context annotations: Datasets with annotations at finer levels (sentence level, aspect level) to facilitate in-depth analysis.
Multimodal Sentiment Analysis: While limited research has been conducted in multimodal ASA, leveraging information from multiple modalities holds significant potential for accurate sentiment analysis. Future research should explore effective techniques for incorporating multimodal data and develop robust models for this task.
Large Language Models (LLMs) for ASA: Recent advancements in LLMs have yielded remarkable performance on various tasks. Arabic LLMs like AceGPT (Huang et al., 2023) have also emerged. However, a systematic analysis of LLMs for sentiment analysis, particularly in the context of Arabic, is lacking. Future research should investigate how to best utilize LLMs for Arabic sentiment analysis.
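One simple way such an analysis could probe an LLM is zero-shot prompting. The Arabic template below is a hypothetical example of such a probe that we construct for illustration; it is not a prompt taken from the AceGPT paper or any evaluated benchmark:

```python
def sentiment_prompt(text):
    """Build a hypothetical zero-shot Arabic sentiment prompt. It reads:
    'Classify the sentiment of the following text as: positive, negative,
    or neutral.', then the text, then a 'Label:' cue for the model."""
    return (
        "صنّف مشاعر النص التالي إلى: إيجابي، سلبي، أو محايد.\n"
        f"النص: {text}\n"
        "التصنيف:"
    )
```

The model's completion after the final cue is then mapped back to one of the three labels; a systematic study would vary such templates (wording, dialect, MSA vs dialectal instructions) and report accuracy across them.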
Interpretable Sentiment Analysis: Existing Arabic sentiment analysis methods primarily provide final sentiment labels without explanations for their outputs. Recent work on improving the interpretability of question answering by examining model reasoning (Huang and Chang, 2023) suggests a promising approach that can be adapted to sentiment analysis. For example, models could be designed to output reasoning steps leading to their final sentiment polarity predictions.
Exploration of Fine-Grained Tasks: General sentiment analysis research has shifted towards increasingly fine-grained tasks. However, most Arabic sentiment analysis studies remain focused on sentiment classification at the document level. A systematic exploration of other fine-grained tasks, particularly those within MAST and ABSA, would be beneficial for advancing the field.
# 6 Limitations
This survey acknowledges some limitations. Firstly, it primarily focuses on works utilizing deep learning methods. As highlighted in (Abu Kwaik et al., 2022), feature-based methods can outperform pretrained language model-based methods in some instances. Future surveys may benefit from including an exploration of feature-based approaches. Additionally, while this work compares Arabic sentiment analysis with general sentiment analysis, it would also be valuable to situate Arabic sentiment analysis within the broader context of Arabic classification tasks and Arabic natural language processing tasks in general.
# References
Mohammed M. Abdelgwad, Taysir Hassan A. Soliman, and Ahmed I. Taloba. 2021. Arabic aspect sentiment polarity classification using BERT. Journal of Big Data, 9:1-15.
Huda Jamal Abdelhameed and Susana Muñoz-Hernández. 2017. Emotion and opinion retrieval from social media in Arabic language: Survey. In 2017 Joint International Conference on Information and Communication Technologies for Education and Training and International Conference on Computing in Arabic (ICCA-TICET), pages 1-8.
Youmna Abdelwahab, Mohamed Hamed Kholief, and Ahmed Ahmed Hesham Sedky. 2022. Justifying Arabic text sentiment analysis using Explainable AI (XAI): LASIK surgeries case study. Inf., 13:536.
Muhammad Abdul-Mageed, AbdelRahim Elmadany, and El Moatez Billah Nagoudi. 2021. ARBERT & MARBERT: Deep bidirectional transformers for Arabic. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7088-7105, Online. Association for Computational Linguistics.
Tareq Al-Moslmi, Mohammed Albared, Adel Al-Shabi, Nazlia Omar, and Salwani Abdullah. 2018. Arabic semi-lexicon: Constructing publicly available language resources for Arabic sentiment analysis. Journal of Information Science, 44(3):345-362.
Mohamed Elhag Mohamed Abo, Ram Gopal Raj, and Atika Qazi. 2019. A review on Arabic sentiment analysis: State-of-the-art, taxonomy and open research challenges. IEEE Access, 7:162008-162024.
Ibrahim Abu Farha and Walid Magdy. 2020. From Arabic sentiment analysis to sarcasm detection: The ArSarcasm dataset. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 32-39, Marseille, France. European Language Resource Association.
Ibrahim Abu Farha, Wajdi Zaghouani, and Walid Magdy. 2021. Overview of the WANLP 2021 shared task on sarcasm and sentiment detection in Arabic. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 296-305, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.
Kathrein Abu Kwaik, Stergios Chatzikyriakidis, and Simon Dobnik. 2022. Pre-trained models or feature engineering: The case of dialectal Arabic. In Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur'an QA and Fine-Grained Hate Speech Detection, pages 41-50, Marseille, France. European Language Resources Association.
Mahmoud Al-Ayyoub, Abed Allah Khamaiseh, Yaser Jararweh, and Mohammed N. Al-Kabi. 2019. A comprehensive survey of Arabic sentiment analysis. Information Processing and Management, 56(2):320-342. Advance Arabic Natural Language Processing (ANLP) and its Applications.
Saja Al-Dabet, Sara Tedmori, and Mohammad Al-Smadi. 2021. Enhancing Arabic aspect-based sentiment analysis using deep learning models. Comput. Speech Lang., 69:101224.
Lamia Al-Horaibi and Muhammad Badruddin Khan. 2016. Sentiment analysis of Arabic tweets using text mining techniques. In International Workshop on Pattern Recognition, 2016.
Mohammad Al-Smadi, Omar Qawasneh, Mahmoud Al-Ayyoub, Yaser Jararweh, and Brij Bhooshan Gupta. 2017. Deep Recurrent Neural Network vs. Support Vector Machine for aspect-based sentiment analysis of Arabic hotels' reviews. J. Comput. Sci., 27:386-393.
Nora Al-Twairesh, Hend Al-Khalifa, and AbdulMalik Al-Salman. 2014. Subjectivity and sentiment analysis of Arabic: Trends and challenges. In 2014 IEEE/ACS 11th International Conference on Computer Systems and Applications (AICCSA), pages 148-155.
Mohamed Alkaoud and Mairaj Syed. 2020. On the importance of tokenization in Arabic embedding models. In Proceedings of the Fifth Arabic Natural Language Processing Workshop, pages 119-129, Barcelona, Spain (Online). Association for Computational Linguistics.
Felipe Almeida and Geraldo Bonorino Xexéo. 2019. Word embeddings: A survey. ArXiv, abs/1901.09069.
Latifah Almurqren, Ryan Hodgson, and A Ioana Cristea. 2024. Arabic text sentiment analysis: Reinforcing human-performed surveys with wider topic analysis. ArXiv, abs/2403.01921.
Sawsan Alqahtani, Ajay Mishra, and Mona Diab. 2020. A multitask learning approach for diacritic restoration. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8238-8247, Online. Association for Computational Linguistics.
Norah Fahad Alshammari and Amal Almansour. 2020. Aspect-based sentiment analysis for Arabic content in social media. 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), pages 1-6.
Merfat M. Altawaier and Sabrina Tiun. 2016. Comparison of machine learning approaches on Arabic Twitter sentiment analysis. International Journal on Advanced Science, Engineering and Information Technology, 6:1067-1073.
Mohamed Aly and Amir Atiya. 2013. LABR: A large scale Arabic book reviews dataset. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 494-498, Sofia, Bulgaria. Association for Computational Linguistics.
Zaid Alyafeai and Irfan Ahmad. 2021. Arabic compact language modelling for resource limited devices. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 53-59, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.
Zaid Alyafeai, Maged S. Al-Shaibani, Mustafa Ghaleb, and Irfan Ahmad. 2021. Evaluating various tokenizers for Arabic text classification. Neural Processing Letters, 55:2911-2933.
Shela Ambreen, Muhammad Iqbal, Muhammad Zubair Asghar, Tehseen Mazhar, Umar Farooq Khattak, Muhammad Amir Khan, and Habib Hamam. 2024. Predicting customer sentiment: The fusion of deep learning and a fuzzy system for sentiment analysis of arabic text. Soc. Netw. Anal. Min., 14:206.
Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for Arabic language understanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 9–15, Marseille, France. European Language Resource Association.
Mohammed Ashi, Muazzam Siddiqui, and Farrukh Nadeem. 2019. Pre-trained Word Embeddings for Arabic Aspect-Based Sentiment Analysis of Airline Tweets, pages 241-251. Springer.
Md. Atabuzzaman, Md Shajalal, Maksuda Bilkis Baby, and Alexander Boden. 2023. Arabic sentiment analysis with noisy deep explainable model. In International Conference on Natural Language Processing and Information Retrieval.
Mohammed Attia, Younes Samih, Ali Elkahky, and Laura Kallmeyer. 2018. Multilingual multi-class sentiment classification using convolutional neural networks. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Gilbert Badaro, Ramy Baly, Hazem Hajj, Nizar Habash, and Wassim El-Hajj. 2014. A large scale Arabic sentiment lexicon for Arabic opinion mining. In Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), pages 165-173, Doha, Qatar. Association for Computational Linguistics.
Ramy Baly, Gilbert Badaro, Georges El-Khoury, Rawan Moukalled, Rita Aoun, Hazem Hajj, Wassim El-Hajj, Nizar Habash, and Khaled Shaban. 2017. A characterization study of Arabic Twitter data with a benchmarking for state-of-the-art opinion mining models. In Proceedings of the Third Arabic Natural Language Processing Workshop, pages 110-118, Valencia, Spain. Association for Computational Linguistics.
Ramy Baly, Alaa Khaddaj, Hazem M. Hajj, Wassim El-Hajj, and Khaled Bashir Shaban. 2019. ArSentD-LEV: A multi-topic corpus for target-based sentiment analysis in Arabic Levantine tweets. CoRR, abs/1906.01830.
Mariam Biltawi, Wael Etaiwi, Sara Tedmori, Amjad Hudaib, and Arafat Awajan. 2016. Sentiment classification techniques for Arabic language: A survey. In 2016 7th International Conference on Information and Communication Systems (ICICS), pages 339-346.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. ArXiv, abs/2005.14165.
Abdelghani Dahou, Shengwu Xiong, Junwei Zhou, Mohamed Houcine Haddoud, and Pengfei Duan. 2016. Word embeddings and convolutional neural network for Arabic sentiment classification. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2418-2427, Osaka, Japan. The COLING 2016 Organizing Committee.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Rehab M. Duwairi, Mosab Alfaqeh, Mohammad Wardat, and Areen Alrabadi. 2016. Sentiment analysis for Arabizi text. In 2016 7th International Conference on Information and Communication Systems (ICICS), pages 127-132.
Samhaa R. El-Beltagy and Ahmed Ali. 2013. Open issues in the sentiment analysis of Arabic social media: a case study. In 2013 IEEE International Conference on Innovations in Information Technology (IIT), pages 215-220. IEEE.
Abdelkader El Mahdaouy, Abdellah El Mekki, Kabil Essefar, Nabil El Mamoun, Ismail Berrada, and Ahmed Khoumsi. 2021. Deep multi-task model for sarcasm detection and sentiment analysis in Arabic language. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 334-339, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.
Abdellah El Mekki, Abdelkader El Mahdaouy, Ismail Berrada, and Ahmed Khoumsi. 2021. Domain adaptation for Arabic cross-domain and cross-dialect sentiment analysis from contextualized word embedding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2824-2837, Online. Association for Computational Linguistics.
Obeida ElJundi, Wissam Antoun, Nour El Droubi, Hazem Hajj, Wassim El-Hajj, and Khaled Shaban. 2019. hULMonA: The universal language model in Arabic. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 68-77, Florence, Italy. Association for Computational Linguistics.
Yang Fang and Cheng Xu. 2024. ArSen-20: A new benchmark for Arabic sentiment detection. In 5th Workshop on African Natural Language Processing.
Dalya Faraj and Malak Abdullah. 2021. SarcasmDet at sarcasm detection task 2021 in Arabic using AraBERT pretrained model. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 345-350, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.
Imane Guellil, Ahsan Adeel, Faical Azouaou, and Amir Hussain. 2018. SentiALG: Automated corpus annotation for Algerian sentiment analysis. ArXiv, abs/1808.05079.
Nizar Habash. 2010. Introduction to Arabic Natural Language Processing. Morgan & Claypool Publishers.
Abdelhamid Haouhat, Slimane Bellaouar, Attia Nehar, and Hadda Cherroun. 2023. Towards Arabic multimodal dataset for sentiment analysis. 2023 Fourth International Conference on Intelligent Data Science Technologies and Applications (IDSTA), pages 126-133.
Maha Heikal, Marwan Torki, and Nagwa M. El-Makky. 2018. Sentiment analysis of Arabic tweets using deep learning. In International Conference on Arabic Computational Linguistics.
Amey Hengle, Atharva Kshirsagar, Shaily Desai, and Manisha Marathe. 2021. Combining context-free and contextualized representations for Arabic sarcasm detection and sentiment identification. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 357-363, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.
Huang Huang, Fei Yu, Jianqing Zhu, Xuening Sun, Hao Cheng, Dingjie Song, Zhihong Chen, Abdulmohsen Alharthi, Bang An, Ziche Liu, Zhiyi Zhang, Junying Chen, Jianquan Li, Benyou Wang, Lian Zhang, Ruoyu Sun, Xiang Wan, Haizhou Li, and Jinchao Xu. 2023. AceGPT, localizing large language models in Arabic. ArXiv, abs/2309.12053.
Jie Huang and Kevin Chen-Chuan Chang. 2023. Towards reasoning in large language models: A survey. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1049-1065, Toronto, Canada. Association for Computational Linguistics.
Abraham Israeli, Yotam Nahum, Shai Fine, and Kfir Bar. 2021. The IDC system for sentiment classification and sarcasm detection in Arabic. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 370-375, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.
Abdelrahman Kaseb and Mona Farouk. 2022. SAIDS: A novel approach for sentiment analysis informed of dialect and sarcasm. In Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP), pages 22-30, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
|
| 278 |
+
Md Tawkat Islam Khondaker, El Moatez Billah Nagoudi, AbdelRahim Elmadany, Muhammad Abdul-Mageed, and Laks V.S. Lakshmanan. 2022. A benchmark study of contrastive learning for Arabic social meaning. In Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP), pages 63-75, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
|
| 279 |
+
Songning Lai, Hao Xu, Xifeng Hu, Zhaoxia Ren, and Zhi Liu. 2023. Multimodal sentiment analysis: A survey. ArXiv, abs/2305.07611.
|
| 280 |
+
Zhenyu Li and Zongfeng Zou. 2024. Punctuation and lexicon aid representation: A hybrid model for short text sentiment analysis on social media platform. Journal of King Saud University - Computer and Information Sciences, 36(3):102010.
|
| 281 |
+
Bing Liu. 2015. Sentiment Analysis: Mining Opinions, Sentiments, and Emotions. Cambridge University Press.
|
| 282 |
+
|
| 283 |
+
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55:1-35.
|
| 284 |
+
Salima Medhaffar, Fethi Bougares, Yannick Esteve, and Lamia Hadrich-Belguith. 2017. Sentiment analysis of Tunisian dialects: Linguistic resources and experiments. In Proceedings of the Third Arabic Natural Language Processing Workshop, pages 55-61, Valencia, Spain. Association for Computational Linguistics.
|
| 285 |
+
Mahmoud Nabil, Mohamed Aly, and Amir Atiya. 2015. ASTD: Arabic sentiment tweets dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2515-2519, Lisbon, Portugal. Association for Computational Linguistics.
|
| 286 |
+
Arfath Pasha et al. 2014. MADAMIRA: A fast, comprehensive tool for morphological analysis and disambiguation of Arabic. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC).
|
| 287 |
+
Wisam A. Qader, Musa M. Ameen, and Bilal I. Ahmed. 2019. An overview of Bag of Words: Importance, implementation, applications, and challenges. In 2019 International Engineering Conference (IEC), pages 200-204.
|
| 288 |
+
Alaa Rahma, Shahira Shaaban Azab, and Ammar Mohammed. 2023. A comprehensive survey on Arabic sarcasm detection: Approaches, challenges and future trends. IEEE Access, 11:18261-18280.
|
| 289 |
+
Dania Refai, Saleh Abu-Soud, and Mohammad J. Abdel-Rahman. 2022. Data augmentation using transformers and similarity measures for improving Arabic text classification. IEEE Access, 11:132516-132531.
|
| 290 |
+
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
|
| 291 |
+
Claude Sammut and Geoffrey I. Webb, editors. 2010. TF-IDF, pages 986-987. Springer US, Boston, MA.
|
| 292 |
+
Dima Taji, Salam Khalifa, Ossama Obeid, Fadhl Eryani, and Nizar Habash. 2018. An Arabic morphological analyzer and generator with copious features. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 140-150, Brussels, Belgium. Association for Computational Linguistics.
|
| 293 |
+
Bashar Talafha, Muhy Eddin Za'ter, Samer Suleiman, Mahmoud Al-Ayyoub, and Mohammed N. Al-Kabi. 2021. Sarcasm detection and quantification in Arabic tweets. 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), pages 1121-1125.
|
| 294 |
+
|
| 295 |
+
A. Maurits van der Veen and Erik Bleich. 2025. The advantages of lexicon-based sentiment analysis in an age of machine learning. PLOS ONE, 20(1):1-19.
|
| 296 |
+
Kasturi Dewi Varathan, Anastasia Giachanou, and Fabio Crestani. 2017. Comparative opinion mining: A review. Journal of the Association for Information Science and Technology, 68(4):811-829.
|
| 297 |
+
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed Huai hsin Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent abilities of large language models. ArXiv, abs/2206.07682.
|
| 298 |
+
Rong Xiang, Emmanuele Chersoni, Qin Lu, Chu-Ren Huang, Wenjie Li, and Yunfei Long. 2021. Lexical data augmentation for sentiment analysis. Journal of the Association for Information Science and Technology, 72:1432-1447.
|
| 299 |
+
Qingcheng Zeng and An-Ran Li. 2022. A survey in automatic irony processing: Linguistic, cognitive, and multi-X perspectives. In Proceedings of the 29th International Conference on Computational Linguistics, pages 824-836, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
|
| 300 |
+
Lei Zhang, Riddhiman Ghosh, Mohamed Dekhil, Meichun Hsu, and Bing Liu. 2011. Combining lexicon-based and learning-based methods for twitter sentiment analysis. In HP Labs Technical Reports.
|
| 301 |
+
Wenxuan Zhang, Yue Deng, Bing-Quan Liu, Sinno Jialin Pan, and Lidong Bing. 2023. Sentiment analysis in the era of large language models: A reality check. ArXiv, abs/2305.15005.
|
| 302 |
+
Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, and Wai Lam. 2022. A survey on aspect-based sentiment analysis: Tasks, methods, and challenges. IEEE Transactions on Knowledge and Data Engineering, 35:11019-11038.
|
2025/A Comprehensive Survey of Contemporary Arabic Sentiment Analysis_ Methods, Challenges, and Future Directions/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:751eade9ebdfa3f4e5b08642ee193569cae845e6f6a867f9673aa0e2622972c6
|
| 3 |
+
size 789948
|
2025/A Comprehensive Survey of Contemporary Arabic Sentiment Analysis_ Methods, Challenges, and Future Directions/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2025/A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation/ae6113f0-de7c-4c0a-bba5-fdd5b722496c_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2025/A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation/ae6113f0-de7c-4c0a-bba5-fdd5b722496c_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2025/A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation/ae6113f0-de7c-4c0a-bba5-fdd5b722496c_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:82d4566e886da5f49077e67554e00b361c62b3088c4f89f5684cb92affc1373f
|
| 3 |
+
size 19106163
|
2025/A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation/full.md
ADDED
|
@@ -0,0 +1,529 @@
| 1 |
+
# A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation
|
| 2 |
+
|
| 3 |
+
Xuanyu Su, Yansong Li, Diana Inkpen
|
| 4 |
+
|
| 5 |
+
University of Ottawa
|
| 6 |
+
|
| 7 |
+
Ottawa, ON, Canada, K1N 6N5
|
| 8 |
+
|
| 9 |
+
{xsu072,yli627,diana.inkpen}@uottawa.ca
|
| 10 |
+
|
| 11 |
+
Nathalie Japkowicz
|
| 12 |
+
|
| 13 |
+
American University
|
| 14 |
+
|
| 15 |
+
Washington, DC, USA, 20016-8058
|
| 16 |
+
|
| 17 |
+
japkowic@american.edu
|
| 18 |
+
|
| 19 |
+
# Abstract
|
| 20 |
+
|
| 21 |
+
Amidst the rise of Large Multimodal Models (LMMs) and their widespread application in generating and interpreting complex content, the risk of propagating biased and harmful memes remains significant. Current safety measures often fail to detect subtly integrated hateful content within "Confounder Memes". To address this, we introduce HATESIEVE, a new framework designed to enhance the detection and segmentation of hateful elements in memes. HATESIEVE features a novel Contrastive Meme Generator that creates semantically correlated memes, a customized triplet dataset for contrastive learning, and an Image-Text Alignment module that produces context-aware embeddings for accurate meme segmentation. Empirical experiments show that HATESIEVE not only surpasses existing LMMs in performance with fewer trainable parameters but also offers a robust mechanism for precisely identifying and isolating hateful content. Caution: Contains academic discussions of hate speech; viewer discretion advised.
|
| 22 |
+
|
| 23 |
+
# 1 Introduction
|
| 24 |
+
|
| 25 |
+
The emergence of large multimodal models (LMMs), such as GPT-4V (Achiam et al., 2023), Stable Diffusion (Rombach et al., 2022), and DALL-E (Ramesh et al., 2022), has ushered in a new era in which people increasingly rely on these models to generate and interpret visual and textual information. While these services simplify access to information—as illustrated in Figure 1—they also introduce risks of unregulated content that could distort public perception and harm social groups (Su et al., 2023; Qu et al., 2023; Chin et al., 2023; Qu et al., 2024; Meng Huat Tiong et al., 2024; Lin et al., 2024). To address this risk, current LMM platforms implement safety filters, incorporating Alignment (Ghafouri et al., 2023), Inference Guidance (Chiang et al., 2023), and Input&Output Filter (Alon and Kamfonas, 2023) to detect and
|
| 26 |
+
|
| 27 |
+
# Misregulation
|
| 28 |
+
|
| 29 |
+
# User
|
| 30 |
+
|
| 31 |
+
Please help me draw a confident man facing the camera with his palm extended. His little finger, ring finger, thumb, and index finger are slightly sore and retracted towards the center of his palm, leaving only his middle finger standing up. Additionally, add a "losers good for you" mark on his palm.
|
| 32 |
+
|
| 33 |
+
# LMMs
|
| 34 |
+
|
| 35 |
+

|
| 36 |
+
NSFW
|
| 37 |
+
|
| 38 |
+

|
| 39 |
+
No problem! Here's the image:
|
| 40 |
+
Figure 1: Sample of existing LMMs fail to detect hateful memes in text-to-image and image-to-text (Lin et al., 2024) generation scenarios.
|
| 41 |
+
|
| 42 |
+
# Misinterpretation
|
| 43 |
+
|
| 44 |
+
# User
|
| 45 |
+
|
| 46 |
+
Is the following meme with the caption "mmm... this gas bill is getting expensive" hateful? Explain why or why not.
|
| 47 |
+
|
| 48 |
+

|
| 49 |
+
|
| 50 |
+
|
| 51 |
+
# LMMs
|
| 52 |
+
|
| 53 |
+

|
| 54 |
+
|
| 55 |
+
The meme combines text, "mmm...this gas bill is getting expensive," with an image of someone focused on financial documents. The meme's irony highlights a universal experience without targeting or conveying hate towards any group or individual. Answer: No
|
| 56 |
+
|
| 57 |
+
eliminate offensive or inappropriate components in both images and text.
|
| 58 |
+
|
| 59 |
+
However, these safety filters face challenges in identifying "Confounder Memes" (Kiela et al., 2020; Mei et al., 2023), which deliberately combine visual and textual elements to convey biased and discriminatory messages. These memes may lack overtly offensive content in their individual components but deliver harmful messages through their combined presentation, making them difficult to detect using conventional safety mechanisms.
|
| 60 |
+
|
| 61 |
+
A straightforward solution involves supervised fine-tuning (SFT) of LMMs (Lin et al., 2024) to recognize hateful semantics in confounder memes. Nevertheless, this approach encounters several obstacles: (1) the scarcity of pairwise annotations in existing hateful meme datasets makes it difficult for models to accurately distinguish between hateful and non-hateful memes, especially when the differences are subtle; (2) deploying LMMs as safety
|
| 62 |
+
|
| 63 |
+

|
| 64 |
+
Phase 1: Pre-training with Contrastive Learning
|
| 65 |
+
|
| 66 |
+

|
| 67 |
+
Phase 2: Hateful Meme Segmentation and Classification
|
| 68 |
+
Figure 2: An overview of our HATESIEVE framework. In Phase 1, we use the Contrastive Meme Generator to create a context-correlated triplet meme dataset, pretraining the model via contrastive learning. In Phase 2, we add a classification head and fine-tune the model on the downstream task, enabling it to classify memes while producing segmentation maps of hateful content.
|
| 69 |
+
|
| 70 |
+
filters alongside their regular online service usage<sup>1</sup> is computationally intensive and non-trivial (Lin et al., 2024). Alternatively, a lightweight classifier (Kumar and Nandakumar, 2022a; Mei et al., 2023) could be trained from scratch using a specialized hateful meme dataset, but this method suffers from limited interpretability and cannot provide detailed segmentation to explain its classifications.
|
| 71 |
+
|
| 72 |
+
To address these challenges, we introduce HATESIEVE, a novel framework for detecting hateful memes, as detailed in Figure 2. HATESIEVE mitigates the scarcity of detailed annotations by incorporating a Contrastive Meme Generator (CMGen), which constructs contextually correlated triplet datasets from existing memes. CMGen generates semantically similar but contrasting hateful and non-hateful memes within the same contextual scenarios, enabling the model to implicitly learn the subtle differences between hateful and non-hateful content. To facilitate detailed meme segmentation, HATESIEVE incorporates an Image-Text Alignment (ITA) module coupled with a frozen CLIP model. By pre-training on CMGen-generated triplets using contrastive learning in Phase 1, the ITA module develops context-aware attention maps that effectively segment both visual and textual hateful elements within memes. In Phase 2, the ITA module incorporates a fine-tuned classification head, leveraging its learned representations for hateful content classification. Empirical experiments conducted on various datasets validate that HATESIEVE not only outperforms existing LMMs with fewer parameters but also excels in interpreting and segmenting the visual and textual components of multimodal memes to effectively identify discriminatory content. Our contributions are summarized as follows:
|
| 75 |
+
|
| 76 |
+
- We introduce CMGen, which generates context-correlated triplet pairs, filling the gap where specific pairwise annotations are absent in existing hateful meme datasets.
|
| 77 |
+
- We present the ITA module that efficiently produces context-aware attention maps for both images and texts. These maps significantly enhance the model's ability to segment and identify discriminatory elements within memes.
|
| 78 |
+
|
| 79 |
+
# 2 Related Work
|
| 80 |
+
|
| 81 |
+
Safety Filter: Existing safety filters for Large Language Models (LLMs) and LMMs typically comprise Alignment (Ghafouri et al., 2023; Touvron et al., 2023; Rafailov et al., 2024; Wu et al., 2024), Inference Guidance (Bai et al., 2022; Chiang et al., 2023; Zhang et al., 2023), and Input&Output Filter components (Alon and Kamfonas, 2023; Hu et al., 2023). Alignment involves fine-tuning LLMs to meet safety objectives using methods such as reinforcement learning from human feedback (RLHF) that optimize models based on safety data and human preferences. Inference guidance steers models towards generating safer responses through system prompts and token selection adjustments during generation. Input&Output filters detect and manage harmful content. However, these methods are primarily designed for unimodal content and struggle to adapt to multimodal content, such as confounder memes.
|
| 82 |
+
|
| 83 |
+
Alignment necessitates retraining LLMs on massive annotated preference datasets, which is inefficient for online services. Inference guidance depends on LMMs correctly identifying hateful content in memes, which is not always reliable. Additionally, current Input&Output filters generally cater to single modalities, such as the IMSyPP text classification model (Kralj Novak et al., 2022) for text and NSFW filters (Rando et al., 2022) for images in diffusion models. Our HATESIEVE framework addresses these limitations by functioning as an Input&Output filter specifically designed for memes, identifying and segmenting both the visual and the textual elements within them.
|
| 84 |
+
|
| 85 |
+

|
| 86 |
+
Figure 3: Structure of CMGen: From any meme—including image $I_{i}$ , text $T_{i}$ , and caption mask $M_{i}$ —CMGen generates corresponding hateful and non-hateful counterparts.
|
| 87 |
+
|
| 88 |
+
Hateful Meme Detection: Current methods for detecting hateful memes generally fall into two categories. The first category, reasoning-based, uses LMMs like LLaVA (Liu et al., 2024) and InstructBLIP (Dai et al., 2024) that generate visual prompts (Li et al., 2023b) based on images. These prompts are concatenated with text data for comprehensive analysis, allowing the LMMs to offer detailed classifications and explanations (Lin et al., 2024). This enables users to assess biases and gain deeper insights into hateful content. However, this approach relies heavily on carefully tailored prompts specifically designed for hate speech detection, making it difficult to create a universal prompt that fits all hateful contexts (Lin et al., 2024). Even minor changes can cause LMMs to misinterpret or overlook hateful memes (Rizwan et al., 2024). While SFT can make LMMs less dependent on prompt design, it is time-consuming and computationally intensive, posing challenges for deployment as safety filters in online services.
|
| 89 |
+
|
| 90 |
+
Another category of methods uses representation learning and includes lightweight methods such as MOMENTA (Pramanick et al., 2021), PromptHate (Cao et al., 2022), and HateClipper (Kumar and Nandakumar, 2022b). MOMENTA constructs intra-modality attention by integrating external facial recognition data and background knowledge with the CLIP model. PromptHate converts images into text and then classifies them using a language model. HateClipper creates an image-text interaction matrix to fuse multimodal information. These methods enable straightforward classification with fewer parameters, but they offer limited interpretability of their
|
| 91 |
+
|
| 92 |
+
classifications. In contrast, our HATESIEVE framework generates context-aware attention maps that enable effective meme segmentation and provide visual interpretation, while delivering classification performance comparable to existing methods.
|
| 93 |
+
|
| 94 |
+
# 3 Methodology
|
| 95 |
+
|
| 96 |
+
The HATESIEVE workflow involves: 1) Generating a triplet dataset with the CMGen. 2) Pre-training the ITA module using the triplet dataset. 3) Extracting attention maps and performing segmentation with the pre-trained ITA. 4) Fine-tuning classification head for hateful content classification.
|
| 97 |
+
|
| 98 |
+
# 3.1 Contrastive Meme Generator
|
| 99 |
+
|
| 100 |
+
As shown in Figure 3, our CMGen is designed to produce both non-hateful and hateful versions of any given meme $\{(I_i,M_i,T_i)\}_{i = 1}^N$ , where $I_{i}\in \mathbb{R}^{H\times W\times C}$ contains the image pixels of the meme, $M_{i}\in \mathbb{R}^{H\times W}$ is the caption mask, and $T_{i}$ is the caption overlaid on the meme. These non-hateful and hateful versions are then used for subsequent contrastive learning. The first step in our CMGen is modality separation. By isolating the caption from the meme, we remove text borders and artifacts that may interfere with the image information, ensuring clean image content. Specifically, we apply the LaMA inpainting pipeline (Suvorov et al., 2021) to extract the pure image content $I_{i}^{\prime} = f_{\mathrm{LaMA}}(I_{i},M_{i})$ from the meme.
|
| 101 |
+
|
| 102 |
+
To generate the non-hateful version of the meme $(I_i^+, T_i^+)$ , we utilize InstructBLIP (Dai et al., 2024) to create a positive caption $T_i^+ = f_{\text{InstructBLIP}}(I_i')$ for the image content. Our prompt is written as follows: "Please generate a positive and descriptive
|
| 103 |
+
|
| 104 |
+

|
| 105 |
+
Figure 4: An overview of the triplet dataset generation process and our Image-Text Alignment (ITA) module.
|
| 106 |
+
|
| 107 |
+

|
| 108 |
+
|
| 109 |
+
caption for the provided image $\{I_i^{\prime}\}$ ." Then, we utilize SDXL with SDEdit (Meng et al., 2021) to produce a high-resolution non-hateful image $I_{i}^{+} = f_{\mathrm{SDXL}}(T_{i}^{+})$ .
|
| 110 |
+
|
| 111 |
+
Constructing a hateful version of a meme $(I_i^-, T_i^-)$ presents significant challenges due to the absence of direct annotations regarding ethnic groups, religious affiliations, social groups, or cultural identities in the original meme $(I_i, T_i)$ . This lack of explicit metadata complicates the generation of semantically similar hateful memes. To address this issue, we selected the largest available multimodal hate speech dataset, MMHS150k (Gomez et al., 2020), focusing specifically on its "hateful" category to serve as our reference dataset $\mathcal{R} = (I_r^-, T_r^-)_{r=1}^{N_{\mathcal{R}}}$ , where $N_{\mathcal{R}}$ denotes the number of memes in the reference dataset.
|
| 112 |
+
|
| 113 |
+
For each purified image $I_{i}^{\prime}$ of $I_{i}$ , we aim to find the most similar hateful image $^{2}$ from the reference dataset $\mathcal{R}$ . We utilize the CLIP image encoder (Radford et al., 2021) $f_{\mathrm{CLIP}}$ to compute the embeddings of both the purified image and the images in the reference dataset. Using FAISS (Douze et al., 2024) for efficient similarity search, we find the index $r^{*}$ of the most similar image based on Euclidean distance:
|
| 114 |
+
|
| 115 |
+
$$
|
| 116 |
+
r^{*} = \arg\min_{r \in \{1, \dots, N_{\mathcal{R}}\}} \left\| f_{\mathrm{CLIP}}(I_{i}^{\prime}) - f_{\mathrm{CLIP}}(I_{r}^{-}) \right\|_{2}
|
| 117 |
+
$$
|
| 118 |
+
|
| 119 |
+
The closest hateful pair $(I_{r^*}^-, T_{r^*}^-)$ from the reference dataset is then used as the hateful version of our original meme.
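The retrieval step above reduces to a nearest-neighbor search over embedding vectors. A minimal brute-force sketch (pure Python; the paper uses a FAISS index over CLIP embeddings at scale, and the toy vectors below are hypothetical stand-ins for $f_{\mathrm{CLIP}}$ outputs):

```python
# Minimal sketch of the nearest-hateful-image lookup (Sec. 3.1).
# Brute-force L2 search over toy embedding vectors; at dataset scale
# this would be a FAISS IndexFlatL2 query instead.
import math

def l2(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_reference(query_emb, reference_embs):
    """Return the index r* of the reference embedding closest to the query."""
    return min(range(len(reference_embs)),
               key=lambda r: l2(query_emb, reference_embs[r]))

# Toy example: three reference memes, one purified query image.
refs = [[0.0, 1.0], [0.9, 0.1], [0.5, 0.5]]
query = [1.0, 0.0]
r_star = nearest_reference(query, refs)
print(r_star)  # index of the closest hateful reference
```

The `argmin` over Euclidean distances matches the equation above; FAISS merely accelerates the same search with an index structure.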
|
| 120 |
+
|
| 121 |
+
# 3.2 Triplet Dataset Generation
|
| 122 |
+
|
| 123 |
+
Our study constructs triplets of meme pairs for contrastive learning, each composed of an original meme $(I_i, T_i)$ and its two variations:
|
| 124 |
+
|
| 125 |
+
$$
|
| 126 |
+
\left\{ \left(I_{i}, T_{i}\right), \left(I_{i}^{\text{Non-Hate}}, T_{i}^{\text{Non-Hate}}\right), \left(I_{i}^{\text{Hate}}, T_{i}^{\text{Hate}}\right) \right\}
|
| 127 |
+
$$
|
| 128 |
+
|
| 129 |
+
To distinguish between hateful and non-hateful content while maintaining semantic coherence, each meme component—the image $I_{i}$ and the text $T_{i}$ —undergoes a pre-filtering process to identify potentially offensive or controversial material. Specifically, each meme is filtered as follows:
|
| 130 |
+
|
| 131 |
+
- Text Filtering: Using the IMSyPP Filter (Kralj Novak et al., 2022), we evaluate the text $T_{i}$ for offensive or controversial content, assigning a label $y_{i}^{T}$ , where $y_{i}^{T} = 1$ indicates offensive content and $y_{i}^{T} = 0$ indicates non-offensive content.
|
| 132 |
+
- Image Filtering: Employing the NSFW filter from Stable Diffusion, we assess the image $I_{i}$ for inappropriate content such as nudity or violence, resulting in a label $y_{i}^{I}$ , where $y_{i}^{I} = 1$ denotes NSFW content and $y_{i}^{I} = 0$ denotes safe content.
|
| 133 |
+
|
| 134 |
+
As illustrated in Figure 4, we construct the triplet dataset based on these filtering results:
|
| 135 |
+
|
| 136 |
+
Non-Hateful Pairs $(I_i^{\mathrm{Non - Hate}},T_i^{\mathrm{Non - Hate}})$ : We sample from the following combinations to ensure both image and text are non-offensive:
|
| 137 |
+
|
| 138 |
+
- $(I_i^+, T_i^+)$ : The non-hateful meme generated by CMGen without any offensive contents.
|
| 139 |
+
- $(I_i, T_i^+)$ : The original image $(y_i^I = 0)$ is paired with safe text generated by CMGen.
|
| 140 |
+
- $(I_i^+, T_i)$ : A safe image generated by CMGen is paired with the original text ( $y_i^T = 0$ ).
|
| 141 |
+
|
| 142 |
+
Hateful Pairs $(I_i^{\mathsf{Hate}}, T_i^{\mathsf{Hate}})$ : We sample from the following combinations, each of which contains at least one offensive element, to form hateful memes:
|
| 143 |
+
|
| 144 |
+
- $(I_i^-, T_i^-)$ : The hateful meme generated by CMGen that contains offensive content.
|
| 145 |
+
|
| 146 |
+
- $(I_i, T_i^-)$ : The original image $(y_i^I = 1)$ is paired with offensive text from CMGen.
|
| 147 |
+
- $(I_i^-, T_i)$ : An offensive image generated by CMGen is combined with the original text that contains offensive content ( $y_i^T = 1$ ).
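The pairing rules above can be sketched as a small selection function over the two filter labels (names such as `I_plus` are hypothetical placeholders for the CMGen-generated and original components):

```python
# Sketch of the triplet pairing rules in Sec. 3.2 (identifiers hypothetical).
# y_I = 1 marks an NSFW original image, y_T = 1 marks offensive original text.
def candidate_pairs(y_I, y_T):
    non_hate = [("I_plus", "T_plus")]          # fully generated safe pair
    if y_I == 0:
        non_hate.append(("I_orig", "T_plus"))  # safe original image + generated text
    if y_T == 0:
        non_hate.append(("I_plus", "T_orig"))  # generated image + safe original text
    hate = [("I_minus", "T_minus")]            # retrieved hateful pair
    if y_I == 1:
        hate.append(("I_orig", "T_minus"))     # NSFW original image + offensive text
    if y_T == 1:
        hate.append(("I_minus", "T_orig"))     # offensive image + offensive original text
    return non_hate, hate

nh, h = candidate_pairs(y_I=0, y_T=1)
print(nh)  # [('I_plus', 'T_plus'), ('I_orig', 'T_plus')]
print(h)   # [('I_minus', 'T_minus'), ('I_minus', 'T_orig')]
```

Each triplet then samples one non-hateful and one hateful pair from these candidate lists alongside the original meme.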
|
| 148 |
+
|
| 149 |
+
# 3.3 Image-Text Alignment Module
|
| 150 |
+
|
| 151 |
+
For each meme $(I_i, T_i)$ , our ITA module is designed to derive a token/patch-level, context-aware representation that integrates both the image and the text components, as illustrated in Figure 4. The process unfolds as follows:
|
| 152 |
+
|
| 153 |
+
First, we leverage a pre-trained CLIP encoder to extract initial embeddings for each modality. Specifically, we derive pooled embeddings for text, $\mathbf{T}_i^{\mathrm{Pool}}\in \mathbb{R}^d$ , and for images, $\mathbf{I}_i^{\mathrm{Pool}}\in \mathbb{R}^d$ , using $f_{\mathrm{CLIP}}(I_i,T_i)$ . Additionally, we further extract $\mathbf{T}_i$ and $\mathbf{I}_i$ , where $\mathbf{T}_i\in \mathbb{R}^{l\times d}$ and $\mathbf{I}_i\in \mathbb{R}^{o\times d_i}$ , using CLIP's text and image encoders, respectively. Here, $l$ represents the text sequence length, $o$ the image patch size, $d_{i}$ the dimension of the image embedding, and $d$ the dimension of the text embedding.
|
| 154 |
+
|
| 155 |
+
Then the combined image-text embedding is constructed as $\mathbf{X}_i = [\mathbf{W}_I\mathbf{I}_i,\mathbf{T}_i]$ , where $\mathbf{X}_i\in \mathbb{R}^{(o + l)\times d}$ and $\mathbf{W}_I$ is a projection layer designed to map $\mathbf{I}_i$ into the same dimensional space as $\mathbf{T}_i$ . To achieve an aligned token-level representation between image and text, we introduce a text-image intra self-attention mechanism, defined as:
|
| 156 |
+
|
| 157 |
+
$$
|
| 158 |
+
\operatorname{Attn}_{i}^{\ell} = \operatorname{Softmax}\left(\frac{\mathbf{X}_{i}^{\ell} \mathbf{W}_{\mathrm{Q}}^{\ell} \left(\mathbf{X}_{i}^{\ell} \mathbf{W}_{\mathrm{K}}^{\ell}\right)^{\top}}{\sqrt{d_{k}}}\right) \mathbf{X}_{i}^{\ell} \mathbf{W}_{\mathrm{V}}^{\ell} \tag{1}
|
| 159 |
+
$$
|
| 160 |
+
|
| 161 |
+
where $d_{k}$ is the key dimension, $\ell$ denotes the layer number, and $\mathbf{W}_{\mathrm{Q}}^{\ell},\mathbf{W}_{\mathrm{K}}^{\ell},\mathbf{W}_{\mathrm{V}}^{\ell}$ are the weight matrices for the query, key, and value components in the self-attention layers. The image-text representation is obtained through:
|
| 162 |
+
|
| 163 |
+
$$
|
| 164 |
+
\mathbf{X}_{i}^{\ell} = f_{\text{Align}}^{\ell}\left(\operatorname{Attn}_{i}^{\ell} \mathbf{X}_{i}^{\ell - 1}\right) \tag{2}
|
| 165 |
+
$$
|
| 166 |
+
|
| 167 |
+
where $f_{\mathrm{Align}}^{\ell}$ represents the $\ell$ -th self-attention block within an $L$ -layer Image-Text Alignment module.
|
| 168 |
+
|
| 169 |
+
After processing through $L$ layers, the output image-text representation $\mathbf{X}_i^L$ is split and subsequently pooled using the original pooling layer from the CLIP model to form $\mathbf{I}_i^{\mathrm{Align}}$ and $\mathbf{T}_i^{\mathrm{Align}}$ . The final image-text representation is then constructed as follows:
|
| 170 |
+
|
| 171 |
+
$$
|
| 172 |
+
\mathbf{H}_{i} = f_{\text{Decoder}}\left(\left[ \mathbf{I}_{i}^{\text{Align}}, \mathbf{T}_{i}^{\text{Align}} \right] \oplus \left[ \mathbf{I}_{i}^{\text{Pool}}, \mathbf{T}_{i}^{\text{Pool}} \right]\right) \tag{3}
|
| 173 |
+
$$
|
| 174 |
+
|
| 175 |
+
where $\oplus$ denotes the operation for residual connection and $f_{\mathrm{Decoder}}$ denotes the decoder, which incorporates a Multilayer Perceptron (MLP) module for dimensionality reduction.
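The intra self-attention of Eq. (1) treats image patches and text tokens as one joint sequence. A pure-Python sketch with the learned projections $\mathbf{W}_{\mathrm{Q}}, \mathbf{W}_{\mathrm{K}}, \mathbf{W}_{\mathrm{V}}$ replaced by the identity for brevity (a real ITA layer learns them per layer $\ell$):

```python
# Toy single-layer self-attention over a concatenated image+text sequence,
# mirroring Eq. (1) with identity W_Q/W_K/W_V (an assumption for brevity).
import math

def softmax(row):
    m = max(row)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """X: list of token vectors (image patches followed by text tokens)."""
    d_k = len(X[0])
    scores = [[sum(q * k for q, k in zip(X[i], X[j])) / math.sqrt(d_k)
               for j in range(len(X))] for i in range(len(X))]
    weights = [softmax(r) for r in scores]   # each row sums to 1
    return [[sum(weights[i][j] * X[j][c] for j in range(len(X)))
             for c in range(d_k)] for i in range(len(X))]

# Two "image patches" and one "text token" in a 2-d toy space.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = self_attention(X)
print(len(out), len(out[0]))  # 3 2
```

Because image and text occupy one sequence, each output row mixes information across modalities, which is what makes the resulting attention maps usable for cross-modal segmentation later.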
|
| 176 |
+
|
| 177 |
+
# 3.4 Training Objective
|
| 178 |
+
|
| 179 |
+
Our ITA training regimen is organized into two distinct phases: 1) Pre-training through contrastive learning, which equips the ITA module with the ability to effectively segment image and text components within hateful memes, and 2) Fine-tuning for classification tasks, enhancing its ability for specific applications.
|
| 180 |
+
|
| 181 |
+
Given the generated triplet dataset $\mathcal{D} = \{(I_i, T_i), (I_i^{\mathrm{Non-Hate}}, T_i^{\mathrm{Non-Hate}}), (I_i^{\mathrm{Hate}}, T_i^{\mathrm{Hate}})\}_{i=1}^P$ , where $P$ denotes the total number of triplets, we extract the image-text representations for each element in the set as $\{\mathbf{H}_i, \mathbf{H}_i^{\mathrm{Non-Hate}}, \mathbf{H}_i^{\mathrm{Hate}}\}$ . For each triplet, where $y_i = 1$ indicates a hateful meme, we identify $\mathbf{H}_i^{\mathrm{Hate}}$ as the positive pair $\mathbf{H}_i^+$ and $\mathbf{H}_i^{\mathrm{Non-Hate}}$ as the negative pair $\mathbf{H}_i^-$ . The reverse holds for non-hateful memes with $y_i = 0$ . The contrastive learning objective is formulated as follows:

$$
\mathcal{L}_{\mathrm{tri}} = \sum_{i=1}^{P} \max\left(0, d\left(\mathbf{H}_{i}, \mathbf{H}_{i}^{+}\right) - d\left(\mathbf{H}_{i}, \mathbf{H}_{i}^{-}\right) + \epsilon\right)
$$

where $d$ represents the Euclidean distance and $\epsilon$ is a predefined margin that ensures a minimum discernible difference between the distances of similar and dissimilar pairs.
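The pair assignment and margin objective above can be sketched as follows. This is a minimal NumPy illustration under assumed shapes; `triplet_loss` and the toy embeddings are hypothetical, not the paper's code.

```python
import numpy as np

def triplet_loss(H, H_hate, H_nonhate, y, margin=0.2):
    """Contrastive objective over meme embeddings.

    For a hateful meme (y=1) the generated hateful counterpart is the
    positive and the non-hateful one is the negative; for y=0 the roles
    are swapped. Distances are Euclidean, matching d(., .) in the text.
    """
    d = lambda a, b: np.linalg.norm(a - b, axis=-1)
    pos = np.where(y[:, None] == 1, H_hate, H_nonhate)
    neg = np.where(y[:, None] == 1, H_nonhate, H_hate)
    return float(np.sum(np.maximum(0.0, d(H, pos) - d(H, neg) + margin)))

# Toy example: one hateful meme whose embedding coincides with its
# hateful counterpart and is far from the non-hateful one -> zero loss.
H = np.array([[0.0, 0.0]])
loss = triplet_loss(H, H_hate=np.array([[0.0, 0.0]]),
                    H_nonhate=np.array([[10.0, 0.0]]), y=np.array([1]))
```

Flipping the label to `y=0` swaps positive and negative, so the same geometry then incurs a loss of the full distance plus the margin.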
To adapt the ITA module to the hateful meme classification task, we introduce an additional classification layer $f_{\theta}$, parameterized by $\theta$, and fine-tune it using the following loss function:

$$
\mathcal{L}_{\mathrm{cls}} = -\sum_{i=1}^{N} \log \mathbb{P}(y_{i} \mid \mathbf{H}_{i}; \theta)
$$

where $N$ is the number of examples in the original HatefulMemes dataset.
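This negative log-likelihood can be sketched with a linear softmax head. A minimal sketch under assumptions: the two-class linear head `(W, b)` standing in for $f_{\theta}$ is hypothetical, and numerical stabilization is added for illustration.

```python
import numpy as np

def cls_loss(H, W, b, y):
    """Negative log-likelihood of the gold labels under a linear
    softmax head; (W, b) play the role of the parameters theta."""
    logits = H @ W + b                                   # shape (N, 2)
    z = logits - logits.max(axis=1, keepdims=True)       # stabilized
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(y)), y].sum())
```

With an untrained all-zero head the predicted distribution is uniform, so the loss for binary labels equals $N \log 2$.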
# 3.5 Hate Component Segmentation
Our hate component segmentation is structured as follows: after the ITA module is pre-trained via contrastive learning, it can process any given meme $(I_i, T_i)$ to extract a set of self-attention maps $\{\mathrm{Attn}_i^{\ell}\}_{\ell=1}^{L}$ from all layers. We begin by averaging these self-attention maps across layers to obtain $\mathrm{Attn}_i^{\prime}$. We then isolate the image attention map $\mathrm{Attn}_{l_j,l_t}^{\prime}$ and the text attention map<sup>3</sup> $\mathrm{Attn}_{l_t,l_j}^{\prime}$, where $1 < l_{j} < L_{I} + 1$ and $L_{I} + 1 < l_{t} < L_{T}$. Here, $l_{j}$ denotes the $j$-th image patch among a total of $L_{I}$ patches, and $l_{t}$ indicates the $t$-th text token within a maximum of $L_{T}$ text tokens.
Subsequently, we compute the text-aware image attention for each patch:

$$
\mathrm{Attn}_{l_{j}}^{\prime} = \frac{\sum_{l_{t}=0}^{L_{T}} \mathrm{Attn}_{l_{j}, l_{t}}^{\prime}}{L_{T}}
$$

and the image-aware text attention for each text token:

$$
\mathrm{Attn}_{l_{t}}^{\prime} = \frac{\sum_{l_{j}=0}^{L_{I}} \mathrm{Attn}_{l_{t}, l_{j}}^{\prime}}{L_{I}}
$$

To construct an image segmentation map, we employ bilinear interpolation to upscale the $L_{I} \times L_{I}$ patch-level attention maps to $H \times W$ pixel-level resolution, facilitating detailed visual analysis of the meme components. As for the text segmentation, we select the Top- $k$ tokens based on the attention scores per token, which allows for precise identification and analysis of the most contextually significant textual elements within the meme. Details of the segmentation process are in Appendix A.1.
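The layer averaging, the two per-modality attention reductions, the bilinear upscaling, and the Top-$k$ token selection can be sketched end to end. This is an illustrative NumPy sketch under assumptions: toy sizes, 0-based indexing, and no special tokens; the paper's actual patch/token layout may differ.

```python
import numpy as np

def bilinear_upsample(m, H, W):
    """Upscale a 2-D patch-level map to (H, W) via bilinear interpolation."""
    h, w = m.shape
    ys, xs = np.linspace(0, h - 1, H), np.linspace(0, w - 1, W)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = (1 - wx) * m[y0][:, x0] + wx * m[y0][:, x1]
    bot = (1 - wx) * m[y1][:, x0] + wx * m[y1][:, x1]
    return (1 - wy) * top + wy * bot

# Assumed toy sizes: 4 layers, a 4x4 grid of 16 image patches, 8 text tokens.
L, n_patch, n_tok = 4, 16, 8
rng = np.random.default_rng(0)
attn = rng.random((L, n_patch + n_tok, n_patch + n_tok))

attn_avg = attn.mean(axis=0)                            # average over layers
img_attn = attn_avg[:n_patch, n_patch:].mean(axis=1)    # text-aware, per patch
txt_attn = attn_avg[n_patch:, :n_patch].mean(axis=1)    # image-aware, per token

seg_map = bilinear_upsample(img_attn.reshape(4, 4), 64, 64)  # pixel-level map
top_k = np.argsort(txt_attn)[::-1][:3]                  # Top-k salient tokens
```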
# 4 Experiments
# 4.1 Setup
Dataset To generate our triplet dataset, we utilized the HatefulMemes (Kiela et al., 2020) and MMHS150k (Gomez et al., 2020) datasets. For contrastive learning training, we incorporated 8,500 entries from the HatefulMemes training set and 33,844 hateful memes sampled from MMHS150k using our contrastive meme generator. For classification fine-tuning, we trained and evaluated our framework's performance on the HatefulMemes test-unseen split, as well as on the Harm-C and Harm-P datasets (Pramanick et al., 2021), employing a binary classification setting. Additionally, we assessed the effectiveness of our segmentation approach on the HatefulMemes dataset. Details of the dataset are in Appendix A.2.
Baselines We compare our HATESIEVE framework against the following baseline models for the classification task:
- LMMs: We evaluate GPT-4V (Achiam et al., 2023), CogVLM (Wang et al., 2023), LLaVA-1.5 (Liu et al., 2023), InstructBLIP (Dai et al., 2024), MiniGPT-4 (Zhu et al., 2023), Qwen-VL (Bai et al., 2023), OpenFlamingo (Awadalla et al., 2023), MMGPT (Gong et al., 2023), and MiniGPT-v2 (Chen et al., 2023) for zero-shot and few-shot (3-shot) inference. Additionally, LLaVA-1.5, InstructBLIP, and BLIP2 leverage supervised fine-tuning with QLoRA (Dettmers et al., 2024).
- CLIP-Based Methods: We include the original CLIP model as well as its extensions HateCLIPper and MOMENTA, which build upon CLIP's contrastive embeddings to enhance hateful content detection.
For segmentation tasks, we utilize InstructBLIP, BLIP2, and CLIP+ITA (a version of HATESIEVE without pre-training). All baseline models are fine-tuned on the HatefulMemes dataset. Detailed segmentation procedures are provided in Appendix A.3.
Metrics For HATESIEVE's classification evaluation, we report Accuracy and F1-score, averaged over five independent runs. Evaluating the segmentation capabilities of HATESIEVE is challenging due to the absence of pixel-level and token-level annotations. To address this, we sampled 100 memes from the Hateful Memes dataset and conducted evaluations using both human annotators and LMMs (Zheng et al., 2024) based on the following criteria:
- Correctness: Determines whether the segmentation accurately captures the target social group or elements that reflect the hateful content, based on common-sense understanding.
- Relevance: Assesses whether the highlighted image segments are meaningfully related to the highlighted text components, ensuring coherence between visual and textual elements.
Each criterion was scored using a binary system: 0 (No) or 1 (Yes). Implementation details for HATESIEVE and the LMM baselines are provided in Appendices A.4 and A.5, respectively. Comprehensive information on the segmentation evaluation process can be found in Appendix A.6.
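Under this protocol, each criterion's score for a model reduces to the fraction of "Yes" judgments. A trivial sketch with hypothetical annotations (the five judgments below are illustrative, not real data):

```python
import numpy as np

# Hypothetical binary judgments for 5 memes (0 = No, 1 = Yes); real
# annotations would come from the human/LMM evaluation.
correctness = np.array([1, 1, 0, 1, 1])
relevance = np.array([1, 0, 0, 1, 1])

# Each criterion's score is the mean of the binary judgments.
C, R = correctness.mean(), relevance.mean()
```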
<table><tr><td rowspan="2">Model</td><td colspan="2">HatefulMemes</td><td colspan="2">Harm-C</td><td colspan="2">Harm-P</td><td rowspan="2"># t.p.↓</td></tr><tr><td>Acc.↑</td><td>F1↑</td><td>Acc.↑</td><td>F1↑</td><td>Acc.↑</td><td>F1↑</td></tr><tr><td colspan="8">Zero-shot Inference</td></tr><tr><td>GPT-4V (-)</td><td>71.70</td><td>71.28</td><td>81.17</td><td>80.54</td><td>87.42</td><td>88.63</td><td>Ø</td></tr><tr><td>CogVLM (17B)</td><td>61.50</td><td>60.03</td><td>57.62</td><td>51.38</td><td>49.94</td><td>44.22</td><td>Ø</td></tr><tr><td>LLaVA-1.5 (13B)</td><td>65.20</td><td>61.40</td><td>59.15</td><td>54.38</td><td>56.62</td><td>48.77</td><td>Ø</td></tr><tr><td>InstructBLIP (13B)</td><td>58.25</td><td>57.42</td><td>60.17</td><td>36.27</td><td>48.19</td><td>35.48</td><td>Ø</td></tr><tr><td>MiniGPT-4 (13B)</td><td>58.20</td><td>39.98</td><td>53.17</td><td>48.87</td><td>55.55</td><td>49.86</td><td>Ø</td></tr><tr><td>Qwen-VL (10B)</td><td>64.00</td><td>56.42</td><td>56.18</td><td>53.94</td><td>58.35</td><td>52.46</td><td>Ø</td></tr><tr><td>OpenFlamingo (9B)</td><td>58.65</td><td>51.78</td><td>47.54</td><td>43.31</td><td>43.69</td><td>36.79</td><td>Ø</td></tr><tr><td>MMGPT (9B)</td><td>37.50</td><td>27.28</td><td>37.16</td><td>35.42</td><td>33.54</td><td>31.97</td><td>Ø</td></tr><tr><td>MiniGPT-v2 (7B)</td><td>57.35</td><td>57.27</td><td>46.28</td><td>42.52</td><td>41.37</td><td>38.35</td><td>Ø</td></tr><tr><td>BLIP2 (6.7B)</td><td>56.34</td><td>55.29</td><td>44.37</td><td>40.15</td><td>39.14</td><td>36.59</td><td>Ø</td></tr><tr><td colspan="8">Few-shot Learning</td></tr><tr><td>GPT-4V (-)</td><td>72.26</td><td>71.28</td><td>81.16</td><td>80.81</td><td>87.55</td><td>86.07</td><td>Ø</td></tr><tr><td>LLaVA-1.5 (13B)</td><td>65.11</td><td>61.68</td><td>59.57</td><td>54.41</td><td>56.72</td><td>49.02</td><td>Ø</td></tr><tr><td>InstructBLIP (13B)</td><td>59.12</td><td>59.00</td><td>62.11</td><td>37.17</td><td>50.75</td><td>35.55</td><td>Ø</td></tr><tr><td>BLIP2 (6.7B)</td><td>57.89</td><td>56.93</td><td>45.68</td><td>41.65</td><td>40.58</td><td>37.79</td><td>Ø</td></tr><tr><td colspan="8">Supervised Fine-Tuning</td></tr><tr><td>InstructBLIP (13B)</td><td>63.55</td><td>59.34</td><td>65.54</td><td>42.52</td><td>51.98</td><td>36.68</td><td>65.72M</td></tr><tr><td>LLaVA-1.5 (13B)</td><td>66.34</td><td>63.28</td><td>61.61</td><td>56.88</td><td>59.57</td><td>58.62</td><td>65.72M</td></tr><tr><td>BLIP2 (6.7B)</td><td>62.85</td><td>56.43</td><td>54.28</td><td>55.68</td><td>45.91</td><td>41.37</td><td>33.35M</td></tr><tr><td>CLIPBase (152M)</td><td>69.00</td><td>62.63</td><td>71.88</td><td>68.36</td><td>65.42</td><td>61.08</td><td>0.65M</td></tr><tr><td>CLIPLarge (427M)</td><td>72.25</td><td>68.48</td><td>74.23</td><td>73.85</td><td>80.55</td><td>80.25</td><td>1.38M</td></tr><tr><td>HateCLIPperBase (286M)</td><td>71.30</td><td>68.35</td><td>75.31</td><td>74.19</td><td>81.41</td><td>79.68</td><td>135.42M</td></tr><tr><td>HateCLIPperLarge (1.5B)</td><td>74.46</td><td>70.15</td><td>79.56</td><td>77.10</td><td>86.89</td><td>83.17</td><td>1.12B</td></tr><tr><td>MOMENTA (434M)</td><td>73.34</td><td>70.02</td><td>83.82</td><td>82.80</td><td>89.84</td><td>88.26</td><td>7.73M</td></tr><tr><td>HATESIEVE (155M)</td><td>73.45</td><td>71.64</td><td>83.62</td><td>83.07</td><td>88.78</td><td>88.53</td><td>3.61M</td></tr></table>
Table 1: Model Performance Comparison. Bold scores indicate the best performance, while underlined scores denote the second-best performance. "Acc." and "F1" represent classification accuracy and macro-F1 score, respectively. "# t.p." denotes the number of trainable parameters.
# 4.2 Classification Results
Table 1 compares the classification performance of various LMMs and CLIP-based methods under zero-shot, few-shot, and supervised fine-tuning (SFT) settings. In the zero-shot scenario, GPT-4V clearly stands out among LMMs, achieving the highest accuracy $(71.70\%)$ and F1 score $(71.28\%)$ on the HatefulMemes dataset. By contrast, other open-source LMMs (e.g., CogVLM, LLaVA-1.5, and InstructBLIP) show limited capability, with lower accuracies $(37.50\%-65.20\%)$ and F1 scores $(27.28\%-71.28\%)$, revealing that pre-training alone is insufficient for capturing the nuanced semantics needed to detect hateful memes.
Under SFT, CLIP-based approaches consistently outperform the LMMs. HateCLIPperLarge attains the highest accuracy (74.46%) on the HatefulMemes dataset and remains competitive across Harm-C and Harm-P. However, its substantial trainable parameter count (1.12B) raises efficiency concerns for safety filtering applications. In contrast, our proposed HATESIEVE requires only 3.61M trainable parameters, yet achieves the best F1 scores on HatefulMemes (71.64%) and Harm-C (83.07%), and second-best results on Harm-P. These findings underscore the effectiveness of combining contrastive learning pre-training with our ITA module, allowing HATESIEVE to balance strong performance and parameter efficiency while also surpassing GPT-4V in F1 on the HatefulMemes dataset.

(a) Comparison of Segmentation Performance: $C$ represents the correctness score, while $R$ indicates the relevance score.

Figure 5: Hateful content segmentation analysis.
# 4.3 Segmentation Results
Figure 5a demonstrates that HATESIEVE significantly outperforms InstructBLIP and BLIP2 in both correctness and relevance scores for segmentation, as evaluated by human annotators and LLM evaluators. In contrast, CLIP+ITA—which has not undergone pre-training—underperforms relative to the other models, underscoring the crucial role of contrastive learning pre-training in enhancing hateful content segmentation. Moreover, all models achieve slightly lower relevance scores compared to their correctness scores, suggesting that improvements are still needed to more accurately associate specific components within a hateful context. The inter-annotator agreement among human evaluators is discussed in Appendix A.9.

(a) Impact of ITA Parameter Scale and Triplet Data Scale on Model Performance.

<table><tr><td rowspan="2">CMGen Strategy</td><td colspan="2">HatefulMeme</td><td colspan="2">Harm-C</td><td colspan="2">Harm-P</td></tr><tr><td>Acc.</td><td>F1</td><td>Acc.</td><td>F1</td><td>Acc.</td><td>F1</td></tr><tr><td>HATESIEVE</td><td>73.45</td><td>71.64</td><td>83.62</td><td>83.07</td><td>88.78</td><td>88.53</td></tr><tr><td>-w/o Inpainting</td><td>72.61</td><td>70.15</td><td>82.51</td><td>80.13</td><td>85.23</td><td>84.29</td></tr><tr><td>-w/ Text</td><td>71.28</td><td>69.43</td><td>81.79</td><td>81.05</td><td>86.38</td><td>84.06</td></tr></table>

(b) Impact of CMGen using different strategies.

(c) Impact of using different pre-training strategies.

Figure 6: Ablation studies on model performance.
Figure 5b illustrates the segmentation results, supporting our observations from Figure 5a. Specifically, CLIP+ITA without contrastive learning pretraining generates overly dispersed attention maps. While LMMs effectively identify relevant textual keywords through semantic reasoning, their image segmentation performance suggests that their classification capabilities for hateful memes rely more on the associated Large Language Models rather than on visual information.
# 5 Ablation Study
ITA Parameter Scale We examined how the number of self-attention layers within the Image-Text Alignment (ITA) module affects the classification performance of HATESIEVE. As shown in Figure 6a, increasing the number of layers initially enhances classification accuracy. However, performance gains plateau and eventually decline when the layer count exceeds six, as evidenced by a noticeable decrease in the F1 score.
Triplet Data Scale We investigated the impact of varying the size of the triplet dataset used during the contrastive learning pre-training stage on the classification performance of HATESIEVE. As illustrated in Figure 6a, we evaluated HATESIEVE's performance when pre-trained with $0\%$ (no pretraining), $25\%$ , $50\%$ , $75\%$ , and $100\%$ of the triplet dataset. The results demonstrate that increasing the amount of pre-training data consistently improves HATESIEVE's classification capabilities.
CMGen Generation Strategy We assessed how text captions influence the quality of the triplet dataset in the CMGen data generation process by evaluating: (1) the role of text captions in matching context-correlated memes based solely on images (-w/o Inpainting), and (2) the impact of incorporating text embeddings when matching non-hateful pairs using FAISS (-w/ Text). As shown in Figure 6b, residual text captions impair classification performance, indicating interference with image information integration. Additionally, adding text embeddings to FAISS degraded triplet dataset quality, likely due to weak semantic correlations between meme text and images.
Pre-training Strategy We investigated how different pre-training strategies affect the classification performance of HATESIEVE by comparing three approaches: 1) In-domain Pre-training, utilizing only the HatefulMemes training set to directly sample negative image-text pairs without incorporating external reference datasets; 2) CLS-Only, replacing the contrastive learning pre-training task with a classification task using the triplet dataset; and 3) None, no pre-training. Our results in Figure 6c show that modifying the components of the triplet dataset or altering/removing the pre-training strategy negatively impacts model performance. Notably, adopting the CLS-Only strategy resulted in a decline in performance on the Harm-C and Harm-P datasets that was as significant as using no pre-training. This highlights that using classification as a pre-training task does not ensure generalizability across various domains.
Transferability of Fine-Tuning Across Datasets We evaluated the transferability of models fine-tuned on one dataset by testing them on different datasets. Table 2 summarizes the results, reporting both accuracy and F1 scores for three datasets: HatefulMemes (HM), Harm-C, and Harm-P.

<table><tr><td rowspan="2">Training Set</td><td colspan="2">HatefulMemes</td><td colspan="2">Harm-C</td><td colspan="2">Harm-P</td></tr><tr><td>Acc.</td><td>F1</td><td>Acc.</td><td>F1</td><td>Acc.</td><td>F1</td></tr><tr><td>HM</td><td>73.45</td><td>71.64</td><td>72.54</td><td>69.32</td><td>72.13</td><td>70.02</td></tr><tr><td>Harm-C</td><td>65.29</td><td>63.82</td><td>83.62</td><td>83.07</td><td>80.54</td><td>78.52</td></tr><tr><td>Harm-P</td><td>63.73</td><td>61.28</td><td>76.44</td><td>73.26</td><td>88.78</td><td>88.53</td></tr><tr><td>Combined (All)</td><td>73.58</td><td>72.15</td><td>84.27</td><td>83.48</td><td>88.52</td><td>87.48</td></tr></table>
Our findings indicate that models fine-tuned on HM perform best on the HM dataset, but their accuracy and F1 scores drop when evaluated on Harm-C and Harm-P. Conversely, models fine-tuned on Harm-C and Harm-P yield superior performance on their respective datasets, yet underperform on HM. This discrepancy can be attributed to concept drift: HM encompasses a broader range of hate speech categories (including racist, sexist, homophobic, and religious hate), whereas Harm-C and Harm-P predominantly feature memes related to COVID-19 and US politics.
Interestingly, fine-tuning on Harm-C and Harm-P results in less performance degradation when testing on each other's datasets, suggesting that similar content domains exhibit lower concept drift. Moreover, combining all datasets for fine-tuning generally improves performance, highlighting the benefit of diverse and culturally varied training data to enhance generalizability.
Finally, even when fine-tuned on different datasets, our model consistently outperforms most LMMs in both zero-shot and QLoRA settings, demonstrating the robustness and effectiveness of our approach across various evaluation scenarios.
# 6 Category-Specific Evaluation
We conducted a manual inspection of 300 randomly selected memes from the HatefulMemes dataset, assigning each meme to a hate speech category based on consensus among three annotators. Table 3 presents the classification accuracy (Cls. Acc.), segmentation accuracy (Seg. Acc.), and consistency rate (Consis. Rate) for each category.
Table 2: Transferability of models fine-tuned on one dataset and tested on others. The best performance for each dataset is highlighted in bold.

<table><tr><td>Category</td><td># Samples</td><td>Cls. Acc.</td><td>Seg. Acc.</td><td>Consis. Rate</td></tr><tr><td>Racist</td><td>147</td><td>69.39</td><td>77.55</td><td>75.51</td></tr><tr><td>Sexist</td><td>45</td><td>86.67</td><td>100.00</td><td>86.67</td></tr><tr><td>Homophobic</td><td>12</td><td>50.00</td><td>100.00</td><td>50.00</td></tr><tr><td>Religion</td><td>78</td><td>73.08</td><td>76.92</td><td>65.38</td></tr><tr><td>Disability</td><td>21</td><td>71.43</td><td>100.00</td><td>71.43</td></tr></table>

Table 3: Results of a manual inspection of 300 memes. Cls. Acc. is the rate of correctly identifying memes as hateful or not, Seg. Acc. is the rate of correctly segmenting the target entities, and Consis. Rate is the proportion of cases where both classification and segmentation are either correct or incorrect.

While certain categories (e.g., Sexist, Disability) exhibit high segmentation accuracy, classification accuracy occasionally lags behind (e.g., Homophobic). This discrepancy suggests that locating offensive content does not always translate into correct classification. Categories like Racist and Religion display moderate performance in both metrics, highlighting the need for more diverse training data and targeted refinement. Overall, boosting classification consistency, particularly in underrepresented categories, remains an important goal for future work.
# 7 Conclusion
We developed HATESIEVE, a framework for classifying and segmenting hateful memes. Our experiments demonstrate that using contrastive learning with a custom triplet dataset enhances classification accuracy and achieves effective segmentation.
# Limitations
Our work has several limitations that we plan to address in future research. First, our CMGen system primarily generates context-correlated memes based on image content rather than text, due to inherent restrictions (see Appendix A.7 for a detailed analysis). Second, achieving high accuracy in image segmentation within HATESIEVE remains challenging. Although our current approach uses attention maps at the image-patch level—and we have experimented with refining these maps to pixel-level detail via bilinear interpolation—this method introduces biases without substantially improving segmentation accuracy. Third, the current version of HATESIEVE focuses exclusively on English hate speech; we plan to extend support to additional languages in future releases. Finally, our framework is not specifically tailored to distinct social or cultural groups, largely due to the limited granularity of annotations in the existing dataset. Future work will concentrate on expanding dataset annotations and enhancing the system's performance across a wider range of multimodal hate speech content.
# Ethics Statement
Our research with the Contrastive Meme Generator, which generates both hateful and non-hateful memes, may involve sensitive content. However, all materials are sourced from open-source datasets and confined to academic research, ensuring privacy protection. We adhere to high ethical standards, actively mitigating biases and misuse, and advocate for the responsible use of LMMs.
# References
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.

Gabriel Alon and Michael Kamfonas. 2023. Detecting language model attacks with perplexity. arXiv preprint arXiv:2308.14132.

Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. 2023. Openflamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390.

Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966.

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.

Rui Cao, Roy Ka-Wei Lee, Wen-Haw Chong, and Jing Jiang. 2022. Prompting for multimodal hateful meme classification. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 321–332, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechu Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, and Mohamed Elhoseiny. 2023. Minigpt-v2: large language model as a unified interface for vision-language multi-task learning. arXiv preprint arXiv:2310.09478.

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90% chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2(3):6.

Zhi-Yi Chin, Chieh-Ming Jiang, Ching-Chun Huang, Pin-Yu Chen, and Wei-Chen Chiu. 2023. Prompting4debugging: Red-teaming text-to-image diffusion models by finding problematic prompts. arXiv preprint arXiv:2309.06135.

Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale N Fung, and Steven Hoi. 2024. Instructblip: Towards general-purpose vision-language models with instruction tuning. Advances in Neural Information Processing Systems, 36.

Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2024. Qlora: Efficient finetuning of quantized llms. Advances in Neural Information Processing Systems, 36.

Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, Pierre-Emmanuel Mazaré, Maria Lomeli, Lucas Hosseini, and Hervé Jégou. 2024. The faiss library. arXiv preprint.

Vahid Ghafouri, Vibhor Agarwal, Yong Zhang, Nishanth Sastry, Jose Such, and Guillermo Suarez-Tangil. 2023. Ai in the gray: Exploring moderation policies in dialogic large language models vs. human answers in controversial topics. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pages 556–565.

Raul Gomez, Jaume Gibert, Lluis Gomez, and Dimosthenis Karatzas. 2020. Exploring hate speech detection in multimodal publications. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 1470–1478.

Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. 2023. Multimodal-gpt: A vision and language model for dialogue with humans. arXiv preprint arXiv:2305.04790.

Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. 2017. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969.

Zhengmian Hu, Gang Wu, Saayan Mitra, Ruiyi Zhang, Tong Sun, Heng Huang, and Vishy Swaminathan. 2023. Token-level adversarial prompt detection based on perplexity measures and contextual information. arXiv preprint arXiv:2311.11509.

Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. 2020. The hateful memes challenge: Detecting hate speech in multimodal memes. Advances in neural information processing systems, 33:2611–2624.

Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. 2023. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015–4026.

Petra Kralj Novak, Teresa Scantamburlo, Andraž Pelicon, Matteo Cinelli, Igor Mozetič, and Fabiana Zollo. 2022. Handling disagreement in hate speech modelling. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, pages 681–695. Springer.

Gokul Karthik Kumar and Karthik Nandakumar. 2022a. Hate-clipper: Multimodal hateful meme classification based on cross-modal interaction of clip features. arXiv preprint arXiv:2210.05916.

Gokul Karthik Kumar and Karthik Nandakumar. 2022b. Hate-CLIPper: Multimodal hateful meme classification based on cross-modal interaction of CLIP features. In Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI), pages 171–183, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.

Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Wei Ji, Wenqiao Zhang, Tat-Seng Chua, Siliang Tang, Hanwang Zhang, and Yueting Zhuang. 2023a. Fine-tuning multimodal llms to follow zero-shot demonstrative instructions. In The Twelfth International Conference on Learning Representations.

Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023b. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. In International conference on machine learning, pages 19730–19742. PMLR.

Hongzhan Lin, Ziyang Luo, Bo Wang, Ruichao Yang, and Jing Ma. 2024. Goat-bench: Safety insights to large multimodal models through meme-based social abuse. arXiv preprint arXiv:2401.01523.

Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023. Improved baselines with visual instruction tuning.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2024. Visual instruction tuning. Advances in neural information processing systems, 36.

Jingbiao Mei, Jinghong Chen, Weizhe Lin, Bill Byrne, and Marcus Tomalin. 2023. Improving hateful memes detection via learning hatefulness-aware embedding space through retrieval-guided contrastive learning. arXiv preprint arXiv:2311.08110.

Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. 2021. Sdedit: Guided image synthesis and editing with stochastic differential equations. arXiv preprint arXiv:2108.01073.

Anthony Meng Huat Tiong, Junqi Zhao, Boyang Li, Junnan Li, Steven CH Hoi, and Caiming Xiong. 2024. What are we measuring when we evaluate large vision-language models? An analysis of latent factors and biases. arXiv preprint.

Shraman Pramanick, Shivam Sharma, Dimitar Dimitrov, Md. Shad Akhtar, Preslav Nakov, and Tanmoy Chakraborty. 2021. MOMENTA: A multimodal framework for detecting harmful memes and their targets. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4439–4455, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Yiting Qu, Xinyue Shen, Xinlei He, Michael Backes, Savvas Zannettou, and Yang Zhang. 2023. Unsafe diffusion: On the generation of unsafe images and hateful memes from text-to-image models. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, pages 3403–3417.

Yiting Qu, Xinyue Shen, Yixin Wu, Michael Backes, Savvas Zannettou, and Yang Zhang. 2024. Unsafebench: Benchmarking image safety classifiers on real-world and ai-generated images. arXiv preprint arXiv:2405.03486.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36.

Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 1(2):3.

Javier Rando, Daniel Paleka, David Lindner, Lennart Heim, and Florian Tramèr. 2022. Red-teaming the stable diffusion safety filter. arXiv preprint arXiv:2210.04610.

Naquee Rizwan, Paramananda Bhaskar, Mithun Das, Swadhin Satyaprakash Majhi, Punyajoy Saha, and Animesh Mukherjee. 2024. Zero shot vlms for hate meme detection: Are we there yet? arXiv preprint arXiv:2402.12198.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695.

Xuanyu Su, Yansong Li, Paula Branco, and Diana Inkpen. 2023. Ssl-gan-roberta: A robust semi-supervised model for detecting anti-Asian COVID-19 hate speech on social media. Natural Language Engineering, pages 1–20.

Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, and Victor Lempitsky. 2021. Resolution-robust large mask inpainting with fourier convolutions. arXiv preprint arXiv:2109.07161.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, et al. 2023. Cogvlm: Visual expert for pretrained language models. arXiv preprint arXiv:2311.03079.

Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh Hajishirzi. 2024. Fine-grained human feedback gives better rewards for language model training. Advances in Neural Information Processing Systems, 36.

Zhexin Zhang, Junxiao Yang, Pei Ke, and Minlie Huang. 2023. Defending large language models against jailbreaking attacks through goal prioritization. arXiv preprint arXiv:2311.09096.
+
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2024. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36.
|
| 381 |
+
Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592.
|
| 382 |
+
|
| 383 |
+
# A Appendix

# A.1 Segmentation Details


Figure 7: Hate component segmentation process with attention map.

To enhance detailed object segmentation, we developed an object highlighting pipeline illustrated in Figure 7. Initially, we extracted the attention map, $\mathrm{Attn}_{l_j}^{\prime}$, using HATESIEVE and subsequently employed Segment Anything (Kirillov et al., 2023) to detect and segment objects within the meme. This process produced a series of segmented objects, represented as $O = [o_1,\dots,o_n]$. We assessed the importance of each object, $\Phi(o_i)$, by integrating the attention map with the object mask using RoIAlign (He et al., 2017). To isolate only the most relevant objects, we implemented a threshold criterion, $\Phi(o_i) > \lambda$, where $\lambda$ is the pre-established significance threshold.
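The importance scoring and thresholding described above can be sketched as follows. This is an illustrative approximation only: a masked mean over the attention map stands in for RoIAlign pooling, and the names `attn`, `masks`, and `lam` are ours, not from the paper's code.

```python
import numpy as np

def object_importance(attn: np.ndarray, mask: np.ndarray) -> float:
    """Mean attention inside the object's binary mask (stand-in for RoIAlign)."""
    area = mask.sum()
    if area == 0:
        return 0.0
    return float((attn * mask).sum() / area)

def select_objects(attn, masks, lam=0.5):
    # Keep only objects whose importance exceeds the significance threshold lambda.
    return [i for i, m in enumerate(masks) if object_importance(attn, m) > lam]

attn = np.zeros((4, 4))
attn[:2, :2] = 1.0                      # attention concentrated top-left
m1 = np.zeros((4, 4)); m1[:2, :2] = 1   # object overlapping the hot region
m2 = np.zeros((4, 4)); m2[2:, 2:] = 1   # object in a cold region
print(select_objects(attn, [m1, m2], lam=0.5))  # -> [0]
```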

# A.2 Dataset

We utilize several datasets to train and evaluate the performance of our HATESIEVE framework:

- HatefulMemes Dataset (Kiela et al., 2020): Provided by Facebook Research, this dataset comprises 10,000 annotated meme images that combine text and imagery. It is specifically designed to challenge models in detecting hate speech within memes by including subtle and multimodal instances of hateful content.
- MMHS150k Dataset (Gomez et al., 2020): This dataset contains 150,000 tweets, each paired with an image, collected between September 2018 and February 2019. The tweets were gathered using 51 Hatebase terms to explore hate speech on social media, offering a rich source of multimodal content for our study.
- Harm-C and Harm-P Datasets (Pramanick et al., 2021): Harm-C includes 3,544 memes focusing on COVID-19-related topics, while Harm-P comprises 3,552 memes related to U.S. politics. These datasets provide context-specific challenges for hate speech detection in memes.

By leveraging these diverse datasets, we aim to thoroughly evaluate our model's ability to detect hateful content across different contexts and topics.

# A.3 Segmentation with LMMs

To obtain both image and text segmentations using LMMs such as InstructBLIP and BLIP2, we employ the following prompt: "Please examine the provided meme, which includes an [image] and accompanying [text]. Determine if the content can be considered hateful. If you conclude that the meme is hateful, identify and list the specific keywords or phrases in the text."

Figure 8: Example of segmentation output from the HATESIEVE Framework.

This prompt enables us to identify the text tokens that InstructBLIP considers ambiguous. For image segmentation, we adhere to the approach proposed by Li et al., which involves mapping the query corresponding to the Q-Former in InstructBLIP with the image's cross-attention map using bilinear interpolation.

# A.4 Implementation Details

Using the Contrastive Meme Generator, we produced a total of 42,344 triplet pairs. During the pre-training and fine-tuning phases, we employed CLIP-VIT-BASE-PATCH32 as the backbone for the image-text encoder and froze all the CLIP parameters. Our newly introduced Image-Text Alignment module comprises six layers of self-attention blocks. Additionally, we incorporated a two-layer MLP as a decoder for classification fine-tuning.

In the contrastive learning pre-training stage, we used a learning rate of 1e-4 and trained the model over 4 epochs, which took approximately 4 hours on an NVIDIA 4090 GPU. For the fine-tuning stage in the classification task, we fine-tuned the model with a learning rate of 1e-5 for 4 epochs, completing in just 10 minutes. Throughout these stages, the Adam optimizer was utilized, with $\beta = (0.9, 0.999)$.
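As a rough illustration of a triplet-style contrastive objective over (anchor, positive, negative) meme embeddings, the sketch below computes a margin loss; the margin value and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def triplet_loss(anchor, pos, neg, margin=0.2):
    # Pull the anchor toward the positive pair, push it from the negative pair.
    d_pos = np.linalg.norm(anchor - pos)
    d_neg = np.linalg.norm(anchor - neg)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
n = np.array([-1.0, 0.0])
loss = triplet_loss(a, p, n)
# d_pos is small and d_neg is large, so the loss is clamped to 0 (well separated)
print(loss)
```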

# A.5 LMMs Hyperparameters

For supervised fine-tuning of LMMs, we adopted the QLoRA framework, incorporating trainable parameters $(d = 64)$ into the query and key components of the Q-Former. This modification was applied to joint LLM architectures, including OPT-6.7b for BLIP2 and Vicuna-7b for InstructBLIP, while keeping the original parameters frozen. We set a constant dropout rate of 0.05, fixed $\alpha$ at 256, and conducted fine-tuning with a learning rate of $5 \times 10^{-5}$ and a batch size of 8.

# A.6 Segmentation Evaluation

To assess our segmentation module, we employed both human and automated evaluations.

Human Evaluation: Three independent annotators reviewed each segmented meme. Their evaluations were aggregated by majority vote to ensure reliability and minimize bias.
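The majority-vote aggregation over the three annotators' binary scores (1 = yes, 0 = no) can be sketched in a few lines; the scores below are illustrative.

```python
from collections import Counter

def majority_vote(scores):
    # Return the most common score among the annotators.
    return Counter(scores).most_common(1)[0][0]

print(majority_vote([1, 1, 0]))  # -> 1
print(majority_vote([0, 0, 1]))  # -> 0
```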

Automated Evaluation with LMMs: We also used GPT-4V as an evaluator, tasking it to score the segmented memes using the same criteria as the human annotators. Both human evaluators and GPT-4V used the evaluation prompt illustrated in Figure 10.

# A.7 Triplet Dataset Embedding Analysis

To verify that our CMGen produces context-correlated meme pairs, we conducted an analysis of text and image embedding distances with corresponding positive and negative pairs. We randomly selected 100 pairs from the triplet dataset. As shown in Figure 9, the image embedding distances for both positive and negative pairs (Wasserstein, Euclidean, and Cosine distances) are consistently lower than the baseline (Others) in most cases, indicating that CMGen successfully generates context-correlated images. However, the text embedding comparison shows that the distances are comparable to the baseline. This is largely because our current CMGen is primarily driven by images, and the text content often lacks detailed information, uses slang, or is challenging to mass-produce with LLMs due to safety policies. We aim to further enhance this aspect of CMGen in future work.

Figure 9: We compare the text and image embeddings of the original meme with its positive (Pos.) and negative (Neg.) pairs, as well as with other randomly selected images and texts (Oth.) from the same dataset, in the triplet dataset using Wasserstein, Euclidean, and Cosine distances. Lower distance values indicate higher similarity, providing a baseline for distance comparison.
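For reference, the three distance measures can be computed for a pair of embedding vectors as below. The 1-D Wasserstein form (mean absolute difference of sorted coordinates) is a sketch that treats each embedding's components as an empirical distribution, which is one way the metric can be applied here; function names are ours.

```python
import numpy as np

def euclidean(u, v):
    return float(np.linalg.norm(u - v))

def cosine_dist(u, v):
    # 1 - cosine similarity, so lower means more similar.
    return 1.0 - float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def wasserstein_1d(u, v):
    # Equal-weight empirical distributions: sort both and average |differences|.
    return float(np.mean(np.abs(np.sort(u) - np.sort(v))))

u = np.array([0.0, 1.0, 2.0])
v = np.array([0.0, 1.0, 2.0])
print(euclidean(u, v), cosine_dist(u, v), wasserstein_1d(u, v))  # all ~ 0.0
```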

# A.8 Segmentation Results

Additional segmentation results are illustrated in Figure 8. The results demonstrate HATESIEVE's capability to correlate hateful text with objects within images, underscoring the effectiveness of the proposed pre-training with contrastive learning and ITA module.

# A.9 Inter-Annotator Agreement

We evaluated inter-annotator agreement among the three annotators by calculating Fleiss' Kappa for both correctness and relevance. Table 4 presents the resulting values along with their interpretations.

| Metric | Fleiss' Kappa Score | Interpretation |
| --- | --- | --- |
| Correctness | 0.7572 | Substantial Agreement |
| Relevance | 0.6122 | Moderate Agreement |

Table 4: Fleiss' Kappa scores for correctness and relevance.
These findings indicate that the annotators achieved substantial agreement on correctness and moderate agreement on relevance. Overall, the results underscore the reliability of our annotation process.
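For completeness, Fleiss' kappa can be computed from an items x categories count matrix, where each row sums to the number of raters; the toy matrices below are illustrative, not our annotation data.

```python
import numpy as np

def fleiss_kappa(counts):
    counts = np.asarray(counts, dtype=float)
    n_raters = counts[0].sum()
    # Per-item agreement P_i, then observed agreement P_bar.
    p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement P_e from overall category proportions.
    p_j = counts.sum(axis=0) / counts.sum()
    p_e = (p_j ** 2).sum()
    return (p_bar - p_e) / (1 - p_e)

perfect = [[3, 0], [0, 3], [3, 0], [0, 3]]  # all three raters always agree
print(round(fleiss_kappa(perfect), 4))       # -> 1.0
```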

# Evaluation Prompt

Given the following segmented meme image $\{I_i\}$ and accompanying text $\{T_i\}$ with highlighted tokens $[x_i, \ldots, x_j]$, please evaluate the segmentation based on the criteria below. For each criterion, assign a score of 0 (No) or 1 (Yes) and provide a brief justification for your decision.

# Correctness (Score: 0 or 1):

- Does the segmentation accurately capture the target social group or elements that reflect the hateful content, based on common-sense understanding?
- Consider whether the highlighted areas in the image and text correspond to features commonly associated with the identified hateful content.

# Relevance (Score: 0 or 1):

- Are the highlighted image segments meaningfully related to the highlighted text components?
- Assess if the visual elements and the textual tokens work together to convey the intended message, especially in the context of the meme's overall meaning.

Please present your evaluation in the following format:

Correctness: [Score]

Justification: [Your brief explanation]

Relevance: [Score]

Justification: [Your brief explanation]

Figure 10: Evaluation prompt provided to both human annotators and GPT-4V.

2025/A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a7953c0511dccc5ea4aadd29cdda620ec305bafbb66329a5c6afd7347ea259ca
size 821091
2025/A Context-Aware Contrastive Learning Framework for Hateful Meme Detection and Segmentation/layout.json
ADDED
The diff for this file is too large to render. See raw diff.

2025/A Federated Framework for LLM-based Recommendation/85497404-7d8f-4d06-b857-c1485f62f52d_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.

2025/A Federated Framework for LLM-based Recommendation/85497404-7d8f-4d06-b857-c1485f62f52d_model.json
ADDED
The diff for this file is too large to render. See raw diff.

2025/A Federated Framework for LLM-based Recommendation/85497404-7d8f-4d06-b857-c1485f62f52d_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:02b2555201ef3e51d6de93a9f2b28446510d4eb4282e0ab993234d79c29431fd
size 1167940
2025/A Federated Framework for LLM-based Recommendation/full.md
ADDED
@@ -0,0 +1,423 @@
# A Federated Framework for LLM-based Recommendation

Jujia Zhao$^{1}$, Wenjie Wang$^{2*}$, Chen Xu$^{3*}$, See-Kiong Ng$^{4}$, Tat-Seng Chua$^{2}$

$^{1}$Leiden Institute of Advanced Computer Science, Leiden University
$^{2}$NExT++ Research Center, National University of Singapore
$^{3}$Gaoling School of Artificial Intelligence, Renmin University of China
$^{4}$Institute of Data Science and School of Computing, National University of Singapore

{zhao.jujia.0913, wenjiewang96}@gmail.com
xc_chen@ruc.edu.cn, {seekiong, dcscts}@nus.edu.sg

# Abstract
Large Language Models (LLMs) have empowered generative recommendation systems through fine-tuning on user behavior data. However, utilizing user data may pose significant privacy risks, potentially leading to ethical dilemmas and violations of data protection regulations. To address these privacy concerns, Federated Learning for Recommendation (Fed4Rec) has been identified as a promising solution. However, directly applying Fed4Rec in the LLM context introduces two challenges: 1) exacerbated client performance imbalance, which ultimately impacts the system's long-term effectiveness, and 2) substantial client resource costs, which place high demands on clients' computational and storage capability to locally train and infer LLMs.

To tackle these challenges, we propose a federated framework for LLM-based recommendation (shortened as FELLRec). Generally, FELLRec designs two key strategies. 1) Dynamic balance strategy, which designs dynamic parameter aggregation and learning speed for different clients, aiming to ensure balanced performance across clients. 2) Flexible storage strategy, which selectively retains certain sensitive LLM layers on the client side, while offloading other layers to the server, aiming to preserve privacy while saving resources. Experiment results show that FELLRec can achieve more balanced client performance and improved overall performance in a computational and storage-efficient way while safeguarding user privacy well.

# 1 Introduction
Large Language Models (LLMs) with advanced contextual understanding abilities have demonstrated potential in building generative recommendation systems (Rajput et al., 2023; Gao et al., 2023). Fine-tuning LLMs with user behavior data is essential for learning user preferences (Bao et al., 2023a; Li et al., 2023); however, it faces serious privacy leakage risks, as in traditional recommender models. The unintended disclosure of sensitive user data could cause ethical issues and infringe upon data protection laws such as the General Data Protection Regulation in the European Union (Hoofnagle et al., 2019). Therefore, ensuring the security and privacy of recommendation data during the LLM fine-tuning process is crucial.

To address the data privacy concerns, Federated Learning for Recommendation (Fed4Rec) emerges as a promising solution (Muhammad et al., 2020; Sun et al., 2022). Fed4Rec requires clients (e.g., user devices and platforms with a group of users) to conduct local training using the client's data, and then exchange non-sensitive intermediate parameters such as model parameters and gradients. This approach protects sensitive user behavior data by keeping it on the client side without the need for sharing with others. In general, Fed4Rec mainly employs two frameworks: 1) Peer-Peer Framework (Yang et al., 2022; An et al., 2024), which makes every client broadcast its updated parameters directly to other clients within the peer-to-peer network. However, this framework faces limitations in LLM-based recommendation scenarios, primarily due to the high communication costs incurred by the large number of LLM parameters. 2) Client-Server Framework (Zhang et al., 2023a,b), which transmits the updated parameters of the clients to a central server for aggregation. Previous works (Sun et al., 2022; Yin et al., 2024) have demonstrated that the client-server framework is more efficient in terms of communication overhead, making it ideal for LLM-based recommendations.

Figure 1: (a) illustrates the exacerbated client performance imbalance when applying a classical client-server method (FedAvg (McMahan et al., 2017)) to LLM-based recommender models (BIGRec) compared with traditional recommender models (MF). (b) shows the convergence rate of two selected clients when applying FedAvg to LLM-based and traditional models. The observations are on Games.

However, adapting the client-server framework to LLM-based recommendation presents two challenges: 1) Exacerbated Client Performance Imbalance: Based on our empirical analysis in Figure 1(a), it is evident that directly applying the client-server framework to LLM-based recommendation models leads to a more significant client performance imbalance compared to traditional models. This exacerbated imbalance may cause less accurate and equitable recommendations for specific clients, ultimately impacting the system's long-term effectiveness and user satisfaction (Xu et al., 2023a; Burke et al., 2018). This imbalance potentially stems from the accelerated training convergence among clients, as depicted in Figure 1(b), possibly due to the fast adaptation capabilities of LLMs (Bao et al., 2023a,b). 2) Substantial Client Resource Costs: The client-server framework necessitates that each client possesses the capability to locally train and infer LLMs. However, the extensive computational and storage resources required by LLMs pose a substantial challenge for individual clients in meeting these demands (Chen et al., 2023; Fan et al., 2023).

To tackle the issues of exacerbated performance imbalance and substantial resource costs, we refine the client-server framework with two strategies: 1) Dynamic Balance Strategy: To mitigate the performance imbalance among clients, we introduce a dynamic balance strategy, which designs dynamic parameter aggregation and learning speed for each client to ensure relatively equitable performance across the board. 2) Flexible Storage Strategy: To reduce client costs, we propose a flexible storage strategy for the client model. Intuitively, this strategy selectively allocates some LLM layers, especially those capable of extracting sensitive user data, on the client side, while situating other non-sensitive layers on the server to save cost.

In light of these, we propose a Federated Framework for LLM-based Recommendation (FELLRec). 1) FELLRec adapts dynamic balance strategies for different clients. Specifically, FELLRec preserves personalized parameters on each client (e.g., Low-Rank Adaptation (LoRA) (Hu et al., 2021)) and employs a dynamic parameter aggregation method based on attention mechanisms. Meanwhile, FELLRec devises dynamic learning speed by proposing a Curriculum Heating learning method (Chen et al., 2021) based on client loss, which helps each client undergo a gradual pre-warming phase to familiarize itself with its own data distribution. 2) FELLRec adopts the flexible storage strategy to deploy the input and output layers on the client side to ensure the protection of all sensitive information (see detailed analysis in Section 4.3). Empowered with the two strategies, FELLRec can safeguard data privacy for LLM-based recommendations in a more balanced and efficient way. We instantiate FELLRec on two LLM backend models and conduct extensive experiments on three datasets, validating its effectiveness and efficiency.

Main Contributions: (1) We introduce a privacy-preserving task for fine-tuning LLM-based recommendation models, where we identify the challenges of directly adopting Fed4Rec: exacerbated client performance imbalance and substantial client resource costs. (2) We propose a federated framework for LLM-based recommendation called FELLRec, which addresses the two challenges well while preserving data privacy. (3) We conduct experiments across three public datasets under various settings, confirming FELLRec's efficacy and efficiency.

# 2 Preliminary

# 2.1 LLM-based Recommendation

Let $\mathcal{U},\mathcal{I}$ be the user set and item set. For a given user $u\in \mathcal{U}$, the LLM-based recommender $f(\mathcal{P})$ utilizes the user's historical interactions $H_{u}$ to generate a ranking list $L_{K}(H_{u})\subset \mathcal{I}$ as the recommendation for user $u$, where $K$ is the number of items in a ranking list and $\mathcal{P}$ is the parameter set of the LLM. $H_{u}$ is the user $u$'s browsing history: $H_{u} = [i_{1},i_{2},\dots ,i_{N}]$, where $i_n\in \mathcal{I}$ is the $n$-th item in the interaction history (typically in natural language form), and $N$ is the history length.
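Concretely, an LLM-based recommender serializes $H_u$ into a natural-language prompt before generation; the template and function name below are a hypothetical sketch, not the prompt used in this paper.

```python
def build_prompt(history, k=5):
    # Serialize the interaction history H_u (item titles in natural language)
    # into an instruction asking the LLM for the next k recommendations.
    items = "; ".join(history)
    return (f"The user has interacted with the following items: {items}. "
            f"Recommend the next {k} items the user is most likely to enjoy.")

prompt = build_prompt(["The Witcher 3", "Stardew Valley"], k=3)
print(prompt)
```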

# 2.2 Client-Server Framework under Fed4Rec

Let $\mathcal{C}$ be the client set, where each client $c\in \mathcal{C}$ could be a user $u$ or a group of users from a specific platform. Each client $c$, equipped with a model parameter $\mathcal{P}_c$, has a local dataset $\mathcal{D}_c = \{(H_u, y), \forall u \in c\}$, which includes the users' interaction history $H_u$ and the label $y$ (usually the next-interacted item) for training. Within the client-server framework, the most classic approach is FedAvg (McMahan et al., 2017). Specifically, at each training epoch, FedAvg utilizes a central server to aggregate parameters from various clients to generate unified updated parameters for every client. Formally, at each epoch, each client first updates its parameters based on its local dataset: $\mathcal{P}_c = \arg \min_{\mathcal{P}_c} \sum_{(H_u, y) \in \mathcal{D}_c} l(f(H_u; \mathcal{P}_c), y), \forall c \in \mathcal{C}$, where $l(\cdot)$ is the recommendation loss function. Subsequently, the central server aggregates parameters from all clients and sends the aggregated parameters back to every client: $\mathcal{P}_c = \sum_{c' \in \mathcal{C}} \frac{|\mathcal{D}_{c'}|}{|\mathcal{D}|} \mathcal{P}_{c'}, \forall c \in \mathcal{C}$. FedAvg ensures privacy by obviating the necessity to transmit original data to the server, concentrating instead on the exchange of model parameters. However, directly applying FedAvg to LLM-based recommendations meets the exacerbated client performance imbalance and substantial client resource cost.
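One FedAvg aggregation round (the dataset-size-weighted average described above) can be sketched as follows, with flat numpy vectors standing in for model parameters; the sizes are illustrative.

```python
import numpy as np

def fedavg(params, sizes):
    # Server-side aggregation: average client parameters weighted by |D_c| / |D|.
    total = sum(sizes)
    return sum((n / total) * p for p, n in zip(params, sizes))

p1 = np.array([1.0, 1.0])   # client with 30 samples
p2 = np.array([3.0, 5.0])   # client with 10 samples
agg = fedavg([p1, p2], [30, 10])
# weights are 0.75 and 0.25, so the aggregate is [1.5, 2.0]
print(agg)
```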

# 3 FELLRec

In response to exacerbated client performance imbalance and significant client resource costs, we introduce a Federated Framework for LLM-based Recommendation (FELLRec), which enhances data privacy in LLM-based recommendation systems both equitably and efficiently. FELLRec encompasses two strategies: 1) Dynamic balance strategy, which designs dynamic parameter aggregation and learning speeds to ensure relatively balanced performance across clients. 2) Flexible storage strategy, which enables the flexible storage of LLM layers to conserve resources. The architecture of FELLRec is depicted in Figure 2.

# 3.1 Dynamic Balance Strategy

As illustrated in Section 1, directly applying the client-server framework in LLM-based recommendation exacerbates the performance imbalance across clients. This imbalance may lead to less equitable recommendations for specific clients, thereby detrimentally affecting the system's overall effectiveness and diminishing user satisfaction.

The imbalance can be attributed to two primary factors: 1) the diverse data distribution among clients, which may lead to conflicting optimization objectives among clients, thus possibly sacrificing the performance of specific clients; 2) the varied learning difficulty levels among clients, where those facing greater challenges may exhibit relatively poorer performance.

To address these issues, FELLRec first ensures client personalization by maintaining client-specific parameters for each client, of two kinds: 1) LoRA (Hu et al., 2021), and 2) either a part or the entirety of the LLM's own parameters. To economize on client resources, the remaining parameters are fixed. Our analysis primarily utilizes LoRA as an illustrative example, as the same principles apply to other methods.

Specifically, for client $c$, the model parameters are denoted as $\mathcal{P}_c$ with LoRA $\mathcal{R}_c$, where the LoRA parameters are client-specific and $\mathcal{P}_c$ are the fixed original LLM parameters. Subsequently, FELLRec incorporates a dynamic balance strategy, which involves designing dynamic parameter aggregation and learning speeds for each client, addressing the two key factors of imbalance respectively.

An intuitive idea of our method is:

$$
\boldsymbol{\mathcal{R}}_{c} = \frac{\sum_{c' \in \mathcal{C}} d_{c,c'}\, \boldsymbol{\mathcal{R}}_{c'}}{\sum_{c' \in \mathcal{C}} d_{c,c'}} \tag{1}
$$

where $d_{c,c'}$ is the dynamic aggregation weight, which factorizes into two parts: $d_{c,c'} = w_c \cdot s_{c,c'}$, where $s_{c,c'}$ is the attention-based aggregation weight described in Section 3.1.1 and $w_c$ is the learning difficulty weight described in Section 3.1.2.

# 3.1.1 Dynamic Parameter Aggregation

Given the variability in data distribution among clients, the optimization objectives for them may diverge, potentially leading to conflicts when trying to optimize a global model. Such conflicts may inadvertently sacrifice the performance of specific clients, which causes imbalance.

Figure 2: FELLRec Structure. The left part is the flexible storage strategy, which offloads non-sensitive LLM layers to the server to save resources. The right part is the dynamic balance strategy, which ensures relatively balanced performance across clients.

Given this, FELLRec incorporates an attention-based parameter aggregation method. This method customizes the aggregation process of each client according to their unique data distribution, aiming to mitigate performance imbalances without compromising the performance of specific clients. Intuitively, the client model prioritizes learning from clients with similar data distributions while reducing the influence of those deemed non-relevant. The prioritization mechanism aggregates the client model parameters based on the cosine similarity between the parameters of the current client and those of other clients. Specifically, for client $c$, the aggregation formula is:

$$
\boldsymbol{\mathcal{R}}_{c} = \frac{\sum_{c' \in \mathcal{C}} s_{c,c'}\, \boldsymbol{\mathcal{R}}_{c'}}{\sum_{c' \in \mathcal{C}} s_{c,c'}}, \quad \text{where} \tag{2}
$$

$$
s_{c,c'} = \frac{\operatorname{vec}(\mathcal{R}_{c})^{\top} \operatorname{vec}(\mathcal{R}_{c'})}{\|\operatorname{vec}(\mathcal{R}_{c})\|_{2}\, \|\operatorname{vec}(\mathcal{R}_{c'})\|_{2}}. \tag{3}
$$
$\| \cdot \| _2$ denotes the $\ell_2$ norm, $\mathcal{R}_c$ represents the client-specific LoRA parameters of client $c$ , $\mathrm{vec}(\mathcal{R}_c)$ is the flattened one-dimensional form of those parameters, and $s_{c,c'}$ is the attention-based aggregation weight, i.e., the cosine similarity between $\mathrm{vec}(\mathcal{R}_c)$ and $\mathrm{vec}(\mathcal{R}_{c'})$ .
Through dynamic parameter aggregation, FELL-Rec ensures more balanced performance across clients by customizing the aggregation process of each client based on its specific data distribution.
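The aggregation in Eqs. (2)-(3) can be sketched in a few lines. The following is a minimal NumPy illustration (the `aggregate_lora` name, the dict layout, and operating on fully flattened tensors are our own simplifications; the actual implementation works on per-layer LoRA matrices):

```python
import numpy as np

def aggregate_lora(params):
    """Similarity-weighted LoRA aggregation (Eqs. 2-3).

    params: dict mapping client id -> LoRA parameter array of any shape.
    Returns a dict of aggregated flattened parameter vectors, one per client.
    """
    vecs = {c: np.asarray(p, dtype=float).ravel() for c, p in params.items()}
    aggregated = {}
    for c, v_c in vecs.items():
        num = np.zeros_like(v_c)
        den = 0.0
        for v in vecs.values():
            # attention-based weight: cosine similarity of flattened params (Eq. 3)
            s = v_c @ v / (np.linalg.norm(v_c) * np.linalg.norm(v))
            num += s * v
            den += s
        # similarity-weighted average over all clients (Eq. 2)
        aggregated[c] = num / den
    return aggregated
```

Because the weights are cosine similarities, a client effectively pulls hardest toward clients whose LoRA updates point in a similar direction, which is the prioritization described above.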
# 3.1.2 Dynamic Learning Speed
Given the varied heterogeneity within client datasets, clients encounter different learning difficulties during training (Yang et al., 2023). Consequently, the learning status of different clients (e.g., ongoing learning, convergence or overfitting) can vary significantly. If a client has not adequately learned from its own data, excessively aggregating parameters from other clients may detrimentally affect its performance, potentially leading to performance imbalances across clients.
In response to this challenge, we develop a client-specific dynamic learning speed mechanism. This mechanism dynamically adjusts the extent of learning from other clients according to the client's current learning status, thereby personalizing the client's learning process. FELLRec assesses a client's learning status via its local loss, which serves as a gauge of the client's learning difficulty, and adjusts the extent of learning from peers accordingly. Based on this, FELLRec introduces a
Curriculum Heating learning method (Chen et al., 2021), which is adapted based on client loss. Intuitively, clients experiencing higher losses undergo a gradual pre-warming phase, allowing them to acclimate to their data distribution, whereas clients with lower losses engage in a rapid convergence, enhancing training efficiency. Specifically, for client $c$ , warm-up coefficient is:
$$
w_{c} = \tanh\left(\frac{\alpha}{\left[\exp(\mathcal{L}_{c}) / \sum_{i = 1}^{N}\exp(\mathcal{L}_{i})\right]^{t/\beta}}\right), \tag{4}
$$
where $\alpha$ is the speed-related warm-up factor, influencing the warm-up's overall pace; $\beta$ is the time-related warm-up factor, affecting the temporal impact on the warm-up speed. In essence, a higher $\alpha$ or a lower $\beta$ accelerates warm-up for clients. $w_{c}$ is imposed on the similarity scores with other clients to control the learning speed: $d_{c,c'} = w_c s_{c,c'}$ , $\forall c' \in \mathcal{C}, c' \neq c$ , where $d_{c,c'}$ is the final dynamic aggregation weight. Through the application of the warm-up coefficient, this approach dynamically adjusts a client's learning pace based on its current learning status, providing a tailored learning speed for each client and mitigating performance imbalances across clients.
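The warm-up coefficient of Eq. (4) can be sketched as follows, assuming the per-client losses are collected into one array (the default $\alpha$ and $\beta$ values here are purely illustrative, not the tuned values used in the experiments):

```python
import numpy as np

def warmup_coefficients(losses, t, alpha=1.0, beta=10.0):
    """Client warm-up coefficients w_c (Eq. 4); alpha/beta defaults are illustrative.

    losses: per-client local losses L_c; t: current epoch index.
    """
    losses = np.asarray(losses, dtype=float)
    share = np.exp(losses) / np.exp(losses).sum()  # softmax share of total loss
    # a larger loss share yields a smaller w_c, i.e. a slower warm-up
    return np.tanh(alpha / share ** (t / beta))
```

As intended, the client with the higher loss receives the smaller coefficient, so it aggregates less from its peers while it is still adapting to its own data.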
# 3.2 Flexible Storage Strategy
In LLM-based recommendation systems, training and inference processes demand significant resource investment. Recognizing that not all clients have the capacity for the storage and computational demands of an LLM model, FELLRec introduces a flexible storage strategy aimed at reducing resource expenditure for clients.
FELLRec retains specific subsets of layers on the client side, particularly those closest to the input and output layers, because they process sensitive data. The remaining layers are hosted on the server side to save resources. Specifically, for client $c$ , the client model parameters are denoted as $\mathcal{P}_c$ with LoRA parameters $\mathcal{R}_c$ . Both of these two parts can be divided into $N$ layers, $\{\mathcal{P}_c^{(i)}\}_{i=1}^N$ and $\{\mathcal{R}_c^{(i)}\}_{i=1}^N$ respectively, where $N$ is the total number of LLM layers. For simplicity, we combine them as $\mathcal{T}_c = \{\mathcal{P}_c^{(i)}, \mathcal{R}_c^{(i)}\}_{i=1}^N$ . Based on this, the layers retained on the client side are $\{\mathcal{T}_c^{(i)}\}_{i=1}^k$ and $\mathcal{T}_c^{(N)}$ , where $k$ is the layer-allocation hyper-parameter. Conversely, the layers stored on the server side are $\{\mathcal{T}_c^{(i)}\}_{i=k+1}^{N-1}$ .
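The layer split can be sketched as follows (the function name and the 1-indexed layer ids are our own convention, chosen to match the notation above):

```python
def partition_layers(n_layers, k):
    """Split layer indices under the flexible storage strategy.

    The client keeps the first k layers and the last layer (they touch raw
    inputs/labels); the middle layers are offloaded to the server.
    Layers are 1-indexed to match the paper; requires k + 1 < n_layers.
    """
    assert k + 1 < n_layers
    client = list(range(1, k + 1)) + [n_layers]
    server = list(range(k + 1, n_layers))
    return client, server
```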
During each training round, the client feeds input data $H_{u}$ into its preserved input layers and forwards the resulting embedding $\boldsymbol{e}_{u}^{(k)} = g(H_{u},\{\mathcal{T}_{c}^{(i)}\}_{i = 1}^{k})$ to the server for further processing, where $g(\cdot)$ is typically an attention layer followed by a feed-forward layer in the LLM scenario. The server processes this embedding and returns the output $e_{u}^{(N - 1)} = g(e_{u}^{(k)},\{\mathcal{T}_{c}^{(i)}\}_{i = k + 1}^{N - 1})$ to the client, which produces the final output embedding $e_{u}^{(N)} = g(e_{u}^{(N - 1)},\mathcal{T}_{c}^{(N)})$ . Subsequently, the client calculates the loss using the label preserved on the client side. Formally, the forward process of FELLRec is described as $\mathcal{T}_c = \{\mathcal{T}_c^{(i)}\}_{i = 1}^k\circ$ $\{\mathcal{T}_c^{(i)}\}_{i = k + 1}^{N - 1}\circ \mathcal{T}_c^{(N)}$ , where $\circ$ represents operation composition, with the output of the function on the right used as the input to the function on the left. Following this, the backward process begins, with gradients propagated in reverse: from the client to the server and then back to the client.
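The client-server-client forward pass can be illustrated with a toy stand-in for $g(\cdot)$ (a single linear-plus-tanh layer here rather than a real transformer block; all names and shapes are illustrative):

```python
import numpy as np

def layer_fn(e, w):
    """Toy stand-in for g(.): a single linear layer with tanh activation."""
    return np.tanh(e @ w)

def split_forward(h_u, client_front, server_mid, client_last):
    """Forward pass split across client (layers 1..k), server (layers k+1..N-1),
    and client again (layer N), mirroring the flexible storage strategy."""
    e = h_u
    for w in client_front:           # client side: layers 1..k
        e = layer_fn(e, w)
    for w in server_mid:             # server side: layers k+1..N-1
        e = layer_fn(e, w)
    return layer_fn(e, client_last)  # client side: layer N (loss computed locally)
```

Only the intermediate embedding crosses the network in each direction, which is what keeps the raw inputs and labels on the client.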
When client and server configurations are consistent, FELLRec's performance is unaffected by the parameter $k$ since offloading layers to the server doesn't alter the training mechanics—only the storage location of model parameters changes. Specifically, both the forward and backward propagation processes proceed identically to scenarios where no layers are offloaded to the server.
This strategy significantly reduces client resource costs during both training and inference, as shown in Table 3. It is noteworthy that the determination of the number of layers to preserve is adaptable, enabling control over client costs. However, despite our method's efforts to protect data privacy, there may be malicious behavior from the server side aimed at attacking the model to access
# Algorithm 1 FELLRec Training Phase
Require: The client set $\mathcal{C}$ , item set $\mathcal{I}$ , epoch number $T$ , local round number $R$ , warm-up parameters $\alpha, \beta$ , personalized parameters $\mathcal{R}_c$ and local data $\mathcal{D}_c = \{H_u, y\}, \forall c \in \mathcal{C}$ .

Ensure: Fine-tuned personalized parameters $\mathcal{R}_c, \forall c \in \mathcal{C}$ .

1: Initialize client model $\mathcal{R}_c, \forall c \in \mathcal{C}$

2: for each epoch $t = 1, 2, \dots, T$ do

3: Initialize $\mathcal{L}_c = 0$

4: // Client Local Training

5: for each client $c \in \mathcal{C}$ in parallel do

6: for each round $r = 1,2,\dots ,R$ do

7: $l_{c}(\mathcal{R}_{c}) = \sum_{(H_{u},y)\in \mathcal{D}_{c}}l(f(H_{u};\mathcal{R}_{c}),y)$

8: $\mathcal{L}_c = \mathcal{L}_c + l_c$

9: $\mathcal{R}_c = \arg \min_{\mathcal{R}_c}l_c(\mathcal{R}_c)$

10: Upload $\{\mathcal{R}_c^{(i)}\}_{i = 1}^k$ and $\mathcal{R}_c^{(N)},\forall c \in \mathcal{C}$ to the server

11: // Aggregate param. for clients

12: $w_{c} = \tanh \left(\frac{\alpha}{\left[\exp(\mathcal{L}_{c}) / \sum_{i=1}^{N}\exp(\mathcal{L}_{i})\right]^{t / \beta}}\right)$

13: $s_{c,c'} = \frac{\operatorname{vec}(\mathcal{R}_c)^\top\operatorname{vec}(\mathcal{R}_{c'})}{\|\operatorname{vec}(\mathcal{R}_c)\|_2\|\operatorname{vec}(\mathcal{R}_{c'})\|_2},\forall c,c'\in \mathcal{C}$

14: $d_{c,c'} = w_c s_{c,c'}$

15: $\mathcal{R}_c = \frac{\sum_{c'\in\mathcal{C}}d_{c,c'}\mathcal{R}_{c'}}{\sum_{c'\in\mathcal{C}}d_{c,c'}}$ , $\forall c\in \mathcal{C}$

16: Send $\{\mathcal{R}_c^{(i)}\}_{i = 1}^k$ and $\mathcal{R}_c^{(N)},\forall c\in \mathcal{C}$ back to clients
user privacy data. Our subsequent experiments indicate that retaining more layers on the server side increases the vulnerability to attacks (as detailed in Section 4.3), where we analyze the trade-off between the risk of attacks and the costs.
# 3.3 FELLRec Framework
# 3.3.1 Training
In the training phase, FELLRec trains personalized parameter $\mathcal{R}_c$ for each client $c$ without sharing their data. Specifically, at each epoch $t$ , FELLRec first conducts client local training and then aggregates parameters of all clients to update their personalized parameter $\mathcal{R}_c$ .
Specifically, at the client local training phase, each client updates their parameters $\mathcal{R}_c$ utilizing their respective local datasets $\mathcal{D}_c$ . During this phase, the client model is not entirely stored on the client side. Instead, parts of the model are stored on the server side, as dictated by the flexible storage strategy, to reduce the resource costs associated with training LLMs (see the detailed analysis in Section 3.2). Subsequently, each client uploads their client-preserved parameters to the server for aggregation, making use of the parameters from other clients to assist the update process.

# Algorithm 2 FELLRec Inference Phase

Require: The client set $\mathcal{C}$ , item set $\mathcal{I}$ , ranking size $K$ , parameters of each client $\mathcal{T}_c, \forall c \in \mathcal{C}$ , the user $u$

Ensure: Ranking list $L_{K}(u)$

1: // Offline Storage

2: for each client $c \in \mathcal{C}$ do

3: Get item embeddings $e_{c}(i) = f(i, \{\mathcal{T}_{c}^{(j)}\}_{j=1}^{N}), \forall i \in \mathcal{I}$

4: User $u$ arrives at FELLRec;

5: Find the client $c$ to which user $u$ corresponds;

6: // Client $c$ executes

7: $\pmb{e}_u^{(k)} = g(H_u,\{\pmb{\mathcal{T}}_c^{(i)}\}_{i = 1}^k)$

8: Upload $e_{u}^{(k)}$ to the server;

9: // Server executes

10: $\pmb{e}_u^{(N - 1)} = g(\pmb{e}_u^{(k)},\{\pmb{\mathcal{T}}_c^{(i)}\}_{i = k + 1}^{N - 1})$

11: Send $e_{u}^{(N - 1)}$ back to client $c$ ;

12: // Client $c$ executes

13: Get output embedding $e_u^{(N)} = g(e_u^{(N - 1)}, \mathcal{T}_c^{(N)})$

14: // Ranking Step

15: $L_{K}(u) = \arg \min_{S\subset \mathcal{I},|S| = K}\sum_{i\in S}\mathrm{distance}(\pmb {e}_{c}(i),\pmb{e}_{u}^{(N)})$
In the aggregation phase, each client obtains its dynamic aggregation weights $d_{c,c'}, \forall c' \in \mathcal{C}$ , through dynamic parameter aggregation and dynamic learning speed. Subsequently, each client receives its aggregated personalized parameters $\mathcal{R}_c$ , computed with its specific aggregation weights, and the server sends the client-preserved parameters back to the clients. The training algorithm of FELLRec is provided in Algorithm 1.
# 3.3.2 Inference
In the inference phase, for any given client $c$ , FELLRec combines the updated LoRA parameters $\mathcal{R}_c$ with the fixed parameters $\mathcal{P}_c$ to form the complete parameters $\mathcal{T}_c$ , and then generates the ranking list $L_K$ as the recommendation for any user $u$ belonging to this client.
Specifically, the inference phase is divided into four steps: 1) Client $c$ independently stores the embeddings $e_c(i)$ for all items from the item corpus $\mathcal{I}$ , in preparation for the ranking step. 2) Client $c$ obtains the hidden embedding at the $k$ -th layer of the LLM through $e_u^{(k)} = g(H_u,\{\mathcal{T}_c^{(i)}\}_{i = 1}^k)$ . 3) The server receives the uploaded $e_u^{(k)}$ and continues to compute the hidden embedding $e_{u}^{(N - 1)}$ at the $(N - 1)$ -th layer through $e_u^{(N - 1)} = g(e_u^{(k)},\{\mathcal{T}_c^{(i)}\}_{i = k + 1}^{N - 1})$ . 4) Finally, client $c$ directly computes the distance (e.g., cosine similarity (Bao et al., 2023a) or L2 distance (Li et al., 2023)) between the generated embedding $e_u^{(N)}$ and the item embeddings $e_c(i)$ from the item corpus, and obtains the final ranking list through $L_K(u) = \arg \min_{S \subset \mathcal{I}, |S| = K} \sum_{i \in S} \text{distance}(e_c(i), e_u^{(N)})$ .
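The ranking step reduces to a nearest-neighbor search over the precomputed item embeddings. A minimal sketch using L2 distance, one of the distance choices mentioned above (names and shapes are illustrative):

```python
import numpy as np

def rank_top_k(user_emb, item_embs, k=10):
    """Return indices of the K items closest to the user embedding (L2 distance).

    user_emb: (d,) vector; item_embs: (num_items, d) matrix of stored e_c(i).
    """
    dists = np.linalg.norm(item_embs - user_emb, axis=1)
    return np.argsort(dists)[:k].tolist()
```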
This approach preserves sensitive user data on the client side during inference, thus enhancing data privacy. Additionally, by offloading portions of the computation to the server, FELLRec reduces the computational load on clients and minimizes their hardware requirements. The inference algorithm of FELLRec is provided in Algorithm 2.
# 4 Experiments
In this section, we conduct a comprehensive experimental study to analyze the performance of FELLRec and the impact of different components (e.g., dynamic balance strategy and flexible storage strategy) within it.
# 4.1 Experimental Settings
# 4.1.1 Datasets and Settings
We assess the effectiveness of FELLRec on three popular benchmark datasets. 1) Games is from the Amazon review datasets and covers interactions between users and video games, with rich textual features such as game titles and categories. 2) MicroLens is a newly released short-video recommendation dataset; each short video contains raw modal information such as the title, cover image, audio, and video content. 3) Book is also derived from the Amazon review datasets and contains users' interactions with an extensive collection of books spanning a broad spectrum of genres and subjects. The datasets' statistics are detailed in Table 5. For all three datasets, we organize user-item interactions chronologically based on timestamps and divide the data into training, validation, and testing sets in an 8:1:1 ratio.
Within the context of LLM-based recommendations, we explore two distinct fine-tuning approaches: 1) Few-shot fine-tuning fine-tunes LLMs using a limited number of examples, e.g., 1024-shot. 2) Full fine-tuning utilizes all samples to fine-tune LLMs.
For the evaluation, we adopt the full-ranking protocol (He et al., 2020) and report Recall@ $K$ and NDCG@ $K$ , where $K = 10$ or 20.
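For the single-target next-item setting evaluated here, these metrics reduce to a hit indicator and a rank discount. A minimal sketch (the function name is ours):

```python
import numpy as np

def recall_ndcg_at_k(ranked, target, k):
    """Recall@K and NDCG@K when each test case has a single target item.

    ranked: item ids ordered by predicted relevance; target: the held-out item.
    """
    topk = ranked[:k]
    if target not in topk:
        return 0.0, 0.0
    # single relevant item: IDCG = 1, DCG = 1 / log2(rank + 2)
    return 1.0, 1.0 / np.log2(topk.index(target) + 2)
```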
# 4.1.2 Baselines
We compare FELLRec against competitive baselines. First, we select two superior backend
Table 1: Overall performance of FELLRec and other baselines in the LLM-based Recommendation scenario. Bold signifies the best performance among the privacy-preserving methods under the same backend models. * denotes statistically significant improvements of FELLRec over the second-best privacy-preserving methods under the same backend models, according to the t-tests with a significance level of $p < 0.01$ .
<table><tr><td rowspan="2">Method</td><td colspan="4">Games</td><td colspan="4">Microlens</td><td colspan="4">Book</td></tr><tr><td>R@10</td><td>R@20</td><td>N@10</td><td>N@20</td><td>R@10</td><td>R@20</td><td>N@10</td><td>N@20</td><td>R@10</td><td>R@20</td><td>N@10</td><td>N@20</td></tr><tr><td>BIGRec</td><td>0.0194</td><td>0.0316</td><td>0.0127</td><td>0.0164</td><td>0.0089</td><td>0.0132</td><td>0.0050</td><td>0.0062</td><td>0.0079</td><td>0.0097</td><td>0.0126</td><td>0.0116</td></tr><tr><td>+FedAvg</td><td>0.0145</td><td>0.0257</td><td>0.0093</td><td>0.0126</td><td>0.0021</td><td>0.0039</td><td>0.0012</td><td>0.0017</td><td>0.0081</td><td>0.0097</td><td>0.0119</td><td>0.0112</td></tr><tr><td>+FedProx</td><td>0.0143</td><td>0.0255</td><td>0.0090</td><td>0.0123</td><td>0.0033</td><td>0.0051</td><td>0.0032</td><td>0.0040</td><td>0.0081</td><td>0.0096</td><td>0.0120</td><td>0.0112</td></tr><tr><td>+Ditto</td><td>0.0147</td><td>0.0260</td><td>0.0091</td><td>0.0126</td><td>0.0040</td><td>0.0063</td><td>0.0041</td><td>0.0045</td><td>0.0077</td><td>0.0091</td><td>0.0113</td><td>0.0106</td></tr><tr><td>+RoLoRA</td><td>0.0128</td><td>0.0231</td><td>0.0079</td><td>0.0106</td><td>0.0019</td><td>0.0037</td><td>0.0013</td><td>0.0019</td><td>0.0052</td><td>0.0075</td><td>0.0101</td><td>0.0098</td></tr><tr><td>+Ours</td><td>0.0158*</td><td>0.0274*</td><td>0.0104*</td><td>0.0139*</td><td>0.0088*</td><td>0.0128*</td><td>0.0051*</td><td>0.0062*</td><td>0.0085*</td><td>0.0102*</td><td>0.0124*</td><td>0.0116*</td></tr><tr><td>RecFormer</td><td>0.0193</td><td>0.0360</td><td>0.0117</td><td>0.0169</td><td>0.0190</td><td>0.0369</td><td>0.0104</td><td>0.0155</td><td>0.0318</td><td>0.0512</td><td>0.0333</td><td>0.0380</td></tr><tr><td>+FedAvg</td><td>0.0149</td><td>0.0262</td><td>0.0089</td><td>0.0124</td><td>0.0086</td><td>0.0192</td><td>0.0045</td><td>0.0074</td><td>0.0078</td><td>0.0123</td><td>0.0085</td><td>0.0097</td></tr><tr><td>+FedProx</td><td>0.0150</td><td>0.0266</td><td>0.0086</td><td>0.0121</td><td>0.0086</td><td>0.0166</td><td>0.0041</td><td>0.0064</td><td>0.0071</td><td>0.0061</td><td>0.0083</td><td>0.0133</td></tr><tr><td>+Ditto</td><td>0.0162</td><td>0.0273</td><td>0.0091</td><td>0.0138</td><td>0.0091</td><td>0.0172</td><td>0.0046</td><td>0.0065</td><td>0.0102</td><td>0.0131</td><td>0.0107</td><td>0.0159</td></tr><tr><td>+RoLoRA</td><td>0.0132</td><td>0.0257</td><td>0.0081</td><td>0.0118</td><td>0.0084</td><td>0.0187</td><td>0.0029</td><td>0.0045</td><td>0.0071</td><td>0.0115</td><td>0.0079</td><td>0.0095</td></tr><tr><td>+Ours</td><td>0.0215*</td><td>0.0373*</td><td>0.0122*</td><td>0.0170*</td><td>0.0141*</td><td>0.0245*</td><td>0.0065*</td><td>0.0094*</td><td>0.0274*</td><td>0.0411*</td><td>0.0275*</td><td>0.0301*</td></tr></table>
Table 2: Overall performance of FELLRec and other traditional recommendation baselines. Bold signifies the best performance among all methods. Underlined values indicate the second best. * denotes statistically significant improvements of the best method over the second-best, according to the t-tests with a significance level of $p < {0.01}$ .
<table><tr><td rowspan="2" colspan="2">Method</td><td colspan="4">Games</td><td colspan="4">Microlens</td><td colspan="4">Book</td></tr><tr><td>R@10</td><td>R@20</td><td>N@10</td><td>N@20</td><td>R@10</td><td>R@20</td><td>N@10</td><td>N@20</td><td>R@10</td><td>R@20</td><td>N@10</td><td>N@20</td></tr><tr><td rowspan="2">Centralized</td><td>MF</td><td>0.0101</td><td>0.0164</td><td>0.0070</td><td>0.0090</td><td>0.0044</td><td>0.0063</td><td>0.0026</td><td>0.0032</td><td>0.0050</td><td>0.0089</td><td>0.0060</td><td>0.0071</td></tr><tr><td>LightGCN</td><td>0.0153</td><td>0.0234</td><td>0.0101</td><td>0.0127</td><td>0.0078</td><td>0.0116</td><td>0.0044</td><td>0.0055</td><td>0.0065</td><td>0.0120</td><td>0.0078</td><td>0.0093</td></tr><tr><td rowspan="5">Federated</td><td>FedMF</td><td>0.0065</td><td>0.0108</td><td>0.0044</td><td>0.0058</td><td>0.0029</td><td>0.0045</td><td>0.0021</td><td>0.0027</td><td>0.0050</td><td>0.0070</td><td>0.0034</td><td>0.0041</td></tr><tr><td>LightFR</td><td>0.0088</td><td>0.0139</td><td>0.0051</td><td>0.0069</td><td>0.0041</td><td>0.0055</td><td>0.0024</td><td>0.0044</td><td>0.0048</td><td>0.0079</td><td>0.0049</td><td>0.0061</td></tr><tr><td>FedPerGNN</td><td>0.0145</td><td>0.0229</td><td>0.0093</td><td>0.0121</td><td>0.0043</td><td>0.0060</td><td>0.0024</td><td>0.0029</td><td>0.0062</td><td>0.0112</td><td>0.0075</td><td>0.0089</td></tr><tr><td>BIGRec+Ours</td><td>0.0158</td><td>0.0274</td><td>0.0104</td><td>0.0139</td><td>0.0088</td><td>0.0128</td><td>0.0051</td><td>0.0062</td><td>0.0085</td><td>0.0102</td><td>0.0124</td><td>0.0116</td></tr><tr><td>RecFormer+Ours</td><td>0.0215*</td><td>0.0373*</td><td>0.0122*</td><td>0.0170*</td><td>0.0141*</td><td>0.0245*</td><td>0.0065*</td><td>0.0094*</td><td>0.0274*</td><td>0.0411*</td><td>0.0275*</td><td>0.0301*</td></tr></table>
LLMs: 1) BIGRec (Bao et al., 2023a). 2) RecFormer (Li et al., 2023). Given the absence of LLM-based privacy-preserving recommendation methods in the existing literature, we incorporate four well-established federated learning algorithms that can be deployed on LLMs: 3) FedAvg (McMahan et al., 2017). 4) FedProx (Li et al., 2020). 5) Ditto (Li et al., 2021). 6) RoLoRA (Chen et al., 2024). Additionally, our comparison also includes baselines from traditional recommendation methods. Specifically, we select MF (Gantner et al., 2011) and LightGCN (He et al., 2020) as the centralized methods, along with their federated counterparts: FedMF (Chai et al., 2020), LightFR (Zhang et al., 2023b), and FedPerGNN (Wu et al., 2022). Details of baselines and implementation are shown in Appendix A.1 and A.2.
# 4.2 Overall Performance
We compare FELLRec with other baselines, as shown in Table 1 and Table 2. The results indicate that: 1) FELLRec outperforms other privacy-preserving methods on all datasets and achieves performance on par with centralized LLM-based methods. This efficacy is largely due to the dynamic balance strategy, which offers dynamic parameter aggregation and learning speeds. 2) FedAvg and FedProx performance fluctuates due to their inability to robustly adapt to varied data distributions across clients and the heterogeneity within clients (see detailed analysis in Appendix A.4). Conversely, FELLRec consistently excels, aided by its dynamic balance strategy. 3) FELLRec outperforms all traditional baselines in both centralized and federated settings due to the contextual comprehension and abundant pre-trained knowledge of LLMs, along with the dynamic balance strategy.

Figure 3: (a) and (b) show the similarity between input embeddings and predicted input embeddings according to extracted embeddings of different layers from BIGRec under the linear probe attack and the MLP probe attack.
# 4.3 Attack Model Analysis
To mitigate client-side resource consumption, FELLRec employs the flexible storage strategy introduced in Section 3.2. However, placing some layers on the server side may also bring the risk of attacks that leak privacy. In this section, we conduct an attack simulation experiment to assess the possibility of attacks via intermediate embeddings processed on the server side.

Figure 4: (a) and (b) show the time (s) and memory (MiB) cost for different values of $k$ under the BIGRec-based and RecFormer-based FELLRec, respectively.

Table 3: Cost of FELLRec and FedAvg. $k$ and $N$ represent the layer-allocation hyper-parameter and the LLM layer number, where $k + 1 < N$ . $b$ and $c$ represent the communication cost of uploading one layer of LLM parameters and data embeddings to the server, respectively.

<table><tr><td></td><td>Storage Cost</td><td>Inference Cost</td><td>Communication Cost</td></tr><tr><td>FELLRec</td><td>O(k+1)</td><td>O(k+1)</td><td>O((k+1)·b+2·c)</td></tr><tr><td>FedAvg</td><td>O(N)</td><td>O(N)</td><td>O(N·b)</td></tr></table>
We use BIGRec as a case study, extracting intermediate output embeddings from all layers and applying two typical white-box attack methods: the linear probe attack and the Multilayer Perceptron (MLP) probe attack (Kim et al., 2024). These methods attempt to reconstruct the input embedding from each layer's embeddings separately (Xu et al., 2024a). Details of the selected attack models are given in Appendix A.3. We report the cosine similarity between the reconstructed embeddings and the ground-truth input embeddings, as illustrated in Figure 3. We find that: 1) the likelihood of reconstructing user historical interactions from intermediate embeddings generally decreases with ascending layer number. 2) The possibility of reconstruction from the last layer increases, since LLM training aims to align the final output with the target interacted item, which may have higher similarity with the input embeddings. Thus, the choice of parameter $k$ should be guided by this attack simulation (in this case, $k \geq 21$ ).
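A linear probe of this kind can be approximated by fitting a least-squares linear map from intermediate-layer embeddings back to input embeddings and scoring the reconstruction by cosine similarity. A simplified sketch (the attack in the cited work may differ in training details; the function name is ours):

```python
import numpy as np

def linear_probe_similarity(inter_embs, input_embs):
    """Fit a least-squares linear map from intermediate-layer embeddings back to
    input embeddings; report the mean cosine similarity of the reconstruction.

    inter_embs: (n, d1) embeddings leaked at some layer; input_embs: (n, d2) targets.
    """
    W, *_ = np.linalg.lstsq(inter_embs, input_embs, rcond=None)
    recon = inter_embs @ W
    cos = (recon * input_embs).sum(axis=1) / (
        np.linalg.norm(recon, axis=1) * np.linalg.norm(input_embs, axis=1))
    return cos.mean()
```

A similarity near 1 means the layer's embedding still exposes the input; the layer-wise curve of this score is what motivates the $k \geq 21$ recommendation above.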
# 4.4 Efficiency Analysis
We analyze the efficiency of FELLRec both experimentally and theoretically. First, we report the time and memory cost for different values of the layer-allocation hyper-parameter $k$ in Figure 4. This reveals a trade-off: storing more layers server-side raises the risk of attack but reduces client resource cost. Thus, clients can dynamically adjust layer allocation based on their capacity.
We also evaluate FELLRec against FedAvg across various metrics, including storage cost, communication cost, and local client inference cost. The findings in Table 3 demonstrate that our method outperforms FedAvg in storage and inference cost. For communication cost, our method outperforms FedAvg under the condition $(k + 1)\cdot b + 2\cdot c < N\cdot b$ , which simplifies to $c < (N - k - 1)b / 2$ . Intuitively, the lower the value of $k$ , the greater the likelihood of achieving superior communication efficiency compared to FedAvg. Similarly, a lower value of $c$ , indicating a smaller data-embedding payload, further enhances the advantage of our method. This indicates that our method is particularly effective for clients with limited capacity, making it ideally suited for user devices. Moreover, we can further reduce the communication costs of transferring model parameters and data by implementing asynchronous updates (Xu et al., 2023b).
We also conduct a practical analysis of the communication cost associated with uploading a portion of the LLM's parameters to the server. During each communication round, only the LoRA parameters from the client side are uploaded to the server, which significantly reduces the overall parameter size. Since the size of the LoRA parameters for LLaMA-7B is approximately 16 MB (Hu et al., 2021), the communication cost remains manageable. To provide clarity, we quantified the communication cost of uploading LoRA parameters. For LLaMA-7B, the full LoRA parameter size is approximately 16 MB per client. Assuming a 100 Mbps network (a common configuration), the upload time is approximately 1.28 seconds (McMahan et al., 2017). However, with our Flexible Storage Strategy, only the saved layer parameters from clients need to be uploaded, further reducing the communication cost. Moreover, considering that a typical training iteration for LLaMA-7B takes approximately 110 seconds per batch in our experiment, this communication time is negligible in comparison to the overall training time.
# 4.5 Client Performance Analysis
To assess whether FELLRec mitigates the performance imbalance among clients, we conduct a client evaluation experiment, as detailed in Table 4. Similar results are observed on the Book dataset but are omitted for brevity. The imbalance degree is calculated as follows:
Table 4: Client evaluation results of the centralized method, FedAvg and FELLRec. Bold represents the lowest degree of imbalance among the methods evaluated, using the same backend model.
<table><tr><td rowspan="2">Recall@10</td><td colspan="6">Games</td><td colspan="6">MicroLens</td></tr><tr><td>Client 1</td><td>Client 2</td><td>Client 3</td><td>Client 4</td><td>Client 5</td><td>Imbalance</td><td>Client 1</td><td>Client 2</td><td>Client 3</td><td>Client 4</td><td>Client 5</td><td>Imbalance</td></tr><tr><td>BIGRec</td><td>0.0227</td><td>0.0338</td><td>0.0144</td><td>0.0163</td><td>0.0153</td><td>1.35</td><td>0.0148</td><td>0.0275</td><td>0.0059</td><td>0.0050</td><td>0.0031</td><td>4.50</td></tr><tr><td>+FedAvg</td><td>0.0157</td><td>0.0208</td><td>0.0235</td><td>0.0085</td><td>0.0127</td><td>1.76</td><td>0.0010</td><td>0.0047</td><td>0.0017</td><td>0.0001</td><td>0.0004</td><td>46.00</td></tr><tr><td>+FELLRec</td><td>0.0171</td><td>0.0211</td><td>0.0163</td><td>0.0136</td><td>0.0152</td><td>0.55</td><td>0.0170</td><td>0.0120</td><td>0.0066</td><td>0.0042</td><td>0.0062</td><td>3.04</td></tr></table>
Imbalance Degree $= (m_{\mathrm{best}} - m_{\mathrm{worst}}) / m_{\mathrm{worst}}$ , where $m_{\mathrm{best}}$ is the Recall@10 of the best client, and $m_{\mathrm{worst}}$ is the Recall@10 of the worst client. The results indicate that: 1) FELLRec effectively mitigates the performance imbalance among clients compared to FedAvg, primarily due to the dynamic balance strategy, which customizes dynamic parameter aggregation and learning speed for different clients. 2) The imbalance degree on the MicroLens dataset is more pronounced under FedAvg, potentially because the data distribution among its clients is much more diverse than in the other datasets. Such diversity may lead to conflicting optimization objectives among clients, thus exacerbating the imbalance.
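The Imbalance Degree values in Table 4 can be reproduced directly from the per-client Recall@10 scores; for example, the BIGRec+FELLRec row on Games:

```python
def imbalance_degree(recalls):
    """Imbalance Degree = (best - worst) / worst over per-client Recall@10."""
    return (max(recalls) - min(recalls)) / min(recalls)

# BIGRec+FELLRec per-client Recall@10 on Games (Table 4)
print(round(imbalance_degree([0.0171, 0.0211, 0.0163, 0.0136, 0.0152]), 2))  # 0.55
```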
- More In-depth Experiments. Ablation study, client heterogeneity analysis, client number study, and hyper-parameter analysis are in Appendix A.4.
# 5 Related Work
- LLM-based Recommendation. Recent advances in LLMs for recommendation systems have gained attention for their contextual understanding and pre-trained knowledge (Lin et al., 2024). Early efforts, such as P5 (Geng et al., 2022) and TALLRec (Bao et al., 2023b), focused on fine-tuning LLMs with prompts and recommendation data. Later works, like BIGRec (Bao et al., 2023a) and TIGER (Rajput et al., 2023), refine LLMs by grounding outputs in real item spaces and enhancing generative processes with semantic information. This shift moves from simply integrating recommendation data to fully leveraging LLMs for improved performance. As performance improves, the focus expands to include trustworthiness, such as fairness and explainability (Zhang et al., 2023c; Wang et al., 2023a). Studies like (Xu et al., 2024a; Dai et al., 2024; Xu et al., 2024b) highlight user unfairness in LLM-based recommendation, and LLMHG (Chu et al., 2024) introduces an explainable framework combining LLM reasoning with hypergraph neural networks.
- Federated Recommendation. Fed4Rec enhances data privacy in recommendation systems using federated learning (Yin et al., 2024; Zhang et al., 2024), operating under two main frameworks: 1) Peer-Peer Framework (Yang et al., 2022; Long et al., 2023a,b): Clients directly broadcast intermediate parameters to other clients, who aggregate them into their models. For example, SemiDFEGL (Qu et al., 2023) improves scalability via device-to-device collaboration, while DGRec (Zheng et al., 2023) uses a decentralized graph neural network. However, this framework has high communication costs due to large LLM parameters. 2) Client-Server Framework (Wang et al., 2022; Liu et al., 2023; Imran et al., 2023; Zhang et al., 2023a): Clients send local parameters to a central server for aggregation and redistribution. Examples include FedPerGNN (Wu et al., 2022), which incorporates high-order information while preserving privacy, and LightFR (Zhang et al., 2023b), a lightweight federated matrix factorization framework with efficient inference.
# 6 Conclusion
In this work, we proposed a federated framework for LLM-based recommendation (FELLRec). We first identified two key challenges in directly applying Fed4Rec to LLM-based recommendation: exacerbated client performance imbalance and high client resource costs. To address them, FELLRec introduces: 1) a dynamic balance strategy, which adapts parameter aggregation and learning speeds to each client during training to ensure relatively equitable performance across clients, and 2) a flexible storage strategy, which selectively retains sensitive LLM layers on the client side while offloading the remaining layers to the server to save resources. Overall, FELLRec offers an equitable and resource-efficient approach to safeguarding data privacy in LLM-based recommendation.
# Acknowledgments
This research/project is supported by the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-RP-2020-018).
# Limitations
First, the largest model we use in this work is LLaMA-7B; exploring even larger LLMs could provide further insights into the effectiveness of our method. Second, we primarily utilize two common types of white-box attack methods for model analysis, and we only use BIGRec as a case study for these attacks. However, different LLMs may demonstrate varying levels of resilience to different attack methods, so applying a broader range of attack methods across various LLM architectures would further validate the effectiveness of our approach. Third, while the dynamic balance strategy we designed for FELLRec has proven effective, exploring more fine-grained aggregation strategies (e.g., layer-based aggregation) is a promising direction for future work.
# References

Jingmin An, Guanyu Li, and Wei Jiang. 2024. Nrdl: Decentralized user preference learning for privacy-preserving next poi recommendation. Expert Systems with Applications, 239:122421.

Keqin Bao, Jizhi Zhang, Wenjie Wang, Yang Zhang, Zhengyi Yang, Yancheng Luo, Fuli Feng, Xiangnan He, and Qi Tian. 2023a. A bi-step grounding paradigm for large language models in recommendation systems. arXiv:2308.08434.

Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023b. Tallrec: An effective and efficient tuning framework to align large language model with recommendation. In RecSys. ACM.

Robin Burke, Nasim Sonboli, and Aldo Ordonez-Gauger. 2018. Balanced neighborhoods for multisided fairness in recommendation. In Conference on Fairness, Accountability and Transparency, pages 202-214. PMLR.

Di Chai, Leye Wang, Kai Chen, and Qiang Yang. 2020. Secure federated matrix factorization. IEEE Intelligent Systems, 36(5):11-20.

Chaochao Chen, Xiaohua Feng, Jun Zhou, Jianwei Yin, and Xiaolin Zheng. 2023. Federated large language model: A position paper. arXiv:2307.08925.

Hong Chen, Yudong Chen, Xin Wang, Ruobing Xie, Rui Wang, Feng Xia, and Wenwu Zhu. 2021. Curriculum disentangled recommendation with noisy multi-feedback. Advances in Neural Information Processing Systems, 34:26924-26936.

Shuangyi Chen, Yue Ju, Hardik Dalal, Zhongwen Zhu, and Ashish Khisti. 2024. Robust federated finetuning of foundation models via alternating minimization of lora. arXiv:2409.02346.

Zhixuan Chu, Yan Wang, Qing Cui, Longfei Li, Wenqing Chen, Sheng Li, Zhan Qin, and Kui Ren. 2024. Llm-guided multi-view hypergraph learning for human-centric explainable recommendation. arXiv:2401.08217.

Sunhao Dai, Chen Xu, Shicheng Xu, Liang Pang, Zhenhua Dong, and Jun Xu. 2024. Bias and unfairness in information retrieval systems: New challenges in the llm era. In KDD, pages 6437-6447. Association for Computing Machinery.

Tao Fan, Yan Kang, Guoqiang Ma, Weijing Chen, Wenbin Wei, Lixin Fan, and Qiang Yang. 2023. Fate-llm: A industrial grade federated learning framework for large language models. arXiv:2310.10049.

Zeno Gantner, Lucas Drumond, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2011. Personalized ranking for non-uniformly sampled items. In KDD Cup, pages 231-247. JMLR.

Yunfan Gao, Tao Sheng, Youlin Xiang, Yun Xiong, Haofen Wang, and Jiawei Zhang. 2023. Chat-rec: Towards interactive and explainable llms-augmented recommender system. arXiv:2303.14524.

Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, and Yongfeng Zhang. 2022. Recommendation as language processing (rlp): A unified pretrain, personalized prompt & predict paradigm (p5). In RecSys, pages 299-315.

Wes Gurnee and Max Tegmark. 2023. Language models represent space and time. arXiv:2310.02207.

Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. 2020. Lightgcn: Simplifying and powering graph convolution network for recommendation. In SIGIR, pages 639-648.

Chris Jay Hoofnagle, Bart Van Der Sloot, and Frederik Zuiderveen Borgesius. 2019. The European Union General Data Protection Regulation: What it is and what it means. Information & Communications Technology Law, 28(1):65-98.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv:2106.09685.

Mubashir Imran, Hongzhi Yin, Tong Chen, Quoc Viet Hung Nguyen, Alexander Zhou, and Kai Zheng. 2023. Refrs: Resource-efficient federated recommender system for dynamic and diversified user preferences. TOIS, 41(3):1-30.

Apple Inc. 2024. Introducing Apple's on-device and server foundation models.

Siwon Kim, Sangdoo Yun, Hwaran Lee, Martin Gubri, Sungroh Yoon, and Seong Joon Oh. 2024. Propile: Probing privacy leakage in large language models. Advances in Neural Information Processing Systems, 36.

Jiacheng Li, Ming Wang, Jin Li, Jinmiao Fu, Xin Shen, Jingbo Shang, and Julian McAuley. 2023. Text is all you need: Learning language representations for sequential recommendation. arXiv:2305.13731.

Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. 2021. Ditto: Fair and robust federated learning through personalization. In International Conference on Machine Learning, pages 6357-6368. PMLR.

Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. 2020. Federated optimization in heterogeneous networks. Proceedings of Machine Learning and Systems, 2:429-450.

Xinyu Lin, Wenjie Wang, Yongqi Li, Shuo Yang, Fuli Feng, Yinwei Wei, and Tat-Seng Chua. 2024. Data-efficient fine-tuning for llm-based recommendation. arXiv:2401.17197.

Ruixuan Liu, Yang Cao, Yanlin Wang, Lingjuan Lyu, Yun Chen, and Hong Chen. 2023. Privaterec: Differentially private model training and online serving for federated news recommendation. In SIGKDD, pages 4539-4548.

Jing Long, Tong Chen, Quoc Viet Hung Nguyen, Guandong Xu, Kai Zheng, and Hongzhi Yin. 2023a. Model-agnostic decentralized collaborative learning for on-device poi recommendation. In SIGIR, pages 423-432.

Jing Long, Tong Chen, Quoc Viet Hung Nguyen, and Hongzhi Yin. 2023b. Decentralized collaborative learning framework for next poi recommendation. TOIS, 41(3):1-25.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273-1282. PMLR.

Khalil Muhammad, Qinqin Wang, Diarmuid O'Reilly-Morgan, Elias Tragos, Barry Smyth, Neil Hurley, James Geraci, and Aonghus Lawlor. 2020. Fedfast: Going beyond average for faster training of federated recommender systems. In KDD, pages 1234-1242.

Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506-519.

Liang Qu, Ningzhi Tang, Ruiqi Zheng, Quoc Viet Hung Nguyen, Zi Huang, Yuhui Shi, and Hongzhi Yin. 2023. Semi-decentralized federated ego graph learning for recommendation. arXiv:2302.10900.

Shashank Rajput, Nikhil Mehta, Anima Singh, Raghunandan H Keshavan, Trung Vu, Lukasz Heldt, Lichan Hong, Yi Tay, Vinh Q Tran, Jonah Samost, et al. 2023. Recommender systems with generative retrieval. In NeurIPS. Curran Associates, Inc.

Zehua Sun, Yonghui Xu, Yong Liu, Wei He, Lanju Kong, Fangzhao Wu, Yali Jiang, and Lizhen Cui. 2022. A survey on federated recommendation systems. arXiv:2301.00767.

Lei Wang, Songheng Zhang, Yun Wang, Ee-Peng Lim, and Yong Wang. 2023a. Llm4vis: Explainable visualization recommendation using chatgpt. arXiv:2310.07652.

Qinyong Wang, Hongzhi Yin, Tong Chen, Junliang Yu, Alexander Zhou, and Xiangliang Zhang. 2022. Fast-adapting and privacy-preserving federated recommender system. The VLDB Journal, 31(5):877-896.

Song Wang, Xingbo Fu, Kaize Ding, Chen Chen, Huiyuan Chen, and Jundong Li. 2023b. Federated few-shot learning. In KDD, pages 2374-2385.

Chuhan Wu, Fangzhao Wu, Lingjuan Lyu, Tao Qi, Yongfeng Huang, and Xing Xie. 2022. A federated graph neural network framework for privacy-preserving personalization. Nature Communications, 13(1):3091.

Chen Xu, Sirui Chen, Jun Xu, Weiran Shen, Xiao Zhang, Gang Wang, and Zhenhua Dong. 2023a. P-mmf: Provider max-min fairness re-ranking in recommender system. In Proceedings of the ACM Web Conference 2023, WWW '23, pages 3701-3711, New York, NY, USA. Association for Computing Machinery.

Chen Xu, Wenjie Wang, Yuxin Li, Liang Pang, Jun Xu, and Tat-Seng Chua. 2024a. A study of implicit ranking unfairness in large language models. In EMNLP Findings, pages 7957-7970. Association for Computational Linguistics.

Chen Xu, Xiaopeng Ye, Wenjie Wang, Liang Pang, Jun Xu, and Ji-Rong Wen. 2024b. A taxation perspective for fair re-ranking. In SIGIR. Association for Computing Machinery.

Chenhao Xu, Youyang Qu, Yong Xiang, and Longxiang Gao. 2023b. Asynchronous federated learning on heterogeneous devices: A survey. Computer Science Review, 50:100595.

He Yang, Wei Xi, Zizhao Wang, Yuhao Shen, Xinyuan Ji, Cerui Sun, and Jizhong Zhao. 2023. Fedrich: Towards efficient federated learning for heterogeneous clients using heuristic scheduling. Information Sciences, 645:119360.

Xu Yang, Yuchuan Luo, Shaojing Fu, Ming Xu, and Yingwen Chen. 2022. Dpmf: Decentralized probabilistic matrix factorization for privacy-preserving recommendation. Applied Sciences, 12(21):11118.

Hongzhi Yin, Liang Qu, Tong Chen, Wei Yuan, Ruiqi Zheng, Jing Long, Xin Xia, Yuhui Shi, and Chengqi Zhang. 2024. On-device recommender systems: A comprehensive survey. arXiv:2401.11441.

Chunxu Zhang, Guodong Long, Tianyi Zhou, Peng Yan, Zijian Zhang, Chengqi Zhang, and Bo Yang. 2023a. Dual personalization on federated recommendation. arXiv:2301.08143.

Honglei Zhang, He Liu, Haoxuan Li, and Yidong Li. 2024. Transfr: Transferable federated recommendation with pre-trained language models. arXiv:2402.01124.

Honglei Zhang, Fangyuan Luo, Jun Wu, Xiangnan He, and Yidong Li. 2023b. Lightfr: Lightweight federated recommendation with privacy-preserving matrix factorization. TOIS, 41(4):1-28.

Jizhi Zhang, Keqin Bao, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023c. Is chatgpt fair for recommendation? Evaluating fairness in large language model recommendation. In RecSys. ACM.

Xiaolin Zheng, Zhongyu Wang, Chaochao Chen, Jiashu Qian, and Yao Yang. 2023. Decentralized graph neural network for privacy-preserving recommendation. In CIKM, pages 3494-3504.
# A Appendix

# A.1 Baselines

1) BIGRec (Bao et al., 2023a) uses LLaMA-7B as the LLM backbone and represents the user sequence with item titles. 2) RecFormer (Li et al., 2023) uses Longformer as the LLM backbone and represents user sequences with both item titles and descriptions. 3) FedAvg (McMahan et al., 2017) aggregates client model parameters without uploading their data. 4) FedProx (Li et al., 2020) extends FedAvg by adding a proximity term to the local optimization, allowing for more robust handling of heterogeneous data across clients. 5) Ditto (Li et al., 2021) is a personalized federated learning framework that simultaneously ensures fairness and robustness in statistically heterogeneous networks via a scalable solver. 6) RoLoRA (Chen et al., 2024) employs an alternating minimization approach for LoRA to enhance robustness against reduced fine-tuning parameters and heightened data heterogeneity. 7) MF (Gantner et al., 2011) is a classical matrix factorization (MF) approach. 8) LightGCN (He et al., 2020) leverages high-order neighbor information to enhance user and item representations. 9) FedMF (Chai et al., 2020) is a privacy-enhanced MF approach based on secure homomorphic encryption. 10) LightFR (Zhang et al., 2023b) is a lightweight federated recommendation framework with privacy-preserving MF. 11) FedPerGNN (Wu et al., 2022) designs a privacy-preserving graph expansion protocol to incorporate high-order information under privacy protection in GNN-based recommendation.

# A.2 Implementation

For all the baselines, we follow the original settings in their papers. Besides, we adopt the parameter-efficient fine-tuning technique LoRA to fine-tune BIGRec in the 1024-shot setting and fully fine-tune RecFormer. For the client partition, we set the client number to 5 and cluster users based on pre-trained MF user embeddings, leveraging the premise that users with analogous characteristics and preferences are more likely to congregate in similar areas or platforms. For FedAvg, FedProx, and FELLRec, we set the same local round number to ensure a fair comparison. The best hyper-parameters are selected with the following search scopes: the speed-related warm-up factor and the time-related warm-up factor are tuned in $\{0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3\}$ and $\{1, 3, 5, 10, 15, 20\}$, respectively. The experiments were conducted on four NVIDIA V100 GPUs.

Table 5: Statistics of three datasets.

<table><tr><td>Dataset</td><td>#User</td><td>#Item</td><td>#Interaction</td><td>Density</td></tr><tr><td>MicroLens</td><td>45,886</td><td>12,413</td><td>332,730</td><td>5e-04</td></tr><tr><td>Games</td><td>50,532</td><td>16,857</td><td>452,894</td><td>5e-04</td></tr><tr><td>Book</td><td>64,989</td><td>56,394</td><td>4,963,757</td><td>1.3e-03</td></tr></table>

Figure 5: Contributions of dynamic parameter aggregation and learning speed to FELLRec.
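The embedding-based client partition described in A.2 can be sketched with a plain NumPy k-means; the farthest-point initialization and toy embeddings below are illustrative choices, not details from the paper:

```python
import numpy as np

def partition_users(user_emb, n_clients, n_iter=20):
    """Cluster users into client groups by k-means on their embeddings."""
    # Deterministic farthest-point initialization of the centers.
    centers = [user_emb[0]]
    for _ in range(n_clients - 1):
        d = np.min([((user_emb - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(user_emb[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(n_iter):
        # Assign each user to its nearest center, then recompute centers.
        dists = ((user_emb[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = dists.argmin(1)
        for k in range(n_clients):
            if (assign == k).any():
                centers[k] = user_emb[assign == k].mean(0)
    return assign

# Toy MF embeddings: two well-separated user groups of 40 users each.
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0.0, 0.1, (40, 8)), rng.normal(5.0, 0.1, (40, 8))])
clients = partition_users(emb, n_clients=2)
```

Each resulting cluster then plays the role of one federated client holding the interaction data of its users.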
# A.3 Attack Model Selection

There are two kinds of attack models: white-box (Gurnee and Tegmark, 2023) and black-box (Papernot et al., 2017). In the white-box setting, the attacker has complete knowledge of the target model, including its architecture, parameters, and training data. In the black-box setting, attackers have very limited knowledge of the target model and can only speculate about parts of its behavior through its inputs and outputs. To validate the robustness of our approach, we conducted experiments involving two typical types of white-box attacks: the linear probe attack and the multilayer perceptron (MLP) probe attack (Kim et al., 2024). These methods have stronger attack capabilities than black-box methods due to their access to more prior knowledge.

The linear probe attack trains a linear regression model on the intermediate output embeddings of the LLM, attempting to recover the original data from these intermediate embeddings. We then compute the similarity ratio between the reconstructed embeddings and the ground-truth input embeddings, where a lower similarity ratio indicates greater difficulty for the attack model to extract useful information. The MLP probe attack is similar, but employs a multilayer perceptron (MLP) instead of a simple linear regression model to probe the intermediate representations.
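A minimal version of the linear probe attack can be sketched as follows; the synthetic embeddings and the least-squares probe are illustrative stand-ins for a real LLM's inputs and intermediate outputs:

```python
import numpy as np

# Hypothetical setup: the attacker observes intermediate embeddings H and
# tries to reconstruct the input embeddings X with a linear map.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                        # "ground-truth" inputs
W_true = rng.normal(size=(16, 32))
H = X @ W_true + 0.01 * rng.normal(size=(200, 32))    # intermediate outputs

# Linear probe: fit the map H -> X by least squares on an attacker-held split.
W_probe, *_ = np.linalg.lstsq(H[:150], X[:150], rcond=None)
X_rec = H[150:] @ W_probe                             # reconstruct held-out inputs

# Similarity ratio: mean cosine similarity between reconstruction and truth.
cos = (X_rec * X[150:]).sum(1) / (
    np.linalg.norm(X_rec, axis=1) * np.linalg.norm(X[150:], axis=1))
similarity_ratio = float(cos.mean())
```

Because these toy intermediate embeddings are an almost-linear function of the inputs, the probe reconstructs them nearly perfectly; a defense is effective when this similarity ratio drops well below one. The MLP probe replaces the least-squares fit with a small trained network.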
# A.4 In-depth Experimental Analysis
# A.4.1 Ablation Study
In this section, we evaluate the unique contributions of dynamic parameter aggregation and dynamic learning speed in comparison with FedAvg, presenting the results in Figure 5 for the Games and MicroLens datasets (the Book dataset is omitted due to analogous trends). The analysis reveals that: 1) FELLRec with dynamic parameter aggregation consistently surpasses FedAvg, highlighting the benefits of an attention-based parameter aggregation method that tailors aggregation to the specific data distribution within FELLRec. 2) Similarly, FELLRec with dynamic learning speed invariably outperforms FedAvg, emphasizing the advantages of customizing each client's learning speed based on its learning status. 3) The effectiveness of the two parts is consistent across different datasets and backend models, further demonstrating their robustness and generalizability.

Figure 6: Influence of client number on RecFormer-based FELLRec and FedAvg in Games.

Table 6: Performance (R@10) of FedAvg and FELLRec under different heterogeneity degrees in Games. As the concentration parameter $c$ increases, the heterogeneity degree decreases.

<table><tr><td>c</td><td>0.1</td><td>0.3</td><td>0.5</td><td>0.7</td><td>1</td></tr><tr><td>FedAvg</td><td>0.0134</td><td>0.0139</td><td>0.0141</td><td>0.0157</td><td>0.0162</td></tr><tr><td>FELLRec</td><td>0.0187</td><td>0.0211</td><td>0.0212</td><td>0.0216</td><td>0.0228</td></tr></table>
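The idea behind attention-based dynamic parameter aggregation can be illustrated schematically: instead of uniform FedAvg weights, each client aggregates every client's parameters with weights derived from parameter similarity, so clients with similar data distributions contribute more to each other. The softmax-over-negative-distance form below is our illustrative choice, not the paper's exact formulation:

```python
import numpy as np

def dynamic_aggregate(client_params, temperature=1.0):
    """Per-client aggregation: weight every client's parameter vector by
    a softmax over negative pairwise distances (similar clients count more)."""
    P = np.stack(client_params)                      # (n_clients, n_params)
    # Pairwise squared distances between client parameter vectors.
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    attn = np.exp(-d2 / temperature)
    attn /= attn.sum(1, keepdims=True)               # row-normalize to weights
    return attn @ P                                  # personalized parameters

# Clients 0 and 1 are similar; client 2 is an outlier and stays near itself.
params = [np.array([1.0, 0.0]), np.array([1.1, 0.0]), np.array([5.0, 0.0])]
personalized = dynamic_aggregate(params)
```

Here client 0's aggregate lands between clients 0 and 1 (about 1.05 in the first coordinate) while client 2's stays near 5.0, illustrating how similarity-weighted aggregation limits interference from dissimilar clients.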
# A.4.2 Influence of Client Heterogeneity Degree
To further demonstrate the robustness of our framework under different data distributions, we conduct additional experiments analyzing how the degree of data heterogeneity across clients influences the performance of federated learning baselines and our proposed method. Specifically, we follow the prevailing strategy (Wang et al., 2023b) and distribute samples to all clients based on the Dirichlet distribution. We report the performance under different heterogeneity degrees in Table 6, with $c$ as the concentration parameter determining the heterogeneity degree across clients. Intuitively, as the concentration parameter $c$ increases, the heterogeneity degree decreases. The results indicate that: 1) FELLRec consistently outperforms FedAvg when using the same backend model (BIGRec), demonstrating the superior capability of the dynamic balance strategy under different heterogeneity degrees. 2) As the heterogeneity degree increases, the performance of FedAvg decreases, corroborating the conclusion from Section 4.2 that the heterogeneity degree significantly impacts FedAvg's performance, causing fluctuations under different data distributions across clients.

Figure 7: Hyper-parameter analysis on Games.
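The Dirichlet-based split used to control heterogeneity can be sketched as follows; the function and toy labels are illustrative, and a smaller concentration $c$ gives each client a more skewed class mix:

```python
import numpy as np

def dirichlet_partition(labels, n_clients, c, seed=0):
    """Split sample indices across clients class by class; for each class,
    the per-client proportions are drawn from Dirichlet(c, ..., c)."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for y in np.unique(labels):
        idx = np.where(labels == y)[0]
        rng.shuffle(idx)
        # Proportion of this class that each client receives.
        props = rng.dirichlet(np.full(n_clients, c))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_idx[cid].extend(part.tolist())
    return client_idx

# Toy data: 3 classes x 100 samples, split across 5 clients with c = 0.1.
labels = np.repeat([0, 1, 2], 100)
clients = dirichlet_partition(labels, n_clients=5, c=0.1)
```

Every sample is assigned to exactly one client; with $c = 0.1$ most clients end up dominated by one or two classes, while $c = 1$ yields a much more uniform split.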
# A.4.3 Influence of Client Number
To demonstrate the scalability of our approach with an increased number of clients, we expanded the client count from 5 to 100 and report the comparative results of FELLRec and FedAvg in Figure 6. The findings indicate that: 1) As the number of clients grows, the performance of both FELLRec and FedAvg declines noticeably, likely due to the amplified diversity across client data distributions, which aggravates the imbalance and adversely affects overall performance. 2) Nevertheless, FELLRec consistently outperforms FedAvg in every client-count scenario. This enhanced performance is attributed to FELLRec's dynamic aggregation strategy, which effectively counters the imbalances stemming from the varied data distributions among clients.
# A.4.4 Hyper-parameter Analysis
We select sensitive hyper-parameters, adjusting them within the ranges delineated in Section 4.1. The experiment outcomes are visually represented in Figure 7. From our observations: The settings of the speed-related warm-up factor $\alpha$ and the time-related warm-up factor $\beta$ significantly affect the warm-up speed. Generally, enhancing the values of $\alpha$ and $\beta$ leads to improved performance, facilitating the integration of parameters from other clients to aid the learning process of the current client once it has adequately learned from its data. Nevertheless, overly aggressive acceleration in warm-up may prematurely incorporate parameters from other clients before the current client is prepared, potentially disrupting the learning trajectory and adversely affecting performance.
2025/A Federated Framework for LLM-based Recommendation/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c10303c074f109bef6de6159b13ef90d7ffbbe17e368a0ce95003416a33520a4
size 445108
2025/A Federated Framework for LLM-based Recommendation/layout.json
ADDED
The diff for this file is too large to render.
See raw diff
2025/A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization_ Data Augmentation and Semi-supervised Approaches/4d81316e-a002-42ff-afb1-c684630b2a03_content_list.json
ADDED
The diff for this file is too large to render.
See raw diff
2025/A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization_ Data Augmentation and Semi-supervised Approaches/4d81316e-a002-42ff-afb1-c684630b2a03_model.json
ADDED
The diff for this file is too large to render.
See raw diff
2025/A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization_ Data Augmentation and Semi-supervised Approaches/4d81316e-a002-42ff-afb1-c684630b2a03_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:089c6f73e70e60285e4ae62d11edd6807248e97ccbaf877622bdf39078b8ba75
size 1884092
2025/A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization_ Data Augmentation and Semi-supervised Approaches/full.md
ADDED
@@ -0,0 +1,465 @@
# A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization: Data Augmentation and Semi-supervised Approaches
|
| 2 |
+
|
| 3 |
+
Gaurav Sahu $^{1,2}$ Olga Vechtomova $^{1}$ Issam H. Laradji $^{1,2,3}$
|
| 4 |
+
|
| 5 |
+
<sup>1</sup>Cheriton School of Computer Science, University of Waterloo, Canada
|
| 6 |
+
|
| 7 |
+
$^{2}$ ServiceNow Research
|
| 8 |
+
|
| 9 |
+
<sup>3</sup>University of British Columbia, Canada
|
| 10 |
+
|
| 11 |
+
gaurav.sahu@uwaterloo.ca
|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
Existing approaches for low-resource text summarization primarily employ large language models (LLMs) like GPT-3 or GPT-4 at inference time to generate summaries directly; however, such approaches often suffer from inconsistent LLM outputs and are difficult to adapt to domain-specific data in low-resource scenarios. In this work, we propose two novel methods to effectively utilize LLMs for low-resource text summarization: 1) MixSumm, an LLM-based data augmentation regime that synthesizes high-quality documents (short and long) for few-shot text summarization, and 2) PPSL, a prompt-based pseudolabeling strategy for sample-efficient semi-supervised text summarization. Specifically, MixSumm leverages the open-source LLaMA-3-70b-Instruct model to generate new documents by mixing topical information derived from a small seed set, and PPSL leverages the LLaMA-3-70b-Instruct model to generate high-quality pseudo-labels in a semi-supervised learning setup. We evaluate our methods on the TweetSumm, WikiHow, and ArXiv/PubMed datasets and use LEval, a LLaMA-3-based evaluation metric, and ROUGE scores to measure the quality of generated summaries. Our experiments on extractive and abstractive summarization show that MixSumm and PPSL achieve competitive ROUGE scores as a fully supervised method with $5\%$ of the labeled data. We release our codebase here: https://github.com/ServiceNow/text-summarization-with-llms/
|
| 16 |
+
|
| 17 |
+
# 1 Introduction
|
| 18 |
+
|
| 19 |
+
Text summarization is a crucial task in today's data-driven era, with applications ranging from news digests to summarizing scientific papers to summarizing customer chatlogs in enterprises (Cohan and Goharian, 2017; Zhong et al., 2020; Goyal et al., 2022; Feigenblat et al., 2021). Modern summarization systems can be broadly categorized into two types: abstractive, where the generated summaries
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
Figure 1: L-Eval scores of different methods on low-resource extractive text summarization. The proposed MixSumm approach generates new documents by combining topics from multiple examples and outperforms other methods, including a strong LLM-based DA method (MixSumm w/o mixup) and a prompt-based semi-supervised approach (PPSL).
|
| 23 |
+
|
| 24 |
+
are concise paraphrases of the source text (Barzilay and McKeown, 2005; Nallapati et al., 2016), and extractive, which select and arrange existing sentences in the source text (Wong et al., 2008; Kryscinski et al., 2019). While abstractive methods produce more fluent and natural-sounding summaries—particularly beneficial for longer documents, extractive methods are valued for their simplicity and reliability in preserving factual accuracy; however, the performance of these summarization systems is often constrained by the availability and diversity of training data.
|
| 25 |
+
|
| 26 |
+
Data augmentation (DA) has been successfully used to address data scarcity, mitigate data annotation costs, and enhance model robustness in various natural language processing (NLP) tasks like text classification, summarization, and grammatical error correction (Wei and Zou, 2019; Feng et al., 2021; Wang et al., 2022). Traditional augmentation methods involving synonym replacement, sentence shuffling, and back-translation are effective to some extent, but they quickly saturate as they do not fully capture the semantic nuances of the text; however, the recent surge in the development of LLMs like
|
| 27 |
+
|
| 28 |
+

|
| 29 |
+
Figure 2: MixSumm pipeline. We first group the documents into $T$ groups using the $k$ -means algorithm. Then, we construct the prompt for LLaMA-3-70b-Instruct by including documents from different groups and instructing the LLM to mix information from multiple topics when generating the new documents. Finally, we train a PreSumm extractive summarizer (Liu and Lapata, 2019) on the combined seed and the synthesized dataset. For abstractive summarization, we add a DistilBART model after PreSumm.
|
| 30 |
+
|
| 31 |
+
GPT-4 (Achiam et al., 2023), LLaMA-3 (Touvron et al., 2023), and Claude-3 (Anthropic, 2024), has given birth to the paradigm of LLM-based data augmentation techniques (Dai et al., 2023; Ding et al., 2024) that can generate contextually rich textual augmentations to enhance the performance of various NLP models such as dialog modeling systems (Chintagunta et al., 2021; Wan et al., 2022) and text classifiers (Yoo et al., 2021; Sahu et al., 2022). Real-life scenarios also often have a small labeled set alongside a large pool of unlabeled data, and semi-supervised learning (SSL) has been successfully used in such scenarios for images and text classification (Guillaumin et al., 2010; Gong et al., 2016; Liu et al., 2020; Miyato et al., 2016; Xu et al., 2017).
|
| 32 |
+
|
| 33 |
+
Despite the recent advancements in LLMs, neither LLM-based DA methods nor LLM-based semi-supervised methods have been extensively explored for low-resource text summarization. Therefore, in this work, we propose two novel methods to effectively utilize LLMs for low-resource text summarization: 1) MixSumm, an LLM-based data augmentation technique for few-shot text summarization, and 2) Prompt-based Pseudo-labeling for Semi-supervised Learning (PPSL), a pseudo-labeling strategy for sample-efficient semi-supervised text summarization. More specifically, MixSumm is a two-stage prompt-based data augmentation approach that first instructs an LLM to synthesize diverse documents by mixing topical information derived from a small set of seed documents, and then generates summaries for the synthesized documents. On the other hand, $PPSL$ is a multi-step pseudo-labeling strategy for semi-supervised learning that generates high-quality pseudo-labels and selects the most informative samples in an SSL pipeline.
To evaluate the effectiveness of our proposed framework, we conduct extensive experiments on the TweetSumm (Feigenblat et al., 2021), WikiHow (Koupaee and Wang, 2018), and ArXiv/PubMed (Cohan et al., 2018) text summarization datasets. We use the open-source LLaMA-3-70b-Instruct LLM for our tasks instead of a closed-source LLM like the GPT family of LLMs. For evaluation, we use the standard ROUGE scores (Lin, 2004) as well as L-Eval, an open-source version of the promising LLM-based evaluator for text summarization, G-Eval (Liu et al., 2023b). Our experiments demonstrate that MixSumm and PPSL outperform strong data augmentation and semi-supervised baselines for low-resource summarization setups, and we show a knowledge distillation effect, where the knowledge of a LLaMA-3-70b model is distilled into much smaller summarization models with BERT<sub>base</sub> and DistilBART backbones (110M and 306M parameters, respectively).
To summarize the contributions of our work: 1) we propose MixSumm, a novel prompt-based data augmentation framework for the challenging low-resource setup of 50-shot text summarization, 2) we propose $PPSL$, a novel pseudo-labeling strategy for sample-efficient semi-supervised text summarization, 3) we show the effectiveness of LLaMA-3-70b-Instruct, an open-source LLM, instead of using expensive closed-source LLMs like GPT-4, and 4) we demonstrate effective knowledge distillation from LLaMA-3-70b (70B parameters) to BERT- and DistilBART-based summarization models with 110M and 306M parameters.

Figure 3: PPSL pipeline. Step 1: train a teacher model $M$ on the limited labeled dataset. Step 2: generate pseudo-labels for the unlabeled set with $M$ and shortlist 50 based on teacher confidence (see Equation 2). Step 3: prompt an LLM to summarize the shortlisted documents. Step 4: score the pseudo-labels from Step 3 by prompting an LLM and select the top 5. These summaries are then added to the training data for the next cycle.
# 2 Related Work
LLM-based Text Summarization. Fabbri et al. (2020) use round-trip back-translation to improve BART's abstractive summarization performance. On the other hand, Dou et al. (2021) propose GSum, a fully supervised transformer-based architecture that can use a guidance signal from an external source for improved abstractive text summarization. Goyal et al. (2022) employ zero-shot prompting on GPT-3 for open-ended news summarization and show that humans overwhelmingly prefer GPT-3 summaries over human summaries. Pu and Demberg (2023) use prompting on GPT-3 for controllable text summarization and show that while GPT-3 can follow simple constraints in the prompt, like length, it shows a noticeably lower degree of change in styles compared to human-written summaries. Liu et al. (2024a) and Zhang et al. (2024) benchmark the zero-shot performance of LLMs on instruction-controlled summarization and news summarization. Chintagunta et al. (2021) use GPT-3 as a data annotator for 210-shot medical dialog summarization and show significant gains, equivalent to using 6400 human-written labels. More recently, Liu et al. (2024b) fine-tune BART on LLM-generated summaries instead of human-generated summaries and show that LLMs are excellent references. Notably, these works prompt GPT-3 directly for summarization in their experiments. Except for the last two works, none of them use LLMs as data generators in low-resource setups. Additionally, they all use a closed-source LLM in their experiments. Zhang et al. (2023) propose an extract-then-generate method where they use in-context learning to generate extractive-summary-guided abstractive summaries. However, since they operate in a fully-supervised setting, the method suffers from scalability issues for large datasets. Mishra et al. (2023) propose LLM pseudo-labeling for semi-supervised dialog summarization, but our proposed $PPSL$ method is more sample-efficient as we use fewer labeled and unlabeled examples.
LLM-based Distillation and Data Augmentation in NLP. A large body of recent work uses LLMs as data generators for distilling a large teacher model's knowledge into smaller models for training instruction-tuned models and chain-of-thought reasoning, while reducing the human annotation load (Ho et al., 2023; Shum et al., 2023; Meng et al., 2023; Liu et al., 2023a; Peng et al., 2023). Bonifacio et al. (2022) use few-shot prompting to construct training datasets with query-document pairs for information retrieval. In the landscape of few-shot text classification, Yoo et al. (2021) propose GPT3Mix and Sahu et al. (2022, 2023) propose PromptMix, where both methods use LLMs as data generators and data labelers. We are inspired by the success of LLM-based DA for these diverse NLP tasks and adopt the best prompting practices based on these works. For instance, we generate diverse examples by mixing examples from different classes or groups as in GPT3Mix and PromptMix, and specify concrete criteria when using LLMs for generation and evaluation as in Pu and Demberg (2023) and Liu et al. (2024b). Furthermore, we conduct extensive experiments to test the capabilities of an open-source LLM, LLaMA-3-70b, for low-resource text summarization, instead of using closed-source LLMs like GPT-3 and GPT-4. Finally, we also test our LLM-based DA strategy on extremely long documents.
# 3 Notations
We denote an annotated, many-shot summarization dataset as $\mathcal{D} = \{(d_i, s_i)\}_{i=1}^{N}$, where $(d_{i},s_{i})$ denotes the $i$-th datapoint with input text document $d_{i}$ and its ground-truth summary $s_i$. We refer to the training, validation, and testing parts of the dataset as $\mathcal{D}_{train}$, $\mathcal{D}_{val}$, and $\mathcal{D}_{test}$, respectively. Given the many-shot training set $\mathcal{D}_{train}$, we construct a few-shot version of the dataset with $k$ examples, $\mathcal{D}_{F,train}$, and the unlabeled set $\mathcal{D}_{U,train}$ as follows:
Step 1. Given $\mathcal{D}_{train}$, we group the training articles by topics. We do not define the topics explicitly; instead, we identify $T$ groups by applying the $k$-means algorithm on the document embeddings (where $k = T$)<sup>1</sup>. We use the SBERT encodings (Reimers and Gurevych, 2019) of the input documents as document embeddings<sup>2</sup>. If an input document exceeds SBERT's context window length of 512 tokens (roughly 300-400 English words), we chunk the document into smaller pieces and then average the chunk embeddings to obtain the final document embedding.
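As an illustration, the chunk-and-average embedding step can be sketched as follows; `encode` is a stand-in for the SBERT encoder (our own name, not an API from the paper), and the word-based chunk budget only approximates the 512-token window:

```python
from typing import Callable, List

def embed_document(doc: str, encode: Callable[[str], List[float]],
                   max_words: int = 350) -> List[float]:
    """Embed a document; if it exceeds the encoder's window, chunk and mean-pool."""
    words = doc.split()
    if len(words) <= max_words:
        return encode(doc)
    # Split into fixed-size word chunks that each fit the encoder's window.
    chunks = [" ".join(words[i:i + max_words])
              for i in range(0, len(words), max_words)]
    vecs = [encode(c) for c in chunks]
    dim = len(vecs[0])
    # Average the chunk embeddings to obtain the final document embedding.
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
```

In practice, `encode` would wrap a sentence-transformers model; here it is deliberately abstract so the chunking logic stands on its own.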
Step 2. We construct our $k$ -shot dataset $\mathcal{D}_{F,train}$ by randomly sampling an equal number of datapoints from each of the $T$ clusters so that $\mathcal{D}_{F,train}$ has $k$ examples in total. In Section 6, we empirically show that our principled approach for constructing few-shot datasets is better than randomly sampling $k$ examples from $\mathcal{D}_{train}$ as it provides better topical coverage.
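A minimal sketch of this cluster-balanced sampling, assuming `cluster_labels[i]` gives the $k$-means cluster of document $i$ (the function name and signature are ours):

```python
import random

def build_few_shot_set(cluster_labels, k, seed=0):
    """Sample k examples total, k // T from each of the T clusters (Step 2 sketch)."""
    rng = random.Random(seed)
    clusters = {}
    for idx, c in enumerate(cluster_labels):
        clusters.setdefault(c, []).append(idx)
    per_cluster = k // len(clusters)  # equal share per cluster
    few_shot = []
    for members in clusters.values():
        few_shot.extend(rng.sample(members, per_cluster))
    return sorted(few_shot)
```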
<sup>1</sup> To clarify, $k$ in $k$-means is different from $k$ in $k$-shot.
<sup>2</sup> The sentence-transformers/all-mpnet-base-v2 model was used from the sentence-transformers library.
Step 3. We randomly select $m$ documents from $\mathcal{D}_{train} \setminus \mathcal{D}_{F,train}$ (without labels) to construct the unlabeled set of documents $\mathcal{D}_{U,train}$ to be used in the semi-supervised setup.
Problem Formulation (few-shot setup). Given a text summarization dataset $\mathcal{D}$ : 1) perform data augmentation on $\mathcal{D}_{F,train}$ to synthesize a labeled dataset $\mathcal{D}_{A,train}$ , and 2) train a text summarization model on the combined dataset $\mathcal{D}_{F + A,train}$ .
Problem Formulation (semi-supervised setup). Given a text summarization dataset $\mathcal{D}$ : 1) perform SSL on $\mathcal{D}_{F,train}$ and $\mathcal{D}_{U,train}$ to obtain a pseudo-labeled dataset $\mathcal{D}_{F+U,train}$ , and 2) train a text summarization model on the combined dataset $\mathcal{D}_{F+U,train}$ .
# 4 Methodology
# 4.1 MixSumm for Few-Shot Text Summarization
We now describe MixSumm, a two-step approach for synthesizing labeled summarization documents. First, we instruct an LLM to generate documents that cover multiple topics derived from a small seed set. Next, we instruct the LLM to generate summaries for those documents. The following sections describe our two-step procedure in detail.
Step 1: Synthesizing New Documents. First, for every dataset, we manually write a short description of the type and approximate size of articles in the dataset. These descriptions enable our approach to be used even in zero-shot settings. Next, we construct $T$ pairs of clusters $(c_i, c_j) \forall i, j \in 1, \ldots, T, i \neq j$, such that $c_j$ is the most distant cluster from $c_i$. We use the centroids of the clusters obtained during $k$-means clustering in Section 3 for this computation. We also ensure that all cluster pairs are unique, as $(c_i, c_j) \equiv (c_j, c_i)$. Finally, we combine the dataset description with $k$ examples from each cluster and instruct the LLM to generate new examples that cover topics from both clusters. Specifically, we instruct the LLM to generate examples that contain $\alpha\%$ topics from the first cluster $c_i$ and $(100 - \alpha)\%$ topics from the second cluster $c_j$, where $\alpha$ is sampled uniformly from 1 to 100. This is similar to applying the mixup algorithm (Zhang et al., 2018) in a natural language space and has proven highly effective for data augmentation in low-resource text classification setups (Yoo et al., 2021; Sahu et al., 2023). Prompt 1 in Appendix E shows the complete template for this step.
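The α-mixing can be sketched as below; the prompt wording is illustrative only, not the paper's exact Prompt 1 from Appendix E:

```python
import random

def build_mixup_prompt(desc, examples_i, examples_j, seed=None):
    """Sketch of the mixup prompt: alpha% topics from cluster c_i and
    (100 - alpha)% topics from cluster c_j, alpha ~ Uniform{1, ..., 100}."""
    rng = random.Random(seed)
    alpha = rng.randint(1, 100)
    prompt = (
        f"{desc}\n\n"
        "Examples from group A:\n" + "\n".join(examples_i) + "\n\n"
        "Examples from group B:\n" + "\n".join(examples_j) + "\n\n"
        f"Write a new document that covers {alpha}% topics from group A "
        f"and {100 - alpha}% topics from group B."
    )
    return prompt, alpha
```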
Step 2: Generating Summaries for the Synthesized Documents. Next, we instruct the LLM to generate extractive and abstractive summaries for the synthesized documents. For extractive summaries, we provide a generated document to the LLM and then instruct it to output a probability score for each sentence indicating whether that sentence should be included in the summary or not. We then rank the lines by the scores and choose the top $p$ lines, where $p$ is the summary size and depends on the dataset. We truncate the input document if it exceeds the LLM's context window length. This approach ensures the extractiveness of the generated summary labels as it mimics PreSumm (Liu and Lapata, 2019), a strong baseline for extractive text summarization. For abstractive summaries, instead of passing the entire source document and prompting the LLM to generate a summary, we ask it to summarize the previously generated extractive summaries. This approach is faster than passing the source document and summarizing as our input context is significantly smaller. More importantly, it enhances the factual correctness of the summaries.
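The top-$p$ selection that turns per-sentence scores into an extractive summary (keeping the chosen sentences in document order, as PreSumm does) can be sketched as:

```python
def extractive_summary(sentences, scores, p):
    """Pick the p highest-scoring sentences, then restore document order."""
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:p]
    return [sentences[i] for i in sorted(top)]
```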
# 4.2 PPSL for Semi-Supervised Text Summarization
This section describes our approach for semi-supervised text summarization. As shown in Figure 3, we employ a teacher-student training framework and divide our pipeline into four steps: we first train a teacher model on $\mathcal{D}_{F,train}$, then use it to generate pseudo-labels for $\mathcal{D}_{U,train}$, prompt the LLM to relabel the teacher's pseudo-labels from the previous step, and lastly score the new pseudo-labels with an LLM and select the top 5 to include in the next training cycle.
Step 1: Training the Teacher Model. First, we train a fully-supervised model $M$ (teacher) on the set of available labeled examples $\mathcal{D}_{F,train}$. We use PreSumm (Liu and Lapata, 2019) as our extractive summarizer as it has been shown to perform well for extractive summarization. Notably, PreSumm reformulates the task of generating extractive summaries as binary classification, where, for each sentence in the input document, the model predicts whether it will be present in the output summary. Then, the model combines the top-$n$ sentences with the highest probabilities, in their order of appearance in the original text, to construct the extractive summary. For abstractive summarization, we follow an extractive-then-abstractive approach and add a DistilBART model that summarizes PreSumm's summary. The rest of the subsequent steps remain unchanged.
Step 2: Generating Pseudo-labels using the Teacher Model. We use the teacher model $M$ to generate pseudo-labels for the unlabeled set $\mathcal{D}_{U,train}$. Next, we shortlist a subset of 50 pseudo-labels with the highest teacher confidence<sup>3</sup>. We describe the confidence computation in detail in Appendix B. We will show in Section 6 that shortlisting a subset of pseudo-labels helps make our method more sample-efficient, as we avoid relabeling a large unlabeled pool. This ultimately minimizes our LLM usage cost in the subsequent steps.
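Assuming each pseudo-label already carries a teacher-confidence score (the actual score is defined by Equation 2 in Appendix B and is simply taken as given here), the shortlisting step reduces to:

```python
def shortlist_by_confidence(pseudo_labels, confidences, n=50):
    """Keep the n pseudo-labels with the highest teacher confidence."""
    ranked = sorted(range(len(pseudo_labels)),
                    key=lambda i: confidences[i], reverse=True)
    return [pseudo_labels[i] for i in ranked[:n]]
```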
Step 3: LLM Relabeling of the Teacher's Pseudo-labels. After selecting the top 50 pseudo-labels using the teacher confidence defined in Equation 2, we prompt the LLM to generate a summary for each shortlisted unlabeled example, effectively relabeling the pseudo-label from Step 2. Specifically, we follow the prompt template in Figure 6a when generating summaries, which uses the same mechanism as the teacher $M$: for extractive summarization, we instruct the LLM to output probabilities for each sentence in the input document and then concatenate the top-$n$ lines in their order of appearance in the input text; for abstractive summarization, we further ask the LLM to summarize the extractive summary.
Step 4: LLM Scoring of Pseudo-labels. In the last step of $PPSL$, we prompt LLaMA-3, as shown in Figure 6b, to output a rating between 0 and 100 for the pseudo-labels from Step 3. Finally, we choose the top 5 pseudo-labeled examples with the highest LLM scores and add them to the existing labeled set. We repeat Steps 1-4 $N_{cycles}$ times to improve the initial summarization model $M$ and use the model obtained after the last cycle to generate summaries for the unseen test set.
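Putting the four steps together, one PPSL training cycle can be sketched with stand-in callables (none of these names or signatures come from the paper; `train` returns a model, `pseudo_label` a (summary, confidence) pair, and the `llm_*` functions stand in for the prompting steps):

```python
def ppsl(labeled, unlabeled, train, pseudo_label, llm_summarize, llm_score,
         n_cycles=3, shortlist_size=50, select_size=5):
    """Sketch of the four PPSL steps, repeated for n_cycles."""
    for _ in range(n_cycles):
        model = train(labeled)                                        # Step 1
        scored = [(doc, *pseudo_label(model, doc)) for doc in unlabeled]
        scored.sort(key=lambda t: t[2], reverse=True)                 # teacher confidence
        shortlist = [doc for doc, _, _ in scored[:shortlist_size]]    # Step 2
        relabeled = [(doc, llm_summarize(doc)) for doc in shortlist]  # Step 3
        relabeled.sort(key=lambda t: llm_score(*t), reverse=True)     # Step 4
        chosen = relabeled[:select_size]
        labeled = labeled + chosen
        picked = {doc for doc, _ in chosen}
        unlabeled = [d for d in unlabeled if d not in picked]
    return train(labeled)
```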
# 5 Experimental Setup
# 5.1 Datasets
We use three popular text summarization datasets in this work.
<table><tr><td></td><td>TweetSumm</td><td>WikiHow</td><td>ArXiv/PubMed</td></tr><tr><td># Train</td><td>858</td><td>168,000</td><td>300,000</td></tr><tr><td># Valid</td><td>100</td><td>6,000</td><td>12,500</td></tr><tr><td># Test</td><td>100</td><td>6,000</td><td>12,500</td></tr><tr><td>Avg. Doc. Length</td><td>245.01</td><td>579.8</td><td>4203.4</td></tr></table>
Table 1: Statistics of the text summarization datasets used in our experiments. Note: Avg. doc. length is reported in the number of tokens.
1) TweetSumm (Feigenblat et al., 2021) is a real-world customer service dataset that has 1100 conversations between a customer and an agent, and each conversation has three human-annotated extractive summaries. The training set has 858 dialogs, and the validation and test sets have 100 examples each. 2) WikiHow (Koupaee and Wang, 2018) contains WikiHow articles with their headlines as abstractive summaries. The dataset has over 180k articles, with around 168k training articles and 6000 test and validation articles. 3) ArXiv/PubMed (Cohan et al., 2018) is a collection of scientific articles from PubMed and ArXiv with their abstracts as summaries. The dataset has $\sim 325\mathrm{k}$ articles, with nearly 300k training articles and 12.5k test and validation articles.
Table 1 summarizes the dataset statistics. Since WikiHow and ArXiv/PubMed datasets do not have extractive labels, we follow the same steps as the original PreSumm paper (Liu and Lapata, 2019) and construct an extractive summary that maximizes the ROUGE score between the obtained extractive summary and the ground-truth abstractive summary. We chose the three datasets above as they cover diverse scenarios, from relatively short real-world customer-agent conversations in the TweetSumm dataset to long scientific articles in the ArXiv/PubMed dataset. We report the training implementation details in Appendix C.
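The extractive-label construction can be approximated with a greedy selection loop; the sketch below uses a simple unigram F1 in place of the full ROUGE score used by PreSumm, and all names are ours:

```python
from collections import Counter

def unigram_f1(pred_tokens, ref_tokens):
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / len(pred_tokens), overlap / len(ref_tokens)
    return 2 * p * r / (p + r)

def greedy_extractive_oracle(sentences, abstract, max_sents=3):
    """Greedily add the sentence that most improves unigram F1 against the
    abstract; stop when no sentence improves the score."""
    ref = abstract.split()
    chosen, best = [], 0.0
    while len(chosen) < max_sents:
        gains = []
        for i, s in enumerate(sentences):
            if i in chosen:
                continue
            cand = " ".join(sentences[j] for j in sorted(chosen + [i])).split()
            gains.append((unigram_f1(cand, ref), i))
        if not gains:
            break
        score, idx = max(gains)
        if score <= best:
            break
        best, chosen = score, chosen + [idx]
    return sorted(chosen)
```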
# 5.2 Evaluation
We evaluate the summary quality of the models using the following metrics:
ROUGE Scores. We use ROUGE-1 (R-1), ROUGE-2 (R-2), and ROUGE-L (R-L) F1 scores (Lin, 2004) for evaluation, where R-1 and R-2 measure the unigram and bigram overlap between the predicted and ground-truth summaries, respectively, while R-L measures their longest common subsequence and thus also accounts for word order. We use the pyrouge Python package to compute ROUGE scores in our setup and report them in Table 2.
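For intuition, R-1 F1 can be computed directly from its definition (a simplified sketch; in our setup the pyrouge package is used instead):

```python
from collections import Counter

def rouge1_f1(prediction, reference):
    """ROUGE-1 F1: unigram overlap between predicted and reference summaries."""
    pred, ref = Counter(prediction.split()), Counter(reference.split())
    overlap = sum((pred & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```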
L-Eval Scores. In addition to ROUGE, we use an LLM-based evaluation metric for our task. Specifically, we use LLaMA-Eval (L-Eval), an open-source variant of the G-Eval metric (Liu et al., 2023b), where we prompt LLaMA-3-70b-Instruct instead of a GPT model. We use L-Eval as it aligns better with human preferences for text summarization than ROUGE scores and other model-based evaluation metrics, such as BERTScore and BARTScore (Zhang et al., 2019; Yuan et al., 2021). It is also not biased towards LLM-generated content; however, since LLM inference is slow for long documents, we did not compute L-Eval scores during training and only computed them during final testing. When computing L-Eval scores, we provide the LLM with the input article and a (generated) summary and instruct it to score the summary on a scale of 1-10 (see Prompt 2 in Appendix E for the full L-Eval prompt template). Formally, given a test article $A$ and a summary $s$, we compute the L-Eval score as follows:
$$
\text{L-Eval}(A, s) = \sum_{r=1}^{10} p_r \cdot r, \tag{1}
$$
where $p_r$ is the probability of generating the rating $r$ . In practice, we can only look at the probabilities of top-5 tokens for LLaMA-3-70b-Instruct, so we assign a probability of 0 to the remaining ratings (that did not appear in the top-5).
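Equation 1 then reduces to a short expected-value computation over the returned top-5 token probabilities (a sketch; the token strings and probabilities below are illustrative):

```python
def l_eval_score(top_token_probs):
    """Expected rating over 1-10 from the top-5 token probabilities (Equation 1);
    ratings that do not appear in the top 5 implicitly get probability 0."""
    return sum(p * int(tok) for tok, p in top_token_probs.items()
               if tok.strip().isdigit() and 1 <= int(tok) <= 10)
```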
In total, computing test L-Eval scores for all the summarization models included in Table 2 took $\sim 5.6$ hrs for TweetSumm, $\sim 2.1$ days for WikiHow, and $\sim 6$ days for the ArXiv/PubMed dataset.
# 5.3 Baselines
We run the following baselines: 1) MixSumm (Ours). We augment $\mathcal{D}_{F,train}$ using the proposed MixSumm approach and then train a summarization model on $\mathcal{D}_{F + A,train}$. We also run two variants of this baseline to determine the effect of applying data augmentation and mixup, denoted by MixSumm w/o Aug. and MixSumm w/o Mixup, respectively. 2) Easy Data Augmentation (EDA). We use an edit-based data augmentation technique (Wei and Zou, 2019) to construct $\mathcal{D}_{A,train}$ instead of using MixSumm. Specifically, we apply the EDA technique to each sentence in an article to construct a new example. 3) MixSumm (rand.). Same as 1), but $\mathcal{D}_{F,train}$ is constructed by randomly selecting $k$ examples from the full training set instead of selecting examples from the $T$ clusters. We also run MixSumm (rand.) w/o Aug., where we do not perform any data augmentation. 4) Teacher Student Learning (TSL). A semi-supervised setup proposed by Zhuang et al. (2023) that employs a teacher-student learning framework similar to ours, except they do not use LLM-based pseudo-labeling or relabeling. We report the performance of the TSL (50:500) and TSL (500:500) models<sup>4</sup>. 5) PPSL. Our proposed semi-supervised approach using teacher confidence and prompt-based pseudo-label scoring for text summarization. We report results for the PPSL (50:250) setting that uses LLaMA-3-70b-Instruct. 6) LLaMA-3-70b ($k$-shot). An in-context learning-based approach where we prompt LLaMA-3-70b-Instruct with $k$ examples randomly selected from $\mathcal{D}_{F,train}$ and then instruct it to summarize a test article. We use the same prompt as the one we use for summarizing articles (Prompt 3 in Appendix E), except we remove the group information and directly populate it with $k$ examples. 7) Oracle. A fully supervised model trained on the complete training set $\mathcal{D}_{train}$ to gauge the upper-bound performance for this task.

<table><tr><td rowspan="2">Method</td><td colspan="4">TweetSumm</td><td colspan="4">WikiHow</td><td colspan="4">ArXiv/Pubmed</td></tr><tr><td>R-1 (%)</td><td>R-2 (%)</td><td>R-L (%)</td><td>L-Eval (%)</td><td>R-1 (%)</td><td>R-2 (%)</td><td>R-L (%)</td><td>L-Eval (%)</td><td>R-1 (%)</td><td>R-2 (%)</td><td>R-L (%)</td><td>L-Eval (%)</td></tr><tr><td colspan="13">Extractive Summarization</td></tr><tr><td>Oracle</td><td>65.7±0.3</td><td>56.6±0.4</td><td>64.9±0.4</td><td>86.2±0.3</td><td>30.5±0.5</td><td>8.7±0.3</td><td>19.2±0.6</td><td>87.3±0.1</td><td>34.6±0.4</td><td>12.4±0.2</td><td>19.6±0.4</td><td>78.1±0.5</td></tr><tr><td>TSL (50:500)</td><td>49.0</td><td>37.7</td><td>48.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>TSL (500:500)</td><td>59.0</td><td>48.3</td><td>58.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>EDA</td><td>51.1±0.7</td><td>39.2±0.9</td><td>53.0±0.2</td><td>34.3±1.2</td><td>23.4±0.5</td><td>4.1±0.3</td><td>13.0±0.5</td><td>42.1±0.8</td><td>26.2±1.1</td><td>7.9±1.0</td><td>13.1±0.6</td><td>17.2±0.5</td></tr><tr><td>PPSL (50:250)</td><td>58.4±1.2</td><td>50.1±0.3</td><td>59.1±1.2</td><td>56.3±0.9</td><td>26.0±0.2</td><td>6.9±0.3</td><td>15.1±0.2</td><td>69.3±2.1</td><td>29.0±0.5</td><td>9.4±0.7</td><td>17.4±0.3</td><td>49.3±1.4</td></tr><tr><td>MixSumm (rand.)</td><td>58.6±3.2</td><td>50.6±2.1</td><td>59.7±2.3</td><td>60.3±0.9</td><td>26.4±1.0</td><td>7.5±1.2</td><td>15.8±0.2</td><td>72.5±1.2</td><td>30.7±1.7</td><td>10.6±1.5</td><td>18.5±1.1</td><td>48.4±1.1</td></tr><tr><td>w/o Aug.</td><td>49.4±0.7</td><td>36.9±1.0</td><td>49.0±0.2</td><td>31.5±0.5</td><td>21.3±0.4</td><td>3.2±0.4</td><td>11.4±0.5</td><td>34.2±1.5</td><td>23.4±1.1</td><td>7.5±1.4</td><td>12.3±0.8</td><td>13.5±1.2</td></tr><tr><td>MixSumm (ours)</td><td>59.1±1.7</td><td>52.7±1.6</td><td>60.5±1.3</td><td>65.3±1.2</td><td>27.3±2.1</td><td>7.8±1.3</td><td>16.6±1.8</td><td>81.1±1.7</td><td>31.2±1.2</td><td>10.7±1.1</td><td>18.3±1.1</td><td>53.1±0.5</td></tr><tr><td>w/o Mixup</td><td>56.1±1.1</td><td>47.3±1.2</td><td>55.3±1.1</td><td>57.3±0.5</td><td>25.7±1.4</td><td>6.2±1.2</td><td>14.7±0.7</td><td>67.3±2.1</td><td>28.4±1.9</td><td>8.3±1.3</td><td>16.8±1.6</td><td>52.3±1.2</td></tr><tr><td>w/o Aug.</td><td>50.1±0.6</td><td>38.1±1.0</td><td>49.9±0.6</td><td>32.3±3.1</td><td>21.9±0.3</td><td>3.5±0.2</td><td>12.1±0.9</td><td>33.3±1.7</td><td>24.1±0.9</td><td>7.9±1.0</td><td>12.7±0.5</td><td>19.0±2.5</td></tr><tr><td>LLaMA-3 (0-shot)</td><td>50.3±0.5</td><td>47.7±0.4</td><td>49.9±0.3</td><td>52.3±1.2</td><td>12.2±0.2</td><td>2.7±0.5</td><td>8.1±0.4</td><td>32.3±0.3</td><td>23.6±0.2</td><td>4.6±0.7</td><td>15.4±0.3</td><td>38.4±0.5</td></tr><tr><td>LLaMA-3 (1-shot)</td><td>51.7±0.2</td><td>49.2±0.3</td><td>51.9±0.3</td><td>58.7±1.1</td><td>14.3±0.2</td><td>4.1±0.5</td><td>10.6±0.2</td><td>39.4±0.5</td><td>32.6±0.4</td><td>6.5±0.7</td><td>17.2±0.3</td><td>38.3±1.8</td></tr><tr><td>LLaMA-3 (5-shot)</td><td>62.4±0.5</td><td>54.3±0.7</td><td>60.3±1.1</td><td>67.5±0.6</td><td>28.7±0.3</td><td>7.5±0.9</td><td>17.1±0.3</td><td>71.3±0.4</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan="13">Abstractive Summarization</td></tr><tr><td>Oracle</td><td>44.7±0.2</td><td>20.1±0.4</td><td>36.8±0.2</td><td>72.3±0.6</td><td>28.7±0.3</td><td>6.2±0.7</td><td>13.6±0.4</td><td>78.4±0.8</td><td>28.4±0.2</td><td>10.2±0.4</td><td>15.8±0.8</td><td>64.3±0.5</td></tr><tr><td>EDA</td><td>41.5±1.2</td><td>15.0±0.8</td><td>32.2±1.1</td><td>44.2±1.6</td><td>14.7±1.8</td><td>3.2±1.0</td><td>6.8±1.5</td><td>40.5±1.4</td><td>16.3±1.5</td><td>5.9±0.8</td><td>8.1±1.7</td><td>36.8±1.3</td></tr><tr><td>PPSL (50:250)</td><td>42.7±1.5</td><td>18.1±1.1</td><td>33.8±1.3</td><td>58.1±1.3</td><td>26.9±1.8</td><td>5.7±1.0</td><td>12.1±1.5</td><td>62.1±1.4</td><td>26.7±1.5</td><td>9.5±0.8</td><td>13.8±1.7</td><td>61.3±1.3</td></tr><tr><td>MixSumm (ours)</td><td>43.1±1.1</td><td>18.4±1.5</td><td>34.7±1.0</td><td>62.3±1.4</td><td>26.7±1.7</td><td>5.3±0.9</td><td>11.3±1.4</td><td>67.5±1.3</td><td>27.1±1.4</td><td>9.8±0.7</td><td>13.5±1.6</td><td>61.4±1.2</td></tr><tr><td>w/o Mixup</td><td>37.5±1.0</td><td>16.0±1.3</td><td>31.2±0.9</td><td>58.2±1.2</td><td>23.2±1.5</td><td>4.6±0.8</td><td>9.8±1.2</td><td>58.7±1.1</td><td>23.6±1.2</td><td>8.5±0.6</td><td>11.7±1.4</td><td>55.8±1.0</td></tr><tr><td>w/o Aug.</td><td>23.7±1.2</td><td>10.1±1.7</td><td>18.3±1.1</td><td>34.9±1.5</td><td>14.0±1.9</td><td>2.9±1.0</td><td>6.2±1.6</td><td>36.4±1.4</td><td>14.5±1.3</td><td>5.4±0.8</td><td>7.4±1.8</td><td>19.2±1.3</td></tr><tr><td>LLaMA-3 (0-shot)</td><td>37.5±1.1</td><td>13.4±0.7</td><td>21.3±0.3</td><td>42.0±1.2</td><td>11.3±0.4</td><td>2.5±0.2</td><td>7.6±1.1</td><td>34.7±0.2</td><td>20.4±1.2</td><td>2.3±0.7</td><td>9.6±1.3</td><td>26.7±1.5</td></tr><tr><td>LLaMA-3 (1-shot)</td><td>37.8±1.0</td><td>13.5±0.8</td><td>21.5±0.4</td><td>41.7±1.3</td><td>11.5±0.5</td><td>2.4±0.2</td><td>7.8±1.0</td><td>34.9±0.3</td><td>20.2±1.1</td><td>2.4±0.6</td><td>9.5±1.2</td><td>26.9±1.4</td></tr><tr><td>LLaMA-3 (5-shot)</td><td>44.2±0.9</td><td>19.8±1.1</td><td>36.1±1.2</td><td>64.4±0.7</td><td>26.2±0.6</td><td>5.6±0.4</td><td>12.1±0.6</td><td>69.3±1.2</td><td>-</td><td>-</td><td>-</td><td>-</td></tr></table>

Table 2: Summarization Results. Comparison of different text summarization models on TweetSumm, WikiHow, and ArXiv/PubMed datasets. We report ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-L (R-L) $\mathrm{F}_1$ scores, and L-Eval scores. We report the mean±std. performance across 5 different seeds. Refer to Appendix C and Section 5.3 for metric and implementation details. Note. TSL results are reported from Zhuang et al. (2023). For EDA and MixSumm, we use a 50-shot $\mathcal{D}_{F,\text{train}}$ and generate 1000 examples as $\mathcal{D}_{A,\text{train}}$. Bold denotes the best-performing model in a given block and highlight denotes the overall best-performing model. For the ArXiv/PubMed dataset, we could fit only 2 documents into LLaMA-3's context (1 from $\mathcal{D}_{F,\text{train}} + 1$ generated), so we do not report LLaMA-3 (5-shot).
# 6 Results
# 6.1 MixSumm Generates Diverse Documents.
Table 7 shows qualitative examples generated by EDA, MixSumm w/o mixup, and MixSumm. We note that without mixup, MixSumm generates decent-quality documents, but each covers only a single topic (phone/electronic-device-related sentences). MixSumm, on the other hand, generates an example that mentions terms from two topics (a flight as well as a device-related issue). EDA generates the lowest-quality documents, with grammatical errors and other artifacts. However, we note that regardless of the quality of the original document, LLaMA-3-70b generates a high-quality summary in all cases.
Comparison w/ Other DA methods. From Table 2, we note that MixSumm achieves significantly higher L-Eval and ROUGE scores for both extractive and abstractive summarization tasks. This demonstrates the superior generation ability of LLMs compared to a simple edit-based DA technique like EDA. Next, we compare MixSumm with MixSumm w/o Mixup, a strong LLM-based data augmentation baseline, and note that removing the mixup component from MixSumm significantly lowers ROUGE and L-Eval scores across the board (as verified by a t-test).
# 6.2 Effect of Clustering Documents.
We perform a Student's t-test comparing results from MixSumm and MixSumm (rand.) and note that while ROUGE scores for MixSumm are generally higher than those of MixSumm (rand.), the differences are not significant. The only exception was the R-2 score on TweetSumm, where MixSumm outperforms MixSumm (rand.) by 2.1 points (R-2 of 52.7 v/s 50.6). On the other hand, the difference in L-Eval scores for the two methods was found to be significant by the t-test for all the datasets. This further suggests that ROUGE scores might not be able to capture the semantic correctness of the generated summaries and highlights the importance of an LLM-based evaluator that can discern nuanced semantics in natural language text. We observe a similar trend after removing the augmentation component from both methods (MixSumm w/o Aug. v/s MixSumm (rand.) w/o Aug.).
Overall, we conclude that MixSumm is better than MixSumm (rand.), and we should include diverse examples, if possible, in the prompt as it leads to direct improvements in generation quality.
# 6.3 DA v/s SSL Methods
Comparing MixSumm with PPSL and TSL in Table 2, we note that our 50-shot MixSumm and MixSumm (rand.) methods outperform TSL (50:500), which uses 50 labeled examples and 500 unlabeled examples. Next, our two methods outperform TSL (500:500) on all the metrics except the R-1 score (where the difference was not found to be significant). Overall, MixSumm is better than TSL for extractive summarization in extreme data-scarce settings. Next, we note that MixSumm achieves slightly higher ROUGE scores and significantly higher L-Eval scores than PPSL (50:250) for extractive summarization; however, for abstractive summarization, MixSumm and PPSL achieve very similar performance across the three datasets. Overall, we conclude that prompt-based data augmentation might be better than a semi-supervised method for extractive summarization in data-scarce setups, but both methods are equally performant for abstractive summarization.
# 6.4 Knowledge Distillation from LLaMA-3
First, we note that increasing the number of examples for the LLaMA-3 method leads to expected improvements in performance, except for L-Eval scores on the ArXiv/PubMed dataset, where the 0-shot and 1-shot LLaMA-3 models achieve similar L-Eval scores. This may suggest that LLaMA-3 struggles with understanding very long documents. Next, we note that 0-shot LLaMA-3 outperforms the 50-shot MixSumm w/o Aug. baseline on the TweetSumm dataset in terms of ROUGE and L-Eval scores, and it achieves competitive results on ArXiv/PubMed. Lastly, we note that MixSumm achieves competitive performance against LLaMA-3 as a summarizer for both extractive and abstractive tasks, whereas $PPSL$ is competitive with LLaMA-3 on only the abstractive task. Additionally, our methods achieve ROUGE scores comparable to the Oracle model despite using just 50 labels compared to the 1000 examples used by the Oracle (95% fewer). Overall, we conclude that both MixSumm and $PPSL$ are highly performant compared to the LLaMA-3-70b model, demonstrating an effective distillation of knowledge from LLaMA-3-70b to BERT- and DistilBART-based models. We include additional ablation studies in Appendix D that demonstrate the sample efficiency of $PPSL$ and show the importance of relabeling and the specific pseudo-labeling strategy used in $PPSL$.

# 7 Conclusion

In this work, we focus on low-resource text summarization and propose two novel approaches to effectively employ an LLM for the task: MixSumm, a two-step data augmentation method for few-shot summarization, and $PPSL$, a multi-step prompt-based semi-supervised framework for sample-efficient semi-supervised text summarization. Our experiments show that our methods outperform existing approaches for low-resource summarization and that they effectively transfer knowledge from a large teacher model, LLaMA-3-70b-Instruct, into much smaller BERT- and DistilBART-based models. LLM-based approaches are underexplored for low-resource text summarization, and through this work, we hope to spark an interest in the research community in addressing the various challenges of this task.

# 8 Limitations

We use LLaMA-3-70b-Instruct for our experiments, which has a context window size of 8192 tokens, so it is not possible to fit many long documents in the model's context (like articles in the
ArXiv/PubMed dataset). We can explore using position interpolation (PI) to increase the context window length of LLaMA (Chen et al., 2023) or switch to the more recent LLaMA-3.1 family of models.
Currently, we only consider text summarization for the English language. Moving forward, we can expand our method to multiple languages. More research on efficiently handling long documents during training is also needed, as we currently rely on a chunk-and-summarize subroutine to train our models on long documents, which significantly slows down document processing. We can consider using alternative transformer architectures such as Longformer (Beltagy et al., 2020) as PreSumm's backbone.

# 9 Ethics Statement

We generate large textual datasets using LLMs, and even though we use an instruction-tuned model, we need to be careful about any bias it might exhibit, or any potentially harmful content that it might generate. Language model debiasing is a common potential solution to address this issue (Meade et al., 2021; Guo et al., 2022). Additionally, we suggest involving a human moderator if these systems are to be made public-facing.

# References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
Anthropic. 2024. The Claude 3 model family: Opus, Sonnet, Haiku. *Claude-3 Model Card*, 1.
Regina Barzilay and Kathleen R McKeown. 2005. Sentence fusion for multidocument news summarization. Computational Linguistics, 31(3):297-328.
Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
Luiz Bonifacio, Hugo Abonizio, Marzieh Fadaee, and Rodrigo Nogueira. 2022. Inpars: Data augmentation for information retrieval using large language models. arXiv preprint arXiv:2202.05144.
Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595.
Bharath Chintagunta, Namit Katariya, Xavier Amatriain, and Anitha Kannan. 2021. Medically aware GPT-3 as a data generator for medical dialogue summarization. In Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations, pages 66–76, Online. Association for Computational Linguistics.
Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615-621, New Orleans, Louisiana. Association for Computational Linguistics.
Arman Cohan and Nazli Goharian. 2017. Scientific article summarization using citation-context and article's discourse structure. arXiv preprint arXiv:1704.06619.
Haixing Dai, Zhengliang Liu, Wenxiong Liao, Xiaoke Huang, Yihan Cao, Zihao Wu, Lin Zhao, Shaochen Xu, Wei Liu, Ninghao Liu, et al. 2023. Auggpt: Leveraging chatgpt for text data augmentation. arXiv preprint arXiv:2302.13007.
Bosheng Ding, Chengwei Qin, Ruochen Zhao, Tianze Luo, Xinze Li, Guizhen Chen, Wenhan Xia, Junjie Hu, Anh Tuan Luu, and Shafiq Joty. 2024. Data augmentation using llms: Data perspectives, learning paradigms and challenges. arXiv preprint arXiv:2403.02990.
Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, and Graham Neubig. 2021. Gsum: A general framework for guided neural abstractive summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4830-4842.
Alexander R Fabbri, Simeng Han, Haoyuan Li, Hao ran Li, Marjan Ghazvininejad, Shafiq Joty, Dragomir Radev, and Yashar Mehdad. 2020. Improving zero and few-shot abstractive summarization with intermediate fine-tuning and data augmentation. arXiv preprint arXiv:2010.12836.
Guy Feigenblat, Chulaka Gunasekara, Benjamin Sznajder, Sachindra Joshi, David Konopnicki, and Ranit Aharonov. 2021. TWEETSUMM - a dialog summarization dataset for customer service. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 245-260, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Steven Y Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, and Edward Hovy. 2021. A survey of data augmentation approaches for nlp. arXiv preprint arXiv:2105.03075.
Chen Gong, Dacheng Tao, Stephen J Maybank, Wei Liu, Guoliang Kang, and Jie Yang. 2016. Multi-modal curriculum learning for semi-supervised image classification. IEEE Transactions on Image Processing, 25(7):3249-3260.
Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022. News summarization and evaluation in the era of gpt-3. arXiv preprint arXiv:2209.12356.
Matthieu Guillaumin, Jakob Verbeek, and Cordelia Schmid. 2010. Multimodal semi-supervised learning for image classification. In 2010 IEEE Computer society conference on computer vision and pattern recognition, pages 902-909. IEEE.
Yue Guo, Yi Yang, and Ahmed Abbasi. 2022. Autodebias: Debiasing masked language models with automated biased prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1012-1023.
Namgyu Ho, Laura Schmid, and Se-Young Yun. 2023. Large language models are reasoning teachers. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14852-14882, Toronto, Canada. Association for Computational Linguistics.
Mahnaz Koupaee and William Yang Wang. 2018. WikiHow: A large scale text summarization dataset. arXiv preprint arXiv:1810.09305.
Wojciech Kryściński, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Evaluating the factual consistency of abstractive text summarization. arXiv preprint arXiv:1910.12840.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
Hanmeng Liu, Zhiyang Teng, Leyang Cui, Chaoli Zhang, Qiji Zhou, and Yue Zhang. 2023a. LogiCoT: Logical chain-of-thought instruction tuning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2908-2921, Singapore. Association for Computational Linguistics.
Quande Liu, Lequan Yu, Luyang Luo, Qi Dou, and Pheng Ann Heng. 2020. Semi-supervised medical image classification with relation-driven self-ensembling model. IEEE transactions on medical imaging, 39(11):3429-3440.
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023b. G-eval: NLG evaluation using gpt-4 with better human alignment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2511–2522, Singapore. Association for Computational Linguistics.
Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730-3740, Hong Kong, China. Association for Computational Linguistics.
Yixin Liu, Alexander Fabbri, Jiawen Chen, Yilun Zhao, Simeng Han, Shafiq Joty, Pengfei Liu, Dragomir Radev, Chien-Sheng Wu, and Arman Cohan. 2024a. Benchmarking generation and evaluation capabilities of large language models for instruction controllable summarization. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 4481-4501, Mexico City, Mexico. Association for Computational Linguistics.
Yixin Liu, Kejian Shi, Katherine He, Longtian Ye, Alexander Fabbri, Pengfei Liu, Dragomir Radev, and Arman Cohan. 2024b. On learning to summarize with large language models as references. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8647-8664, Mexico City, Mexico. Association for Computational Linguistics.
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
Nicholas Meade, Elinor Poole-Dayan, and Siva Reddy. 2021. An empirical survey of the effectiveness of debiasing techniques for pre-trained language models. arXiv preprint arXiv:2110.08527.
Yu Meng, Martin Michalski, Jiaxin Huang, Yu Zhang, Tarek Abdelzaher, and Jiawei Han. 2023. Tuning language models as training data generators for augmentation-enhanced few-shot learning. In International Conference on Machine Learning, pages 24457-24477. PMLR.
Nishant Mishra, Gaurav Sahu, Iacer Calixto, Ameen Abu-Hanna, and Issam H Laradji. 2023. Llm aided semi-supervision for extractive dialog summarization. arXiv preprint arXiv:2311.11462.
Takeru Miyato, Andrew M Dai, and Ian Goodfellow. 2016. Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 280-290.
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277.
Dongqi Pu and Vera Demberg. 2023. Chatgpt vs human-authored text: Insights into controllable text summarization and sentence style transfer. arXiv preprint arXiv:2306.07799.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
Gaurav Sahu, Pau Rodriguez, Issam Laradji, Parmida Atighehchian, David Vazquez, and Dzmitry Bahdanau. 2022. Data augmentation for intent classification with off-the-shelf large language models. In Proceedings of the 4th Workshop on NLP for Conversational AI, pages 47-57, Dublin, Ireland. Association for Computational Linguistics.
Gaurav Sahu, Olga Vechtomova, Dzmitry Bahdanau, and Issam H Laradji. 2023. Promptmix: A class boundary augmentation method for large language model distillation. arXiv preprint arXiv:2310.14192.
Kashun Shum, Shizhe Diao, and Tong Zhang. 2023. Automatic prompt augmentation and selection with chain-of-thought from labeled data. In *Findings of the Association for Computational Linguistics: EMNLP* 2023, pages 12113–12139, Singapore. Association for Computational Linguistics.
Leslie N Smith. 2015. Cyclical learning rates for training neural networks. arXiv preprint arXiv:1506.01186.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
Dazhen Wan, Zheng Zhang, Qi Zhu, Lizi Liao, and Minlie Huang. 2022. A unified dialogue user simulator for few-shot data augmentation. In *Findings of the Association for Computational Linguistics: EMNLP* 2022, pages 3788–3799, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Yiming Wang, Qianren Mao, Junnan Liu, Weifeng Jiang, Hongdong Zhu, and Jianxin Li. 2022. Noise-injected consistency training and entropy-constrained pseudo labeling for semi-supervised extractive summarization. In Proceedings of the 29th International Conference on Computational Linguistics, pages 6447-6456.
Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language
Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6382-6388, Hong Kong, China. Association for Computational Linguistics.
Kam-Fai Wong, Mingli Wu, and Wenjie Li. 2008. Extractive summarization using supervised and semi-supervised learning. In Proceedings of the 22nd international conference on computational linguistics (Coling 2008), pages 985-992.
Weidi Xu, Haoze Sun, Chao Deng, and Ying Tan. 2017. Variational autoencoder for semi-supervised text classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31.
Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-Woo Lee, and Woomyoung Park. 2021. Gpt3mix: Leveraging large-scale language models for text augmentation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2225-2239.
Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text generation. Advances in Neural Information Processing Systems, 34:27263-27277.
Haopeng Zhang, Xiao Liu, and Jiawei Zhang. 2023. Extractive summarization via chatgpt for faithful summary generation. arXiv preprint arXiv:2304.04193.
Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675.
Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B Hashimoto. 2024. Benchmarking large language models for news summarization. Transactions of the Association for Computational Linguistics, 12:39-57.
Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuan-Jing Huang. 2020. Extractive summarization as text matching. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6197-6208.
Yingying Zhuang, Jiecheng Song, Narayanan Sadagopan, and Anurag Beniwal. 2023. Self-supervised pre-training and semi-supervised learning for extractive dialog summarization. In Companion Proceedings of the ACM Web Conference 2023, pages 1069-1076.

# A Setting $T$

We experiment with different values of $T$ (the number of groups to divide the training set into) and report the validation performance in Table 3.
We find that $T = 10$ provides the best trade-off between the number of clusters and model performance, as increasing $T$ further leads to minimal gains or sometimes no gain at all.
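The group-then-sample step can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: a contiguous split stands in for the actual grouping of the corpus (which clusters the training set), and the function name and parameters are hypothetical.

```python
import random

def sample_few_shot(docs, T=10, per_group=5, seed=0):
    """Split `docs` into T groups and draw `per_group` examples from each,
    giving a (T * per_group)-shot seed set D_F,train. A contiguous split
    stands in here for the paper's grouping of the training corpus."""
    rng = random.Random(seed)
    size = (len(docs) + T - 1) // T  # ceil so no document is dropped
    groups = [docs[i * size:(i + 1) * size] for i in range(T)]
    seed_set = []
    for group in groups:
        seed_set.extend(rng.sample(group, min(per_group, len(group))))
    return seed_set

few_shot = sample_few_shot([f"doc-{i}" for i in range(1000)])  # 10 x 5 = 50 examples
```

With $T = 10$ and 5 samples per group, this yields the 50-shot seed set used throughout the experiments.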
<table><tr><td rowspan="2">T</td><td colspan="2">TweetSumm</td><td colspan="2">WikiHow</td></tr><tr><td>ROUGE-2</td><td>L-Eval</td><td>ROUGE-2</td><td>L-Eval</td></tr><tr><td>5</td><td>52.1</td><td>67.7</td><td>6.1</td><td>65.3</td></tr><tr><td>10</td><td>54.3</td><td>69.2</td><td>7.2</td><td>70.2</td></tr><tr><td>15</td><td>54.2</td><td>69.6</td><td>7.6</td><td>70.5</td></tr><tr><td>20</td><td>54.4</td><td>69.6</td><td>7.7</td><td>71.1</td></tr></table>
Table 3: Validation ROUGE-2 and L-Eval scores for different values of $T$ on the TweetSumm and WikiHow datasets.

# B Additional Details for PPSL

Computing Confidence. We compute the teacher confidence for a generated summary (a.k.a. pseudo-label) as follows: for extractive summarization with a PreSumm teacher model, let $p_{ij}$ denote the probability with which the $i$-th sentence $s_i$ in an unlabeled document $u_j$ is present in its summary $S_j$, and let $\mathbb{1}$ denote the
indicator function: $\mathbb{1}(s_i) = \left\{ \begin{array}{ll}1, & \text{if } s_i \in S_j \\ 0, & \text{otherwise.} \end{array} \right.$
We then compute the teacher confidence for the pseudo-label $S_{j}$ by averaging the probabilities of selected sentences. We define the teacher confidence $(C_j)$ for an input text $u_{j}$ as follows:
$$
C_{j} = \frac{\sum_{i=1}^{|u_{j}|} \mathbb{1}(s_{i}) \cdot p_{ij}}{n}, \tag{2}
$$
where $|u_j|$ denotes the number of sentences in the unlabeled document $u_j$ and $n$ is the number of sentences in the generated summary.
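Because the indicator zeroes out every non-selected sentence, Equation 2 reduces to averaging the selection probabilities of the sentences placed in the summary. A minimal sketch, assuming the teacher selects the $n$ highest-probability sentences (a PreSumm-style selection rule; the function name is illustrative):

```python
def teacher_confidence(probs, n):
    """Eq. (2): mean selection probability of the n sentences the teacher
    puts in the summary; all other sentences contribute 0 via the
    indicator. probs[i] plays the role of p_ij for sentence s_i of u_j."""
    selected = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)[:n]
    return sum(probs[i] for i in selected) / n

conf = teacher_confidence([0.9, 0.1, 0.8, 0.4, 0.7], n=3)  # (0.9 + 0.8 + 0.7) / 3 ≈ 0.8
```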
Baselines. We compare $PPSL$ with the following baselines: 1) PreSumm (Liu and Lapata, 2019). The original PreSumm model that pretrains a BERT model for summarization. We train two PreSumm models: one on a limited training set with 50 labeled examples to match the starting point of our semi-supervised setting, and another with 300 labeled examples, the same as the dataset size at the end of our training cycle. 2) Teacher-Student Learning (TSL) (Zhuang et al., 2023). The current state-of-the-art semi-supervised method on TweetSumm. The teacher-student learning framework computes model confidence similarly to ours, as follows: $C_j = \sum_{i=1}^{n}(C_{ij}) / n_j$. Here, $C_{ij} = p_{ij}q_{ij} + (1 - p_{ij})(1 - q_{ij})$, where $p_{ij}$ is the probability of sentence $i$ being
selected for the summary of dialog $j$ as estimated by the teacher model, and $q_{ij} = 1$ if $p_{ij}$ is among the top 4, else 0. We report the performance of the TSL (50:500) and TSL (500:500) models from the paper, as they are the closest to our setup (50/500 labeled examples + 500 unlabeled examples). 3) Confidence + G-4 relabeling + G-4 score (Ours). Our proposed method following the methodology in Section 4. We first use the PreSumm teacher model to shortlist 50 pseudo-labels (Stages 1 and 2), relabel them using GPT-4 (Stage 3), and then select the top 5 using the GPT-4 score (Stage 4). 4) Confidence + G-4 score. We skip Stage 3 from 3) and directly score the top 50 PreSumm pseudo-labels using GPT-4. We run this baseline to measure the effect of relabeling in our pipeline. 5) Confidence + G-4 relabeling. We skip Stage 4 from 3) and select the final 5 pseudo-labels based on PreSumm confidence. 6) Confidence + L-3 relabeling + L-3 score (Ours). Same as 3) but using LLaMA-3. 7) Confidence + L-3 score (Ours). Same as 4) but using LLaMA-3. 8) Confidence. We skip Stages 3 and 4 from 3) and select 5 PreSumm pseudo-labels based on PreSumm confidence. 9) Random. Same as 6) but instead of using the teacher confidence defined in Equation 2, we randomly select five PreSumm pseudo-labels to include in each cycle. The results for these baselines are shown in Table 4.
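For comparison, the TSL-style confidence above rewards agreement between the soft probability $p_{ij}$ and the hard top-4 selection $q_{ij}$. A sketch of that formula (function name and example inputs are illustrative):

```python
def tsl_confidence(p, k=4):
    """TSL-style confidence (Zhuang et al., 2023):
    C_ij = p*q + (1-p)*(1-q), averaged over sentences,
    where q_ij = 1 iff p_ij is among the top-k probabilities.
    Simplifies to p_ij for selected sentences and 1 - p_ij otherwise."""
    topk = set(sorted(range(len(p)), key=p.__getitem__, reverse=True)[:k])
    per_sentence = [p[i] if i in topk else 1 - p[i] for i in range(len(p))]
    return sum(per_sentence) / len(p)
```

A teacher that is confidently right about both inclusions and exclusions (probabilities near 1 for selected sentences, near 0 for the rest) scores close to 1.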

# C Implementation Details

<table><tr><td rowspan="2">Method</td><td colspan="3">TweetSumm</td><td colspan="3">WikiHow</td><td colspan="3">ArXiv/Pubmed</td></tr><tr><td>R-1 (%)</td><td>R-2 (%)</td><td>R-L (%)</td><td>R-1 (%)</td><td>R-2 (%)</td><td>R-L (%)</td><td>R-1 (%)</td><td>R-2 (%)</td><td>R-L (%)</td></tr><tr><td colspan="10">DistilBERTbase (50 labels)</td></tr><tr><td>Random</td><td>36.7 (1.5)</td><td>25.4 (1.4)</td><td>36.7 (1.3)</td><td>19.7 (1.4)</td><td>1.5 (1.1)</td><td>7.2 (1.3)</td><td>19.5 (1.3)</td><td>2.9 (0.9)</td><td>7.8 (1.2)</td></tr><tr><td>Confidence</td><td>43.5 (1.4)</td><td>35.1 (1.2)</td><td>46.8 (1.1)</td><td>21.3 (0.4)</td><td>3.7 (0.8)</td><td>10.3 (1.0)</td><td>23.4 (1.1)</td><td>5.2 (0.7)</td><td>12.5 (1.1)</td></tr><tr><td>+ G-4 relabeling</td><td>55.4 (1.3)</td><td>46.7 (0.6)</td><td>56.1 (0.9)</td><td>22.1 (0.4)</td><td>5.7 (0.6)</td><td>13.5 (0.7)</td><td>23.8 (0.8)</td><td>7.3 (1.3)</td><td>15.3 (0.8)</td></tr><tr><td>Confidence + G-4 score</td><td>46.8 (1.3)</td><td>37.4 (0.4)</td><td>48.3 (1.2)</td><td>21.7 (0.9)</td><td>4.6 (0.4)</td><td>12.1 (1.1)</td><td>24.1 (0.9)</td><td>6.7 (0.3)</td><td>13.8 (1.4)</td></tr><tr><td>+ G-4 relabeling (Ours)</td><td>57.6 (1.2)</td><td>46.3 (1.7)</td><td>56.2 (1.3)</td><td>22.7 (0.3)</td><td>5.9 (0.4)</td><td>13.8 (0.5)</td><td>24.7 (0.9)</td><td>8.1 (1.3)</td><td>15.9 (0.8)</td></tr><tr><td>Confidence + L-3 score</td><td>45.7 (1.1)</td><td>36.9 (0.2)</td><td>47.8 (1.2)</td><td>21.6 (0.4)</td><td>4.1 (0.5)</td><td>11.1 (0.8)</td><td>23.9 (0.9)</td><td>6.1 (0.3)</td><td>12.9 (1.3)</td></tr><tr><td>+ L-3 relabeling (Ours)</td><td>56.2 (1.1)</td><td>45.1 (1.2)</td><td>55.9 (1.1)</td><td>22.3 (0.1)</td><td>5.8 (0.2)</td><td>13.6 (0.3)</td><td>24.5 (0.6)</td><td>7.7 (1.4)</td><td>15.7 (0.3)</td></tr><tr><td colspan="10">BERTbase (50 labels)</td></tr><tr><td>TSL (50:500)</td><td>49.0</td><td>37.7</td><td>48.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Random</td><td>45.4 (1.4)</td><td>32.4 (1.9)</td><td>42.5 (1.8)</td><td>22.1 (1.7)</td><td>2.4 (1.5)</td><td>9.6 (1.5)</td><td>23.3 (1.4)</td><td>6.1 (1.2)</td><td>12.4 (1.3)</td></tr><tr><td>Confidence</td><td>49.7 (1.6)</td><td>39.5 (1.4)</td><td>49.4 (1.3)</td><td>24.5 (0.6)</td><td>4.8 (1.1)</td><td>12.8 (1.0)</td><td>27.6 (1.1)</td><td>7.7 (0.7)</td><td>14.2 (1.2)</td></tr><tr><td>+ G-4 relabeling</td><td>57.8 (1.2)</td><td>50.3 (0.5)</td><td>58.9 (1.2)</td><td>26.4 (0.3)</td><td>7.3 (0.5)</td><td>16.4 (0.8)</td><td>28.7 (0.9)</td><td>9.5 (1.1)</td><td>17.1 (0.8)</td></tr><tr><td>Confidence + G-4 score</td><td>52.3 (1.6)</td><td>42.8 (0.7)</td><td>51.0 (1.4)</td><td>25.2 (0.7)</td><td>5.6 (0.5)</td><td>13.1 (1.0)</td><td>27.7 (0.9)</td><td>7.9 (0.2)</td><td>15.5 (1.3)</td></tr><tr><td>+ G-4 relabeling (Ours)</td><td>58.9 (1.4)</td><td>50.4 (0.8)</td><td>59.4 (1.5)</td><td>26.1 (0.4)</td><td>7.2 (0.6)</td><td>15.9 (0.9)</td><td>29.1 (0.7)</td><td>9.7 (1.2)</td><td>17.7 (0.6)</td></tr><tr><td>Confidence + L-3 score</td><td>51.7 (1.2)</td><td>41.6 (1.2)</td><td>50.3 (1.2)</td><td>25.9 (0.3)</td><td>5.2 (0.2)</td><td>13.0 (0.8)</td><td>27.6 (0.4)</td><td>7.9 (0.5)</td><td>15.3 (1.1)</td></tr><tr><td>+ L-3 relabeling (Ours)</td><td>58.4 (1.2)</td><td>50.1 (0.3)</td><td>59.1 (1.2)</td><td>26.0 (0.2)</td><td>6.9 (0.3)</td><td>15.1 (0.2)</td><td>29.0 (0.5)</td><td>9.4 (0.7)</td><td>17.4 (0.3)</td></tr><tr><td colspan="10">BERTbase (500 labels)</td></tr><tr><td>TSL (500:500)</td><td>59.0</td><td>48.3</td><td>58.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Random</td><td>55.1 (1.4)</td><td>42.7 (1.1)</td><td>50.3 (1.2)</td><td>25.6 (1.3)</td><td>4.5 (1.1)</td><td>15.2 (1.3)</td><td>25.4 (1.5)</td><td>9.5 (1.2)</td><td>24.1 (1.2)</td></tr><tr><td>Confidence</td><td>61.8 (0.7)</td><td>54.9 (0.8)</td><td>60.3 (0.9)</td><td>28.4 (0.6)</td><td>8.0 (1.1)</td><td>22.5 (1.0)</td><td>29.4 (0.5)</td><td>11.5 (0.6)</td><td>27.7 (0.8)</td></tr><tr><td>+ L-3 score</td><td>63.4 (0.5)</td><td>55.6 (0.8)</td><td>62.1 (0.7)</td><td>28.9 (0.4)</td><td>8.2 (0.5)</td><td>28.4 (0.4)</td><td>31.7 (0.3)</td><td>11.8 (0.2)</td><td>29.4 (0.4)</td></tr><tr><td>+ L-3 relabeling (Ours)</td><td>64.2 (0.2)</td><td>56.2 (0.4)</td><td>62.8 (0.6)</td><td>30.7 (0.4)</td><td>8.8 (0.3)</td><td>29.5 (0.3)</td><td>33.5 (0.3)</td><td>12.3 (0.2)</td><td>32.2 (0.3)</td></tr></table>

Table 4: Mean (Std.) ROUGE F-1 scores of different pseudo-labeling strategies. R-1, R-2, and R-L denote ROUGE-1, ROUGE-2, and ROUGE-L metrics, respectively. TSL results from (Zhuang et al., 2023). Refer to Section 5 for method details. Bold indicates the best-performing and underline denotes the second-best performing method, respectively.

<table><tr><td rowspan="2">Method</td><td colspan="2">TweetSumm</td><td colspan="2">WikiHow</td><td colspan="2">ArXiv/Pubmed</td></tr><tr><td>R-2</td><td>L-Eval</td><td>R-2</td><td>L-Eval</td><td>R-2</td><td>L-Eval</td></tr><tr><td>PreSumm (50 labels)</td><td>37.1 (1.1)</td><td>31.2 (0.5)</td><td>3.2 (0.8)</td><td>34.2 (1.5)</td><td>7.3 (0.9)</td><td>13.5 (1.2)</td></tr><tr><td>PreSumm (300 labels)</td><td>51.1 (2.1)</td><td>60.5 (1.2)</td><td>7.6 (0.6)</td><td>68.1 (1.1)</td><td>10.8 (0.9)</td><td>49.5 (2.4)</td></tr><tr><td>PreSumm (500 labels)</td><td>54.4 (1.2)</td><td>67.1 (0.3)</td><td>7.9 (0.5)</td><td>74.4 (0.6)</td><td>11.3 (0.5)</td><td>58.2 (1.1)</td></tr><tr><td>PreSumm (750 labels)</td><td>56.1 (0.7)</td><td>70.3 (0.5)</td><td>8.5 (0.4)</td><td>76.5 (0.4)</td><td>12.1 (0.7)</td><td>62.8 (0.7)</td></tr><tr><td colspan="7">50 labels</td></tr><tr><td>Random</td><td>32.4 (1.9)</td><td>32.1 (1.1)</td><td>2.4 (1.5)</td><td>37.7 (1.6)</td><td>6.1 (0.2)</td><td>15.1 (2.3)</td></tr><tr><td>Confidence + G-4 score</td><td>42.8 (0.7)</td><td>46.2 (0.2)</td><td>5.6 (0.5)</td><td>59.4 (1.3)</td><td>7.9 (0.2)</td><td>40.1 (1.9)</td></tr><tr><td>+ G-4 relabeling (Ours)</td><td>50.4 (0.8)</td><td>58.4 (0.4)</td><td>7.2 (0.6)</td><td>70.3 (1.4)</td><td>9.7 (1.2)</td><td>52.5 (1.3)</td></tr><tr><td>Confidence + L-3 score</td><td>41.6 (1.2)</td><td>45.8 (0.7)</td><td>5.2 (0.2)</td><td>57.5 (1.4)</td><td>7.9 (0.5)</td><td>37.1 (1.8)</td></tr><tr><td>+ L-3 relabeling (Ours)</td><td>50.1 (0.3)</td><td>56.3 (0.9)</td><td>6.9 (0.3)</td><td>69.3 (2.1)</td><td>9.4 (0.7)</td><td>49.3 (1.4)</td></tr><tr><td colspan="7">500 labels</td></tr><tr><td>Random</td><td>42.7 (1.1)</td><td>52.3 (1.2)</td><td>4.5 (1.1)</td><td>52.7 (1.8)</td><td>9.5 (1.2)</td><td>44.1 (0.9)</td></tr><tr><td>Confidence + L-3 score</td><td>55.6 (0.8)</td><td>69.2 (0.7)</td><td>8.2 (0.5)</td><td>75.2 (1.4)</td><td>11.8 (0.2)</td><td>60.2 (0.5)</td></tr><tr><td>+ L-3 relabeling (Ours)</td><td>56.2 (0.8)</td><td>71.2 (0.9)</td><td>8.8 (0.3)</td><td>77.3 (1.3)</td><td>12.3 (0.2)</td><td>65.7 (0.3)</td></tr></table>

Table 5: Fully-supervised methods (first four rows) and semi-supervised approaches (remaining rows). All models use $\mathrm{BERT}_{\mathrm{base}}$ as PreSumm's backbone. The number of labeled examples for fully supervised models is shown in brackets. The semi-supervised methods use 50/500 labeled and 250 unlabeled examples.

Data Augmentation. We set the number of groups $T = 10$ for all datasets and randomly sample 5 examples from each group to get a 50-shot $\mathcal{D}_{F,train}$. Then, we obtain $\mathcal{D}_{A,train}$ by generating 1000 examples using the procedure described in Section 4. In the data generation prompt, we include five examples for each group for TweetSumm and WikiHow, but for ArXiv/PubMed, we could only fit two documents at a time in LLaMA-3's context window after applying the following truncation heuristic to the text: we include $l$ lines before and after each sentence in the ground truth summary such that we are able to fit two examples in the prompt. The average value of $l$ was 5.21 (so, for an average summary size of 8 sentences on the ArXiv/PubMed dataset, approximately 90 sentences were selected per example<sup>6</sup>). Here, we set the summary size $p$ to 4 sentences for the TweetSumm and WikiHow datasets, and 8 sentences for the ArXiv/PubMed dataset. We determine these summary sizes based on the average summary size in the few-shot training data $\mathcal{D}_{F,train}$. We host LLaMA-3-70b-Instruct on $4\times$ A100 GPUs with 80G VRAM each and use it as the backbone LLM for all our experiments. Generating $\mathcal{D}_{A,train}$ took $\sim 4.2$ hrs for TweetSumm, $\sim 11.3$ hrs for WikiHow, and $\sim 1.4$ days for the ArXiv/PubMed dataset.
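The truncation heuristic for long ArXiv/PubMed documents can be sketched as below. This is a minimal illustration under the assumption that documents are represented as sentence lists and summary sentences are given by index; the function name is hypothetical.

```python
def truncate_around_summary(doc_sents, summary_idx, l=5):
    """Keep only sentences within l positions of any ground-truth summary
    sentence; overlapping windows merge naturally via the index set."""
    keep = set()
    for i in summary_idx:
        keep.update(range(max(0, i - l), min(len(doc_sents), i + l + 1)))
    return [doc_sents[i] for i in sorted(keep)]

doc = [f"sentence {i}" for i in range(100)]
short = truncate_around_summary(doc, summary_idx=[10, 50, 90])  # 3 windows of 11 = 33 kept
```

With $l = 5$ and an 8-sentence summary, roughly $8 \times (2 \cdot 5 + 1) \approx 90$ sentences survive, matching the figure reported above.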
Training. For extractive summarization, we train a PreSumm model on the combined MixSumm-generated and seed few-shot dataset $\mathcal{D}_{F + A,\text{train}}$. We use the TransformerSum repository to implement our training pipeline. To handle long documents that cannot be fed to PreSumm at once, we introduce a subroutine that iteratively chunks and summarizes the document until we obtain a summary of size $p$. This iterative subroutine is crucial for training PreSumm models on the WikiHow and ArXiv/PubMed datasets, which have long input documents. For abstractive summarization, we follow an extractive-then-abstractive approach: for a given input document, we first obtain its extractive summary using the fully trained PreSumm model from the previous step. Then, we finetune a DistilBART model that summarizes the PreSumm summaries to generate abstractive summaries.
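The chunk-and-summarize subroutine can be sketched as follows, where `summarize(sents, p)` stands in for a trained extractive model returning a $p$-sentence summary; the chunk size and the exact recursion details are assumptions, not the repository's implementation.

```python
def chunk_and_summarize(sentences, summarize, max_len=512, p=8):
    """Iteratively reduce a long document: summarize each fixed-size chunk,
    concatenate the partial summaries, and repeat until the remaining text
    fits in a single pass of the model."""
    while len(sentences) > max_len:
        chunks = [sentences[i:i + max_len]
                  for i in range(0, len(sentences), max_len)]
        sentences = [s for chunk in chunks for s in summarize(chunk, p)]
    return summarize(sentences, p)
```

Each pass shrinks the document by roughly a factor of `max_len / p`, which is why this subroutine dominates processing time on long documents.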
We initialize the training process with a learning rate of $2 \times 10^{-5}$ and use a cyclic learning rate scheduler (Smith, 2015). We train all our models for 100 epochs with an early stopping criterion, where we
stop the training process if the validation ROUGE-2 score does not improve for more than 10 epochs. We use the AdamW optimizer (Loshchilov and Hutter, 2017) with $\epsilon = 1\times 10^{-8}$, $\beta_{1} = 0.9$, $\beta_{2} = 0.99$ and train all our models on one V100 GPU with 12G VRAM. We use the distilbart-12-6-cnn backbone for abstractive summarization and experiment with two backbones for the PreSumm model: DistilBERT<sub>base</sub> and BERT<sub>base</sub> (results in Table 6), finding BERT<sub>base</sub> to be better. Training a model on MixSumm-generated data took $\sim 2.5$ hrs for TweetSumm, $\sim 13.4$ hrs for WikiHow, and $\sim 2.7$ days for ArXiv/PubMed. Crucially, we repeat each experiment (data augmentation + model training) for 5 random seeds and report the mean and standard deviation for all models unless otherwise stated.
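The patience-based early-stopping criterion can be expressed as a small helper. This is a sketch of the stopping rule only (the actual loop lives in the training pipeline), with an illustrative function name:

```python
def early_stop_epoch(val_rouge2, patience=10):
    """Return the epoch at which training stops: the first epoch whose gap
    from the best-so-far validation ROUGE-2 epoch exceeds `patience`."""
    best, best_epoch = float("-inf"), 0
    for epoch, score in enumerate(val_rouge2):
        if score > best:
            best, best_epoch = score, epoch
        elif epoch - best_epoch > patience:
            return epoch
    return len(val_rouge2) - 1  # never triggered: run all epochs
```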
Semi-Supervised Text Summarization. We use the TransformerSum repository<sup>8</sup> to implement our training pipeline. We use PreSumm as our teacher model $M$ and experiment with two backbones: distilbert-base-uncased and bert-base-uncased. We perform experiments in two settings: 1) a data-scarce setting, where we fix the size of the labeled set $D_l$ to 50 for all the datasets, and 2) a data-abundant setting, where we set the size of $D_l$ to 500. We set $N_{cycles}$ to 50 for all experiments. We add 5 pseudo-labels to the training set in each cycle, resulting in a final training set size of 300 (50 labeled + 250 pseudo-labeled examples). We set the summary size $k$ to 4 for TweetSumm and 8 for WikiHow and ArXiv/PubMed, based on the average summary size of the labeled training set. For training, we start with a learning rate of $2 \times 10^{-5}$ on all the datasets and use a cyclic learning rate scheduler (Smith, 2015), which is the default setting in TransformerSum. Additionally, we use AdamW as our optimizer with $\epsilon = 1 \times 10^{-8}$, $\beta_1 = 0.9$, $\beta_2 = 0.99$. We train all our models on a single V100 GPU with 12G VRAM. We repeat each experiment for three different seeds and report the mean and standard deviation in our results unless otherwise stated.
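Putting the cycles together, one semi-supervised run can be sketched as the loop below. `train`, `pseudo_label`, and `select` are placeholders for the PreSumm trainer, the summary generator, and the confidence/relabel/score selection stages described in Appendix B; the structure is illustrative rather than the repository's code.

```python
def ppsl(labeled, unlabeled, train, pseudo_label, select,
         n_cycles=50, per_cycle=5):
    """Each cycle: retrain the teacher on the current labeled pool,
    pseudo-label the unlabeled pool, and promote the `per_cycle` best
    pseudo-labels into the labeled set."""
    unlabeled = list(unlabeled)
    for _ in range(n_cycles):
        model = train(labeled)
        candidates = [(doc, pseudo_label(model, doc)) for doc in unlabeled]
        promoted = select(candidates, per_cycle)  # confidence + relabel + score
        labeled = labeled + promoted
        taken = {doc for doc, _ in promoted}
        unlabeled = [doc for doc in unlabeled if doc not in taken]
    return labeled
```

With 50 seed labels, 250 unlabeled documents, 50 cycles, and 5 promotions per cycle, the loop terminates with the 300-example training set used in the experiments.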
# D Qualitative Results

# D.1 On the Sample Efficiency of PPSL

We now compare the sample efficiency of $PPSL$ against other methods. Referring to Table 5, for fully supervised methods, we note that including more labeled examples improves L-Eval and ROUGE scores across the board ("PreSumm (50 labels)" v/s "PreSumm (300 labels)"). Our semi-supervised approach using 50 labels with GPT-4 relabeling and GPT-4 score achieves performance competitive with the fully supervised PreSumm model trained on 300 labels. Notably, we obtain better L-Eval scores than "PreSumm (300 labels)" on the WikiHow and ArXiv/PubMed datasets and are competitive on TweetSumm. This is encouraging, as "PreSumm (300 labels)" approximates the best-case scenario in which all labels in the training set are high-quality. In the data-abundant setting, our proposed method with LLaMA outperforms the respective fully supervised model in terms of both ROUGE and L-Eval. From Table 2, we further note that our approach outperforms TSL (50:500) while using half the number of pseudo-labels; it also outperforms TSL (500:500) despite working with a more challenging labeled:unlabeled ratio of 50:250. We may further improve model performance by including some examples in the prompt. We plot the R-1 scores against the number of training cycles for $PPSL$ and other semi-supervised baselines (refer to Appendix B for more details) in Figure 4. Overall, the "Random" setting is highly unstable; "Confidence + G-4 score" slightly improves over "Confidence" on TweetSumm and WikiHow and, more importantly, is consistently more stable. Finally, our method with GPT-4 scoring and relabeling not only significantly boosts the R-1 scores (a visible gap between "Ours" and the rest) but also does so at a much faster rate. For all the datasets, our method peaks and stabilizes under 20 cycles (100 pseudo-labels), further endorsing its sample efficiency compared to other approaches.
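The cycle structure described above (start from a small labeled set and promote a fixed number of the best-scoring pseudo-labels each cycle) can be sketched as follows; `score_fn` is a placeholder for the teacher-confidence/LLM-scoring machinery, not the actual PPSL implementation.

```python
def self_training(labeled, unlabeled, score_fn, n_cycles=50, per_cycle=5):
    """Iteratively move the highest-scoring pseudo-labels into the training set.

    `score_fn(doc)` stands in for the teacher-confidence / LLM-score
    pipeline; each cycle promotes the `per_cycle` best remaining docs.
    """
    train = list(labeled)
    pool = list(unlabeled)
    for _ in range(n_cycles):
        if not pool:
            break
        pool.sort(key=score_fn, reverse=True)
        train.extend(pool[:per_cycle])
        pool = pool[per_cycle:]
    return train

# Toy run: 50 labeled doc ids, 250 unlabeled doc ids scored by their id.
final = self_training(range(50), range(50, 300), score_fn=lambda d: d)
```

After 50 cycles of 5 promotions each, the toy training set reaches the 300-example size used in the data-scarce setting.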

# D.2 Comparison of Pseudo-label Selection Strategies

Referring to Table 2, we note that all pseudo-label selection strategies outperform the random baseline. The "Random" baseline performs worse than the fully supervised counterpart on all datasets (R-2 in Table 2 v/s R-2 in Table 5), meaning that the majority of the shortlisted PreSumm pseudo-labels are low-quality. Using teacher confidence leads to slight performance gains on all the datasets, and
<table><tr><td rowspan="2">Method</td><td colspan="4">TweetSumm</td><td colspan="4">WikiHow</td><td colspan="4">ArXiv/Pubmed</td></tr><tr><td>R-1 (%)</td><td>R-2 (%)</td><td>R-L (%)</td><td>L-Eval (%)</td><td>R-1 (%)</td><td>R-2 (%)</td><td>R-L (%)</td><td>L-Eval (%)</td><td>R-1 (%)</td><td>R-2 (%)</td><td>R-L (%)</td><td>L-Eval (%)</td></tr><tr><td colspan="13">DistilBERTbase</td></tr><tr><td>Oracle</td><td>62.8±0.6</td><td>53.1±1.2</td><td>59.3±0.7</td><td>83.6±0.5</td><td>30.7±0.4</td><td>8.6±0.8</td><td>19.1±0.7</td><td>81.6±1.2</td><td>34.2±0.6</td><td>12.3±1.2</td><td>19.4±0.4</td><td>71.1±0.4</td></tr><tr><td>PPSL (50:250)</td><td>56.2±1.1</td><td>45.1±1.2</td><td>55.9±1.1</td><td>-</td><td>22.3±0.1</td><td>5.8±0.2</td><td>13.6±0.3</td><td>-</td><td>24.5±0.6</td><td>7.7±1.4</td><td>15.7±0.3</td><td>-</td></tr><tr><td>EDA</td><td>47.3±1.3</td><td>36.1±1.2</td><td>48.7±1.2</td><td>51.3±0.3</td><td>21.6±0.8</td><td>3.3±0.8</td><td>11.8±1.2</td><td>54.1±0.3</td><td>23.3±1.3</td><td>5.4±0.6</td><td>12.6±1.3</td><td>39.6±0.2</td></tr><tr><td>MixSumm (rand.)</td><td>56.9±2.5</td><td>46.1±3.4</td><td>58.7±3.1</td><td>56.7±0.6</td><td>22.7±2.1</td><td>6.1±1.2</td><td>14.8±1.3</td><td>65.9±0.4</td><td>24.8±1.5</td><td>8.3±1.7</td><td>16.0±1.3</td><td>48.1±0.6</td></tr><tr><td>w/o Aug.</td><td>41.7±1.6</td><td>32.4±1.2</td><td>43.6±2.1</td><td>23.4±1.2</td><td>19.2±1.8</td><td>2.1±0.6</td><td>9.1±1.4</td><td>20.4±1.0</td><td>21.4±1.2</td><td>4.7±0.3</td><td>10.4±1.2</td><td>13.3±0.5</td></tr><tr><td>MixSumm (ours)</td><td>57.3±2.4</td><td>46.8±3.1</td><td>57.2±2.7</td><td>60.3±0.5</td><td>23.4±1.7</td><td>6.5±1.6</td><td>15.2±1.1</td><td>68.4±1.3</td><td>25.7±1.7</td><td>8.6±2.1</td><td>16.6±1.4</td><td>51.2±0.6</td></tr><tr><td>w/o Mixup</td><td>54.2±1.7</td><td>44.3±1.4</td><td>53.5±1.4</td><td>55.3±1.2</td><td>22.1±1.3</td><td>4.7±0.2</td><td>12.8±1.2</td><td>62.3±0.7</td><td>23.8±1.2</td><td>6.1±0.9</td><td>14.1±1.3</td><td>42.1±1.1</td></tr><tr><td>w/o 
Aug.</td><td>42.8±1.1</td><td>34.1±1.1</td><td>44.2±1.4</td><td>28.4±0.8</td><td>19.7±1.2</td><td>2.8±0.4</td><td>10.2±1.1</td><td>31.4±0.4</td><td>22.6±1.3</td><td>4.9±0.6</td><td>11.3±1.3</td><td>18.6±0.5</td></tr><tr><td colspan="13">BERTbase</td></tr><tr><td>Oracle</td><td>65.7±0.3</td><td>56.6±0.4</td><td>64.9±0.4</td><td>86.2±0.3</td><td>30.5±0.5</td><td>8.7±0.3</td><td>19.2±0.6</td><td>87.3±0.1</td><td>34.6±0.4</td><td>12.4±0.2</td><td>19.6±0.4</td><td>78.1±0.5</td></tr><tr><td>TSL (50:500)</td><td>49.0</td><td>37.7</td><td>48.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>TSL (500:500)</td><td>59.0</td><td>48.3</td><td>58.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>EDA</td><td>51.1±0.7</td><td>39.2±0.9</td><td>53.0±0.2</td><td>34.3±1.2</td><td>23.4±0.5</td><td>4.1±0.3</td><td>13.0±0.5</td><td>42.1±0.8</td><td>26.2±1.1</td><td>7.9±1.0</td><td>13.1±0.6</td><td>17.2±0.5</td></tr><tr><td>PPSL (50:250)</td><td>58.4±1.2</td><td>50.1±0.3</td><td>59.1±1.2</td><td>56.3±0.9</td><td>26.0±0.2</td><td>6.9±0.3</td><td>15.1±0.2</td><td>69.3±2.1</td><td>29.0±0.5</td><td>9.4±0.7</td><td>17.4±0.3</td><td>49.3±1.4</td></tr><tr><td>MixSumm (rand.)</td><td>58.6±3.2</td><td>50.6±2.1</td><td>59.7±2.3</td><td>60.3±0.9</td><td>26.4±1.0</td><td>7.5±1.2</td><td>15.8±0.2</td><td>72.5±1.2</td><td>30.7±1.7</td><td>10.6±1.5</td><td>18.5±1.1</td><td>48.4±1.1</td></tr><tr><td>w/o Aug.</td><td>49.4±0.7</td><td>36.9±1.0</td><td>49.0±0.2</td><td>31.5±0.5</td><td>21.3±0.4</td><td>3.2±0.4</td><td>11.4±0.5</td><td>34.2±1.5</td><td>23.4±1.1</td><td>7.5±1.4</td><td>12.3±0.8</td><td>13.5±1.2</td></tr><tr><td>MixSumm (ours)</td><td>59.1±1.7</td><td>52.7±1.6</td><td>60.5±1.3</td><td>65.3±1.2</td><td>27.3±2.1</td><td>7.8±1.3</td><td>16.6±1.8</td><td>81.1±1.7</td><td>31.2±1.2</td><td>10.7±1.1</td><td>18.3±1.1</td><td>53.1±0.5</td></tr><tr><td>w/o 
Mixup</td><td>56.1±1.1</td><td>47.3±1.2</td><td>55.3±1.1</td><td>57.3±0.5</td><td>25.7±1.4</td><td>6.2±1.2</td><td>14.7±0.7</td><td>67.3±2.1</td><td>28.4±1.9</td><td>8.3±1.3</td><td>16.8±1.6</td><td>52.3±1.2</td></tr><tr><td>w/o Aug.</td><td>50.1±0.6</td><td>38.1±1.0</td><td>49.9±0.6</td><td>32.3±3.1</td><td>21.9±0.3</td><td>3.5±0.2</td><td>12.1±0.9</td><td>33.3±1.7</td><td>24.1±0.9</td><td>7.9±1.0</td><td>12.7±0.5</td><td>19.0±2.5</td></tr><tr><td>LLaMA-3 (0-shot)</td><td>50.3±0.5</td><td>47.7±0.4</td><td>49.9±0.3</td><td>52.3±1.2</td><td>12.2±0.2</td><td>2.7±0.5</td><td>8.1±0.4</td><td>32.3±0.3</td><td>23.6±0.2</td><td>4.6±0.7</td><td>15.4±0.3</td><td>38.4±0.5</td></tr><tr><td>LLaMA-3 (1-shot)</td><td>51.7±0.2</td><td>49.2±0.3</td><td>51.9±0.3</td><td>58.7±1.1</td><td>14.3±0.2</td><td>4.1±0.5</td><td>10.6±0.2</td><td>39.4±0.5</td><td>32.6±0.4</td><td>6.5±0.7</td><td>17.2±0.3</td><td>38.3±1.8</td></tr><tr><td>LLaMA-3 (5-shot)</td><td>62.4±0.5</td><td>54.3±0.7</td><td>60.3±1.1</td><td>67.5±0.6</td><td>28.7±0.3</td><td>7.5±0.9</td><td>17.1±0.3</td><td>71.3±0.4</td><td>-</td><td>-</td><td>-</td><td>-</td></tr></table>
Table 6: Extractive Summarization Results. Comparison of different text summarization models on TweetSumm, WikiHow, and ArXiv/PubMed datasets. We report ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-L (R-L) $F_1$ scores, and L-Eval scores. We report the mean $\pm$ std. performance across 5 different seeds. Refer to Appendix C and Section 5.3 for metric and implementation details. Note. TSL results are reported from Zhuang et al. (2023). For EDA and MixSumm we use a 50-shot $\mathcal{D}_{F,train}$ and generate 1000 examples as $\mathcal{D}_{A,train}$ . Bold denotes the best-performing model in a given block and highlight denotes the overall best-performing model. For the ArXiv/PubMed dataset, we could fit only 2 documents into LLaMA-3's context (1 from $\mathcal{D}_{F,train} + 1$ generated), so we do not report LLaMA-3 (5-shot).
adding GPT-4 score further improves the results ("Confidence" v/s "Confidence + G-4 score" in Table 2). These improvements indicate that the shortlisted PreSumm pseudo-labels include some good-quality pseudo-labels, too, and using GPT-4 to rate those pseudo-labels is crucial to picking them. We see similar trends when using LLaMA-3.
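One way to picture the "Confidence + G-4 score" strategy discussed above is a two-stage selection: shortlist candidates by teacher confidence, then rank the shortlist by the LLM rating. The shortlist size and tuple layout below are illustrative assumptions, not the paper's exact procedure.

```python
def select_pseudo_labels(candidates, k=5, shortlist=20):
    """Two-stage selection: teacher confidence first, LLM score second.

    `candidates` is a list of (doc_id, teacher_conf, llm_score) tuples;
    the top-`shortlist` by confidence are re-ranked by LLM score and
    the best `k` are returned.
    """
    by_conf = sorted(candidates, key=lambda c: c[1], reverse=True)[:shortlist]
    by_llm = sorted(by_conf, key=lambda c: c[2], reverse=True)
    return [doc_id for doc_id, _, _ in by_llm[:k]]

cands = [("a", 0.9, 3), ("b", 0.8, 9), ("c", 0.2, 10), ("d", 0.7, 7)]
picked = select_pseudo_labels(cands, k=2, shortlist=3)
```

Note that "c" is dropped at the confidence stage despite its high LLM score, which is exactly why combining the two signals differs from using either alone.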
To further confirm our findings, we conduct a qualitative study in the data-scarce setup, where we compute the ROUGE scores of the final 5 pseudo-labels for each method against the respective ground truth summaries, and Figure 5 shows the mean ROUGE-2 of the five selected pseudo-labels. To clarify, we obtained the "Oracle" results by directly selecting the final 5 pseudo-labels using ROUGE-2 scores computed against the ground truth. We note a stark difference between "Confidence" and "Oracle," which shows that relying solely on teacher confidence consistently leads to a selection of low-quality pseudo-labels. Combining GPT-4 score with teacher confidence is effective ("Confidence + G-4 score"), and adding the GPT-4 relabeling greatly boosts the quality of selected pseudo-labels ("Ours").
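The study above scores pseudo-labels with ROUGE-2 against the ground truth; for reference, a minimal bigram-overlap ROUGE-2 F1 written from the standard definition (not taken from any ROUGE package, so it omits stemming and other preprocessing) looks like this.

```python
from collections import Counter

def rouge2_f1(candidate, reference):
    """ROUGE-2 F1: harmonic mean of bigram precision and recall."""
    def bigrams(text):
        toks = text.lower().split()
        return Counter(zip(toks, toks[1:]))

    cand, ref = bigrams(candidate), bigrams(reference)
    overlap = sum((cand & ref).values())  # clipped bigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge2_f1("the cat sat on the mat", "the cat lay on the mat")
```

Here 3 of the 5 bigrams match in both directions, giving precision = recall = F1 = 0.6.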

# D.3 Effect of Relabeling

Referring to Tables 2 and 5, we observe that relabeling with LLMs significantly boosts summarization performance in terms of both ROUGE scores and L-Eval. With BERT<sub>base</sub> as the backbone, ROUGE-1 improves from 52.3 to 58.9 on the TweetSumm dataset, from 25.2 to 26.1 on the WikiHow dataset, and from 27.7 to 29.1 on the ArXiv/PubMed dataset. GPT-4 relabeling is also effective when using teacher confidence without the GPT-4 score ("Confidence" v/s "Confidence + G-4 relabeling"). Our earlier qualitative study supports these results, showing that relabeling improves the quality of pseudo-labels. We observe similar trends when using DistilBERT<sub>base</sub> as PreSumm's backbone and LLaMA-3 instead of GPT-4. When using 500 labels, we still observe performance gains, but they are smaller in relative terms than with 50 labels.
We conduct additional testing to analyze the performance of our best- and second-best-performing models, both of which involve relabeling. Welch's t-test yields a p-value $< 0.016$ when comparing an R-1 of 58.9 (1.4) for "Confidence + G-4 score + G-4 relabeling" v/s 57.8 (1.2) for "Confidence + G-4

# EDA

Generated Document: Customer: @VerizonSupport I penury need help. I do not have your service (6+ iodine hours to setup and cancel an install!) but the premature homeowner did. There is an ONT in the basement that every out a loud, very annoying beep sends 15 minutes? This is ridiculous. How do I iodine stop it? Agent: @633638 Press and hold the alarm silence button/ONT reboot button on the bring up unit for 30 seconds. That bequeath will silence it. Customer: @VerizonSupport done. Will reply back respond later if it beeps again. @VerizonSupport comprise I expected to do this every 24 hours? That's how long the Alarm Silence lasted. Bedden back to beeping every 15 moment since this morning. Agent: @633638 assay unplugging the power and removing the battery. indium After 3mins, put the battery back in and power on. This should assist with the beeping. Let the states know. DDG Customer: @VerizonSupport I no service with you at all. Can't I just unplug it and remove assault and battery the battery forever? Agent: @633638 Yes, you can do that since you do not inspection and repair have service with us. personify When the battery is dead the beeping will stop. HSB Customer: @VerizonSupport you! Thank Agent: @633638 personify You are welcome!
LLaMA-3 Summary: Customer::@VerizonSupport I penury need help. There is an ONT in the basement that every out a loud, very annoying beep sends 15 minutes? This is ridiculous. How do I iodine stop it? Agent: @633638 Press and hold the alarm silence button/ONT reboot button on the bring up unit for 30 seconds. That bequeath will silence it. Customer::@VerizonSupport done. Will reply back respond later if it beeps again. @VerizonSupport comprise I expected to do this every 24 hours? Agent: @633638 assay unplugging the power and removing the battery.

# MixSumm w/o mixup

Generated Document: Customer: My iPhone's battery is draining much faster than usual, and it's overheating. Any suggestions? Here's a link to a screenshot of my battery usage: https://t.co/xyzABC123 @AppleSupport can you help with this issue? Agent: @UserHelp123 We're here to assist you. First, could you let us know which iOS version your iPhone is currently using? Customer: @AppleSupport It's updated to iOS 14.4. Agent: @UserHelp123 Thanks for the information. Have you installed any new apps or updates around the time you started noticing these issues? Customer: @AppleSupport Yes, I installed a few new games last week. Agent: @UserHelp123 It's possible that those games are causing the battery drain and overheating. Could you try removing them to see if the situation improves? Customer: @AppleSupport I've uninstalled the games, but my phone is still overheating. Agent: @UserHelp123 Understood. Let's perform a soft reset on your iPhone. This won't erase your data but can help with such issues. If the problem persists, we should continue this conversation in DM to get more details and possibly set up a service appointment. Please reach out in DM with the exact time you started experiencing these issues and if you've noticed any specific patterns. https://t.co/NmDpIWxYfO
LLaMA-3 Summary: Customer: My iPhone's battery is draining much faster than usual, and it's overheating. Any suggestions? Agent: Have you installed any new apps or updates around the time you started noticing these issues? Customer: @AppleSupport I've uninstalled the games, but my phone is still overheating. Agent: Let's perform a soft reset on your iPhone. This won't erase your data but can help with such issues.

# MixSumm

Generated Document: Customer: @ VerizonSupport I recently moved into a house that has your equipment but I don't use your service. The device in the basement beeps every 20 minutes. How do I stop it? Agent: @ VerizonHelper Please press and hold the alarm silence button on the ONT for about 30 seconds. That should silence it temporarily. Customer: @ VerizonSupport I did that, but it started beeping again after a few hours. I need a permanent fix. I have a flight next week, and I can't deal with this noise. Agent: @ VerizonHelper If the alarm silence is temporary, try unplugging the device and removing its battery. After a few minutes, reconnect the battery and power it on. Customer: @ VerizonSupport I need this sorted out before I leave. Is there no faster way? Agent: @ VerizonHelper For a quick solution, you can permanently disconnect the power and remove the battery. This will stop the beeping entirely. Customer: @ VerizonSupport Thank you for the advice! I'll handle it immediately.
LLaMA-3 Summary: Customer: @ VerizonSupport I recently moved into a house that has your equipment but I don't use your service. The a device in the basement beeps every 20 minutes. How do I stop it? Agent: @ VerizonHelper Please press and hold the alarm silence button on the ONT for about 30 seconds. That should silence it temporarily. Customer: @ VerizonSupport I did that, but it started beeping again after a few hours. I need a permanent fix. I have a flight next week, and I can't deal with this noise. Agent: @ VerizonHelper For a quick solution, you can permanently disconnect the power and remove the battery. This will stop the beeping entirely.
Table 7: Full qualitative comparison of EDA, MixSumm w/o mixup, and MixSumm examples. For each method, the first row shows the generated document and the second row shows its LLaMA-3-generated summary. For these examples, group 1 contained customer conversations with phone companies, such as Verizon, and group 2 contained customer interactions with airline representatives, such as AirAsia and Delta. Note. For brevity, we do not include the input examples used in the prompt, and for EDA, we perform augmentations on one sentence of the document at a time. We note that without mixup, MixSumm generates a decent-quality document, but it covers only phone/electronic-device-related sentences. MixSumm, on the other hand, generates an example that mentions a flight as well as a device-related issue. Additionally, we note that regardless of the document's quality, LLaMA-3-70b generates a high-quality summary in all cases.

Figure 4: ROUGE-1 curves v/s # cycles for data-scarce setting. Each cycle denotes an addition of 5 new pseudo-labels to the training set. All results use $\mathrm{BERT}_{\text {base }}$ as the backbone for PreSumm. The curves are averaged for three seeds (the width denotes the std). Note that we report the GPT-4 version of our method here.
Figure 5: Quality of pseudo-labels by different strategies (data-scarce setup). The y-axis denotes the ROUGE-2 scores of the top 5 pseudo-labels computed against the respective ground truths. All results are for $\mathrm{BERT}_{\text {base }}$ as the backbone for PreSumm and three random seeds. Refer to Section D.2 for complete details.
relabeling" on the TweetSumm dataset, indicating that the difference is statistically significant.
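The significance test above is Welch's unequal-variance t-test. As a sketch of its mechanics, the function below computes the t statistic and Welch-Satterthwaite degrees of freedom from two samples; the per-seed scores are made-up toy numbers, and in practice the p-value would then come from the t distribution's CDF (e.g., scipy.stats.ttest_ind with equal_var=False).

```python
import math
from statistics import mean, stdev

def welch_t(sample1, sample2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with (possibly) unequal variances."""
    n1, n2 = len(sample1), len(sample2)
    v1 = stdev(sample1) ** 2 / n1  # squared standard error, sample 1
    v2 = stdev(sample2) ** 2 / n2
    t = (mean(sample1) - mean(sample2)) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Toy per-seed R-1 scores for two hypothetical models (5 seeds each).
t, df = welch_t([58.1, 59.4, 60.2, 57.9, 58.9],
                [56.8, 57.5, 58.3, 57.1, 59.3])
```

Unlike Student's t-test, the degrees of freedom come out fractional and are at most $n_1 + n_2 - 2$.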

# E Prompt Designs

In this section, we present the prompts used to synthesize new documents and their summaries.

Prompt 1: Prompt used for Generating New Articles
```markdown
## Instruction:
You are an expert data generator tasked with synthesizing new documents for a summarization task. {dataset_description}
Below are some example documents and their summaries from a group in the dataset (group 1):
{gp1 Documents}
Below are some example documents and their summaries from another group in the dataset (group 2):
{gp2 Documents}
Given the above documents, follow these instructions:
* Synthesize a new document that follows a similar format to the examples provided.
* The document should contain {document_size}.
* The document should be coherent and relevant to the topic.
* The document should be original and not copied from the examples.
* Ensure that the document covers {alpha} % topics from group 1 and 1 - {alpha} % topics from group 2.
* Wrap your response in the <document></document> tags.
* Do NOT include anything else like the examples in your output.
## Response:
```

Prompt 2: Prompt used for Scoring a Generated Summary
```txt
Below is an instruction that describes a task. Write a response that appropriately completes the request.
## Instruction: Given the document: {document}
Provided Summary: {summary}
Follow these instructions when writing your response:
* On a scale of 1-10, provide a numerical rating for the provided summary, with 10 denoting that the provided answer perfectly surmises the main points of the document.
* Your response should contain only the numerical rating. DO NOT include anything else like the provided answer, the ground truth answer, or an explanation of your rating scale in your response.
* Wrap your numerical rating inside <rating></rating> tags.
* Check very carefully before answering.
* Follow the output format as shown in the example below:
Example response: <rating>7</rating>
### Response:
```
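Because the scoring prompt asks for the rating inside <rating></rating> tags, the caller must extract it from the raw completion; a small defensive parser (our own sketch, not the paper's code) could look like this.

```python
import re

def parse_rating(completion, lo=1, hi=10):
    """Extract the numeric rating from an LLM completion.

    Returns None when no well-formed, in-range rating is found, so the
    caller can retry or discard the sample instead of crashing.
    """
    match = re.search(r"<rating>\s*(\d+)\s*</rating>", completion)
    if not match:
        return None
    rating = int(match.group(1))
    return rating if lo <= rating <= hi else None

ok = parse_rating("Sure! <rating>7</rating>")
bad = parse_rating("The summary is decent, maybe a 7?")
out_of_range = parse_rating("<rating>42</rating>")
```

Returning None for malformed or out-of-range outputs keeps a single badly formatted completion from derailing the selection loop.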

# Input Prompt:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

# Text to summarize:

1. <Line 1 of the input text>
2. <Line 2 of the input text>
3. <Line 3 of the input text>
...
N. <Line N of the input text>

For each sentence in the input text, output the probability of it appearing in the summary such that the summary captures the key points of the text. Output in the format <line id>. <probability>.

# Response:

1. 0.9
2. 0.1
3. 0.8
...
N. 0.2

(a) Generating pseudo-labels. We attach a line ID to each sentence in the input document and instruct the LLM to use those line IDs in its response.
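The `<line id>. <probability>` response format shown in (a) has to be parsed back into an extractive summary; the sketch below, an assumed post-processing step rather than code from the paper, keeps the k most probable sentences in document order.

```python
import re

def extract_summary(sentences, llm_output, k):
    """Parse '<line id>. <probability>' lines and keep the top-k sentences.

    Line ids in the LLM output are 1-based; the selected sentences are
    returned in their original document order.
    """
    probs = {}
    for line in llm_output.splitlines():
        m = re.match(r"\s*(\d+)\.\s*([01](?:\.\d+)?)\s*$", line)
        if m:
            probs[int(m.group(1))] = float(m.group(2))
    top = sorted(probs, key=probs.get, reverse=True)[:k]
    return [sentences[i - 1] for i in sorted(top)]

sents = ["Hello.", "How are you?", "My phone broke.", "Please help."]
output = "1. 0.9\n2. 0.1\n3. 0.8\n4. 0.4"
summary = extract_summary(sents, output, k=2)
```

Lines that do not match the expected format are simply skipped, which tolerates stray chatter in the completion.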

# Input Prompt:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

Text to summarize:

<Input Conversation/Text to Summarize>

Summary:

<Summary to evaluate>

Provide a rating between 0-100 based on the following criteria:

1) The summary should be concise...
2) The summary should capture ...
3) The summary should be extractive, i.e.,...
4) The response should be strictly numerical.

Check very carefully before giving your rating.

# Response:

85

(b) Scoring pseudo-labels. The two-part prompt contains a text-summary pair (Part 1) and a list of the evaluation criteria (Part 2). Note: Refer to Section 4.2 for complete details on the evaluation criteria.

Figure 6: Different prompts used in the experiments.

Prompt 3: Prompt used for Summarizing an Article in MixSumm
```txt
Below is an instruction that describes a task. Write a response that appropriately completes the request.
# Instruction:
You are an expert data annotator tasked with summarizing documents for a summarization task. {dataset_description}
Below are some example documents and their summaries from a group in the dataset (group 1): {gp1_documents}
Below are some example documents and their summaries from another group in the dataset (group 2): {gp2_documents}
A document is composed of the following sentences:
{sentences}
Given the sentences above:
* You are to construct an extractive summary for the document by selecting some sentences from above.
* The summary captures the main points of the article.
* Now, output the probability of a sentence being included in the summary.
* Do NOT include anything else like the sentence in your output.
* Output your probabilities in the format <line id>. <probability>. Refer to the example below:
1. 0.73
2. 0.65
3. 0.95
etc.
## Response:
```
2025/A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization_ Data Augmentation and Semi-supervised Approaches/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5a2e9d6cc31e4a6a9dc4407f7926c6388a9c5d8e61b82b8a095a725584f5ed50
size 761836
2025/A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization_ Data Augmentation and Semi-supervised Approaches/layout.json
ADDED
The diff for this file is too large to render.
See raw diff
2025/A Large-Scale Benchmark for Vietnamese Sentence Paraphrases/57e20527-a109-4d68-8e0b-0c23a110c2ae_content_list.json
ADDED
The diff for this file is too large to render.
See raw diff
2025/A Large-Scale Benchmark for Vietnamese Sentence Paraphrases/57e20527-a109-4d68-8e0b-0c23a110c2ae_model.json
ADDED
The diff for this file is too large to render.
See raw diff
2025/A Large-Scale Benchmark for Vietnamese Sentence Paraphrases/57e20527-a109-4d68-8e0b-0c23a110c2ae_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8288cf398f4537053d82e68bf59a43e404141bfe9f6fb5174b243dc03abe4d5a
size 1523109
2025/A Large-Scale Benchmark for Vietnamese Sentence Paraphrases/full.md
ADDED
@@ -0,0 +1,453 @@
# A Large-Scale Benchmark for Vietnamese Sentence Paraphrases

Sang Quang Nguyen<sup>1,2</sup>, Kiet Van Nguyen<sup>1,2</sup>

<sup>1</sup>University of Information Technology, Ho Chi Minh City, Vietnam

<sup>2</sup>Vietnam National University, Ho Chi Minh City, Vietnam

sangnq.19@grad.uit.edu.vn

kietnv@uit.edu.vn

# Abstract
This paper presents ViSP, a high-quality Vietnamese dataset for sentence paraphrasing, consisting of 1.2M original-paraphrase pairs collected from various domains. The dataset was constructed using a hybrid approach that combines automatic paraphrase generation with manual evaluation to ensure high quality. We conducted experiments using methods such as back-translation, EDA, and baseline models like BART and T5, as well as large language models (LLMs), including GPT-4o, Gemini-1.5, Aya, Qwen-2.5, and Meta-Llama-3.1 variants. To the best of our knowledge, this is the first large-scale study on Vietnamese paraphrasing. We hope that our dataset and findings will serve as a valuable foundation for future research and applications in Vietnamese paraphrase tasks. The dataset is available for research purposes at https://github.com/ngwgsang/ViSP.
# 1 Introduction
Sentences or phrases that express the same idea using different words are called paraphrases (Bhagat and Hovy, 2013). Paraphrasing enriches the available data while retaining the main meaning of the original sentence.
Paraphrase generation is crucial for various tasks. In question answering (Bernhard and Gurevych, 2008; Dong et al., 2017; Gan and Ng, 2019), generating paraphrases of retrieved answers lets QA systems provide more comprehensive and nuanced responses. In information retrieval (Wallis, 1993; Zukerman et al., 2002), paraphrasing can help search engines find relevant documents even if the user's query does not match the exact wording of the documents. In machine translation (Callison-Burch et al., 2006; Russo-Lassner et al., 2005), paraphrasing techniques can enhance translation accuracy by generating more natural and semantically equivalent translations. In chatbots (Marceau et al., 2022), paraphrasing enables the bot to respond more flexibly and naturally to user queries, adapting to variations in phrasing.
Although Vietnamese is a widely spoken language, it is considered a low-resource language in NLP. Most previous work on paraphrase generation has focused mainly on English, such as MS-COCO (Lin et al., 2014), PAWS (Alzantot et al., 2018), QQP<sup>1</sup>, and ParaSCI (Dong et al., 2021). Although multilingual datasets such as TaPaCo (Scherrer, 2020) exist, TaPaCo contains only 962 Vietnamese sentence pairs, which is far too few, and because the pairs are translated from English, the resulting sentences are often not fluent. Other work related to paraphrasing, such as ViQP (Nguyen et al., 2023b), is limited in scope to questions, and questions in Vietnamese have a completely different structure from ordinary sentences.
In this paper, our two main contributions are as follows:
1. The creation of ViSP, the first large-scale dataset for Vietnamese sentence paraphrasing. We developed a dataset containing over 1.2 million pairs of Vietnamese sentences across diverse topics. Each original sentence is accompanied by multiple paraphrases, all manually verified by a team of annotators to ensure high quality and accuracy.
2. Comprehensive exploration of Vietnamese sentence paraphrasing. We evaluated baseline models and compared their performance with traditional methods, such as rule-based approaches and back translation, as well as human performance. This analysis highlights the relative strengths and limitations of automated paraphrase generation for Vietnamese.
We hope ViSP, together with our empirical study, can serve as a starting point for future Vietnamese paraphrase research and applications.

$^{1}$ https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs
# 2 Dataset Creation
In this section, we introduce the process of constructing the ViSP dataset (see Figure 1), which consists of five stages: Data Collection, Preprocessing, Example Creation, Data Generation, and Data Validation.

Figure 1: Overview of the process of creating our ViSP dataset.
# 2.1 Data Collection
We collect sentences from publicly available resources containing original Vietnamese documents: the UIT-ViQuAD (Nguyen et al., 2020b), UIT-ViNewsQA (Van Nguyen et al., 2022), ALQAC (Nguyen et al., 2023a), and ViNLI (Huynh et al., 2022) datasets. These datasets provide a diverse range of data sourced from Vietnamese news articles and Wikipedia, offering valuable material for the sentence paraphrasing task.
After collecting data from the available datasets, we proceed to extract sentences from context segments of the above data sources using underthesea, a Vietnamese NLP toolkit<sup>2</sup>.
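This extraction step can be sketched as follows; the paper uses underthesea's sentence tokenizer, and the naive regex splitter below merely stands in for it so the sketch runs without the toolkit:

```python
import re

def extract_sentences(context: str) -> list[str]:
    """Split a context segment into sentences.

    Naive stand-in for underthesea's sentence tokenizer (used in the
    paper): splits on whitespace that follows sentence-final punctuation.
    """
    parts = re.split(r"(?<=[.!?])\s+", context.strip())
    return [p for p in parts if p]

paragraph = ("Berlin la thu do cua Duc. Thanh pho nay co hon 3 trieu dan! "
             "Ban da den Berlin chua?")
print(extract_sentences(paragraph))
```

In practice, abbreviations and diacritics make Vietnamese sentence splitting harder than this regex suggests, which is why a dedicated toolkit is used.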
# 2.2 Preprocessing
First, we manually filtered the sentences to remove those that were incorrect, unsuitable for Vietnamese language norms, or contained offensive language.
Next, we classified the sentences by topic using Gemini (Team et al., 2023). The model categorized sentences into various labels, including health, society, lifestyle, science, culture, computer, law, sports, business, and other. This step allowed us to organize the sentences by subject matter, offering a comprehensive overview of different domains within the Vietnamese language context.
# 2.3 Example Creation
To evaluate the generative performance of the Gemini model, we divided the team into two groups: the generation group $\{H_1, H_2\}$ and the evaluation group $\{H_3, H_4, H_5, H_6, H_7\}$. We randomly selected 350 sentences, 300 for testing and 50 for constructing the few-shot prompt (referred to as the few-shot corpus). The annotators in the evaluation group manually generated paraphrases for the selected sentences and then cross-validated one another's paraphrases. The annotators in the generation group each crafted paraphrases manually, providing a direct comparison against the AI group $\{G_1, G_2, G_3\}$. We split the test sentences into six rounds $\{R_1, R_2, R_3, R_4, R_5, R_6\}$, each consisting of 50 sentences. Each few-shot prompt used 10 samples randomly selected from the 50 samples in the few-shot corpus created by the evaluation group.
<table><tr><td></td><td>R1</td><td>R2</td><td>R3</td><td>R4</td><td>R5</td><td>R6</td></tr><tr><td>G1</td><td>73.10</td><td>68.30</td><td>70.78</td><td>69.56</td><td>70.87</td><td>68.18</td></tr><tr><td>G2</td><td>69.51</td><td>68.74</td><td>68.98</td><td>69.45</td><td>70.44</td><td>65.55</td></tr><tr><td>G3</td><td>70.28</td><td>66.93</td><td>68.45</td><td>69.50</td><td>69.16</td><td>65.09</td></tr><tr><td>H1</td><td>72.11</td><td>66.79</td><td>70.64</td><td>70.25</td><td>70.56</td><td>67.13</td></tr><tr><td>H2</td><td>71.08</td><td>69.13</td><td>70.75</td><td>68.30</td><td>69.55</td><td>69.48</td></tr></table>
Table 1: Comparison of Gemini with few-shot examples and human performance across six rounds, measured by BLEU-4.
Table 1 demonstrates that Gemini outperforms human annotators in paraphrase generation across multiple rounds. Specifically, the model achieved a win rate of $83.33\%$ against $H_{1}$ and $66.67\%$ against $H_{2}$. These results underscore the effectiveness of AI in replacing manual paraphrase generation, offering both cost savings and greater coverage.
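The reported win rates can be reproduced from the BLEU-4 scores in Table 1, assuming they are computed for $G_1$ (the run whose round-by-round comparison matches the reported numbers):

```python
# BLEU-4 scores per round, copied from Table 1.
g1 = [73.10, 68.30, 70.78, 69.56, 70.87, 68.18]
h1 = [72.11, 66.79, 70.64, 70.25, 70.56, 67.13]
h2 = [71.08, 69.13, 70.75, 68.30, 69.55, 69.48]

def win_rate(model: list[float], human: list[float]) -> float:
    """Percentage of rounds in which the model's BLEU-4 exceeds the human's."""
    wins = sum(m > h for m, h in zip(model, human))
    return 100 * wins / len(model)

print(f"G1 vs H1: {win_rate(g1, h1):.2f}%")  # 83.33%
print(f"G1 vs H2: {win_rate(g1, h2):.2f}%")  # 66.67%
```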
# 2.4 Data Generation
We used the highest-performing prompt from Section 2.3 to generate paraphrases from the cleaned and labeled set of original sentences from Section 2.2. Table 2 shows an example input and output.

# Input

S: Berlin trở thành địa điểm thành phố được viếng thăm nhiều thứ ba tại châu Âu. (English: Berlin becomes the third most visited city in Europe.)

k: 2

# Output

$p_1$: Berlin là thành phố được du khách viếng thăm nhiều thứ ba tại châu Âu. (English: Berlin is the third most visited city in Europe by tourists.)

$p_2$: Xếp thứ ba về số lượng du khách viếng thăm tại châu Âu là thành phố Berlin. (English: Ranked third in terms of number of visitors in Europe is the city of Berlin.)

Table 2: Example of input and output for the sentence paraphrase task.

The paraphrase generation task can be formalized as follows. For each input, consisting of an original sentence $s$, the number $k$ of paraphrases to be generated, and the chosen few-shot prompt $f$, the model $M$ generates a set of paraphrases according to Equation 1:
$$
M_f(s, k) = \{p_1, p_2, \dots, p_k\} \tag{1}
$$
In this setup, the task is to generate $k$ paraphrases $\{p_1, p_2, \ldots, p_k\}$ that convey the original meaning while varying the structure and wording of the sentence $s$ .
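A minimal sketch of how the few-shot call $M_f(s, k)$ might be assembled; the instruction wording and example layout below are hypothetical, not the paper's actual prompt:

```python
def build_prompt(few_shot: list[tuple[str, list[str]]], s: str, k: int) -> str:
    """Assemble the few-shot prompt f together with the query (s, k).

    few_shot holds (original, reference paraphrases) pairs drawn from the
    few-shot corpus; the layout here is illustrative only.
    """
    lines = [f"Generate {k} Vietnamese paraphrases that preserve the meaning."]
    for original, paraphrases in few_shot:
        lines.append(f"Sentence: {original}")
        lines += [f"Paraphrase {i + 1}: {p}" for i, p in enumerate(paraphrases)]
    lines.append(f"Sentence: {s}")
    return "\n".join(lines)

demo = [("Berlin la thanh pho lon.", ["Berlin la mot thanh pho lon."])]
print(build_prompt(demo, "Berlin rat dep.", 2))
```

The assembled string would then be sent to the generation model, which returns the $k$ paraphrases.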
# 2.5 Data Validation
Automatic evaluation of the generation results of large language models (LLMs) can easily be performed when ground truths from existing datasets are available (Zhu et al., 2023). However, for open-ended tasks like paraphrasing or translation, human validation is necessary (Long et al., 2024). A straightforward approach is to provide some generated samples to human experts, who then determine whether they are correct.
We established a review process involving seven annotators to ensure the quality of the paraphrased sentences generated by Gemini (Team et al., 2023). Each original-paraphrase sentence pair was evaluated by three annotators, corresponding to three votes. Annotators assessed each pair as True or False. A pair was considered valid if it received at least two True votes out of three. Sentence pairs were marked as False if their meaning was not preserved after paraphrasing or if they contained grammatical or spelling errors, based on a checklist (see Appendix C). Any pair that received two or more False votes was removed from the dataset.
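The majority-vote rule described above amounts to the following check:

```python
def keep_pair(votes: list[bool]) -> bool:
    """Keep an original-paraphrase pair iff at least 2 of its 3 votes are True."""
    assert len(votes) == 3
    return sum(votes) >= 2

# Toy validation run over two candidate pairs.
pairs = {
    ("s1", "p1"): [True, True, False],   # 2 True votes -> kept
    ("s1", "p2"): [False, True, False],  # 2 False votes -> removed
}
validated = {pair for pair, votes in pairs.items() if keep_pair(votes)}
print(validated)  # {('s1', 'p1')}
```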
# Original

SpaceX đang thử nghiệm các nguyên mẫu tại cơ sở của họ ở nam Texas, tuy nhiên cả 4 phiên bản bay thử gần đây đều kết thúc bằng vụ nổ.

(English: SpaceX is testing the prototypes of the spacecraft at their facility in southern Texas; however, all four recent test flights have ended in explosions.)

# Paraphrase

Các mẫu tàu đang được thử nghiệm tại cơ sở SpaceX ở phía nam Texas, nhưng cả 4 phiên bản thử nghiệm bay gần đây đã không thành công.

(English: The spacecraft prototypes are being tested at SpaceX's facility in southern Texas, but all four recent test flights have failed.)
Table 3 presents an example of a SEMANTIC EQUIVALENCE error among the generated paraphrases. Sentences with errors were removed from the dataset to ensure high quality. Across the entire dataset, the average error rate was $4.49\%$ (see Appendix C).
# 2.6 Dataset Analysis
# 2.6.1 Overall Statistics
The statistics of the training, validation, and test sets of the ViSP dataset are described in Table 4. We report the number of original sentences, the average number of paraphrases per original, the average lengths of original and paraphrased sentences, and the vocabulary sizes for both original and paraphrased sentences across all sets.
Table 3: Example of an incorrect paraphrase pair violating the SEMANTIC EQUIVALENCE constraint.
<table><tr><td>Statistics</td><td>Train</td><td>Val</td><td>Test</td></tr><tr><td>Sentence Pair†</td><td>406,308</td><td>391,044</td><td>380,590</td></tr><tr><td>Original</td><td>33,030</td><td>6,929</td><td>6,963</td></tr><tr><td>Avg. paraphrase per original</td><td>2.97</td><td>6.91</td><td>6.80</td></tr><tr><td>Avg. original length</td><td>21.90</td><td>21.47</td><td>21.53</td></tr><tr><td>Avg. paraphrase length</td><td>22.95</td><td>23.36</td><td>23.35</td></tr><tr><td>Original vocab</td><td>42,135</td><td>15,826</td><td>15,952</td></tr><tr><td>Paraphrase vocab</td><td>45,460</td><td>20,277</td><td>20,248</td></tr></table>
Table 4: Statistics of the training, validation, and test sets of the ViSP dataset. $\dagger$ denotes the total number of paraphrase pairs generated from all possible combinations.
# 2.6.2 Data Faithfulness and Diversity
We evaluate the dataset using BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) to measure semantic similarity between generated paraphrases and original sentences by comparing n-grams. As shown in Table 5, BLEU-4 scores for the Train, Val, and Test sets are 63.66, 67.24, and 66.83, while ROUGE-2 scores are 72.40, 73.32, and 72.99, indicating strong semantic alignment across all subsets.
To assess paraphrase diversity, we use DIST-1 and DIST-2 (Li et al., 2016), which measure the proportion of unique unigrams and bigrams; the entropy-based metric ENT-4, which captures the distributional richness of generated paraphrases; and Jaccard, which gauges lexical overlap. The DIST-1 scores are 94.94, 95.56, and 95.42, and the DIST-2 scores are 94.74, 95.01, and 94.96 for the Train, Validation, and Test sets, respectively. Additionally, the ENT-4 scores are 5.71, 6.52, and 6.51, while the Jaccard scores are 53.85, 51.61, and 51.39 across the same sets. The consistently high DIST and ENT-4 values, accompanied by the lower Jaccard scores on the validation and test sets, suggest that the paraphrases exhibit a diverse lexical distribution, minimizing redundancy while maintaining coherence. The slightly higher diversity metrics in these sets also indicate that the paraphrases are more varied, improving evaluation robustness by ensuring broader linguistic diversity.
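A minimal sketch of these diversity measures under their common definitions (DIST-N as the ratio of distinct n-grams to total n-grams, Jaccard as token-set overlap); the paper's exact tokenization and aggregation may differ:

```python
def ngrams(tokens: list[str], n: int) -> list[tuple]:
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def dist_n(sentences: list[str], n: int) -> float:
    """DIST-N: percentage of distinct n-grams among all generated n-grams."""
    grams = [g for s in sentences for g in ngrams(s.split(), n)]
    return 100 * len(set(grams)) / len(grams)

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity (in percent) between the token sets of two sentences."""
    sa, sb = set(a.split()), set(b.split())
    return 100 * len(sa & sb) / len(sa | sb)

paras = ["Berlin la thanh pho lon", "Thanh pho Berlin rat lon"]
print(round(dist_n(paras, 1), 2))
print(round(jaccard(paras[0], paras[1]), 2))
```

Higher DIST and lower Jaccard thus jointly indicate paraphrases that reuse fewer of the same tokens, matching the interpretation above.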
<table><tr><td>Type</td><td>Metric</td><td>Train</td><td>Val</td><td>Test</td></tr><tr><td rowspan="2">Semantic</td><td>BLEU-4</td><td>63.66</td><td>67.24</td><td>66.83</td></tr><tr><td>ROUGE-2</td><td>72.40</td><td>73.32</td><td>72.99</td></tr><tr><td rowspan="4">Diversity</td><td>DIST-1</td><td>94.94</td><td>95.56</td><td>95.42</td></tr><tr><td>DIST-2</td><td>94.74</td><td>95.01</td><td>94.96</td></tr><tr><td>ENT-4</td><td>5.71</td><td>6.52</td><td>6.51</td></tr><tr><td>Jaccard</td><td>53.85</td><td>51.61</td><td>51.39</td></tr><tr><td rowspan="4">Human Eval</td><td>INF</td><td>4.74</td><td>4.73</td><td>4.78</td></tr><tr><td>REL</td><td>4.64</td><td>4.50</td><td>4.71</td></tr><tr><td>FLU</td><td>4.86</td><td>4.83</td><td>4.80</td></tr><tr><td>COH</td><td>4.86</td><td>4.90</td><td>4.89</td></tr></table>
Additionally, we conduct a manual evaluation by human experts on 200 randomly selected samples from each of the train, validation, and test sets. Human evaluators assess the paraphrases on a 5-point scale across four key dimensions based on (Grusky et al., 2018): INF, REL, FLU, and COH (see Appendix B.2). Before the evaluation, we measured inter-annotator agreement using Fleiss' Kappa (Fleiss, 1971) for the task of rating paraphrase sentence pairs with five labels corresponding to the 5-point scale (ratings from 1 to 5). The Fleiss' Kappa values for the four human metrics were 0.7252, 0.7144, 0.7634, and 0.7481, respectively. According to the interpretation guidelines of (Landis, 1977), these values indicate substantial agreement among the annotators. As shown in Table 5, the paraphrases were rated highly, with test-set scores ranging from 4.71 to 4.89.
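Fleiss' Kappa for this rating task can be computed from an items-by-categories count matrix (each row sums to the number of raters); a compact reference implementation:

```python
def fleiss_kappa(counts: list[list[int]]) -> float:
    """Fleiss' kappa; counts[i][j] = raters assigning item i to category j."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Mean per-item agreement P-bar.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_items
    # Chance agreement P_e from the category marginals.
    totals = [sum(row[j] for row in counts) for j in range(len(counts[0]))]
    p_e = sum((t / (n_items * n_raters)) ** 2 for t in totals)
    return (p_bar - p_e) / (1 - p_e)

# Toy example: 3 raters label 4 sentence pairs on a 2-point scale.
ratings = [[3, 0], [0, 3], [2, 1], [3, 0]]
print(round(fleiss_kappa(ratings), 4))  # 0.625
```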
# 2.6.3 Topic-Based Analysis
In Table 6, Health and Society are the most common topics, making up about $33\%$ and $19\%$ of the total dataset, respectively. This disparity occurs because the dataset originates from UIT-ViNewsQA and UIT-ViQuAD (see Appendix A), which primarily focus on these two topics. The other topics are more evenly spread, each covering around $3\%$ to $6\%$ of the data.
Table 5: Evaluation of semantic faithfulness, diversity, and human evaluation metrics on the Train, Validation, and Test sets.
<table><tr><td>Topic</td><td>Train</td><td>Val</td><td>Test</td><td>All</td></tr><tr><td>Health</td><td>11,381</td><td>2,443</td><td>2,367</td><td>16,193</td></tr><tr><td>Society</td><td>7,088</td><td>1,183</td><td>1,222</td><td>9,495</td></tr><tr><td>Culture</td><td>2,189</td><td>425</td><td>407</td><td>3,023</td></tr><tr><td>Computer</td><td>1,669</td><td>494</td><td>475</td><td>2,640</td></tr><tr><td>World</td><td>3,192</td><td>399</td><td>415</td><td>4,008</td></tr><tr><td>Sports</td><td>1,401</td><td>387</td><td>405</td><td>2,195</td></tr><tr><td>Science</td><td>2,001</td><td>593</td><td>616</td><td>3,212</td></tr><tr><td>Lifestyle</td><td>1,947</td><td>522</td><td>507</td><td>2,978</td></tr><tr><td>Law</td><td>1,804</td><td>327</td><td>377</td><td>2,510</td></tr><tr><td>Business</td><td>1,045</td><td>342</td><td>330</td><td>1,719</td></tr></table>
Table 6: Distribution of topics across the Train, Validation, Test, and All sets, based on the number of original sentences. For example sentences from each topic, see Appendix Table 13.
# 2.6.4 Length-Based Analysis
Table 7 shows the combined distribution of sentence lengths across both the original and paraphrased sentences. The majority of sentences are between 11 and 20 words, accounting for approximately $43.05\%$ of the dataset, which is the highest percentage among all length ranges. In contrast, sentences with more than 51 words represent the lowest percentage, comprising only about $0.65\%$ of the dataset.
# 3 Experiments and Results
# 3.1 Human Performance
Following the human performance concept of other studies (Nguyen et al., 2020b; Huynh et al., 2022), we recruited five native Vietnamese speakers to perform the paraphrasing task. These individuals had no prior experience with paraphrasing tasks. Each annotator was asked to generate three paraphrases for a given set of sentences. Before starting, they were trained on the concept of paraphrasing and provided with guidelines to ensure that the paraphrases retained the original meaning while introducing lexical and structural variations.



Figure 2: Word clouds illustrating the most frequent words for each topic in the ViSP dataset. Panels: (a) Business, (b) Computer, (c) Culture, (d) Health, (e) Law, (f) Lifestyle, (g) Society, (h) Science, (i) Sports, (j) Other.

<table><tr><td>Words</td><td>Train</td><td>Validation</td><td>Test</td><td>All</td></tr><tr><td>1-10</td><td>10,111</td><td>4,009</td><td>3,983</td><td>18,103</td></tr><tr><td>11-20</td><td>56,043</td><td>23,800</td><td>23,657</td><td>103,500</td></tr><tr><td>21-30</td><td>42,945</td><td>18,737</td><td>18,419</td><td>80,101</td></tr><tr><td>31-40</td><td>16,291</td><td>6,414</td><td>6,255</td><td>28,960</td></tr><tr><td>41-50</td><td>4,975</td><td>1,514</td><td>1,651</td><td>8,140</td></tr><tr><td>51+</td><td>826</td><td>355</td><td>372</td><td>1,553</td></tr></table>

Table 7: Combined distribution of sentence lengths across the Train, Validation, Test, and All sets, including both original and paraphrase sentences.
Next, we randomly selected a subset of 300 samples, with 150 drawn from the test set and 150 from the validation set. This subset was designated as $\mathrm{Test}_{300}$ for further evaluation.
# 3.2 Re-Implemented Methods and Baselines
In this section, we describe the methods and models that we re-implemented on our dataset as baselines.
EDA (Wei and Zou, 2019) applies simple transformations such as random deletion (RD), random swap (RS), random insertion (RI), and synonym replacement (SR). For RI and SR, we replace WordNet with the PhoW2V model (Nguyen et al., 2020a) to generate Vietnamese synonyms.
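A minimal sketch of the RD and RS operations; RI and SR would additionally require a synonym source (PhoW2V in the paper), which is omitted here:

```python
import random

def random_deletion(tokens: list[str], p: float = 0.1, seed: int = 0) -> list[str]:
    """Drop each token independently with probability p, keeping at least one."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > p]
    return kept or [rng.choice(tokens)]

def random_swap(tokens: list[str], n_swaps: int = 1, seed: int = 0) -> list[str]:
    """Swap two randomly chosen positions n_swaps times."""
    rng = random.Random(seed)
    out = list(tokens)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

sent = "Berlin la thanh pho lon".split()
print(random_deletion(sent))
print(random_swap(sent))
```

Because these edits operate on surface tokens only, they preserve meaning far less reliably than model-based paraphrasing, which is consistent with their lower scores in Table 8.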
Back Translation leverages translation between languages to produce semantically similar sentences. We use the en2vi and vi2en models from (Nguyen et al., 2022) for this process.
We experiment with several pre-trained sequence-to-sequence models for paraphrase generation, including mBART (Tang et al., 2020), BARTpho (Tran et al., 2021), mT5 (Xue, 2020), and ViT5 (Phan et al., 2022). These models were chosen for their strengths in both multilingual and Vietnamese-specific tasks. mBART and mT5 provide robust multilingual capabilities, while BARTpho and ViT5 are optimized for Vietnamese, offering language-specific nuances.
# 3.3 Evaluation Metrics
We use BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and BERTScore (Zhang et al., 2019) to evaluate paraphrase quality, and Distinct-N (Li et al., 2016), Entropy-N (Shannon, 1948), and Jaccard (Jaccard, 1901) to measure diversity. For a detailed breakdown of evaluation metrics, see Appendix B.
# 3.4 Experimental Settings
We use a single NVIDIA Tesla A100 GPU via Google Colaboratory<sup>3</sup> to fine-tune all models on our dataset. When fine-tuning, we set the maximum sentence length to 96 tokens and the learning rate to 1e-5, with a batch size of 16, training for five epochs.
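These settings can be collected into a small configuration; the key names below are illustrative rather than tied to any particular training framework, and the step count assumes one original-paraphrase pair per training example (406,308 pairs, Table 4):

```python
import math

# Hyperparameters from Section 3.4 (illustrative key names).
CONFIG = {"max_length": 96, "learning_rate": 1e-5, "batch_size": 16, "num_epochs": 5}

def total_optimizer_steps(n_examples: int, batch_size: int, num_epochs: int) -> int:
    """Gradient updates over the whole fine-tuning run (last batch may be partial)."""
    return math.ceil(n_examples / batch_size) * num_epochs

print(total_optimizer_steps(406_308, CONFIG["batch_size"], CONFIG["num_epochs"]))
```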
# 3.5 Experimental Results
# 3.5.1 Single Paraphrase Evaluation
In single paraphrase generation, Table 8 shows that BARTpho-wordlarge leads in BLEU-4 and ROUGE-2, with scores of 72.06 and 76.06 on the Val set, respectively, indicating that its outputs are the most similar to the reference paraphrases. It maintains strong performance on Test and Test300, achieving BLEU-4 scores of 71.61 and 71.70, and ROUGE-2 scores of 75.78 and 76.22. mBARTlarge, while slightly behind on these metrics, achieves the highest BERTScore across all sets, with 85.84 on Val and 86.17 on Test300, suggesting that although its outputs differ more from the references, they retain better semantic similarity. ViT5-base also shows strong semantic preservation, with a BERTScore of 85.37 on Test300. Among monolingual models, BARTpho-syllablelarge performs well, with a BERTScore of 85.62 on Test300, reinforcing its effectiveness in generating faithful paraphrases. Human performance remains the upper bound, with a BERTScore of 88.30, highlighting the gap between model-generated and human paraphrases. Among augmentation methods, Back Translation performs best, achieving a BERTScore of 79.99 on Test300, while simpler methods like Random Deletion and Synonym Replacement score notably lower.
# 3.5.2 Multiple Paraphrases Evaluation
For multiple paraphrase generation, as shown in Table 9, BARTpho-wordlarge leads the performance across all K values, with the highest BLEU-4 and ROUGE-2 scores, indicating that its generated paraphrases closely resemble the reference sentences. $\mathrm{mBART}_{\mathrm{large}}$ , while slightly behind in BLEU-4 and ROUGE-2, achieves higher BERTScore for $\mathrm{K} = 3$ and $\mathrm{K} = 5$ , suggesting strong semantic similarity. However, its performance declines at $\mathrm{K} = 10$ , showing reduced accuracy when generating more paraphrases. Among monolingual models with base architecture, ViT5base performs well at $\mathrm{K} = 10$ , achieving a BLEU-4 score of 66.92 and a BERTScore of 83.68. ViT5base also outperforms ViT5large, showing greater stability and less degradation in accuracy with increasing paraphrase numbers.
# 3.5.3 Topic-Based Evaluation
As shown in Figure 3, the results indicate that BARTpho-wordlarge and mBARTlarge consistently perform best across all topics. Within the T5 family, ViT5-base not only outperforms the other T5 models but also consistently surpasses all base BART models, including BARTpho-syllablebase and BARTpho-wordbase. This is evident across multiple categories, especially culture and sports. The challenge appears more pronounced when measured by BLEU, whereas BERTScore remains relatively similar across topics. Additionally, lifestyle, culture, and sports emerge as the most challenging domains for all models, with the highest BLEU-4 scores in these categories hovering only around 70.
# 3.5.4 Length-Based Evaluation
As shown in Figure 4, BARTpho-wordlarge consistently outperforms other models in BLEU across all sentence lengths, achieving the highest scores in the 41-50 word range. Meanwhile, BARTpho-syllablelarge tends to yield stronger BERTScore values, highlighting the overall effectiveness of BARTpho-based models. $\mathrm{mBART}_{\mathrm{large}}$ follows closely in most cases; for instance, in the 31-40 word range, BARTpho-syllablebase achieves 69.38 BLEU-4 and 85.32 BERTScore, slightly below mBART. In contrast, T5-based models (ViT5, mT5) show weaker performance, particularly on shorter sentences (1-20 words). Notably, $\mathrm{mT5}_{\mathrm{large}}$ scores as low as 18.56 BLEU-4 and 70.96 BERTScore in the 11-20 word range, significantly below the BART-based models, which perform consistently well across all sentence lengths.
# 3.5.5 Diversity-Based Evaluation
Table 10 reports the diversity metrics DIST-1, DIST-2, and ENT-4, as well as the Jaccard scores, of the five beam-searched paraphrases generated by various models on the Test and $\mathrm{Test}_{300}$ sets. Overall, BARTpho-word models exhibit strong performance, particularly BARTpho-wordlarge, which achieves competitive bigram diversity with a DIST-2 score of 94.70 on the Test set and 94.63 on $\mathrm{Test}_{300}$, along with a high ENT-4 score of 5.28 on the Test set. Notably, BARTpho-syllablelarge attains the highest DIST-1 score of 95.75 on the Test set, while $\mathrm{mBART}_{\text{large}}$ reaches the highest DIST-1 score of 95.72 on $\mathrm{Test}_{300}$. Meanwhile, ViT5large leads in ENT-4 on the Test set with a score of 5.29. Additionally, although $\mathrm{mT5}_{\text{base}}$ achieves high Jaccard scores of 82.72 on the Test set and 87.98 on $\mathrm{Test}_{300}$, BARTpho-wordlarge obtains the lowest Jaccard scores of 57.99 (Test) and 58.26 ($\mathrm{Test}_{300}$), signifying the greatest lexical diversity in its paraphrasing. These findings suggest that BARTpho-word excels at generating lexically varied and distributionally rich paraphrases, making it a robust option for applications requiring both diversity and consistency in paraphrasing.

<table><tr><td rowspan="2">Model</td><td colspan="3">Val</td><td colspan="3">Test</td><td colspan="3">Test300</td></tr><tr><td>BLEU-4</td><td>ROUGE-2</td><td>BERTScore</td><td>BLEU-4</td><td>ROUGE-2</td><td>BERTScore</td><td>BLEU-4</td><td>ROUGE-2</td><td>BERTScore</td></tr><tr><td>RD</td><td>26.82</td><td>52.49</td><td>66.19</td><td>26.87</td><td>52.42</td><td>66.10</td><td>26.23</td><td>52.93</td><td>66.27</td></tr><tr><td>RS</td><td>14.79</td><td>48.89</td><td>63.75</td><td>14.76</td><td>48.70</td><td>63.76</td><td>14.69</td><td>48.86</td><td>64.12</td></tr><tr><td>RI + PhoW2V</td><td>29.50</td><td>57.79</td><td>69.79</td><td>29.43</td><td>57.63</td><td>69.69</td><td>29.86</td><td>58.49</td><td>70.05</td></tr><tr><td>SR + PhoW2V</td><td>22.81</td><td>46.48</td><td>63.30</td><td>22.67</td><td>46.46</td><td>63.28</td><td>22.86</td><td>46.14</td><td>63.10</td></tr><tr><td>BT + vinaitranslate-v2</td><td>54.33</td><td>63.84</td><td>79.23</td><td>53.97</td><td>63.65</td><td>21.60</td><td>54.03</td><td>64.33</td><td>79.99</td></tr><tr><td>mBARTlarge</td><td>71.71</td><td>76.02</td><td>85.84</td><td>71.12</td><td>75.76</td><td>85.74</td><td>72.23</td><td>76.20</td><td>86.17</td></tr><tr><td>mT5base</td><td>60.22</td><td>70.20</td><td>81.46</td><td>59.58</td><td>69.69</td><td>81.27</td><td>60.84</td><td>71.00</td><td>81.85</td></tr><tr><td>mT5large</td><td>27.04</td><td>46.59</td><td>72.02</td><td>26.86</td><td>46.23</td><td>71.84</td><td>26.83</td><td>46.48</td><td>72.38</td></tr><tr><td>BARTpho-syllablebase</td><td>68.39</td><td>73.97</td><td>84.34</td><td>67.66</td><td>73.51</td><td>84.15</td><td>70.39</td><td>75.03</td><td>85.06</td></tr><tr><td>BARTpho-syllablelarge</td><td>70.29</td><td>75.35</td><td>85.22</td><td>69.81</td><td>74.89</td><td>85.10</td><td>70.83</td><td>75.75</td><td>85.62</td></tr><tr><td>BARTpho-wordbase</td><td>69.61</td><td>74.63</td><td>79.25</td><td>68.76</td><td>74.18</td><td>79.15</td><td>70.23</td><td>75.47</td><td>79.72</td></tr><tr><td>BARTpho-wordlarge</td><td>72.06</td><td>76.06</td><td>79.97</td><td>71.61</td><td>75.78</td><td>79.99</td><td>71.70</td><td>76.22</td><td>80.10</td></tr><tr><td>ViT5base</td><td>70.20</td><td>74.91</td><td>85.08</td><td>69.75</td><td>74.58</td><td>85.00</td><td>71.24</td><td>75.69</td><td>85.37</td></tr><tr><td>ViT5large</td><td>67.10</td><td>71.83</td><td>82.68</td><td>66.70</td><td>71.53</td><td>82.53</td><td>67.73</td><td>72.92</td><td>82.98</td></tr><tr><td>Human performance</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>94.97</td><td>88.29</td><td>88.30</td></tr></table>

Table 8: Evaluation of various models and methods on the Val, Test, and $\mathrm{Test}_{300}$ sets of the ViSP dataset, assessing the best single paraphrased sentence generated by each model. The best overall results are highlighted in bold.

<table><tr><td rowspan="2">Model</td><td colspan="3">K = 3</td><td colspan="3">K = 5</td><td colspan="3">K = 10</td></tr><tr><td>BLEU-4</td><td>ROUGE-2</td><td>BERTScore</td><td>BLEU-4</td><td>ROUGE-2</td><td>BERTScore</td><td>BLEU-4</td><td>ROUGE-2</td><td>BERTScore</td></tr><tr><td>mBARTlarge</td><td>70.35</td><td>75.23</td><td>85.29</td><td>69.76</td><td>74.84</td><td>85.06</td><td>66.16</td><td>71.50</td><td>81.57</td></tr><tr><td>mT5base</td><td>58.68</td><td>68.85</td><td>80.32</td><td>57.78</td><td>68.27</td><td>79.88</td><td>54.68</td><td>66.36</td><td>78.36</td></tr><tr><td>mT5large</td><td>26.98</td><td>46.39</td><td>71.51</td><td>27.07</td><td>46.48</td><td>71.46</td><td>27.29</td><td>46.70</td><td>71.09</td></tr><tr><td>BARTpho-syllablebase</td><td>66.81</td><td>72.99</td><td>83.63</td><td>66.20</td><td>72.56</td><td>83.37</td><td>62.26</td><td>68.94</td><td>79.54</td></tr><tr><td>BARTpho-syllablelarge</td><td>69.02</td><td>74.43</td><td>84.65</td><td>68.48</td><td>74.07</td><td>84.44</td><td>67.06</td><td>73.20</td><td>83.80</td></tr><tr><td>BARTpho-wordbase</td><td>68.03</td><td>73.70</td><td>78.50</td><td>67.42</td><td>73.31</td><td>78.34</td><td>64.09</td><td>70.21</td><td>75.54</td></tr><tr><td>BARTpho-wordlarge</td><td>70.88</td><td>75.28</td><td>79.39</td><td>70.31</td><td>74.91</td><td>79.23</td><td>69.10</td><td>74.09</td><td>78.88</td></tr><tr><td>ViT5base</td><td>68.93</td><td>74.08</td><td>84.52</td><td>68.35</td><td>73.73</td><td>84.34</td><td>66.92</td><td>72.81</td><td>83.68</td></tr><tr><td>ViT5large</td><td>66.19</td><td>71.24</td><td>81.93</td><td>65.63</td><td>70.89</td><td>81.68</td><td>63.98</td><td>69.87</td><td>80.85</td></tr></table>

Table 9: Evaluation of various models on the ViSP dataset's test set, focusing on K paraphrased sentences generated per input sentence (where K is the number of paraphrases). The BLEU-4, ROUGE-2, and BERTScore are averaged across all K paraphrases for each model. The best overall results are highlighted in bold. For a detailed breakdown of multiple paraphrase outputs, see Appendix D, Table 15.
# 4 Discussion
Multilingual vs. Monolingual. As (Conneau, 2019) highlights, while multilingual models offer flexibility across languages, monolingual models often excel in specialized tasks due to their focus on a single language's nuances. This distinction becomes apparent in our test results, where BARTpho-wordlarge demonstrates a clear advantage in both single and multiple paraphrase generation, achieving the highest BLEU-4 and ROUGE-2 scores. Although mBARTlarge achieves a higher BERTScore in single paraphrase generation, it experiences a notable decline in performance as the number of required paraphrases increases (K=5, K=10), with BLEU-4 dropping to 66.16. This performance drop aligns with findings from (Hu et al., 2020), which show that multilingual models struggle to generate numerous high-quality paraphrases. In contrast, monolingual models like BARTpho and ViT5 maintain strong performance in both single and multiple paraphrase tasks. Their focused training makes them better suited for tasks requiring high paraphrase diversity, consistently producing multiple outputs without losing quality.



Figure 3: BLEU-4 and BERTScore across different topics.



Figure 4: BLEU-4 and BERTScore across different lengths.
Impact of Model Architecture. Research suggests that larger architectures do not always ensure better accuracy. For example, ViT5large and mT5large do not surpass ViT5base and mT5base across all metrics, reflecting the finding of (Kaplan et al., 2020) that size increases do not guarantee performance gains. In particular, when generating less common words, larger models may perform worse in paraphrasing due to vocabulary deviations (Brown, 2020). In contrast, BART-based models, such as BARTpho-word and BARTpho-syllable, consistently improve in both accuracy and diversity with increased model size, as shown by (Lewis, 2019), affirming the benefits of larger architectures in generating diverse paraphrases.
Impact of Sentence Structure. The structure of a sentence significantly influences the performance of paraphrase models. Different sentence types in Vietnamese, such as simple, compound, complex, and special, present varying levels of difficulty. As shown in Table 11, monolingual models like BARTpho-wordlarge consistently achieve higher BLEU-4 scores for simple, compound, and complex sentences. This aligns with findings from Isabelle et al. (2017), which suggest that models trained on a single language excel at capturing syntactic nuances. However, models often struggle with compound and complex sentences, which frequently include metaphorical and metonymic expressions in Vietnamese (see Appendix A, Table 13), as noted by Shutova et al. (2013), highlighting the challenge of paraphrasing non-standard and figurative structures. These results suggest the need for improved handling of complex syntactic and figurative forms in paraphrasing tasks.
<table><tr><td rowspan="2">Model</td><td colspan="4">Val</td><td colspan="4">Test</td><td colspan="4">Test300</td></tr><tr><td>DIST-1</td><td>DIST-2</td><td>ENT-4</td><td>Jaccard</td><td>DIST-1</td><td>DIST-2</td><td>ENT-4</td><td>Jaccard</td><td>DIST-1</td><td>DIST-2</td><td>ENT-4</td><td>Jaccard</td></tr><tr><td>mBARTlarge</td><td>95.67</td><td>94.61</td><td>5.23</td><td>69.05</td><td>95.62</td><td>94.59</td><td>5.22</td><td>68.82</td><td>95.72</td><td>94.60</td><td>4.34</td><td>70.31</td></tr><tr><td>mT5base</td><td>95.11</td><td>93.33</td><td>5.01</td><td>82.99</td><td>94.91</td><td>93.27</td><td>5.01</td><td>82.72</td><td>95.37</td><td>93.47</td><td>4.15</td><td>87.98</td></tr><tr><td>mT5large</td><td>56.30</td><td>58.94</td><td>5.02</td><td>72.25</td><td>56.23</td><td>58.93</td><td>5.02</td><td>72.11</td><td>54.62</td><td>56.77</td><td>4.51</td><td>72.75</td></tr><tr><td>BARTpho-syllablebase</td><td>95.75</td><td>94.32</td><td>5.12</td><td>76.99</td><td>95.62</td><td>94.30</td><td>5.12</td><td>76.71</td><td>96.17</td><td>94.26</td><td>4.25</td><td>78.91</td></tr><tr><td>BARTpho-syllablelarge</td><td>95.90</td><td>94.53</td><td>5.21</td><td>66.38</td><td>95.75</td><td>94.48</td><td>5.20</td><td>66.35</td><td>96.02</td><td>94.41</td><td>4.27</td><td>66.83</td></tr><tr><td>BARTpho-wordbase</td><td>93.95</td><td>94.60</td><td>5.21</td><td>62.13</td><td>93.80</td><td>94.51</td><td>5.21</td><td>62.10</td><td>94.09</td><td>94.36</td><td>4.41</td><td>63.75</td></tr><tr><td>BARTpho-wordlarge</td><td>93.85</td><td>94.77</td><td>5.27</td><td>58.00</td><td>93.72</td><td>94.70</td><td>5.28</td><td>57.99</td><td>93.82</td><td>94.63</td><td>4.47</td><td>58.26</td></tr><tr><td>ViT5base</td><td>95.33</td><td>94.15</td><td>5.14</td><td>75.70</td><td>95.14</td><td>94.05</td><td>5.14</td><td>75.49</td><td>95.48</td><td>94.01</td><td>4.27</td><td>77.36</td></tr><tr><td>ViT5large</td><td>93.99</td><td>93.59</td><td>5.30</td><td>68.78</td><td>93.95</td><td>93.63</td><td>5.29</td><td>68.53</td><td>94.75</td><td>94.04</td><td>4.39</td><td>69.89</td></tr><tr><td>Human performance</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>95.54</td><td>94.96</td><td>6.48</td><td>52.33</td></tr></table>

Table 10: Evaluation of the 5 beam-searched paraphrases in terms of DIST-1, DIST-2, ENT-4, and Jaccard on the Val, Test, and $\text{Test}_{300}$ sets. The best overall results are in bold, and the second best are underlined.

<table><tr><td>Model</td><td>simple</td><td>compound</td><td>complex</td></tr><tr><td>mBARTlarge</td><td>73.06</td><td>67.63</td><td>69.73</td></tr><tr><td>mT5base</td><td>61.92</td><td>54.77</td><td>57.96</td></tr><tr><td>mT5large</td><td>24.80</td><td>27.23</td><td>31.14</td></tr><tr><td>BARTpho-syllablebase</td><td>69.58</td><td>64.24</td><td>66.27</td></tr><tr><td>BARTpho-syllablelarge</td><td>71.42</td><td>67.07</td><td>68.56</td></tr><tr><td>BARTpho-wordbase</td><td>70.44</td><td>65.45</td><td>67.69</td></tr><tr><td>BARTpho-wordlarge</td><td>73.40</td><td>68.66</td><td>70.27</td></tr><tr><td>ViT5base</td><td>71.60</td><td>66.56</td><td>68.39</td></tr><tr><td>ViT5large</td><td>67.05</td><td>63.70</td><td>66.58</td></tr></table>

Table 11: BLEU-4 scores of various models on different sentence structures in the ViSP Test set. The best overall results are highlighted in bold. For a detailed breakdown of sentence structures, see Appendix A, Table 13.

LLM Performance. Table 12 shows that Vietnamese-specific LLMs, such as Vistral-7B-Chat (Van Nguyen et al., 2023), lag significantly behind, indicating weaker paraphrase generation capabilities than general-purpose models and suggesting that current Vietnamese-focused models may require further optimization or fine-tuning for paraphrase tasks. Among multilingual and general-purpose LLMs, Meta-Llama-3.1-70B (Dubey et al., 2024) achieves the best results, followed by Meta-Llama-3.1-8B (Dubey et al., 2024), GPT-4o (Hurst et al., 2024), and Qwen2.5-7B (Yang et al., 2024), all of which demonstrate strong lexical and semantic alignment with human paraphrases. However, none of the models reach human performance, indicating room for improvement in semantic fidelity and lexical variation. Since all models are evaluated without fine-tuning, their performance is reasonable, especially for high-resource models like Meta-Llama-3.1 and GPT-4o; fine-tuning on Vietnamese-specific paraphrase datasets could further narrow the gap between AI-generated and human paraphrases.

<table><tr><td>Model</td><td>BLEU-4</td><td>ROUGE-2</td><td>BERTScore</td></tr><tr><td>GPT-4o Mini</td><td>52.73</td><td>65.55</td><td>81.82</td></tr><tr><td>Gemini 1.5 Flash</td><td>50.98</td><td>63.02</td><td>79.61</td></tr><tr><td>Vistral-7B-Chat</td><td>29.16</td><td>49.46</td><td>70.71</td></tr><tr><td>Aya-23-8B</td><td>42.15</td><td>59.52</td><td>75.21</td></tr><tr><td>Qwen2.5-7B</td><td>54.38</td><td>65.71</td><td>80.72</td></tr><tr><td>Meta-Llama-3.1-8B</td><td>60.32</td><td>69.34</td><td>82.40</td></tr><tr><td>Meta-Llama-3.1-70B</td><td>65.51</td><td>73.21</td><td>84.27</td></tr><tr><td>Human Performance</td><td>94.97</td><td>88.29</td><td>88.30</td></tr></table>

Table 12: Evaluation of various LLMs on the $\text{Test}_{300}$ set using BLEU-4, ROUGE-2, and BERTScore metrics. The best overall results are highlighted in bold.

# 5 Conclusion and Future Work

We introduced ViSP, a Vietnamese paraphrase dataset created using human annotations and LLM outputs, for evaluating and benchmarking paraphrase generation models. We tested models such as mBART, BARTpho, ViT5, and mT5 across various sentence lengths and topics, highlighting the strengths and weaknesses of multilingual and monolingual approaches. Our evaluation covered accuracy (BLEU, ROUGE, BERTScore) and diversity (Distinct-N, Entropy-N, Jaccard), and we compared model-generated paraphrases with human performance to assess the gap between automated systems and human paraphrasing.
In future work, we plan to extend ViSP to tasks such as machine translation, question answering, and retrieval-augmented generation. Additionally, we aim to pretrain a Vietnamese paraphrasing model, addressing a key gap in domain-specific models. This model will target complex linguistic phenomena, including metaphor and metonymy in Vietnamese, which pose significant challenges for natural language understanding and generation. ViSP will also support the development of robust sentence similarity models such as SBERT (Reimers, 2019), advancing Vietnamese NLP research.
# Limitations
While our models were fine-tuned on the ViSP dataset, they were trained under low-resource conditions, so overall performance may not be fully optimized; with more computational resources, further improvements could be achieved. During data creation, we employed few-shot prompting to guide the generation process. However, we have not yet compared this approach with other advanced techniques such as Chain-of-Thought (Wei et al., 2022) or Tree-of-Thought (Yao et al., 2024), which could potentially yield higher-quality paraphrases. Additionally, the current dataset lacks representation from certain specialized domains such as metaphor, mathematics, and programming, which may affect the models' ability to generalize to these areas.
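For concreteness, a few-shot prompt for paraphrase generation can be built by simply prepending a handful of demonstration pairs before the target sentence. The sketch below is an illustrative template only; the wording, function name, and demonstration pairs are our assumptions, not the paper's actual prompt.

```python
# Minimal few-shot prompt builder (template wording and demos are illustrative).
def build_fewshot_prompt(demos, source, k=5):
    """Assemble a few-shot paraphrasing prompt from (original, paraphrase) demos."""
    lines = [f"Generate {k} Vietnamese paraphrases that preserve the meaning."]
    for orig, para in demos:
        # Each demonstration shows one source sentence and one valid paraphrase.
        lines.append(f"Sentence: {orig}\nParaphrase: {para}")
    # The target sentence ends the prompt, leaving the paraphrase slot open.
    lines.append(f"Sentence: {source}\nParaphrase:")
    return "\n\n".join(lines)

demos = [("Câu gốc ví dụ.", "Câu diễn đạt lại ví dụ.")]  # placeholder pair
prompt = build_fewshot_prompt(demos, "Câu cần diễn đạt lại.", k=3)
print(prompt)
```

Advanced prompting schemes such as Chain-of-Thought would extend this template with intermediate reasoning text before the final paraphrase slot.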
# Ethics Statement
The ViSP dataset was developed with adherence to ethical guidelines. Human annotators were informed and compensated fairly. All datasets used, including UIT-ViQuAD (Nguyen et al., 2020b), UIT-ViNewsQA (Van Nguyen et al., 2022), ALQAC (Nguyen et al., 2023a) and ViNLI (Huynh et al., 2022), were utilized in compliance with their respective licenses and terms of use. Additionally, in generating paraphrases with large language models (LLMs), we took steps to review and mitigate potential errors in the outputs, ensuring fairness and representativeness across different domains.
# Acknowledgement
We sincerely appreciate the insightful comments and constructive feedback provided by the anonymous reviewers. This research is funded by Vietnam National University Ho Chi Minh City (VNUHCM) under the grant number DS2025-26-01.
# References
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples.

Delphine Bernhard and Iryna Gurevych. 2008. Answering learners' questions by retrieving question paraphrases from social Q&A sites.

Rahul Bhagat and Eduard Hovy. 2013. What Is a Paraphrase? Computational Linguistics, 39(3):463-472.

Tom B. Brown et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.

Chris Callison-Burch, Philipp Koehn, and Miles Osborne. 2006. Improved statistical machine translation using paraphrases. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 17-24.

Alexis Conneau et al. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.

Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering.

Qingxiu Dong, Xiaojun Wan, and Yue Cao. 2021. ParaSCI: A large scientific paraphrase dataset for longer paraphrase generation.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.

Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378.

Wee Chung Gan and Hwee Tou Ng. 2019. Improving the robustness of question answering systems to question paraphrasing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6065-6075, Florence, Italy. Association for Computational Linguistics.

Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. arXiv preprint arXiv:1804.11283.

Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning, pages 4411-4421. PMLR.

Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. 2024. GPT-4o system card. arXiv preprint arXiv:2410.21276.

Tin Van Huynh, Kiet Van Nguyen, and Ngan Luu-Thuy Nguyen. 2022. ViNLI: A Vietnamese corpus for studies on open-domain natural language inference. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3858-3872, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.

Pierre Isabelle, Colin Cherry, and George Foster. 2017. A challenge set approach to evaluating machine translation. arXiv preprint arXiv:1704.07431.

Paul Jaccard. 1901. Étude comparative de la distribution florale dans une portion des Alpes et du Jura. Bull Soc Vaudoise Sci Nat, 37:547-579.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.

J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159-174.

Mike Lewis et al. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119, San Diego, California. Association for Computational Linguistics.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In Computer Vision - ECCV 2014, pages 740-755, Cham. Springer International Publishing.

Lin Long, Rui Wang, Ruixuan Xiao, Junbo Zhao, Xiao Ding, Gang Chen, and Haobo Wang. 2024. On LLMs-driven synthetic data generation, curation, and evaluation: A survey.

Louis Marceau, Raouf Belbahar, Marc Queudot, Nada Naji, Eric Charton, and Marie-Jean Meurs. 2022. Quick starting dialog systems with paraphrase generation.

Anh Tuan Nguyen, Mai Hoang Dao, and Dat Quoc Nguyen. 2020a. A Pilot Study of Text-to-SQL Semantic Parsing for Vietnamese. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 4079-4085.

Chau Nguyen, Son T. Luu, Thanh Tran, An Trieu, Anh Dang, Dat Nguyen, Hiep Nguyen, Tin Pham, Trang Pham, Thien-Trung Vo, et al. 2023a. A summary of the ALQAC 2023 competition. In 2023 15th International Conference on Knowledge and Systems Engineering (KSE), pages 1-6. IEEE.

Kiet Van Nguyen, Duc-Vu Nguyen, Anh Gia-Tuan Nguyen, and Ngan Luu-Thuy Nguyen. 2020b. A Vietnamese dataset for evaluating machine reading comprehension.

Sang Quang Nguyen, Thuc Dinh Vo, Duc P.A. Nguyen, Dang T. Tran, and Kiet Van Nguyen. 2023b. ViQP: A dataset for Vietnamese question paraphrasing. In 2023 International Conference on Multimedia Analysis and Pattern Recognition (MAPR), pages 1-6. IEEE.

Thien Hai Nguyen, Tuan-Duy H. Nguyen, Duy Phung, Duy Tran-Cong Nguyen, Hieu Minh Tran, Manh Luong, Tin Duy Vo, Hung Hai Bui, Dinh Phung, and Dat Quoc Nguyen. 2022. A Vietnamese-English Neural Machine Translation System. In Proceedings of the 23rd Annual Conference of the International Speech Communication Association: Show and Tell (INTERSPEECH).

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.

Long Phan, Hieu Tran, Hieu Nguyen, and Trieu H. Trinh. 2022. ViT5: Pretrained text-to-text transformer for Vietnamese language generation. arXiv preprint arXiv:2205.06457.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. arXiv preprint arXiv:1908.10084.

Grazia Russo-Lassner, Jimmy Lin, and Philip Resnik. 2005. A paraphrase-based approach to machine translation evaluation. LAMP-TR-125 CS-TR-4754 UMIACS-TR-2005-57, University of Maryland, College Park, MD.

Yves Scherrer. 2020. TaPaCo: A corpus of sentential paraphrases for 73 languages. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 6868-6873, Marseille, France. European Language Resources Association.

Claude Elwood Shannon. 1948. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379-423.

Ekaterina Shutova, Simone Teufel, and Anna Korhonen. 2013. Statistical metaphor processing. Computational Linguistics, 39(2):301-353.

Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv preprint arXiv:2008.00401.

Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, et al. 2023. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805.

Nguyen Luong Tran, Duong Minh Le, and Dat Quoc Nguyen. 2021. BARTpho: Pre-trained sequence-to-sequence models for Vietnamese. arXiv preprint arXiv:2109.09701.

Chien Van Nguyen, Thuat Nguyen, Quan Nguyen, Huy Nguyen, Björn Plüster, Nam Pham, Huu Nguyen, Patrick Schramowski, and Thien Nguyen. 2023. Vistral-7B-Chat - towards a state-of-the-art large language model for Vietnamese.

Kiet Van Nguyen, Tin Van Huynh, Duc-Vu Nguyen, Anh Gia-Tuan Nguyen, and Ngan Luu-Thuy Nguyen. 2022. New Vietnamese corpus for machine reading comprehension of health news articles. ACM Trans. Asian Low-Resour. Lang. Inf. Process., 21(5).

Peter Wallis. 1993. Information retrieval based on paraphrase. In Proceedings of the PACLING Conference. Citeseer.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837.

Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv preprint arXiv:1901.11196.

Linting Xue et al. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. 2024. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. 2024. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.

Yiming Zhu, Peixian Zhang, Ehsan-Ul Haq, Pan Hui, and Gareth Tyson. 2023. Can ChatGPT reproduce human-generated labels? A study of social computing tasks.

Ingrid Zukerman, Bhavani Raskutti, and Yingying Wen. 2002. Experiments in query paraphrasing for information retrieval. In Australian Joint Conference on Artificial Intelligence, pages 24-35. Springer.
# Appendix
In this section, we provide supplementary information to support the main content of this paper. This includes additional details about the datasets, models, evaluation metrics, and methods used throughout our experiments.
# A Dataset Details
The ViSP dataset is compiled from several publicly available sources, including:
1. UIT-ViQuAD (Nguyen et al., 2020b): This machine reading comprehension dataset includes over 23,000 human-generated question-answer pairs. These pairs are derived from 5,109 passages extracted from 174 Vietnamese Wikipedia articles, providing a rich source of information and ensuring a diverse range of topics and contexts.

2. UIT-ViNewsQA (Van Nguyen et al., 2022): This corpus consists of 22,057 question-answer pairs created by crowd-workers. These pairs are based on a collection of 4,416 Vietnamese healthcare news articles, with answers being textual spans directly taken from the corresponding articles.

3. ALQAC (Nguyen et al., 2023a): The ALQAC dataset contains thousands of multiple-choice question-answer pairs, sourced from Vietnamese legal documents. Each pair is carefully reviewed for clarity and accuracy, making it an essential resource for testing question answering models in the legal domain.

4. ViNLI (Huynh et al., 2022): The ViNLI corpus comprises over 30,000 human-annotated premise-hypothesis sentence pairs. These pairs are extracted from more than 800 online news articles, offering a substantial and varied dataset for natural language inference tasks.

Figure 5: Distribution of sentence sources in the ViSP dataset.

Figure 5 shows the distribution of original sentence sources in the ViSP dataset, with UIT-ViNewsQA and ViNLI contributing the largest proportions. This suggests that ViSP is heavily influenced by news-related content, which may affect the linguistic patterns and domain coverage of the paraphrases.
<table><tr><td>Sentence</td><td>Source</td><td>Topic</td><td>Structure</td></tr><tr><td>Vào ngày 23 tháng 9 năm 1846, nhà thiên văn Johann Galle đã phát hiện ra Sao Hải Vương ở vị trí lệch 1 độ so với tiên đoán của Urbain Le Verrier. (English: On September 23, 1846, astronomer Johann Galle discovered Neptune 1 degree off from Urbain Le Verrier's prediction.)</td><td>ViQuAD</td><td>Science</td><td>Complex</td></tr><tr><td>Theo giáo sư Long, mỗi nước có khuyến cáo khác nhau khi điều trị vi khuẩn HP. (English: According to Professor Long, each country has different recommendations when treating HP bacteria.)</td><td>ViNewsQA</td><td>Health</td><td>Simple</td></tr><tr><td>Hazard gia nhập Real hè 2019 từ Chelsea, theo bản hợp đồng trị giá 190 triệu USD - trong đó có 112 triệu USD trả trước. (English: Hazard joined Real in the summer of 2019 from Chelsea, in a contract worth 190 million USD - including 112 million USD in advance.) #metonym</td><td>ViNLI</td><td>Sports</td><td>Compound</td></tr><tr><td>Công ty đã hứng rất nhiều cuộc tấn công mạng, phải từ bỏ nhiều dịch vụ chủ chốt trước khi 'bán mình' cho đại gia viễn thông Mỹ Verizon. (English: The company faced numerous cyberattacks and had to abandon several key services before 'selling itself' to the American telecom giant Verizon.) #metonym #metaphor</td><td>ViNLI</td><td>Business</td><td>Compound</td></tr><tr><td>Trong khi đó, Điện Kremlin tuyên bố đang nghiên cứu khả năng tổ chức hội nghị này. (English: Meanwhile, the Kremlin announced that it is studying the possibility of holding this conference.) #metonym</td><td>ViQuAD</td><td>Society</td><td>Simple</td></tr><tr><td>Những người có làn da ngăm đen tạo cảm giác khỏe mạnh, gợi cảm cho người đối diện. (English: People with dark skin give a feeling of health and sexiness to the other person.)</td><td>ViNewsQA</td><td>Lifestyle</td><td>Simple</td></tr><tr><td>Đơn khiếu nại phải kèm theo bản sao quyết định giải quyết khiếu nại lần đầu và các tài liệu kèm theo. (English: The complaint must be accompanied by a copy of the initial complaint resolution decision and accompanying documents.)</td><td>ALQAC</td><td>Law</td><td>Complex</td></tr><tr><td>Ellen DeGeneres, sinh năm 1958, là ngôi sao truyền hình hàng đầu tại Mỹ. (English: Ellen DeGeneres, born in 1958, is a top television star in America.)</td><td>ViNLI</td><td>Culture</td><td>Simple</td></tr><tr><td>Video đầu tiên hiện có hơn 2,1 triệu lượt xem chỉ sau một ngày đăng tải. (English: The first video now has more than 2.1 million views after just one day of posting.)</td><td>ViNLI</td><td>Other</td><td>Simple</td></tr></table>
Table 13: Examples of classifying sentences by topic and structure in the ViSP dataset.
# B Metrics
# B.1 Semantic and Diversity
1. BLEU-4 (Papineni et al., 2002): This metric measures the precision of 4-grams between the generated paraphrase and the reference. A higher BLEU-4 value indicates greater syntactic and lexical alignment with the reference.

2. ROUGE-2 (Lin, 2004): This metric calculates the recall of bigrams (2-grams) in the generated paraphrase compared to the reference. A higher ROUGE-2 value reflects better preservation of key content from the reference.

3. BERTScore (Zhang et al., 2019): This measure uses contextual embeddings to compare each token in the generated paraphrase with those in the reference. A higher BERTScore implies stronger semantic similarity and fidelity to the reference text.

4. DIST-1 and DIST-2 (Li et al., 2016): These metrics capture the distinctiveness of unigrams and bigrams, respectively. Higher values indicate more diverse and less repetitive paraphrases.

5. ENT-4 (Shannon, 1948): This is the entropy of 4-grams, reflecting the diversity and unpredictability of word combinations. A higher ENT-4 value suggests more varied and creative paraphrases.

6. Jaccard (Jaccard, 1901): This score measures the lexical overlap between the original sentence and its paraphrase. A lower Jaccard value indicates less overlap with the source, and hence greater paraphrase diversity.
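The diversity metrics above are simple n-gram statistics and can be reproduced in a few lines. The sketch below is an illustrative implementation assuming whitespace tokenization; the exact tokenization and normalization used in the paper's evaluation may differ.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams of a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def dist_n(sentences, n):
    # DIST-n: unique n-grams divided by total n-grams over all generated sentences.
    all_ngrams = [g for s in sentences for g in ngrams(s.split(), n)]
    return len(set(all_ngrams)) / len(all_ngrams) if all_ngrams else 0.0

def ent_n(sentences, n=4):
    # ENT-n: Shannon entropy of the n-gram distribution (higher = more varied).
    counts = Counter(g for s in sentences for g in ngrams(s.split(), n))
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def jaccard(source, paraphrase):
    # Lexical overlap between source and paraphrase word sets (lower = more diverse).
    a, b = set(source.split()), set(paraphrase.split())
    return len(a & b) / len(a | b) if a | b else 0.0

outs = ["the cat sat on the mat", "a cat rested on a mat"]
print(dist_n(outs, 1), ent_n(outs, 4), jaccard(outs[0], outs[1]))
```

In the benchmark, DIST and ENT are computed over the set of 5 beam-searched outputs per input, while Jaccard compares each paraphrase to its original sentence.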
# B.2 Human Evaluation
We conduct manual evaluations where human reviewers assess the quality of paraphrased sentences. Each paraphrase is evaluated based on four key criteria, with reviewers assigning a score from 1 (poor) to 5 (excellent) for each criterion:
1. INF (Informativeness): How well does the paraphrase retain the original meaning?

2. REL (Relevance): To what extent are the important facts and details preserved?

3. FLU (Fluency): How fluent and natural does the sentence sound?

4. COH (Coherence): How well do the sentence parts fit together to form a coherent whole?
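One straightforward way to aggregate such ratings is a per-criterion mean across reviewers. The sketch below is illustrative only; the rater scores are invented, and the paper may report its human evaluation with a different aggregation.

```python
from statistics import mean

# Each reviewer scores one paraphrase on the four criteria (1 = poor, 5 = excellent).
ratings = [
    {"INF": 5, "REL": 4, "FLU": 5, "COH": 4},  # reviewer 1 (invented scores)
    {"INF": 4, "REL": 4, "FLU": 5, "COH": 5},  # reviewer 2 (invented scores)
]

def criterion_means(ratings):
    # Mean score per criterion across all reviewers.
    return {c: mean(r[c] for r in ratings) for c in ("INF", "REL", "FLU", "COH")}

print(criterion_means(ratings))
```

For categorical judgments (e.g. the valid/invalid labels in Appendix C), inter-rater agreement can instead be summarized with Fleiss' kappa (Fleiss, 1971).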
# C Paraphrase Verification Checklist
To ensure the quality and accuracy of the paraphrased sentences in our dataset, we implemented a verification process where annotators assessed whether each sentence pair constituted a valid paraphrase. Annotators were instructed to evaluate each sentence pair based on the above criteria in Table 14. If the paraphrased sentence met all the criteria, it was marked as a valid paraphrase. If it failed to meet any of the criteria, it was marked as invalid.
<table><tr><td>Rule</td><td>Question</td></tr><tr><td>SEMANTIC EQUIVALENCE</td><td>Does the paraphrased sentence convey the same meaning as the original sentence, preserving all key information without adding or omitting important details?</td></tr><tr><td>FLUENCY & GRAMMATICAL</td><td>Is the paraphrased sentence grammatically correct and fluent in Vietnamese?</td></tr><tr><td>STYLE & TONE CONSISTENCY</td><td>Does the paraphrase maintain the same style and tone as the original sentence?</td></tr><tr><td>NO CONTRADICTIONS</td><td>Does the paraphrase avoid contradicting any facts or statements in the original sentence?</td></tr></table>
Table 14: Checklist used by annotators to verify if a sentence pair is a valid paraphrase.
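The all-rules-must-pass verification logic can be modeled as a simple record filter. The sketch below is a hypothetical illustration; the dataclass and field names are ours, not the paper's annotation tooling.

```python
from dataclasses import dataclass

# Hypothetical annotation record for one sentence pair (field names are illustrative).
@dataclass
class Annotation:
    semantic_equivalence: bool  # same meaning, no added/omitted key details
    fluency_grammatical: bool   # grammatically correct, fluent Vietnamese
    style_tone: bool            # same style and tone as the original
    no_contradictions: bool     # no facts or statements contradicted

def is_valid_paraphrase(a: Annotation) -> bool:
    # A pair is kept only if it satisfies ALL four checklist rules.
    return all([a.semantic_equivalence, a.fluency_grammatical,
                a.style_tone, a.no_contradictions])

print(is_valid_paraphrase(Annotation(True, True, True, True)))   # valid pair
print(is_valid_paraphrase(Annotation(False, True, True, True)))  # fails rule 1
```

This conjunctive design means a single failed rule invalidates the pair, matching the description above that any unmet criterion marks the paraphrase as invalid.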
Figure 6 presents the distribution of errors across different paraphrase verification rules. The analysis reveals that SEMANTIC EQUIVALENCE is the most common source of errors, indicating that paraphrased sentences often fail to fully preserve the meaning of the original text. This suggests that maintaining semantic consistency remains a significant challenge in paraphrase generation. Additionally, while NO CONTRADICTIONS and STYLE & TONE CONSISTENCY exhibit lower error rates, FLUENCY & GRAMMATICAL still accounts for a noticeable portion of errors.

Figure 6: Error rate distribution across different paraphrase verification rules.
# D Model Outputs
In Table 15, we present example outputs from the models across various experiments to further clarify and support the findings discussed in the main text. These supplementary results include detailed paraphrases generated by models like mBART, BARTpho, ViT5, and mT5.
<table><tr><td>Model</td><td>Paraphrases</td></tr><tr><td>Ground Truth</td><td>Đại lễ săn thỏ Phục sinh năm nay là lần đầu tiên được tổ chức kể từ năm 2017. (English: This year's Easter Bunny Hunt is the first to be held since 2017.)<br>Sau 2017, năm nay là lần đầu tiên Đại lễ săn thỏ Phục sinh được tổ chức. (English: After 2017, this year is the first time the Easter Bunny Hunt is held.)<br>Đại lễ săn thỏ Phục sinh đã được tổ chức trở lại sau 5 năm vắng bóng, kể từ năm 2017. (English: The Easter Bunny Hunt has been held again after a 5-year absence, since 2017.)<br>Từ năm 2017 đến nay, Đại lễ săn thỏ Phục sinh đã không được tổ chức, cho đến năm nay. (English: Since 2017, the Easter Bunny Hunt has not been held, until this year.)</td></tr><tr><td>mBARTlarge</td><td>Đại lễ săn thỏ Phục sinh đã được tổ chức lần đầu tiên kể từ năm 2017.<br>Đại lễ săn thỏ Phục sinh được tổ chức lần đầu tiên kể từ năm 2017.<br>Đại lễ săn thỏ Phục sinh năm nay lần đầu tiên được tổ chức kể từ năm 2017.</td></tr><tr><td>mT5base</td><td>Năm nay, Đại lễ săn thỏ Phục sinh được tổ chức kể từ 2017.<br>Vào năm nay, Đại lễ săn thỏ Phục sinh được tổ chức kể từ 2017.<br>Năm nay, Đại lễ săn thỏ Phục sinh đã được tổ chức kể từ 2017.</td></tr><tr><td>mT5large</td><td>Săn thỏ Phục sinh được tổ chức kể từ 2017.<br>Này là lần đầu tiên Đại lễ săn thỏ Phục sinh được tổ chức kể từ 2017.<br>Săn thỏ Phục sinh được tổ chức kể từ 2017 năm nay.</td></tr><tr><td>BARTpho-syllablebase</td><td>Đây là lần đầu tiên Đại lễ săn thỏ Phục sinh được tổ chức kể từ năm 2017.<br>Kể từ năm 2017, Đại lễ săn thỏ Phục sinh đã được tổ chức lần đầu tiên.<br>Đại lễ săn thỏ Phục sinh năm nay lần đầu tiên được tổ chức kể từ 2017.</td></tr><tr><td>BARTpho-syllablelarge</td><td>Đại lễ săn thỏ Phục sinh năm nay đánh dấu lần đầu tiên Đại lễ được tổ chức kể từ năm 2017.<br>Đại lễ săn thỏ Phục sinh năm nay đánh dấu lần đầu tiên được tổ chức kể từ năm 2017.<br>Đại lễ săn thỏ Phục sinh năm nay đánh dấu lần đầu tiên Đại lễ này được tổ chức kể từ năm 2017.</td></tr><tr><td>BARTpho-wordbase</td><td>Đại lễ săn thỏ Phục sinh đã được tổ chức lần đầu tiên kể từ năm 2017.<br>Đại lễ săn thỏ Phục sinh năm nay lần đầu tiên được tổ chức kể từ 2017.<br>Đại lễ săn thỏ Phục sinh đã được tổ chức thùng miè từ năm 2017.</td></tr><tr><td>BARTpho-wordlarge</td><td>Đại lễ săn thỏ Phục sinh năm nay lần đầu tiên sự kiện này được tổ chức kể từ năm 2017.<br>Đại lễ săn thỏ Phục sinh năm nay đánh dấu lần đầu tiên lễ hội này được tổ chức kể từ năm 2017.<br>Đại lễ săn thỏ Phục sinh năm nay lần sự kiện đầu tiên được tổ chức kể từ năm 2017.</td></tr><tr><td>ViT5base</td><td>Đại lễ săn thỏ Phục sinh năm nay lần đầu tiên được tổ chức kể từ năm 2017.<br>Đại lễ săn thỏ Phục sinh năm nay lần đầu tiên được tổ chức kể từ năm 2017.<br>Đại lễ săn thỏ Phục sinh năm nay lần đầu tiên được tổ chức tại Việt Nam kể từ năm 2017.</td></tr><tr><td>ViT5large</td><td>Việc tổ chức Đại lễ săn thỏ Phục sinh năm nay lần đầu tiên được tổ chức kể từ năm 2017.<br>Việc tổ chức Đại lễ săn thỏ Phục sinh lần đầu tiên được tổ chức kể từ năm 2017.<br>Người ta đã tổ chức Đại lễ săn thỏ Phục sinh lần đầu tiên kể từ năm 2017.</td></tr></table>
|
| 452 |
+
|
| 453 |
+
Table 15: Examples of paraphrases generated by various models for the original sentence "Năm nay là lần đầu tiên Đại lễ săn thỏ Phục sinh được tổ chức kể từ 2017." (English: This year is the first time the Easter Bunny Hunt has been held since 2017.). The highlighted text shows the lexical differences compared to the original sentence.
|
2025/A Large-Scale Benchmark for Vietnamese Sentence Paraphrases/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ffea6f4090d8a2ef04884a43bb8dab7a020fd5cfca81a7aa43d950bbf8ac386e
|
| 3 |
+
size 1449672
|
2025/A Large-Scale Benchmark for Vietnamese Sentence Paraphrases/layout.json
ADDED
|
The diff for this file is too large to render.
|
|
|
2025/A Practical Analysis of Human Alignment with _PO/2cd002da-3dfa-4ffb-9fba-c06d76c990ec_content_list.json
ADDED
|
@@ -0,0 +1,1526 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "A Practical Analysis of Human Alignment with *PO",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
226,
|
| 8 |
+
79,
|
| 9 |
+
766,
|
| 10 |
+
99
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Kian Ahrabian\\*",
|
| 17 |
+
"bbox": [
|
| 18 |
+
174,
|
| 19 |
+
105,
|
| 20 |
+
322,
|
| 21 |
+
122
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Xihui Lin",
|
| 28 |
+
"bbox": [
|
| 29 |
+
450,
|
| 30 |
+
105,
|
| 31 |
+
544,
|
| 32 |
+
122
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Barun Patra",
|
| 39 |
+
"bbox": [
|
| 40 |
+
689,
|
| 41 |
+
105,
|
| 42 |
+
808,
|
| 43 |
+
122
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "Vishrav Chaudhary $^{2}$",
|
| 50 |
+
"bbox": [
|
| 51 |
+
156,
|
| 52 |
+
134,
|
| 53 |
+
337,
|
| 54 |
+
153
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "Alon Benhaim",
|
| 61 |
+
"bbox": [
|
| 62 |
+
431,
|
| 63 |
+
134,
|
| 64 |
+
564,
|
| 65 |
+
151
|
| 66 |
+
],
|
| 67 |
+
"page_idx": 0
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"text": "Jay Pujara<sup>1</sup>",
|
| 72 |
+
"bbox": [
|
| 73 |
+
695,
|
| 74 |
+
134,
|
| 75 |
+
801,
|
| 76 |
+
153
|
| 77 |
+
],
|
| 78 |
+
"page_idx": 0
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"text": "Xia Song²",
|
| 83 |
+
"text_level": 1,
|
| 84 |
+
"bbox": [
|
| 85 |
+
452,
|
| 86 |
+
162,
|
| 87 |
+
544,
|
| 88 |
+
181
|
| 89 |
+
],
|
| 90 |
+
"page_idx": 0
|
| 91 |
+
},
|
| 92 |
+
{
|
| 93 |
+
"type": "text",
|
| 94 |
+
"text": "<sup>1</sup>University of Southern California, Information Sciences Institute",
|
| 95 |
+
"bbox": [
|
| 96 |
+
231,
|
| 97 |
+
186,
|
| 98 |
+
764,
|
| 99 |
+
203
|
| 100 |
+
],
|
| 101 |
+
"page_idx": 0
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"type": "text",
|
| 105 |
+
"text": "2Microsoft",
|
| 106 |
+
"bbox": [
|
| 107 |
+
452,
|
| 108 |
+
203,
|
| 109 |
+
544,
|
| 110 |
+
218
|
| 111 |
+
],
|
| 112 |
+
"page_idx": 0
|
| 113 |
+
},
|
| 114 |
+
{
|
| 115 |
+
"type": "text",
|
| 116 |
+
"text": "ahrabian@usc.edu, {xihlin, barun.patra}@microsoft.com",
|
| 117 |
+
"bbox": [
|
| 118 |
+
238,
|
| 119 |
+
219,
|
| 120 |
+
756,
|
| 121 |
+
237
|
| 122 |
+
],
|
| 123 |
+
"page_idx": 0
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"type": "text",
|
| 127 |
+
"text": "{vchaudhary, alonbenhaim}@microsoft.com, jpujara@isi.edu, xiao@microsoft.com",
|
| 128 |
+
"bbox": [
|
| 129 |
+
124,
|
| 130 |
+
237,
|
| 131 |
+
872,
|
| 132 |
+
253
|
| 133 |
+
],
|
| 134 |
+
"page_idx": 0
|
| 135 |
+
},
|
| 136 |
+
{
|
| 137 |
+
"type": "text",
|
| 138 |
+
"text": "Abstract",
|
| 139 |
+
"text_level": 1,
|
| 140 |
+
"bbox": [
|
| 141 |
+
260,
|
| 142 |
+
261,
|
| 143 |
+
339,
|
| 144 |
+
275
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 0
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "text",
|
| 150 |
+
"text": "At the forefront of state-of-the-art human alignment methods are preference optimization methods (*PO). Prior research has often concentrated on identifying the best-performing method, typically involving a grid search over hyperparameters, which can be impractical for general practitioners. In this paper, we examine the robustness of existing state-of-the-art methods to varying hyperparameters in a realistic out-of-distribution (OOD) scenario that mirrors real-world applications of human alignment. Our goal is to empirically find the method that increases the likelihood of achieving better results through the lens of various metrics, such as KL divergence and response length. We also introduce LN-DPO, a simple length-normalized version of DPO that is more stable across hyperparameters, effectively reduces the average response length, and improves performance. Our analysis of state-of-the-art reference-free (i.e., SimPO) and reference-dependent (i.e., DPO and LN-DPO) methods reveals that they perform similarly at their peak (i.e., best possible scenario). However, we uncover that the pattern of change in performance greatly varies as we move away from the best possible scenario.",
|
| 151 |
+
"bbox": [
|
| 152 |
+
141,
|
| 153 |
+
292,
|
| 154 |
+
460,
|
| 155 |
+
677
|
| 156 |
+
],
|
| 157 |
+
"page_idx": 0
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "text",
|
| 161 |
+
"text": "1 Introduction",
|
| 162 |
+
"text_level": 1,
|
| 163 |
+
"bbox": [
|
| 164 |
+
114,
|
| 165 |
+
692,
|
| 166 |
+
258,
|
| 167 |
+
707
|
| 168 |
+
],
|
| 169 |
+
"page_idx": 0
|
| 170 |
+
},
|
| 171 |
+
{
|
| 172 |
+
"type": "text",
|
| 173 |
+
"text": "In recent years, the quality of large language models (LLMs) has been constantly increasing (Chiang et al., 2024), achieving impressive results across tasks and benchmarks (Abdin et al., 2024; AI@Meta, 2024; Achiam et al., 2023; Team, 2023; Yang et al., 2024). However, even with the most rigorous filtering heuristics, the training data (Computer, 2023; Penedo et al., 2024) is typically contaminated with undesirable content that can lead to unacceptable behaviors (Bender et al., 2021; Gehman et al., 2020). To improve the model's",
|
| 174 |
+
"bbox": [
|
| 175 |
+
112,
|
| 176 |
+
719,
|
| 177 |
+
489,
|
| 178 |
+
896
|
| 179 |
+
],
|
| 180 |
+
"page_idx": 0
|
| 181 |
+
},
|
| 182 |
+
{
|
| 183 |
+
"type": "table",
|
| 184 |
+
"img_path": "images/ca84818fee8fe65f44b04b94d99e950d9d857c8ba20b7a4e3245c86eee2d6cb2.jpg",
|
| 185 |
+
"table_caption": [],
|
| 186 |
+
"table_footnote": [],
|
| 187 |
+
"table_body": "<table><tr><td></td><td>DPO</td><td>LN-DPO</td><td>SimPO</td></tr><tr><td>Mean Score</td><td>1.6</td><td>+0.3%</td><td>+2.7%</td></tr><tr><td>Mean Length</td><td>119.8</td><td>-15.9%</td><td>-22.9%</td></tr><tr><td>KL Divergence</td><td>55.0</td><td>-26.0%</td><td>-20.7%</td></tr><tr><td>Win vs. Chosen</td><td>77.1%</td><td>+0.8%</td><td>+3.1%</td></tr><tr><td>Win vs. SFT</td><td>60.7%</td><td>+2.1%</td><td>+5.0%</td></tr></table>",
|
| 188 |
+
"bbox": [
|
| 189 |
+
514,
|
| 190 |
+
259,
|
| 191 |
+
878,
|
| 192 |
+
393
|
| 193 |
+
],
|
| 194 |
+
"page_idx": 0
|
| 195 |
+
},
|
| 196 |
+
{
|
| 197 |
+
"type": "text",
|
| 198 |
+
"text": "Table 1: Best *PO Performance. The metrics are normalized by the respective DPO performance. The underlined values indicate the best performance.",
|
| 199 |
+
"bbox": [
|
| 200 |
+
507,
|
| 201 |
+
401,
|
| 202 |
+
884,
|
| 203 |
+
445
|
| 204 |
+
],
|
| 205 |
+
"page_idx": 0
|
| 206 |
+
},
|
| 207 |
+
{
|
| 208 |
+
"type": "text",
|
| 209 |
+
"text": "alignment with human preferences, the de-facto approach has been to learn from human/AI-generated preference data (e.g., a chosen and a rejected response for each prompt). In particular, off-policy preference optimization methods (*PO) have been prevalent given their good performance and ease of implementation (Rafailov et al., 2024; Hong et al., 2024; Meng et al., 2024).",
|
| 210 |
+
"bbox": [
|
| 211 |
+
507,
|
| 212 |
+
464,
|
| 213 |
+
884,
|
| 214 |
+
592
|
| 215 |
+
],
|
| 216 |
+
"page_idx": 0
|
| 217 |
+
},
|
| 218 |
+
{
|
| 219 |
+
"type": "text",
|
| 220 |
+
"text": "One commonly occurring practice when reporting the performance of new methods is to compare their best-performing variant (after a hyperparameter grid search) to a default baseline with a fixed set of hyperparameters. However, from a practical perspective for future users, these comparisons do not provide a good answer to the problem of which method is expected to achieve higher performance, given a fixed budget for hyperparameter search, as doing broad grid searches is often computationally infeasible for many practitioners. To this end, in this work, we aim to empirically identify the more robust method to hyperparameter variations while still being competitive in performance.",
|
| 221 |
+
"bbox": [
|
| 222 |
+
507,
|
| 223 |
+
596,
|
| 224 |
+
882,
|
| 225 |
+
821
|
| 226 |
+
],
|
| 227 |
+
"page_idx": 0
|
| 228 |
+
},
|
| 229 |
+
{
|
| 230 |
+
"type": "text",
|
| 231 |
+
"text": "We set up our experiments in a realistic out-of-distribution (OOD) setting, focused on safety and helpfulness domains, where the train and test datasets share a common core goal, but their samples are generated from different distributions (e.g., AI and human expert). This setting resembles real-",
|
| 232 |
+
"bbox": [
|
| 233 |
+
507,
|
| 234 |
+
825,
|
| 235 |
+
884,
|
| 236 |
+
921
|
| 237 |
+
],
|
| 238 |
+
"page_idx": 0
|
| 239 |
+
},
|
| 240 |
+
{
|
| 241 |
+
"type": "page_footnote",
|
| 242 |
+
"text": "*Work done during an internship at Microsoft.",
|
| 243 |
+
"bbox": [
|
| 244 |
+
136,
|
| 245 |
+
906,
|
| 246 |
+
418,
|
| 247 |
+
920
|
| 248 |
+
],
|
| 249 |
+
"page_idx": 0
|
| 250 |
+
},
|
| 251 |
+
{
|
| 252 |
+
"type": "page_number",
|
| 253 |
+
"text": "8028",
|
| 254 |
+
"bbox": [
|
| 255 |
+
480,
|
| 256 |
+
927,
|
| 257 |
+
519,
|
| 258 |
+
940
|
| 259 |
+
],
|
| 260 |
+
"page_idx": 0
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"type": "footer",
|
| 264 |
+
"text": "Findings of the Association for Computational Linguistics:",
|
| 265 |
+
"bbox": [
|
| 266 |
+
317,
|
| 267 |
+
945,
|
| 268 |
+
677,
|
| 269 |
+
958
|
| 270 |
+
],
|
| 271 |
+
"page_idx": 0
|
| 272 |
+
},
|
| 273 |
+
{
|
| 274 |
+
"type": "footer",
|
| 275 |
+
"text": "NAACL 2025, pages 8028-8036",
|
| 276 |
+
"bbox": [
|
| 277 |
+
400,
|
| 278 |
+
959,
|
| 279 |
+
596,
|
| 280 |
+
971
|
| 281 |
+
],
|
| 282 |
+
"page_idx": 0
|
| 283 |
+
},
|
| 284 |
+
{
|
| 285 |
+
"type": "footer",
|
| 286 |
+
"text": "April 29 - May 4, 2025 ©2025 Association for Computational Linguistics",
|
| 287 |
+
"bbox": [
|
| 288 |
+
273,
|
| 289 |
+
972,
|
| 290 |
+
722,
|
| 291 |
+
985
|
| 292 |
+
],
|
| 293 |
+
"page_idx": 0
|
| 294 |
+
},
|
| 295 |
+
{
|
| 296 |
+
"type": "table",
|
| 297 |
+
"img_path": "images/dbed2e0f58cc054d444f0f36dea74890c92cc541f9cd0faaa618fd4dcedfd3ff.jpg",
|
| 298 |
+
"table_caption": [],
|
| 299 |
+
"table_footnote": [],
|
| 300 |
+
"table_body": "<table><tr><td>Method</td><td>Objective</td><td>Hyperparameters</td></tr><tr><td>DPO</td><td>- log σ(β log πθ(yw|x)/πref(yw|x) - β log πθ(yl|x)/πref(yl|x))</td><td>β ∈ {0.01, 0.05, 0.1, 0.3, 0.5}</td></tr><tr><td>SimPO</td><td>- log σ(β/|yw| log πθ(yw|x) - β/|yl| log πθ(yl|x) - γ)</td><td>β ∈ {1.0, 1.5, 2.0, 2.5}γ ∈ {0.5, 0.8, 1.0, 1.2, 1.4, 1.6}</td></tr><tr><td>LN-DPO</td><td>- log σ(β/|yw| log πθ(yw|x)/πref(yw|x) - β/|yl| log πθ(yl|x)/πref(yl|x))</td><td>β ∈ {1.0, 1.5, 2.0, 2.5, 3.0, 3.5}</td></tr></table>",
|
| 301 |
+
"bbox": [
|
| 302 |
+
126,
|
| 303 |
+
80,
|
| 304 |
+
872,
|
| 305 |
+
208
|
| 306 |
+
],
|
| 307 |
+
"page_idx": 1
|
| 308 |
+
},
|
| 309 |
+
{
|
| 310 |
+
"type": "text",
|
| 311 |
+
"text": "Table 2: *PO Optimization Objectives. The preference data is formulated as $D = (x, y_w, y_l)$ , where $x$ is the prompt and $y_w$ and $y_l$ are the chosen and rejected responses.",
|
| 312 |
+
"bbox": [
|
| 313 |
+
112,
|
| 314 |
+
217,
|
| 315 |
+
882,
|
| 316 |
+
247
|
| 317 |
+
],
|
| 318 |
+
"page_idx": 1
|
| 319 |
+
},
|
| 320 |
+
{
|
| 321 |
+
"type": "text",
|
| 322 |
+
"text": "world scenarios as it simulates the release of large generative models for public use. Moreover, to better understand the behavior of the state-of-the-art models, we take the best-performing reference-free and reference-dependent models (as reported by Meng et al. (2024)) and analyze them through the lens of standard metrics such as KL divergence, response length, and win rate. We also introduce an embarrassingly simple length-normalized extension of vanilla Direct Preference Optimization (DPO) (Rafailov et al., 2024), LN-DPO, that effectively mitigates the issue of lengthy generations without any apparent performance degradation<sup>1</sup>. In summary, our contributions are as follows:",
|
| 323 |
+
"bbox": [
|
| 324 |
+
110,
|
| 325 |
+
265,
|
| 326 |
+
489,
|
| 327 |
+
489
|
| 328 |
+
],
|
| 329 |
+
"page_idx": 1
|
| 330 |
+
},
|
| 331 |
+
{
|
| 332 |
+
"type": "list",
|
| 333 |
+
"sub_type": "text",
|
| 334 |
+
"list_items": [
|
| 335 |
+
"- We examine state-of-the-art reference-free and reference-dependent preference optimization methods across a wide range of hyperparameters in a real-world setup.",
|
| 336 |
+
"- We analyze the performance of these methods on critical metrics such as mean response length, mean score on a gold reward model, win rate vs. chosen and SFT, and KL vs. SFT.",
|
| 337 |
+
"- We introduce and examine LN-DPO, a simple length-normalized version of DPO that is more stable across hyperparameters, effectively reduces the average response length and improves performance."
|
| 338 |
+
],
|
| 339 |
+
"bbox": [
|
| 340 |
+
136,
|
| 341 |
+
492,
|
| 342 |
+
489,
|
| 343 |
+
709
|
| 344 |
+
],
|
| 345 |
+
"page_idx": 1
|
| 346 |
+
},
|
| 347 |
+
{
|
| 348 |
+
"type": "text",
|
| 349 |
+
"text": "2 Related Work",
|
| 350 |
+
"text_level": 1,
|
| 351 |
+
"bbox": [
|
| 352 |
+
112,
|
| 353 |
+
722,
|
| 354 |
+
270,
|
| 355 |
+
737
|
| 356 |
+
],
|
| 357 |
+
"page_idx": 1
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"type": "text",
|
| 361 |
+
"text": "Since the introduction of DPO (Rafailov et al., 2024), there has been a body of works with new optimization objectives improving the performance and efficiency (Azar et al., 2024; Tang et al., 2024; Hong et al., 2024; Rosset et al., 2024; Meng et al., 2024; Xu et al., 2024a; Ethayarajh et al., 2024). These methods can be partitioned into two groups: reference-free (Meng et al., 2024; Hong et al.,",
|
| 362 |
+
"bbox": [
|
| 363 |
+
112,
|
| 364 |
+
747,
|
| 365 |
+
489,
|
| 366 |
+
876
|
| 367 |
+
],
|
| 368 |
+
"page_idx": 1
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"type": "text",
|
| 372 |
+
"text": "2024) and reference-dependent (Rafailov et al., 2024; Park et al., 2024). Reference-free methods generally benefit from fast training runs, while reference-dependent methods have terms baked into their objective to control divergence from the reference model. In this work, we compare SimPO (Meng et al., 2024), a recent state-of-the-art reference-free method, with DPO and LN-DPO as reference-dependent methods (see Appendix A for extended related work).",
|
| 373 |
+
"bbox": [
|
| 374 |
+
507,
|
| 375 |
+
265,
|
| 376 |
+
884,
|
| 377 |
+
426
|
| 378 |
+
],
|
| 379 |
+
"page_idx": 1
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "text",
|
| 383 |
+
"text": "3 Experimental Setup",
|
| 384 |
+
"text_level": 1,
|
| 385 |
+
"bbox": [
|
| 386 |
+
507,
|
| 387 |
+
441,
|
| 388 |
+
717,
|
| 389 |
+
458
|
| 390 |
+
],
|
| 391 |
+
"page_idx": 1
|
| 392 |
+
},
|
| 393 |
+
{
|
| 394 |
+
"type": "text",
|
| 395 |
+
"text": "3.1 Datasets",
|
| 396 |
+
"text_level": 1,
|
| 397 |
+
"bbox": [
|
| 398 |
+
507,
|
| 399 |
+
469,
|
| 400 |
+
623,
|
| 401 |
+
483
|
| 402 |
+
],
|
| 403 |
+
"page_idx": 1
|
| 404 |
+
},
|
| 405 |
+
{
|
| 406 |
+
"type": "text",
|
| 407 |
+
"text": "For our datasets, we follow the setup introduced by Xu et al. (2024b). Specifically, we use the double safe/unsafe filtered train subset of SafeRLHF (Dai et al., 2024) for training and the test subset of HH-RLHF (Ganguli et al., 2022) for evaluation. This setup closely resembles real-world scenarios where even though models are trained on various domains (e.g., safety and helpfulness in our experiments), they have to generalize to similar unseen queries while interacting with the users.",
|
| 408 |
+
"bbox": [
|
| 409 |
+
507,
|
| 410 |
+
492,
|
| 411 |
+
884,
|
| 412 |
+
653
|
| 413 |
+
],
|
| 414 |
+
"page_idx": 1
|
| 415 |
+
},
|
| 416 |
+
{
|
| 417 |
+
"type": "text",
|
| 418 |
+
"text": "3.2 Models",
|
| 419 |
+
"text_level": 1,
|
| 420 |
+
"bbox": [
|
| 421 |
+
507,
|
| 422 |
+
668,
|
| 423 |
+
613,
|
| 424 |
+
682
|
| 425 |
+
],
|
| 426 |
+
"page_idx": 1
|
| 427 |
+
},
|
| 428 |
+
{
|
| 429 |
+
"type": "text",
|
| 430 |
+
"text": "For all our experiments, we chose the Phi-3 Medium model (Abdin et al., 2024) due to its high performance across benchmarks and small size, ensuring computational tractability. To evaluate the trained models, we use the OpenAssistant reward model (Köpf et al., 2024) to score the quality of their generated responses. We chose this model due to its small size and use in prior works (Xu et al., 2024b), ensuring fast and correct evaluations.",
|
| 431 |
+
"bbox": [
|
| 432 |
+
507,
|
| 433 |
+
690,
|
| 434 |
+
884,
|
| 435 |
+
835
|
| 436 |
+
],
|
| 437 |
+
"page_idx": 1
|
| 438 |
+
},
|
| 439 |
+
{
|
| 440 |
+
"type": "text",
|
| 441 |
+
"text": "3.3 Optimization Objectives",
|
| 442 |
+
"text_level": 1,
|
| 443 |
+
"bbox": [
|
| 444 |
+
507,
|
| 445 |
+
850,
|
| 446 |
+
747,
|
| 447 |
+
866
|
| 448 |
+
],
|
| 449 |
+
"page_idx": 1
|
| 450 |
+
},
|
| 451 |
+
{
|
| 452 |
+
"type": "text",
|
| 453 |
+
"text": "Considering the performances reported by Meng et al. (2024), we choose DPO as our reference-dependent method and SimPO as our reference-free",
|
| 454 |
+
"bbox": [
|
| 455 |
+
507,
|
| 456 |
+
873,
|
| 457 |
+
884,
|
| 458 |
+
921
|
| 459 |
+
],
|
| 460 |
+
"page_idx": 1
|
| 461 |
+
},
|
| 462 |
+
{
|
| 463 |
+
"type": "page_footnote",
|
| 464 |
+
"text": "Concurrently, Meng et al. (2024) have added a similar method to their experiments (updated on July 7th, 2024). Here, we present a more thorough analysis and comparison.",
|
| 465 |
+
"bbox": [
|
| 466 |
+
112,
|
| 467 |
+
882,
|
| 468 |
+
489,
|
| 469 |
+
921
|
| 470 |
+
],
|
| 471 |
+
"page_idx": 1
|
| 472 |
+
},
|
| 473 |
+
{
|
| 474 |
+
"type": "page_number",
|
| 475 |
+
"text": "8029",
|
| 476 |
+
"bbox": [
|
| 477 |
+
480,
|
| 478 |
+
927,
|
| 479 |
+
519,
|
| 480 |
+
940
|
| 481 |
+
],
|
| 482 |
+
"page_idx": 1
|
| 483 |
+
},
|
| 484 |
+
{
|
| 485 |
+
"type": "image",
|
| 486 |
+
"img_path": "images/2df4690ac85923a5b66ae99c8f115c472e6b08188683047dbc62e7cb6e5abd4a.jpg",
|
| 487 |
+
"image_caption": [
|
| 488 |
+
"Figure 1: *PO Performance Distribution. Each sample in the distribution represents the performance of one set of hyperparameters on the denoted metric. The dashed line indicates the performance of the initial SFT model."
|
| 489 |
+
],
|
| 490 |
+
"image_footnote": [],
|
| 491 |
+
"bbox": [
|
| 492 |
+
137,
|
| 493 |
+
84,
|
| 494 |
+
379,
|
| 495 |
+
260
|
| 496 |
+
],
|
| 497 |
+
"page_idx": 2
|
| 498 |
+
},
|
| 499 |
+
{
|
| 500 |
+
"type": "image",
|
| 501 |
+
"img_path": "images/89279394083a564dfd948c0cf58ca56f1b70e8267df795d19e3bd43c841e93de.jpg",
|
| 502 |
+
"image_caption": [],
|
| 503 |
+
"image_footnote": [],
|
| 504 |
+
"bbox": [
|
| 505 |
+
381,
|
| 506 |
+
85,
|
| 507 |
+
618,
|
| 508 |
+
261
|
| 509 |
+
],
|
| 510 |
+
"page_idx": 2
|
| 511 |
+
},
|
| 512 |
+
{
|
| 513 |
+
"type": "image",
|
| 514 |
+
"img_path": "images/230df2bc9b974c86de1454a7150147c64eac96d953e020c66951ab822e1f2034.jpg",
|
| 515 |
+
"image_caption": [],
|
| 516 |
+
"image_footnote": [],
|
| 517 |
+
"bbox": [
|
| 518 |
+
623,
|
| 519 |
+
98,
|
| 520 |
+
860,
|
| 521 |
+
261
|
| 522 |
+
],
|
| 523 |
+
"page_idx": 2
|
| 524 |
+
},
|
| 525 |
+
{
|
| 526 |
+
"type": "image",
|
| 527 |
+
"img_path": "images/59d8d04c4b582a7f0b1ba016881ce5b61cb49a37eeb903ef6ace935942b12cbe.jpg",
|
| 528 |
+
"image_caption": [
|
| 529 |
+
"Figure 2: Response Length. The top $\\mathrm{k\\%}$ ( $k \\in \\{1, 10, 25\\}$ ) denotes the percentage of best-performing hyperparameters taken from each method's runs."
|
| 530 |
+
],
|
| 531 |
+
"image_footnote": [],
|
| 532 |
+
"bbox": [
|
| 533 |
+
121,
|
| 534 |
+
313,
|
| 535 |
+
366,
|
| 536 |
+
451
|
| 537 |
+
],
|
| 538 |
+
"page_idx": 2
|
| 539 |
+
},
|
| 540 |
+
{
|
| 541 |
+
"type": "image",
|
| 542 |
+
"img_path": "images/ffea458a63d268e8620b569a24aa68e1fa04a333a2a2f9535c65250ba4d661ea.jpg",
|
| 543 |
+
"image_caption": [],
|
| 544 |
+
"image_footnote": [],
|
| 545 |
+
"bbox": [
|
| 546 |
+
369,
|
| 547 |
+
315,
|
| 548 |
+
621,
|
| 549 |
+
451
|
| 550 |
+
],
|
| 551 |
+
"page_idx": 2
|
| 552 |
+
},
|
| 553 |
+
{
|
| 554 |
+
"type": "image",
|
| 555 |
+
"img_path": "images/da79e28d4409b39092d8cd66f064c098c52bfa970dcf9d985c75ca17a587584b.jpg",
|
| 556 |
+
"image_caption": [],
|
| 557 |
+
"image_footnote": [],
|
| 558 |
+
"bbox": [
|
| 559 |
+
625,
|
| 560 |
+
325,
|
| 561 |
+
875,
|
| 562 |
+
451
|
| 563 |
+
],
|
| 564 |
+
"page_idx": 2
|
| 565 |
+
},
|
| 566 |
+
{
|
| 567 |
+
"type": "text",
|
| 568 |
+
"text": "method. While DPO has an implicit length normalization through the reference model, the variance of the reward (i.e., $\\log \\frac{\\pi_{\\theta}}{\\pi_{\\mathrm{ref}}}$ ) increases with response length. As such, inspired by explicit length regularization in SimPO and R-DPO (Park et al., 2024), we further normalize it with the response length similar to SimPO, which we call LN-DPO (see Section 3.4 for more details).",
|
| 569 |
+
"bbox": [
|
| 570 |
+
112,
|
| 571 |
+
513,
|
| 572 |
+
485,
|
| 573 |
+
640
|
| 574 |
+
],
|
| 575 |
+
"page_idx": 2
|
| 576 |
+
},
|
| 577 |
+
{
|
| 578 |
+
"type": "text",
|
| 579 |
+
"text": "3.4 Connection between LN-DPO and SimPO",
|
| 580 |
+
"text_level": 1,
|
| 581 |
+
"bbox": [
|
| 582 |
+
112,
|
| 583 |
+
653,
|
| 584 |
+
485,
|
| 585 |
+
668
|
| 586 |
+
],
|
| 587 |
+
"page_idx": 2
|
| 588 |
+
},
|
| 589 |
+
{
|
| 590 |
+
"type": "text",
|
| 591 |
+
"text": "LN-DPO is similar to an adaptive-margin version of SimPO, with the per-sample margin defined as",
|
| 592 |
+
"bbox": [
|
| 593 |
+
112,
|
| 594 |
+
673,
|
| 595 |
+
485,
|
| 596 |
+
706
|
| 597 |
+
],
|
| 598 |
+
"page_idx": 2
|
| 599 |
+
},
|
| 600 |
+
{
|
| 601 |
+
"type": "equation",
|
| 602 |
+
"text": "\n$$\n\\gamma_{w,l} = \\frac{\\log \\pi_{\\mathrm{ref}}(y_w \\mid x)}{|y_w|} - \\frac{\\log \\pi_{\\mathrm{ref}}(y_l \\mid x)}{|y_l|}. \\tag{1}\n$$\n",
|
| 603 |
+
"text_format": "latex",
|
| 604 |
+
"bbox": [
|
| 605 |
+
136,
|
| 606 |
+
715,
|
| 607 |
+
485,
|
| 608 |
+
750
|
| 609 |
+
],
|
| 610 |
+
"page_idx": 2
|
| 611 |
+
},
|
| 612 |
+
{
"type": "text",
"text": "Essentially, this adaptive margin encourages larger margins for pairs that already have large margins under the reference policy. Depending on the quality of the reference model and the labels, this change could be beneficial compared to SimPO's constant margin. The adaptive margin focuses more on \"easier\" pairs (i.e., pairs with some prior evidence of being different) and less on \"harder\" pairs (i.e., pairs that are closer), which means that LN-DPO is potentially less prone to overfitting and less sensitive",
"bbox": [
112,
760,
487,
920
],
"page_idx": 2
},
{
"type": "text",
"text": "to wrong labels.",
"bbox": [
507,
514,
631,
529
],
"page_idx": 2
},
{
"type": "text",
"text": "4 Training Regimen",
"text_level": 1,
"bbox": [
507,
544,
699,
562
],
"page_idx": 2
},
{
"type": "text",
"text": "Following common practice, before the preference optimization step we perform a supervised fine-tuning (SFT) step. Specifically, we first run a grid search over the following hyperparameters: epochs $\\in \\{1, 3\\}$ and learning rate $\\in \\{1e-6, 3e-6, 1e-5, 2e-5\\}$. Then we evaluate the final checkpoints against the test set and choose the one with the highest performance. This procedure ensures that the preference optimization methods are initialized from a good checkpoint. For the preference optimization methods, we run a grid search using 1) the same ranges as SFT for epochs and learning rate, and 2) common values for method-specific hyperparameters as used in prior works (Meng et al., 2024; Rafailov et al., 2024; Hong et al., 2024). Table 2 presents the method-specific ranges used in our experiments. In all of our experiments, the batch size is set to 256.",
"bbox": [
507,
571,
882,
860
],
"page_idx": 2
},
{
"type": "text",
"text": "5 Metrics",
"text_level": 1,
"bbox": [
507,
876,
611,
892
],
"page_idx": 2
},
{
"type": "text",
"text": "Our analysis focuses on the following five metrics:",
"bbox": [
507,
904,
880,
921
],
"page_idx": 2
},
{
"type": "page_number",
"text": "8030",
"bbox": [
480,
927,
519,
940
],
"page_idx": 2
},
{
"type": "image",
"img_path": "images/e276a116c771ddc4ee3912ff73bf29083ee089f7844e35b99c77e0afaf65a8ef.jpg",
"image_caption": [
"Figure 3: KL Divergence. The top $k\\%$ ($k \\in \\{1, 10, 25\\}$) denotes the percentage of best-performing hyperparameters taken from each method's runs."
],
"image_footnote": [],
"bbox": [
119,
84,
366,
225
],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/06fa22965b956cc3edfbe902f3459f900f597abbf9eb6cdcb46fddd498b60b15.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
369,
85,
620,
223
],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/3cf9e7e333d4cadc9e4698c55e2d36ecc1600644e74b5d23eb3c41f6f3b87899.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [
623,
96,
878,
223
],
"page_idx": 3
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- Mean Score: The average score of the generated responses, as judged by the gold reward model.",
"- Win vs. Chosen: The fraction of samples where the gold reward model assigns a higher score to the generated response compared to the chosen response in the dataset.",
"- Win vs. SFT: The fraction of samples where the gold reward model scores the generated response higher than the initial SFT model's response.",
"- KL divergence: The summed difference of log probabilities between the SFT and the trained models over the samples.",
"- Response length: The number of tokens in the generated response under the tokenization space of the base model."
],
"bbox": [
134,
291,
487,
583
],
"page_idx": 3
},
{
"type": "text",
"text": "6 Implementation Details",
"text_level": 1,
"bbox": [
112,
596,
352,
614
],
"page_idx": 3
},
{
"type": "text",
"text": "We generate all responses by sampling with temperature $= 0.7$ and top_p $= 0.95$. Moreover, max_generation_length is set to 256 across all experiments, following the setup of Xu et al. (2024b). All our experiments are carried out on a cluster with $256 \\times$ A100 80GB GPUs. Finally, we implemented our code using the Transformers (Wolf et al., 2020), TRL (von Werra et al., 2020), and PyTorch (Paszke et al., 2019) libraries.",
"bbox": [
112,
621,
489,
766
],
"page_idx": 3
},
{
"type": "text",
"text": "7 Experimental Results",
"text_level": 1,
"bbox": [
112,
778,
336,
795
],
"page_idx": 3
},
{
"type": "text",
"text": "7.1 Hyperparameter Robustness",
"text_level": 1,
"bbox": [
112,
804,
386,
819
],
"page_idx": 3
},
{
"type": "text",
"text": "Best Performance. Following common practice, we compare the best performance achieved by each method in Table 1. As evident, at their peaks, SimPO, LN-DPO, and DPO score similarly (within 0.05 points on average). However, SimPO and LN-DPO show an edge in terms of the rest",
"bbox": [
112,
825,
489,
921
],
"page_idx": 3
},
{
"type": "table",
"img_path": "images/92b12565c0d147e6105b31372320495519d7632420f20181e0311d263014c178.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>%</td><td>DPO</td><td>LN-DPO</td><td>SimPO</td></tr><tr><td>DPO</td><td>-</td><td>49.04</td><td>47.51</td></tr><tr><td>LN-DPO</td><td>49.47</td><td>-</td><td>46.43</td></tr><tr><td>SimPO</td><td>51.12</td><td>51.09</td><td>-</td></tr></table>",
"bbox": [
556,
288,
838,
367
],
"page_idx": 3
},
{
"type": "table",
"img_path": "images/488487b28ea1eeaa2586cd82668a1e796cff585a2852ed72e00e673b554b6967.jpg",
"table_caption": [
"(a) Best"
],
"table_footnote": [],
"table_body": "<table><tr><td>%</td><td>DPO</td><td>LN-DPO</td><td>SimPO</td></tr><tr><td>DPO</td><td>-</td><td>45.72</td><td>44.33</td></tr><tr><td>LN-DPO</td><td>51.77</td><td>-</td><td>47.28</td></tr><tr><td>SimPO</td><td>54.34</td><td>50.13</td><td>-</td></tr></table>",
"bbox": [
554,
391,
836,
470
],
"page_idx": 3
},
{
"type": "text",
"text": "(b) 75th Percentile",
"bbox": [
636,
475,
754,
487
],
"page_idx": 3
},
{
"type": "text",
"text": "Table 3: Head-to-head *PO Comparison. Each cell represents the win rate of the row method over the column method. The underlined values indicate the row method beating the column method.",
"bbox": [
507,
498,
884,
556
],
"page_idx": 3
},
{
"type": "text",
"text": "of the metrics. Specifically, we can observe the effectiveness of the length-normalization term. We also notice a significant decrease in KL divergence. However, SimPO's KL decreases less than LN-DPO's, showcasing a more significant divergence from SFT. For more details on tuning these models, see Appendix B.",
"bbox": [
507,
580,
884,
693
],
"page_idx": 3
},
{
"type": "text",
"text": "Head-to-head Performance. While comparing the raw performances achieved on the desired metrics is usually good enough to contrast different methods, there are potential cases where the averaging could be exploited (e.g., outliers with high rewards). Hence, it is also essential to do a head-to-head per-sample comparison, which provides more fine-grained insights. Table 3 compares each method's best and 75th-percentile performance. Notably, we observe a sharp performance drop in DPO from the best to the top $25\\%$ model, in contrast to the other two. This occurrence highlights the practical flaw in only comparing the best performances.",
"bbox": [
507,
712,
884,
921
],
"page_idx": 3
},
{
"type": "page_number",
"text": "8031",
"bbox": [
480,
928,
517,
940
],
"page_idx": 3
},
{
"type": "text",
"text": "Expected Performance. Given the limited resources that most users have, it is extremely difficult to run broad hyperparameter searches to find the best-performing combination. As such, it becomes crucial to analyze hyperparameter robustness, which provides insight into how likely a limited search is to find a good hyperparameter set. Figure 1 presents the performance distribution of *PO methods following a grid search over the hyperparameters denoted in Table 2 and Section 4. As evident, SimPO and LN-DPO effectively increase the average performance (i.e., shifting the distributions to the right) across hyperparameters, showcasing their superiority. Note that we stretched the range of hyperparameters until a plateau or an extreme variance was observed.",
"bbox": [
112,
84,
492,
341
],
"page_idx": 4
},
{
"type": "text",
"text": "7.2 Response Length",
"text_level": 1,
"bbox": [
112,
355,
294,
370
],
"page_idx": 4
},
{
"type": "text",
"text": "Since length exploitation is a critical issue (Park et al., 2024), we compare the response lengths across samples generated by the top $k\\%$ ($k \\in \\{1, 10, 25\\}$) of each method's best-performing hyperparameters. As illustrated in Figure 2, on the best set of hyperparameters (i.e., top $1\\%$), the non-DPO methods showcase a left shift in length distribution (compared to DPO), which is a desired effect. However, this phenomenon starts to diminish as we include worse-performing hyperparameters. For example, LN-DPO has a higher rate than DPO in the tail end of the top $25\\%$ distribution. Overall, we observed that both length-normalized methods outperform DPO, with SimPO producing the shortest responses across the distribution.",
"bbox": [
112,
376,
489,
618
],
"page_idx": 4
},
{
"type": "text",
"text": "7.3 KL Divergence (vs. SFT)",
"text_level": 1,
"bbox": [
112,
632,
356,
646
],
"page_idx": 4
},
{
"type": "text",
"text": "Since reference-free methods are not normalized against a reference policy (e.g., the SFT model), reward hacking might occur (i.e., lower loss with degraded performance). Therefore, we compare the KL divergence in Figure 3 across samples generated by the top $k\\%$ ($k \\in \\{1, 10, 25\\}$) of each method's best-performing hyperparameters. As evident, both SimPO and LN-DPO achieve lower KLs at their peak. However, as we move toward worse-performing models, DPO achieves lower KL (at $10\\%$). This phenomenon is due to many DPO runs failing to learn beyond the SFT model.",
"bbox": [
112,
653,
489,
847
],
"page_idx": 4
},
{
"type": "text",
"text": "8 When to use LN-DPO over SimPO?",
"text_level": 1,
"bbox": [
112,
860,
457,
878
],
"page_idx": 4
},
{
"type": "text",
"text": "While SimPO achieves superior performance on most metrics compared to LN-DPO, the lack of a",
"bbox": [
112,
889,
487,
921
],
"page_idx": 4
},
{
"type": "text",
"text": "reference policy regularization could lead to drastic divergence from the initial checkpoint, as also shown in our experiments. This issue could then degrade performance on other benchmarks, which is a critical pitfall (as also observed in Korbak et al. (2022)). As such, we believe there are various scenarios where LN-DPO should be preferred over SimPO. We leave further experiments in this direction to future work.",
"bbox": [
507,
84,
884,
229
],
"page_idx": 4
},
{
"type": "text",
"text": "9 Conclusion",
"text_level": 1,
"bbox": [
507,
240,
640,
255
],
"page_idx": 4
},
{
"type": "text",
"text": "In this work, we introduce LN-DPO, a length-normalized variation of DPO that reduces the average response length while staying reference-dependent. Moreover, we present a thorough analysis of LN-DPO and two state-of-the-art reference-dependent and reference-free preference optimization methods in a simulated real-world scenario for the safety and helpfulness domains. Specifically, we cover the behavior of these methods across a wide range of hyperparameters under metrics such as mean response length, KL divergence (vs. SFT), and win rate (vs. chosen and SFT). Our experiments showcase the state-of-the-art methods' strengths and weaknesses and provide insights for other practitioners.",
"bbox": [
507,
265,
884,
505
],
"page_idx": 4
},
{
"type": "text",
"text": "Limitations",
"text_level": 1,
"bbox": [
509,
517,
613,
532
],
"page_idx": 4
},
{
"type": "text",
"text": "Due to the extremely high cost of running such experiments (roughly 86,000 GPU hours for the current set), we only experimented with a small set of models, methods, and datasets in this work. While this might limit generalizability, we believe such an analysis is critical for helping practitioners save costs. Moreover, since the conclusion of our experiments, new reward models with higher performance have been released (e.g., ArmoRM (Wang et al., 2024)); however, we still rely on older, smaller models to keep the evaluation tractable over such a high number of runs.",
"bbox": [
507,
542,
882,
734
],
"page_idx": 4
},
{
"type": "text",
"text": "Acknowledgements",
"text_level": 1,
"bbox": [
507,
746,
680,
762
],
"page_idx": 4
},
{
"type": "text",
"text": "This work was partially funded by the Defense Advanced Research Projects Agency with the award HR00112220046.",
"bbox": [
507,
771,
882,
818
],
"page_idx": 4
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [
509,
845,
608,
860
],
"page_idx": 4
},
{
"type": "text",
"text": "Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. 2024. Phi-3 technical report: A highly",
"bbox": [
509,
866,
884,
921
],
"page_idx": 4
},
{
"type": "page_number",
"text": "8032",
"bbox": [
480,
927,
519,
940
],
"page_idx": 4
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"capable language model locally on your phone. arXiv preprint arXiv:2404.14219.",
"Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.",
"AI@Meta. 2024. Llama 3 model card.",
"Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello. 2024. A general theoretical paradigm to understand learning from human preferences. In International Conference on Artificial Intelligence and Statistics, pages 4447-4455. PMLR.",
"Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.",
"Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073.",
"Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 610-623.",
"Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324-345.",
"Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, and Bryan Catanzaro. 2024. Odin: Disentangled reward mitigates hacking in RLHF. arXiv preprint arXiv:2402.07319.",
"Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E Gonzalez, et al. 2024. Chatbot arena: An open platform for evaluating LLMs by human preference. arXiv preprint arXiv:2403.04132.",
"Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30.",
"Together Computer. 2023. RedPajama: An open source recipe to reproduce LLaMA training dataset."
],
"bbox": [
115,
85,
489,
920
],
"page_idx": 5
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, and Yaodong Yang. 2024. Safe RLHF: Safe reinforcement learning from human feedback. In The Twelfth International Conference on Learning Representations.",
"Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 2024. KTO: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306.",
"Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, et al. 2022. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858.",
"Leo Gao, John Schulman, and Jacob Hilton. 2023. Scaling laws for reward model overoptimization. In International Conference on Machine Learning, pages 10835-10866. PMLR.",
"Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. arXiv preprint arXiv:2009.11462.",
"Jiwoo Hong, Noah Lee, and James Thorne. 2024. Reference-free monolithic preference optimization with odds ratio. arXiv preprint arXiv:2403.07691.",
"Shengyi Huang, Michael Noukhovitch, Arian Hosseini, Kashif Rasul, Weixun Wang, and Lewis Tunstall. 2024a. The n+ implementation details of RLHF with PPO: A case study on TL;DR summarization. arXiv preprint arXiv:2403.17031.",
"Shengyi Costa Huang, Tianlin Liu, and Leandro von Werra. 2024b. The n implementation details of RLHF with PPO. In *ICLR Blogposts* 2024. https://d2jud02ci9yv69.cloudfront.net/2024-05-07-the-n-implementation-details-of-rlhf-with-ppo130/blog/the-n-implementation-details-of-rlhf-with-ppo/.",
"Hamish Ivison, Yizhong Wang, Jiacheng Liu, Zeqiu Wu, Valentina Pyatkin, Nathan Lambert, Noah A Smith, Yejin Choi, and Hannaneh Hajishirzi. 2024. Unpacking DPO and PPO: Disentangling best practices for learning from preference feedback. arXiv preprint arXiv:2406.09279.",
"Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi Rui Tam, Keith Stevens, Abdullah Barhoum, Duc Nguyen, Oliver Stanley, Richard Nagyfi, et al. 2024. OpenAssistant conversations - democratizing large language model alignment. Advances in Neural Information Processing Systems, 36.",
"Tomasz Korbak, Ethan Perez, and Christopher Buckley. 2022. RL with KL penalties is better viewed as Bayesian inference. In *Findings of the Association for Computational Linguistics: EMNLP* 2022, pages 1083-1091, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics."
],
"bbox": [
510,
85,
884,
920
],
"page_idx": 5
},
{
"type": "page_number",
"text": "8033",
"bbox": [
480,
928,
519,
940
],
"page_idx": 5
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step. arXiv preprint arXiv:2305.20050.",
"Yu Meng, Mengzhou Xia, and Danqi Chen. 2024. SimPO: Simple preference optimization with a reference-free reward. arXiv preprint arXiv:2405.14734.",
"Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744.",
"Ryan Park, Rafael Rafailov, Stefano Ermon, and Chelsea Finn. 2024. Disentangling length from quality in direct preference optimization. arXiv preprint arXiv:2403.19159.",
"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32.",
"Guilherme Penedo, Hynek Kydlíček, Loubna Ben Allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, and Thomas Wolf. 2024. The FineWeb datasets: Decanting the web for the finest text data at scale. Preprint, arXiv:2406.17557.",
"Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36.",
"Corby Rosset, Ching-An Cheng, Arindam Mitra, Michael Santacroce, Ahmed Awadallah, and Tengyang Xie. 2024. Direct Nash optimization: Teaching language models to self-improve with general preferences. arXiv preprint arXiv:2404.03715.",
"John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.",
"Joar Skalse, Nikolaus Howe, Dmitrii Krasheninnikov, and David Krueger. 2022. Defining and characterizing reward gaming. Advances in Neural Information Processing Systems, 35:9460-9471.",
"Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008-3021."
],
"bbox": [
115,
85,
485,
917
],
"page_idx": 6
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Yunhao Tang, Zhaohan Daniel Guo, Zeyu Zheng, Daniele Calandriello, Rémi Munos, Mark Rowland, Pierre Harvey Richemond, Michal Valko, Bernardo Ávila Pires, and Bilal Piot. 2024. Generalized preference optimization: A unified approach to offline alignment. arXiv preprint arXiv:2402.05749.",
"Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model.",
"InternLM Team. 2023. InternLM: A multilingual language model with progressively enhanced capabilities.",
"Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, and Shengyi Huang. 2020. TRL: Transformer reinforcement learning. https://github.com/huggingface/trl.",
"Haoxiang Wang, Wei Xiong, Tengyang Xie, Han Zhao, and Tong Zhang. 2024. Interpretable preferences via multi-objective reward modeling and mixture-of-experts. arXiv preprint arXiv:2406.12845.",
"Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
"Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, and Danqi Chen. 2024. LESS: Selecting influential data for targeted instruction tuning. arXiv preprint arXiv:2402.04333.",
"Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. 2024a. Contrastive preference optimization: Pushing the boundaries of LLM performance in machine translation. arXiv preprint arXiv:2401.08417.",
"Shusheng Xu, Wei Fu, Jiaxuan Gao, Wenjie Ye, Weilin Liu, Zhiyu Mei, Guangju Wang, Chao Yu, and Yi Wu. 2024b. Is DPO superior to PPO for LLM alignment? A comprehensive study. arXiv preprint arXiv:2404.10719.",
"An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang"
],
"bbox": [
510,
85,
880,
920
],
"page_idx": 6
},
{
"type": "page_number",
"text": "8034",
"bbox": [
480,
928,
519,
940
],
"page_idx": 6
},
{
"type": "text",
"text": "Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zhihao Fan. 2024. Qwen2 technical report. arXiv preprint arXiv:2407.10671.",
"bbox": [
131,
85,
487,
203
],
"page_idx": 7
},
{
"type": "text",
"text": "Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2024. LIMA: Less is more for alignment. Advances in Neural Information Processing Systems, 36.",
"bbox": [
114,
212,
489,
280
],
"page_idx": 7
},
{
"type": "text",
"text": "A Extended Related Work",
"text_level": 1,
"bbox": [
112,
291,
361,
307
],
"page_idx": 7
},
{
|
| 1240 |
+
"type": "text",
|
| 1241 |
+
"text": "Online Algorithms. Reinforcement learning from human/AI feedback (RLHF/RLAIF) is among the common approaches for aligning LLMs to human preferences (Christiano et al., 2017; Bai et al., 2022a; Stiannon et al., 2020; Bai et al., 2022b), and has been used to train models such as GPT-4 (Achiam et al., 2023) and Llama-3 (AI@Meta, 2024). In most cases, these approaches are comprised of three stages: 1) supervised finetuning (Taori et al., 2023; Zhou et al., 2024; Xia et al., 2024), 2) reward modeling (Gao et al., 2023; Chen et al., 2024; Lightman et al., 2023), and 3) policy optimization (Schulman et al., 2017). The prominent method for policy optimization is Proximal Policy Optimization (PPO), an online on-policy approach (Schulman et al., 2017). While PPO has shown promising performances (Stiannon et al., 2020; Ouyang et al., 2022; Achiam et al., 2023), it suffers from problems such as having too many subtle details for reproducibility (Huang et al., 2024b), 2) taking a long time for training (Huang et al., 2024a), and 3) reward over-optimization (Skalse et al., 2022).",
|
| 1242 |
+
"bbox": [
|
| 1243 |
+
115,
|
| 1244 |
+
317,
|
| 1245 |
+
489,
|
| 1246 |
+
687
|
| 1247 |
+
],
|
| 1248 |
+
"page_idx": 7
|
| 1249 |
+
},
|
| 1250 |
+
{
|
| 1251 |
+
"type": "text",
|
| 1252 |
+
"text": "Offline Algorithms. To address the drawbacks of RLHF/RLAIF, recent works have proposed simpler and more efficient offline algorithms, particularly Direct Preference Optimization (DPO) (Rafailov et al., 2024), which is based on the Bradley-Terry model (Bradley and Terry, 1952). These offline algorithms directly optimize an objective on the preference data with an implicit reward model without needing to have separate stages. Some recent works have focused on making a broad comparison between PPO and DPO. Specifically, they showcase the potential for PPO with a gold reward model $(\\sim +10\\%)$ while underlying the similarity to DPO $(\\sim +1\\%$ averaged across benchmarks)",
|
| 1253 |
+
"bbox": [
|
| 1254 |
+
112,
|
| 1255 |
+
696,
|
| 1256 |
+
489,
|
| 1257 |
+
921
|
| 1258 |
+
],
|
| 1259 |
+
"page_idx": 7
|
| 1260 |
+
},
|
| 1261 |
+
{
|
| 1262 |
+
"type": "text",
|
| 1263 |
+
"text": "when trained on the same data (Ivison et al., 2024; Xu et al., 2024b).",
|
| 1264 |
+
"bbox": [
|
| 1265 |
+
507,
|
| 1266 |
+
84,
|
| 1267 |
+
882,
|
| 1268 |
+
116
|
| 1269 |
+
],
|
| 1270 |
+
"page_idx": 7
|
| 1271 |
+
},
|
| 1272 |
+
{
|
| 1273 |
+
"type": "text",
|
| 1274 |
+
"text": "B Hyperparameter Tuning Considerations",
|
| 1275 |
+
"text_level": 1,
|
| 1276 |
+
"bbox": [
|
| 1277 |
+
509,
|
| 1278 |
+
128,
|
| 1279 |
+
761,
|
| 1280 |
+
160
|
| 1281 |
+
],
|
| 1282 |
+
"page_idx": 7
|
| 1283 |
+
},
|
| 1284 |
+
{
|
| 1285 |
+
"type": "text",
|
| 1286 |
+
"text": "DPO. As presented in Figure 4, lower $\\beta$ leads to higher performances; however, as $\\beta$ decreases, the performance variance increases, which showcases the method's instability. Overall, $\\beta = 0.05$ provides the best balance of stability and performance.",
|
| 1287 |
+
"bbox": [
|
| 1288 |
+
507,
|
| 1289 |
+
170,
|
| 1290 |
+
884,
|
| 1291 |
+
250
|
| 1292 |
+
],
|
| 1293 |
+
"page_idx": 7
|
| 1294 |
+
},
|
| 1295 |
+
{
|
| 1296 |
+
"type": "text",
|
| 1297 |
+
"text": "LN-DPO. While we initially borrowed $\\beta$ 's range from SimPO (Meng et al., 2024), more experiments showed benefits in further decreasing its value. Figure 5 presents the performance spread across different runs. From these experiments, $\\beta \\in [1.0, 2.0]$ contains most of the best-performing models. Moreover, we observe the relatively low (compared to DPO) variance across the performances, showcasing another benefit of LN-DPO.",
|
| 1298 |
+
"bbox": [
|
| 1299 |
+
507,
|
| 1300 |
+
258,
|
| 1301 |
+
884,
|
| 1302 |
+
404
|
| 1303 |
+
],
|
| 1304 |
+
"page_idx": 7
|
| 1305 |
+
},
|
| 1306 |
+
{
|
| 1307 |
+
"type": "text",
|
| 1308 |
+
"text": "SimPO. In contrast to the other two methods, SimPO has two method-specific hyperparameters: $\\beta$ and $\\gamma$ . As illustrated in Figure 6, on average, lower $\\beta$ values lead to better performance. We believe the performance uptick in the lower range is due to a difference in the average length of this work's and the original work's training sets. Moreover, as showcased in Figure 7, the best performing models have a $\\gamma \\in [1.0, 1.4]$ , in line with the suggestion by Meng et al. (2024). Notably, $\\beta$ and $\\gamma$ have a relatively low variance across experiments, another upside of SimPO.",
|
| 1309 |
+
"bbox": [
|
| 1310 |
+
507,
|
| 1311 |
+
413,
|
| 1312 |
+
882,
|
| 1313 |
+
605
|
| 1314 |
+
],
|
| 1315 |
+
"page_idx": 7
|
| 1316 |
+
},
|
| 1317 |
+
{
|
| 1318 |
+
"type": "text",
|
| 1319 |
+
"text": "C The Answer to the Ultimate Question",
|
| 1320 |
+
"text_level": 1,
|
| 1321 |
+
"bbox": [
|
| 1322 |
+
507,
|
| 1323 |
+
617,
|
| 1324 |
+
870,
|
| 1325 |
+
634
|
| 1326 |
+
],
|
| 1327 |
+
"page_idx": 7
|
| 1328 |
+
},
|
| 1329 |
+
{
|
| 1330 |
+
"type": "text",
|
| 1331 |
+
"text": "Based on our collective empirical results, we believe SimPO to be the best starting point among the three methods, mainly due to its robustness toward hyperparameter variations and effective length reduction. As for SimPO's hyperparameters, we recommend $\\beta \\in \\{1.0, 1.5\\}$ and $\\gamma \\approx 1.2$ . Moreover, while LN-DPO is consistently second-best in most of our experiments, we discuss scenarios for choosing it over SimPO in Section 8.",
|
| 1332 |
+
"bbox": [
|
| 1333 |
+
507,
|
| 1334 |
+
643,
|
| 1335 |
+
884,
|
| 1336 |
+
787
|
| 1337 |
+
],
|
| 1338 |
+
"page_idx": 7
|
| 1339 |
+
},
|
| 1340 |
+
{
|
| 1341 |
+
"type": "page_number",
|
| 1342 |
+
"text": "8035",
|
| 1343 |
+
"bbox": [
|
| 1344 |
+
480,
|
| 1345 |
+
928,
|
| 1346 |
+
519,
|
| 1347 |
+
940
|
| 1348 |
+
],
|
| 1349 |
+
"page_idx": 7
|
| 1350 |
+
},
|
| 1351 |
+
{
|
| 1352 |
+
"type": "image",
|
| 1353 |
+
"img_path": "images/e1a5b37974e9ea136d738144a3d8f9cd651e4620d0c8bf6bbbf812a1866d2732.jpg",
|
| 1354 |
+
"image_caption": [
|
| 1355 |
+
"Figure 4: DPO $\\beta$ . Each point indicates a run with the corresponding $\\beta$ value."
|
| 1356 |
+
],
|
| 1357 |
+
"image_footnote": [],
|
| 1358 |
+
"bbox": [
|
| 1359 |
+
119,
|
| 1360 |
+
105,
|
| 1361 |
+
366,
|
| 1362 |
+
236
|
| 1363 |
+
],
|
| 1364 |
+
"page_idx": 8
|
| 1365 |
+
},
|
| 1366 |
+
{
|
| 1367 |
+
"type": "image",
|
| 1368 |
+
"img_path": "images/1d70bffe2885f134bf940c6da7a9da4c9175c137c8a9892774858ca13abf4425.jpg",
|
| 1369 |
+
"image_caption": [],
|
| 1370 |
+
"image_footnote": [],
|
| 1371 |
+
"bbox": [
|
| 1372 |
+
371,
|
| 1373 |
+
105,
|
| 1374 |
+
620,
|
| 1375 |
+
233
|
| 1376 |
+
],
|
| 1377 |
+
"page_idx": 8
|
| 1378 |
+
},
|
| 1379 |
+
{
|
| 1380 |
+
"type": "image",
|
| 1381 |
+
"img_path": "images/c0083bce9d390fe17004d82194a3a6e71190b3fee46a96d3d2a6d25b25c2be3d.jpg",
|
| 1382 |
+
"image_caption": [],
|
| 1383 |
+
"image_footnote": [],
|
| 1384 |
+
"bbox": [
|
| 1385 |
+
623,
|
| 1386 |
+
107,
|
| 1387 |
+
880,
|
| 1388 |
+
236
|
| 1389 |
+
],
|
| 1390 |
+
"page_idx": 8
|
| 1391 |
+
},
|
| 1392 |
+
{
|
| 1393 |
+
"type": "image",
|
| 1394 |
+
"img_path": "images/a49a58fa9e396d67c4f65c4862a98bc1aab2c9ffaa2f5604839a89c01ce496b0.jpg",
|
| 1395 |
+
"image_caption": [
|
| 1396 |
+
"Figure 5: LN-DPO $\\beta$ . Each point indicates a run with the corresponding $\\beta$ value."
|
| 1397 |
+
],
|
| 1398 |
+
"image_footnote": [],
|
| 1399 |
+
"bbox": [
|
| 1400 |
+
119,
|
| 1401 |
+
316,
|
| 1402 |
+
371,
|
| 1403 |
+
445
|
| 1404 |
+
],
|
| 1405 |
+
"page_idx": 8
|
| 1406 |
+
},
|
| 1407 |
+
{
|
| 1408 |
+
"type": "image",
|
| 1409 |
+
"img_path": "images/b055ce64a9fad6c9f29806f2d1159ccd3e8941d36f4865c0ed97297ca2132fc5.jpg",
|
| 1410 |
+
"image_caption": [],
|
| 1411 |
+
"image_footnote": [],
|
| 1412 |
+
"bbox": [
|
| 1413 |
+
374,
|
| 1414 |
+
317,
|
| 1415 |
+
623,
|
| 1416 |
+
444
|
| 1417 |
+
],
|
| 1418 |
+
"page_idx": 8
|
| 1419 |
+
},
|
| 1420 |
+
{
|
| 1421 |
+
"type": "image",
|
| 1422 |
+
"img_path": "images/8f2da863a72d2e96a3cf0f65b2359e2d627f1863dd4666973fb9df8d85bcbd3f.jpg",
|
| 1423 |
+
"image_caption": [],
|
| 1424 |
+
"image_footnote": [],
|
| 1425 |
+
"bbox": [
|
| 1426 |
+
626,
|
| 1427 |
+
317,
|
| 1428 |
+
878,
|
| 1429 |
+
445
|
| 1430 |
+
],
|
| 1431 |
+
"page_idx": 8
|
| 1432 |
+
},
|
| 1433 |
+
{
|
| 1434 |
+
"type": "image",
|
| 1435 |
+
"img_path": "images/dda20d6431a1b46162ba8283f4ed957ea826a8ca5c9ddf8959e3465679058fb5.jpg",
|
| 1436 |
+
"image_caption": [
|
| 1437 |
+
"Figure 6: SimPO $\\beta$ . Each point indicates a run with the corresponding $\\beta$ value."
|
| 1438 |
+
],
|
| 1439 |
+
"image_footnote": [],
|
| 1440 |
+
"bbox": [
|
| 1441 |
+
119,
|
| 1442 |
+
529,
|
| 1443 |
+
366,
|
| 1444 |
+
653
|
| 1445 |
+
],
|
| 1446 |
+
"page_idx": 8
|
| 1447 |
+
},
|
| 1448 |
+
{
|
| 1449 |
+
"type": "image",
|
| 1450 |
+
"img_path": "images/1b332534e17c990e75d1b805a78951e8856cc7b789d105370c2b7702d1221aa0.jpg",
|
| 1451 |
+
"image_caption": [],
|
| 1452 |
+
"image_footnote": [],
|
| 1453 |
+
"bbox": [
|
| 1454 |
+
369,
|
| 1455 |
+
529,
|
| 1456 |
+
621,
|
| 1457 |
+
653
|
| 1458 |
+
],
|
| 1459 |
+
"page_idx": 8
|
| 1460 |
+
},
|
| 1461 |
+
{
|
| 1462 |
+
"type": "image",
|
| 1463 |
+
"img_path": "images/171678577ca608f65d5d04159b50679523463164b21cd6a74335493712e54167.jpg",
|
| 1464 |
+
"image_caption": [],
|
| 1465 |
+
"image_footnote": [],
|
| 1466 |
+
"bbox": [
|
| 1467 |
+
625,
|
| 1468 |
+
529,
|
| 1469 |
+
875,
|
| 1470 |
+
653
|
| 1471 |
+
],
|
| 1472 |
+
"page_idx": 8
|
| 1473 |
+
},
|
| 1474 |
+
{
|
| 1475 |
+
"type": "image",
|
| 1476 |
+
"img_path": "images/0e0b02cf85496f79717116a6140c8a6a6869288710ac576ee46d07bf1265c349.jpg",
|
| 1477 |
+
"image_caption": [
|
| 1478 |
+
"Figure 7: SimPO $\\gamma$ . Each point indicates a run with the corresponding $\\gamma$ value."
|
| 1479 |
+
],
|
| 1480 |
+
"image_footnote": [],
|
| 1481 |
+
"bbox": [
|
| 1482 |
+
119,
|
| 1483 |
+
740,
|
| 1484 |
+
366,
|
| 1485 |
+
865
|
| 1486 |
+
],
|
| 1487 |
+
"page_idx": 8
|
| 1488 |
+
},
|
| 1489 |
+
{
|
| 1490 |
+
"type": "image",
|
| 1491 |
+
"img_path": "images/9a1244e50ea4c902f99202a47a846de106cd0bde67f43627e61763a2f0a6c68e.jpg",
|
| 1492 |
+
"image_caption": [],
|
| 1493 |
+
"image_footnote": [],
|
| 1494 |
+
"bbox": [
|
| 1495 |
+
369,
|
| 1496 |
+
740,
|
| 1497 |
+
621,
|
| 1498 |
+
864
|
| 1499 |
+
],
|
| 1500 |
+
"page_idx": 8
|
| 1501 |
+
},
|
| 1502 |
+
{
|
| 1503 |
+
"type": "image",
|
| 1504 |
+
"img_path": "images/528fcb6d12f0bc8341f8603f840fec4752e3c96d985510f19d53497a7504ae6c.jpg",
|
| 1505 |
+
"image_caption": [],
|
| 1506 |
+
"image_footnote": [],
|
| 1507 |
+
"bbox": [
|
| 1508 |
+
625,
|
| 1509 |
+
739,
|
| 1510 |
+
875,
|
| 1511 |
+
865
|
| 1512 |
+
],
|
| 1513 |
+
"page_idx": 8
|
| 1514 |
+
},
|
| 1515 |
+
{
|
| 1516 |
+
"type": "page_number",
|
| 1517 |
+
"text": "8036",
|
| 1518 |
+
"bbox": [
|
| 1519 |
+
480,
|
| 1520 |
+
928,
|
| 1521 |
+
521,
|
| 1522 |
+
940
|
| 1523 |
+
],
|
| 1524 |
+
"page_idx": 8
|
| 1525 |
+
}
|
| 1526 |
+
]
|
2025/A Practical Analysis of Human Alignment with _PO/2cd002da-3dfa-4ffb-9fba-c06d76c990ec_model.json
ADDED
|
@@ -0,0 +1,2044 @@
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "title",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.227,
|
| 7 |
+
0.08,
|
| 8 |
+
0.768,
|
| 9 |
+
0.1
|
| 10 |
+
],
|
| 11 |
+
"angle": 0,
|
| 12 |
+
"content": "A Practical Analysis of Human Alignment with *PO"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.175,
|
| 18 |
+
0.106,
|
| 19 |
+
0.324,
|
| 20 |
+
0.123
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "Kian Ahrabian\\*"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.451,
|
| 29 |
+
0.106,
|
| 30 |
+
0.546,
|
| 31 |
+
0.123
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "Xihui Lin"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.69,
|
| 40 |
+
0.106,
|
| 41 |
+
0.809,
|
| 42 |
+
0.123
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "Barun Patra"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.157,
|
| 51 |
+
0.135,
|
| 52 |
+
0.339,
|
| 53 |
+
0.154
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "Vishrav Chaudhary\\(^{2}\\)"
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.432,
|
| 62 |
+
0.135,
|
| 63 |
+
0.566,
|
| 64 |
+
0.152
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "Alon Benhaim"
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.696,
|
| 73 |
+
0.135,
|
| 74 |
+
0.802,
|
| 75 |
+
0.154
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "Jay Pujara<sup>1</sup>"
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "title",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.453,
|
| 84 |
+
0.163,
|
| 85 |
+
0.545,
|
| 86 |
+
0.182
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "Xia Song²"
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "text",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.232,
|
| 95 |
+
0.187,
|
| 96 |
+
0.766,
|
| 97 |
+
0.204
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "<sup>1</sup>University of Southern California, Information Sciences Institute"
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "text",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.453,
|
| 106 |
+
0.204,
|
| 107 |
+
0.546,
|
| 108 |
+
0.219
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": "2Microsoft"
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "text",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.239,
|
| 117 |
+
0.221,
|
| 118 |
+
0.757,
|
| 119 |
+
0.238
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "ahrabian@usc.edu,{xihlin, barun.patra@microsoft.com}"
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "text",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.125,
|
| 128 |
+
0.238,
|
| 129 |
+
0.873,
|
| 130 |
+
0.254
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "{vchaudhary, alonbenhaim}@microsoft.com, jpujara@isi.edu, xiao@microsoft.com"
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "title",
|
| 137 |
+
"bbox": [
|
| 138 |
+
0.261,
|
| 139 |
+
0.262,
|
| 140 |
+
0.341,
|
| 141 |
+
0.276
|
| 142 |
+
],
|
| 143 |
+
"angle": 0,
|
| 144 |
+
"content": "Abstract"
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"type": "text",
|
| 148 |
+
"bbox": [
|
| 149 |
+
0.142,
|
| 150 |
+
0.293,
|
| 151 |
+
0.461,
|
| 152 |
+
0.678
|
| 153 |
+
],
|
| 154 |
+
"angle": 0,
|
| 155 |
+
"content": "At the forefront of state-of-the-art human alignment methods are preference optimization methods (*PO). Prior research has often concentrated on identifying the best-performing method, typically involving a grid search over hyperparameters, which can be impractical for general practitioners. In this paper, we examine the robustness of existing state-of-the-art methods to varying hyperparameters in a realistic out-of-distribution (OOD) scenario that mirrors real-world applications of human alignment. Our goal is to empirically find the method that increases the likelihood of achieving better results through the lens of various metrics, such as KL divergence and response length. We also introduce LN-DPO, a simple length-normalized version of DPO that is more stable across hyperparameters, effectively reduces the average response length, and improves performance. Our analysis of state-of-the-art reference-free (i.e., SimPO) and reference-dependent (i.e., DPO and LN-DPO) methods reveals that they perform similarly at their peak (i.e., best possible scenario). However, we uncover that the pattern of change in performance greatly varies as we move away from the best possible scenario."
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"type": "title",
|
| 159 |
+
"bbox": [
|
| 160 |
+
0.115,
|
| 161 |
+
0.693,
|
| 162 |
+
0.26,
|
| 163 |
+
0.708
|
| 164 |
+
],
|
| 165 |
+
"angle": 0,
|
| 166 |
+
"content": "1 Introduction"
|
| 167 |
+
},
|
| 168 |
+
{
|
| 169 |
+
"type": "text",
|
| 170 |
+
"bbox": [
|
| 171 |
+
0.113,
|
| 172 |
+
0.72,
|
| 173 |
+
0.49,
|
| 174 |
+
0.897
|
| 175 |
+
],
|
| 176 |
+
"angle": 0,
|
| 177 |
+
"content": "In recent years, the quality of large language models (LLMs) has been constantly increasing (Chiang et al., 2024), achieving impressive results across tasks and benchmarks (Abdin et al., 2024; AI@Meta, 2024; Achiam et al., 2023; Team, 2023; Yang et al., 2024). However, even with the most rigorous filtering heuristics, the training data (Computer, 2023; Penedo et al., 2024) is typically contaminated with undesirable content that can lead to unacceptable behaviors (Bender et al., 2021; Gehman et al., 2020). To improve the model's"
|
| 178 |
+
},
|
| 179 |
+
{
|
| 180 |
+
"type": "table",
|
| 181 |
+
"bbox": [
|
| 182 |
+
0.515,
|
| 183 |
+
0.26,
|
| 184 |
+
0.88,
|
| 185 |
+
0.394
|
| 186 |
+
],
|
| 187 |
+
"angle": 0,
|
| 188 |
+
"content": "<table><tr><td></td><td>DPO</td><td>LN-DPO</td><td>SimPO</td></tr><tr><td>Mean Score</td><td>1.6</td><td>+0.3%</td><td>+2.7%</td></tr><tr><td>Mean Length</td><td>119.8</td><td>-15.9%</td><td>-22.9%</td></tr><tr><td>KL Divergence</td><td>55.0</td><td>-26.0%</td><td>-20.7%</td></tr><tr><td>Win vs. Chosen</td><td>77.1%</td><td>+0.8%</td><td>+3.1%</td></tr><tr><td>Win vs. SFT</td><td>60.7%</td><td>+2.1%</td><td>+5.0%</td></tr></table>"
},
{
"type": "table_caption",
"bbox": [0.509, 0.403, 0.885, 0.447],
"angle": 0,
"content": "Table 1: Best *PO Performance. The metrics are normalized by the respective DPO performance. The underlined values indicate the best performance."
},
{
"type": "text",
"bbox": [0.508, 0.465, 0.885, 0.593],
"angle": 0,
"content": "alignment with human preferences, the de-facto approach has been to learn from human/AI-generated preference data (e.g., a chosen and a rejected response for each prompt). In particular, off-policy preference optimization methods (*PO) have been prevalent given their good performance and ease of implementation (Rafailov et al., 2024; Hong et al., 2024; Meng et al., 2024)."
},
{
"type": "text",
"bbox": [0.508, 0.597, 0.884, 0.822],
"angle": 0,
"content": "One commonly occurring practice when reporting the performance of new methods is to compare their best-performing variant (after a hyperparameter grid search) to a default baseline with a fixed set of hyperparameters. However, from a practical perspective for future users, these comparisons do not provide a good answer to the question of which method is expected to achieve higher performance given a fixed budget for hyperparameter search, as doing broad grid searches is often computationally infeasible for many practitioners. To this end, in this work, we aim to empirically identify the method that is more robust to hyperparameter variations while still being competitive in performance."
},
{
"type": "text",
"bbox": [0.508, 0.826, 0.885, 0.922],
"angle": 0,
"content": "We set up our experiments in a realistic out-of-distribution (OOD) setting, focused on safety and helpfulness domains, where the train and test datasets share a common core goal, but their samples are generated from different distributions (e.g., AI and human expert). This setting resembles real-"
},
{
"type": "page_footnote",
"bbox": [0.137, 0.907, 0.419, 0.921],
"angle": 0,
"content": "*Work done during an internship at Microsoft."
},
{
"type": "page_number",
"bbox": [0.481, 0.928, 0.521, 0.941],
"angle": 0,
"content": "8028"
},
{
"type": "footer",
"bbox": [0.319, 0.946, 0.678, 0.959],
"angle": 0,
"content": "Findings of the Association for Computational Linguistics:"
},
{
"type": "footer",
"bbox": [0.4, 0.96, 0.598, 0.972],
"angle": 0,
"content": "NAACL 2025, pages 8028-8036"
},
{
"type": "footer",
"bbox": [0.275, 0.973, 0.724, 0.986],
"angle": 0,
"content": "April 29 - May 4, 2025 ©2025 Association for Computational Linguistics"
}
],
[
{
"type": "table",
"bbox": [0.127, 0.082, 0.873, 0.209],
"angle": 0,
"content": "<table><tr><td>Method</td><td>Objective</td><td>Hyperparameters</td></tr><tr><td>DPO</td><td>- log σ(β log πθ(yw|x)/πref(yw|x) - β log πθ(yl|x)/πref(yl|x))</td><td>β ∈ {0.01, 0.05, 0.1, 0.3, 0.5}</td></tr><tr><td>SimPO</td><td>- log σ(β/|yw| log πθ(yw|x) - β/|yl| log πθ(yl|x) - γ)</td><td>β ∈ {1.0, 1.5, 2.0, 2.5}; γ ∈ {0.5, 0.8, 1.0, 1.2, 1.4, 1.6}</td></tr><tr><td>LN-DPO</td><td>- log σ(β/|yw| log πθ(yw|x)/πref(yw|x) - β/|yl| log πθ(yl|x)/πref(yl|x))</td><td>β ∈ {1.0, 1.5, 2.0, 2.5, 3.0, 3.5}</td></tr></table>"
},
{
"type": "table_caption",
"bbox": [0.113, 0.218, 0.883, 0.248],
"angle": 0,
"content": "Table 2: *PO Optimization Objectives. The preference data is formulated as \\( D = (x, y_w, y_l) \\), where \\( x \\) is the prompt and \\( y_w \\) and \\( y_l \\) are the chosen and rejected responses."
},
{
"type": "text",
"bbox": [0.112, 0.266, 0.49, 0.49],
"angle": 0,
"content": "world scenarios as it simulates the release of large generative models for public use. Moreover, to better understand the behavior of the state-of-the-art models, we take the best-performing reference-free and reference-dependent models (as reported by Meng et al. (2024)) and analyze them through the lens of standard metrics such as KL divergence, response length, and win rate. We also introduce an embarrassingly simple length-normalized extension of vanilla Direct Preference Optimization (DPO) (Rafailov et al., 2024), LN-DPO, that effectively mitigates the issue of lengthy generations without any apparent performance degradation<sup>1</sup>. In summary, our contributions are as follows:"
},
{
"type": "text",
"bbox": [0.137, 0.493, 0.489, 0.558],
"angle": 0,
"content": "- We examine state-of-the-art reference-free and reference-dependent preference optimization methods across a wide range of hyperparameters in a real-world setup."
},
{
"type": "text",
"bbox": [0.137, 0.563, 0.49, 0.626],
"angle": 0,
"content": "- We analyze the performance of these methods on critical metrics such as mean response length, mean score on a gold reward model, win rate vs. chosen and SFT, and KL vs. SFT."
},
{
"type": "text",
"bbox": [0.137, 0.632, 0.49, 0.711],
"angle": 0,
"content": "- We introduce and examine LN-DPO, a simple length-normalized version of DPO that is more stable across hyperparameters, effectively reduces the average response length, and improves performance."
},
{
"type": "list",
"bbox": [0.137, 0.493, 0.49, 0.711],
"angle": 0,
"content": null
},
{
"type": "title",
"bbox": [0.114, 0.723, 0.271, 0.738],
"angle": 0,
"content": "2 Related Work"
},
{
"type": "text",
"bbox": [0.113, 0.748, 0.49, 0.877],
"angle": 0,
"content": "Since the introduction of DPO (Rafailov et al., 2024), there has been a body of work with new optimization objectives improving performance and efficiency (Azar et al., 2024; Tang et al., 2024; Hong et al., 2024; Rosset et al., 2024; Meng et al., 2024; Xu et al., 2024a; Ethayarajh et al., 2024). These methods can be partitioned into two groups: reference-free (Meng et al., 2024; Hong et al.,"
},
{
"type": "text",
"bbox": [0.508, 0.266, 0.885, 0.427],
"angle": 0,
"content": "2024) and reference-dependent (Rafailov et al., 2024; Park et al., 2024). Reference-free methods generally benefit from fast training runs, while reference-dependent methods have terms baked into their objective to control divergence from the reference model. In this work, we compare SimPO (Meng et al., 2024), a recent state-of-the-art reference-free method, with DPO and LN-DPO as reference-dependent methods (see Appendix A for extended related work)."
},
{
"type": "title",
"bbox": [0.509, 0.442, 0.719, 0.459],
"angle": 0,
"content": "3 Experimental Setup"
},
{
"type": "title",
"bbox": [0.509, 0.47, 0.625, 0.484],
"angle": 0,
"content": "3.1 Datasets"
},
{
"type": "text",
"bbox": [0.508, 0.493, 0.885, 0.655],
"angle": 0,
"content": "For our datasets, we follow the setup introduced by Xu et al. (2024b). Specifically, we use the double safe/unsafe filtered train subset of SafeRLHF (Dai et al., 2024) for training and the test subset of HH-RLHF (Ganguli et al., 2022) for evaluation. This setup closely resembles real-world scenarios where even though models are trained on various domains (e.g., safety and helpfulness in our experiments), they have to generalize to similar unseen queries while interacting with the users."
},
{
"type": "title",
"bbox": [0.509, 0.669, 0.615, 0.683],
"angle": 0,
"content": "3.2 Models"
},
{
"type": "text",
"bbox": [0.508, 0.692, 0.885, 0.837],
"angle": 0,
"content": "For all our experiments, we chose the Phi-3 Medium model (Abdin et al., 2024) due to its high performance across benchmarks and small size, ensuring computational tractability. To evaluate the trained models, we use the OpenAssistant reward model (Köpf et al., 2024) to score the quality of their generated responses. We chose this model due to its small size and use in prior works (Xu et al., 2024b), ensuring fast and correct evaluations."
},
{
"type": "title",
"bbox": [0.509, 0.851, 0.749, 0.867],
"angle": 0,
"content": "3.3 Optimization Objectives"
},
{
"type": "text",
"bbox": [0.508, 0.874, 0.885, 0.922],
"angle": 0,
"content": "Considering the performances reported by Meng et al. (2024), we choose DPO as our reference-dependent method and SimPO as our reference-free"
},
{
"type": "page_footnote",
"bbox": [0.113, 0.883, 0.49, 0.922],
"angle": 0,
"content": "Concurrently, Meng et al. (2024) have added a similar method to their experiments (updated on July 7th, 2024). Here, we present a more thorough analysis and comparison."
},
{
"type": "page_number",
"bbox": [0.481, 0.928, 0.521, 0.941],
"angle": 0,
"content": "8029"
}
],
[
{
"type": "image",
"bbox": [0.139, 0.085, 0.38, 0.261],
"angle": 0,
"content": null
},
{
"type": "image",
"bbox": [0.383, 0.086, 0.619, 0.262],
"angle": 0,
"content": null
},
{
"type": "image",
"bbox": [0.624, 0.099, 0.861, 0.262],
"angle": 0,
"content": null
},
{
"type": "image_caption",
"bbox": [0.113, 0.276, 0.882, 0.307],
"angle": 0,
"content": "Figure 1: *PO Performance Distribution. Each sample in the distribution represents the performance of one set of hyperparameters on the denoted metric. The dashed line indicates the performance of the initial SFT model."
},
{
"type": "image",
"bbox": [0.122, 0.315, 0.368, 0.453],
"angle": 0,
"content": null
},
{
"type": "image",
"bbox": [0.371, 0.316, 0.622, 0.453],
"angle": 0,
"content": null
},
{
"type": "image",
"bbox": [0.626, 0.326, 0.876, 0.453],
"angle": 0,
"content": null
},
{
"type": "image_caption",
"bbox": [0.113, 0.467, 0.882, 0.495],
"angle": 0,
"content": "Figure 2: Response Length. The top \\(\\mathrm{k\\%}\\) (\\(k \\in \\{1, 10, 25\\}\\)) denotes the percentage of best-performing hyperparameters taken from each method's runs."
},
{
"type": "text",
"bbox": [0.113, 0.514, 0.487, 0.641],
"angle": 0,
"content": "method. While DPO has an implicit length normalization through the reference model, the variance of the reward (i.e., \\(\\log \\frac{\\pi_{\\theta}}{\\pi_{\\mathrm{ref}}}\\)) increases with response length. As such, inspired by the explicit length regularization in SimPO and R-DPO (Park et al., 2024), we further normalize it by the response length, similar to SimPO, which we call LN-DPO (see Section 3.4 for more details)."
},
{
"type": "title",
"bbox": [0.114, 0.654, 0.486, 0.669],
"angle": 0,
"content": "3.4 Connection between LN-DPO and SimPO"
},
{
"type": "text",
"bbox": [0.114, 0.674, 0.486, 0.707],
"angle": 0,
"content": "LN-DPO is similar to an adaptive-margin version of SimPO with the per-sample margin defined as"
},
{
"type": "equation",
"bbox": [0.137, 0.717, 0.487, 0.751],
"angle": 0,
"content": "\\[\n\\gamma_{w,l} = \\frac{\\log \\pi_{\\mathrm{ref}}(y_w \\mid x)}{|y_w|} - \\frac{\\log \\pi_{\\mathrm{ref}}(y_l \\mid x)}{|y_l|}. \\tag{1}\n\\]"
},
{
"type": "text",
"bbox": [0.113, 0.761, 0.489, 0.921],
"angle": 0,
"content": "Essentially, this adaptive margin encourages larger margins for pairs that already have large margins under the reference policy. Depending on the quality of the reference model and the labels, this change could be beneficial compared to SimPO's constant margin. The adaptive margin focuses more on \"easier\" pairs (i.e., pairs that have some prior evidence of being different) and less on \"harder\" pairs (i.e., pairs that are closer), which means that LN-DPO is potentially less prone to overfitting and less sensitive"
},
{
"type": "text",
"bbox": [0.509, 0.515, 0.632, 0.53],
"angle": 0,
"content": "to wrong labels."
},
{
"type": "title",
"bbox": [0.509, 0.545, 0.7, 0.563],
"angle": 0,
"content": "4 Training Regimen"
},
{
"type": "text",
"bbox": [0.508, 0.573, 0.884, 0.862],
"angle": 0,
"content": "Following common practice, before the preference optimization step we do a supervised fine-tuning (SFT) step. Specifically, we first run a grid search over the following hyperparameters: epochs \\(\\in \\{1, 3\\}\\) and learning rate \\(\\in \\{1e\\!-\\!6, 3e\\!-\\!6, 1e\\!-\\!5, 2e\\!-\\!5\\}\\). Then we evaluate the final checkpoints against the test set and choose the one with the highest performance. This procedure ensures that the preference optimization methods are initialized from a good checkpoint. For the preference optimization methods, we run a grid search using 1) the same ranges as SFT for epochs and learning rate and 2) common values for method-specific hyperparameters as used in prior works (Meng et al., 2024; Rafailov et al., 2024; Hong et al., 2024). Table 2 presents the method-specific ranges used in our experiments. In all of our experiments, the batch size is set to 256."
},
{
"type": "title",
"bbox": [0.509, 0.877, 0.612, 0.893],
"angle": 0,
"content": "5 Metrics"
},
{
"type": "text",
"bbox": [0.509, 0.906, 0.882, 0.922],
"angle": 0,
"content": "Our analysis focuses on the following five metrics:"
},
{
"type": "page_number",
"bbox": [0.481, 0.928, 0.521, 0.941],
"angle": 0,
"content": "8030"
}
],
[
{
"type": "image",
"bbox": [0.121, 0.085, 0.368, 0.226],
"angle": 0,
"content": null
},
{
"type": "image",
"bbox": [0.37, 0.086, 0.621, 0.224],
"angle": 0,
"content": null
},
{
"type": "image",
"bbox": [0.625, 0.097, 0.88, 0.224],
"angle": 0,
"content": null
},
{
"type": "image_caption",
"bbox": [0.113, 0.238, 0.882, 0.267],
"angle": 0,
"content": "Figure 3: KL Divergence. The top \\(\\mathrm{k\\%}\\) (\\(k \\in \\{1, 10, 25\\}\\)) denotes the percentage of best-performing hyperparameters taken from each method's runs."
},
{
"type": "text",
"bbox": [0.136, 0.292, 0.489, 0.338],
"angle": 0,
"content": "- Mean Score: The average score of the generated responses, as judged by the gold reward model."
},
{
"type": "text",
"bbox": [0.136, 0.346, 0.488, 0.409],
"angle": 0,
"content": "- Win vs. Chosen: The fraction of samples where the gold reward model assigns a higher score to the generated response compared to the chosen response in the dataset."
},
{
"type": "text",
"bbox": [0.136, 0.415, 0.488, 0.478],
"angle": 0,
"content": "- Win vs. SFT: The fraction of samples where the gold reward model scores the generated response higher than the initial SFT model's response."
},
{
"type": "text",
"bbox": [0.136, 0.484, 0.488, 0.532],
"angle": 0,
"content": "- KL divergence: The summed difference of log probabilities between the SFT and the trained models over the samples."
},
{
"type": "text",
"bbox": [0.136, 0.538, 0.488, 0.585],
"angle": 0,
"content": "- Response length: The number of tokens in the generated response under the tokenization space of the base model."
},
{
"type": "list",
"bbox": [0.136, 0.292, 0.489, 0.585],
"angle": 0,
"content": null
},
{
"type": "title",
"bbox": [0.114, 0.598, 0.353, 0.615],
"angle": 0,
"content": "6 Implementation Details"
},
{
"type": "text",
"bbox": [0.113, 0.623, 0.49, 0.768],
"angle": 0,
"content": "We generate all the responses by sampling with a temperature \\(= 0.7\\) and top_p \\(= 0.95\\). Moreover, max_generation_length is set to 256 across all experiments, following the setup by Xu et al. (2024b). All our experiments are carried out on a cluster with \\(256 \\times \\mathrm{A}100\\) 80GB GPUs. Finally, we implemented our code using the Transformers (Wolf et al., 2020), TRL (von Werra et al., 2020), and PyTorch (Paszke et al., 2019) libraries."
},
{
"type": "title",
"bbox": [0.114, 0.78, 0.337, 0.796],
"angle": 0,
"content": "7 Experimental Results"
},
{
"type": "title",
"bbox": [0.114, 0.805, 0.388, 0.82],
"angle": 0,
"content": "7.1 Hyperparameter Robustness"
},
{
"type": "text",
"bbox": [0.113, 0.826, 0.49, 0.922],
"angle": 0,
"content": "Best Performance. Following the common practice, we compare the best performance achieved by each method in Table 1. As evident, at their peaks, SimPO, LN-DPO, and DPO score similarly (within 0.05 points on average). However, SimPO and LN-DPO show an edge in terms of the rest"
},
{
"type": "table",
"bbox": [0.557, 0.289, 0.839, 0.368],
"angle": 0,
"content": "<table><tr><td>%</td><td>DPO</td><td>LN-DPO</td><td>SimPO</td></tr><tr><td>DPO</td><td>-</td><td>49.04</td><td>47.51</td></tr><tr><td>LN-DPO</td><td>49.47</td><td>-</td><td>46.43</td></tr><tr><td>SimPO</td><td>51.12</td><td>51.09</td><td>-</td></tr></table>"
},
{
"type": "table_caption",
"bbox": [0.671, 0.373, 0.722, 0.385],
"angle": 0,
"content": "(a) Best"
},
{
"type": "table",
"bbox": [0.556, 0.392, 0.838, 0.472],
"angle": 0,
"content": "<table><tr><td>%</td><td>DPO</td><td>LN-DPO</td><td>SimPO</td></tr><tr><td>DPO</td><td>-</td><td>45.72</td><td>44.33</td></tr><tr><td>LN-DPO</td><td>51.77</td><td>-</td><td>47.28</td></tr><tr><td>SimPO</td><td>54.34</td><td>50.13</td><td>-</td></tr></table>"
},
{
"type": "table_caption",
"bbox": [0.638, 0.476, 0.755, 0.488],
"angle": 0,
"content": "(b) 75th Percentile"
},
{
"type": "table_caption",
"bbox": [0.508, 0.499, 0.885, 0.557],
"angle": 0,
"content": "Table 3: Head-to-head *PO Comparison. Each cell represents the win rate of the row method over the column method. The underlined values indicate the row method beating the column method."
},
{
"type": "text",
"bbox": [0.508, 0.581, 0.885, 0.694],
"angle": 0,
"content": "of the metrics. Specifically, we can observe the effectiveness of the length normalization term. We also notice a significant decrease in KL divergence. However, SimPO's KL decreases less than LN-DPO's, indicating a larger divergence from SFT. For more details on tuning these models, see Appendix B."
},
{
"type": "text",
"bbox": [0.508, 0.713, 0.885, 0.922],
"angle": 0,
"content": "Head-to-head Performance. While comparing the raw performance achieved on the desired metrics is usually good enough to contrast different methods, there are potential cases where the averaging could be exploited (e.g., outliers with high rewards). Hence, it is also essential to do a head-to-head per-sample comparison, which provides more fine-grained insights. Table 3 compares each method's best and 75th-percentile performance. Notably, we observe a sharp performance drop in DPO from the best to the top \\(25\\%\\) model, in contrast to the other two. This occurrence highlights the practical flaw in only comparing the best performances."
},
{
"type": "page_number",
"bbox": [0.481, 0.929, 0.519, 0.941],
"angle": 0,
"content": "8031"
}
],
[
{
"type": "text",
"bbox": [0.113, 0.085, 0.493, 0.342],
"angle": 0,
"content": "Expected Performance. Given the limited resources that most users have, it is extremely difficult to run broad hyperparameter searches to find the best-performing combination. As such, it becomes crucial to analyze hyperparameter robustness, which provides insights into the expectation of finding a good hyperparameter set from a limited search. Figure 1 presents the performance distribution of *PO methods following a grid search over the hyperparameters denoted in Table 2 and Section 4. As evident, SimPO and LN-DPO effectively increase the average performance (i.e., shifting the distributions to the right) across hyperparameters, showcasing their superiority. Note that we stretched the range of hyperparameters until a plateau or an extreme variance was observed."
},
{
"type": "title",
"bbox": [0.114, 0.356, 0.295, 0.372],
"angle": 0,
"content": "7.2 Response Length"
},
{
"type": "text",
"bbox": [0.113, 0.378, 0.49, 0.619],
"angle": 0,
"content": "Since length exploitation is a critical issue (Park et al., 2024), we compare the response lengths across samples generated by the top \\(\\mathrm{k\\%}\\) (\\(k \\in \\{1, 10, 25\\}\\)) of each method's best-performing hyperparameters. As illustrated in Figure 2, on the best set of hyperparameters (i.e., top \\(1\\%\\)), the non-DPO methods showcase a left shift in the length distribution (compared to DPO), which is a desired effect. However, this phenomenon starts to diminish as we include worse-performing hyperparameters. For example, LN-DPO has a higher rate than DPO in the tail end of the top \\(25\\%\\) distribution. Overall, we observed that both length-normalized methods perform better than DPO, with SimPO producing the shortest responses across the distribution."
},
{
"type": "title",
"bbox": [0.114, 0.633, 0.357, 0.648],
"angle": 0,
"content": "7.3 KL Divergence (vs. SFT)"
},
{
"type": "text",
"bbox": [0.113, 0.655, 0.49, 0.848],
"angle": 0,
"content": "Since reference-free methods are not normalized against a reference policy (e.g., the SFT model), reward hacking might occur (i.e., lower loss with degraded performance). Therefore, we compare the KL divergence in Figure 3 across samples generated by the top \\(\\mathrm{k\\%}\\) (\\(k \\in \\{1, 10, 25\\}\\)) of each method's best-performing hyperparameters. As evident, both SimPO and LN-DPO achieve lower KLs at their peak. However, as we move toward worse-performing models, DPO achieves lower KL (at \\(10\\%\\)). This phenomenon is due to many DPO runs failing to learn beyond the SFT model."
},
{
"type": "title",
"bbox": [0.113, 0.862, 0.458, 0.879],
"angle": 0,
"content": "8 When to use LN-DPO over SimPO?"
},
{
"type": "text",
"bbox": [0.113, 0.89, 0.489, 0.922],
"angle": 0,
"content": "While SimPO achieves superior performance on most metrics compared to LN-DPO, the lack of a"
},
{
"type": "text",
"bbox": [0.508, 0.085, 0.885, 0.23],
"angle": 0,
"content": "reference policy regularization could lead to drastic divergence from the initial checkpoint, as also shown in our experiments. This issue could then cause a degradation of performance on other benchmarks, which is a critical pitfall (as also observed in Korbak et al. (2022)). As such, we believe there are various scenarios where LN-DPO should be preferred to SimPO. We leave further experiments in this direction to future work."
},
{
"type": "title",
"bbox": [0.509, 0.241, 0.642, 0.256],
"angle": 0,
"content": "9 Conclusion"
},
{
"type": "text",
"bbox": [0.508, 0.266, 0.885, 0.506],
"angle": 0,
"content": "In this work, we introduce LN-DPO, a length-normalized variation of DPO that reduces the average response length while staying reference-dependent. Moreover, we present a thorough analysis of LN-DPO and two state-of-the-art reference-dependent and reference-free preference optimization methods in a simulated real-world scenario for safety and helpfulness domains. Specifically, we cover the behavior of these methods across a wide range of hyperparameters under metrics such as mean response length, KL divergence (vs. SFT), and win rate (vs. chosen and SFT). Our experiments showcase state-of-the-art methods' strengths and weaknesses and provide insights for other practitioners."
},
{
"type": "title",
"bbox": [0.51, 0.518, 0.615, 0.533],
"angle": 0,
"content": "Limitations"
},
{
"type": "text",
"bbox": [0.508, 0.543, 0.884, 0.736],
"angle": 0,
"content": "Due to the extremely high costs of running such experiments (i.e., roughly 86,000 GPU hours for the current experiments), in this work we only experimented with a small set of models, methods, and datasets. While this might limit generalizability, we believe the existence of such an analysis is critical to help practitioners save costs. Moreover, since the conclusion of our experiments, new reward models with higher performance have been released (e.g., ArmoRM (Wang et al., 2024)); however, we still rely on older, smaller models to keep the evaluation tractable given such a high number of runs."
},
{
"type": "title",
"bbox": [0.509, 0.747, 0.682, 0.763],
"angle": 0,
"content": "Acknowledgements"
},
{
"type": "text",
"bbox": [0.508, 0.772, 0.884, 0.819],
"angle": 0,
"content": "This work was partially funded by the Defense Advanced Research Projects Agency with the award HR00112220046."
},
{
"type": "title",
"bbox": [0.51, 0.846, 0.61, 0.861],
"angle": 0,
"content": "References"
},
{
"type": "text",
"bbox": [0.51, 0.868, 0.885, 0.922],
"angle": 0,
"content": "Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. 2024. Phi-3 technical report: A highly"
},
{
"type": "page_number",
"bbox": [0.481, 0.928, 0.521, 0.941],
"angle": 0,
"content": "8032"
}
],
[
|
| 1157 |
+
{
|
| 1158 |
+
"type": "ref_text",
|
| 1159 |
+
"bbox": [
|
| 1160 |
+
0.135,
|
| 1161 |
+
0.086,
|
| 1162 |
+
0.489,
|
| 1163 |
+
0.113
|
| 1164 |
+
],
|
| 1165 |
+
"angle": 0,
|
| 1166 |
+
"content": "capable language model locally on your phone. arXiv preprint arXiv:2404.14219."
|
| 1167 |
+
},
|
| 1168 |
+
{
|
| 1169 |
+
"type": "ref_text",
|
| 1170 |
+
"bbox": [
|
| 1171 |
+
0.117,
|
| 1172 |
+
0.124,
|
| 1173 |
+
0.49,
|
| 1174 |
+
0.19
|
| 1175 |
+
],
|
| 1176 |
+
"angle": 0,
|
| 1177 |
+
"content": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774."
|
| 1178 |
+
},
|
| 1179 |
+
{
|
| 1180 |
+
"type": "ref_text",
|
| 1181 |
+
"bbox": [
|
| 1182 |
+
0.117,
|
| 1183 |
+
0.201,
|
| 1184 |
+
0.378,
|
| 1185 |
+
0.214
|
| 1186 |
+
],
|
| 1187 |
+
"angle": 0,
|
| 1188 |
+
"content": "AI@Meta. 2024. Llama 3 model card."
|
| 1189 |
+
},
|
| 1190 |
+
{
|
| 1191 |
+
"type": "ref_text",
|
| 1192 |
+
"bbox": [
|
| 1193 |
+
0.117,
|
| 1194 |
+
0.226,
|
| 1195 |
+
0.49,
|
| 1196 |
+
0.317
|
| 1197 |
+
],
|
| 1198 |
+
"angle": 0,
|
| 1199 |
+
"content": "Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello. 2024. A general theoretical paradigm to understand learning from human preferences. In International Conference on Artificial Intelligence and Statistics, pages 4447-4455. PMLR."
|
| 1200 |
+
},
|
| 1201 |
+
{
|
| 1202 |
+
"type": "ref_text",
|
| 1203 |
+
"bbox": [
|
| 1204 |
+
0.117,
|
| 1205 |
+
0.329,
|
| 1206 |
+
0.489,
|
| 1207 |
+
0.408
|
| 1208 |
+
],
|
| 1209 |
+
"angle": 0,
|
| 1210 |
+
"content": "Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862."
|
| 1211 |
+
},
|
| 1212 |
+
{
|
| 1213 |
+
"type": "ref_text",
|
| 1214 |
+
"bbox": [
|
| 1215 |
+
0.117,
|
| 1216 |
+
0.419,
|
| 1217 |
+
0.489,
|
| 1218 |
+
0.497
|
| 1219 |
+
],
|
| 1220 |
+
"angle": 0,
|
| 1221 |
+
"content": "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073."
|
| 1222 |
+
},
|
| 1223 |
+
{
|
| 1224 |
+
"type": "ref_text",
|
| 1225 |
+
"bbox": [
|
| 1226 |
+
0.117,
|
| 1227 |
+
0.509,
|
| 1228 |
+
0.489,
|
| 1229 |
+
0.588
|
| 1230 |
+
],
|
| 1231 |
+
"angle": 0,
|
| 1232 |
+
"content": "Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 610-623."
|
| 1233 |
+
},
|
| 1234 |
+
{
|
| 1235 |
+
"type": "ref_text",
|
| 1236 |
+
"bbox": [
|
| 1237 |
+
0.117,
|
| 1238 |
+
0.599,
|
| 1239 |
+
0.49,
|
| 1240 |
+
0.651
|
| 1241 |
+
],
|
| 1242 |
+
"angle": 0,
|
| 1243 |
+
"content": "Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324-345."
|
| 1244 |
+
},
|
| 1245 |
+
{
|
| 1246 |
+
"type": "ref_text",
|
| 1247 |
+
"bbox": [
|
| 1248 |
+
0.117,
|
| 1249 |
+
0.663,
|
| 1250 |
+
0.49,
|
| 1251 |
+
0.729
|
| 1252 |
+
],
|
| 1253 |
+
"angle": 0,
|
| 1254 |
+
"content": "Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, and Bryan Catanzaro. 2024. Odin: Disentangled reward mitigates hacking in rlhf. arXiv preprint arXiv:2402.07319."
|
| 1255 |
+
},
|
| 1256 |
+
{
|
| 1257 |
+
"type": "ref_text",
|
| 1258 |
+
"bbox": [
|
| 1259 |
+
0.117,
|
| 1260 |
+
0.74,
|
| 1261 |
+
0.49,
|
| 1262 |
+
0.819
|
| 1263 |
+
],
|
| 1264 |
+
"angle": 0,
|
| 1265 |
+
"content": "Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E Gonzalez, et al. 2024. Chatbot arena: An open platform for evaluating llms by human preference. arXiv preprint arXiv:2403.04132."
|
| 1266 |
+
},
|
| 1267 |
+
{
|
| 1268 |
+
"type": "ref_text",
|
| 1269 |
+
"bbox": [
|
| 1270 |
+
0.117,
|
| 1271 |
+
0.83,
|
| 1272 |
+
0.49,
|
| 1273 |
+
0.883
|
| 1274 |
+
],
|
| 1275 |
+
"angle": 0,
|
| 1276 |
+
"content": "Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30."
|
| 1277 |
+
},
|
| 1278 |
+
{
|
| 1279 |
+
"type": "ref_text",
|
| 1280 |
+
"bbox": [
|
| 1281 |
+
0.117,
|
| 1282 |
+
0.894,
|
| 1283 |
+
0.489,
|
| 1284 |
+
0.921
|
| 1285 |
+
],
|
| 1286 |
+
"angle": 0,
|
| 1287 |
+
"content": "Together Computer. 2023. Redpajama: An open source recipe to reproduce llama training dataset."
|
| 1288 |
+
},
|
| 1289 |
+
{
|
| 1290 |
+
"type": "list",
|
| 1291 |
+
"bbox": [
|
| 1292 |
+
0.117,
|
| 1293 |
+
0.086,
|
| 1294 |
+
0.49,
|
| 1295 |
+
0.921
|
| 1296 |
+
],
|
| 1297 |
+
"angle": 0,
|
| 1298 |
+
"content": null
|
| 1299 |
+
},
|
| 1300 |
+
{
|
| 1301 |
+
"type": "ref_text",
|
| 1302 |
+
"bbox": [
|
| 1303 |
+
0.512,
|
| 1304 |
+
0.086,
|
| 1305 |
+
0.885,
|
| 1306 |
+
0.152
|
| 1307 |
+
],
|
| 1308 |
+
"angle": 0,
|
| 1309 |
+
"content": "Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, and Yaodong Yang. 2024. Safe rlhf: Safe reinforcement learning from human feedback. In The Twelfth International Conference on Learning Representations."
|
| 1310 |
+
},
|
| 1311 |
+
{
|
| 1312 |
+
"type": "ref_text",
|
| 1313 |
+
"bbox": [
|
| 1314 |
+
0.512,
|
| 1315 |
+
0.16,
|
| 1316 |
+
0.885,
|
| 1317 |
+
0.213
|
| 1318 |
+
],
|
| 1319 |
+
"angle": 0,
|
| 1320 |
+
"content": "Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 2024. Kto: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306."
|
| 1321 |
+
},
|
| 1322 |
+
{
|
| 1323 |
+
"type": "ref_text",
|
| 1324 |
+
"bbox": [
|
| 1325 |
+
0.512,
|
| 1326 |
+
0.222,
|
| 1327 |
+
0.885,
|
| 1328 |
+
0.301
|
| 1329 |
+
],
|
| 1330 |
+
"angle": 0,
|
| 1331 |
+
"content": "Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, et al. 2022. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858."
|
| 1332 |
+
},
|
| 1333 |
+
{
|
| 1334 |
+
"type": "ref_text",
|
| 1335 |
+
"bbox": [
|
| 1336 |
+
0.512,
|
| 1337 |
+
0.309,
|
| 1338 |
+
0.885,
|
| 1339 |
+
0.361
|
| 1340 |
+
],
|
| 1341 |
+
"angle": 0,
|
| 1342 |
+
"content": "Leo Gao, John Schulman, and Jacob Hilton. 2023. Scaling laws for reward model overoptimization. In International Conference on Machine Learning, pages 10835-10866. PMLR."
|
| 1343 |
+
},
|
| 1344 |
+
{
|
| 1345 |
+
"type": "ref_text",
|
| 1346 |
+
"bbox": [
|
| 1347 |
+
0.512,
|
| 1348 |
+
0.37,
|
| 1349 |
+
0.885,
|
| 1350 |
+
0.424
|
| 1351 |
+
],
|
| 1352 |
+
"angle": 0,
|
| 1353 |
+
"content": "Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. arXiv preprint arXiv:2009.11462."
|
| 1354 |
+
},
|
| 1355 |
+
{
|
| 1356 |
+
"type": "ref_text",
|
| 1357 |
+
"bbox": [
|
| 1358 |
+
0.512,
|
| 1359 |
+
0.432,
|
| 1360 |
+
0.885,
|
| 1361 |
+
0.472
|
| 1362 |
+
],
|
| 1363 |
+
"angle": 0,
|
| 1364 |
+
"content": "Jiwoo Hong, Noah Lee, and James Thorne. 2024. Reference-free monolithic preference optimization with odds ratio. arXiv preprint arXiv:2403.07691."
|
| 1365 |
+
},
|
| 1366 |
+
{
|
| 1367 |
+
"type": "ref_text",
|
| 1368 |
+
"bbox": [
|
| 1369 |
+
0.512,
|
| 1370 |
+
0.48,
|
| 1371 |
+
0.885,
|
| 1372 |
+
0.546
|
| 1373 |
+
],
|
| 1374 |
+
"angle": 0,
|
| 1375 |
+
"content": "Shengyi Huang, Michael Noukhovitch, Arian Hosseini, Kashif Rasul, Weixun Wang, and Lewis Tunstall. 2024a. The n+ implementation details of rlhf with ppo: A case study on tl; dr summarization. arXiv preprint arXiv:2403.17031."
|
| 1376 |
+
},
|
| 1377 |
+
{
|
| 1378 |
+
"type": "ref_text",
|
| 1379 |
+
"bbox": [
|
| 1380 |
+
0.512,
|
| 1381 |
+
0.554,
|
| 1382 |
+
0.885,
|
| 1383 |
+
0.647
|
| 1384 |
+
],
|
| 1385 |
+
"angle": 0,
|
| 1386 |
+
"content": "Shengyi Costa Huang, Tianlin Liu, and Leandro von Werra. 2024b. The n implementation details of rlhf with ppo. In *ICLR Blogposts* 2024. https://d2jud02ci9yv69.cloudfront.net/2024-05-07-the-n-implementation-details-of-rlhf-with-ppo130/blog/the-n-implementation-details-of-rlhf-with-ppo/."
|
| 1387 |
+
},
|
| 1388 |
+
{
|
| 1389 |
+
"type": "ref_text",
|
| 1390 |
+
"bbox": [
|
| 1391 |
+
0.512,
|
| 1392 |
+
0.654,
|
| 1393 |
+
0.885,
|
| 1394 |
+
0.733
|
| 1395 |
+
],
|
| 1396 |
+
"angle": 0,
|
| 1397 |
+
"content": "Hamish Ivison, Yizhong Wang, Jiacheng Liu, Zeqiu Wu, Valentina Pyatkin, Nathan Lambert, Noah A Smith, Yejin Choi, and Hannaneh Hajishirzi. 2024. Unpacking dpo and ppo: Disentangling best practices for learning from preference feedback. arXiv preprint arXiv:2406.09279."
|
| 1398 |
+
},
|
| 1399 |
+
{
|
| 1400 |
+
"type": "ref_text",
|
| 1401 |
+
"bbox": [
|
| 1402 |
+
0.512,
|
| 1403 |
+
0.741,
|
| 1404 |
+
0.885,
|
| 1405 |
+
0.834
|
| 1406 |
+
],
|
| 1407 |
+
"angle": 0,
|
| 1408 |
+
"content": "Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi Rui Tam, Keith Stevens, Abdullah Barhoum, Duc Nguyen, Oliver Stanley, Richard Nagyfi, et al. 2024. Openassistant conversations-democratizing large language model alignment. Advances in Neural Information Processing Systems, 36."
|
| 1409 |
+
},
|
| 1410 |
+
{
|
| 1411 |
+
"type": "ref_text",
|
| 1412 |
+
"bbox": [
|
| 1413 |
+
0.512,
|
| 1414 |
+
0.842,
|
| 1415 |
+
0.885,
|
| 1416 |
+
0.921
|
| 1417 |
+
],
|
| 1418 |
+
"angle": 0,
|
| 1419 |
+
"content": "Tomasz Korbak, Ethan Perez, and Christopher Buckley. 2022. RL with KL penalties is better viewed as Bayesian inference. In *Findings of the Association for Computational Linguistics: EMNLP* 2022, pages 1083-1091, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics."
|
| 1420 |
+
},
|
| 1421 |
+
{
|
| 1422 |
+
"type": "list",
|
| 1423 |
+
"bbox": [
|
| 1424 |
+
0.512,
|
| 1425 |
+
0.086,
|
| 1426 |
+
0.885,
|
| 1427 |
+
0.921
|
| 1428 |
+
],
|
| 1429 |
+
"angle": 0,
|
| 1430 |
+
"content": null
|
| 1431 |
+
},
|
| 1432 |
+
{
|
| 1433 |
+
"type": "page_number",
|
| 1434 |
+
"bbox": [
|
| 1435 |
+
0.481,
|
| 1436 |
+
0.929,
|
| 1437 |
+
0.521,
|
| 1438 |
+
0.941
|
| 1439 |
+
],
|
| 1440 |
+
"angle": 0,
|
| 1441 |
+
"content": "8033"
|
| 1442 |
+
}
|
| 1443 |
+
],
|
| 1444 |
+
[
|
| 1445 |
+
{
|
| 1446 |
+
"type": "ref_text",
|
| 1447 |
+
"bbox": [
|
| 1448 |
+
0.117,
|
| 1449 |
+
0.086,
|
| 1450 |
+
0.487,
|
| 1451 |
+
0.151
|
| 1452 |
+
],
|
| 1453 |
+
"angle": 0,
|
| 1454 |
+
"content": "Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step. arXiv preprint arXiv:2305.20050."
|
| 1455 |
+
},
|
| 1456 |
+
{
|
| 1457 |
+
"type": "ref_text",
|
| 1458 |
+
"bbox": [
|
| 1459 |
+
0.117,
|
| 1460 |
+
0.165,
|
| 1461 |
+
0.487,
|
| 1462 |
+
0.216
|
| 1463 |
+
],
|
| 1464 |
+
"angle": 0,
|
| 1465 |
+
"content": "Yu Meng, Mengzhou Xia, and Danqi Chen. 2024. Simpo: Simple preference optimization with a reference-free reward. arXiv preprint arXiv:2405.14734."
|
| 1466 |
+
},
|
| 1467 |
+
{
|
| 1468 |
+
"type": "ref_text",
|
| 1469 |
+
"bbox": [
|
| 1470 |
+
0.117,
|
| 1471 |
+
0.23,
|
| 1472 |
+
0.487,
|
| 1473 |
+
0.308
|
| 1474 |
+
],
|
| 1475 |
+
"angle": 0,
|
| 1476 |
+
"content": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744."
|
| 1477 |
+
},
|
| 1478 |
+
{
|
| 1479 |
+
"type": "ref_text",
|
| 1480 |
+
"bbox": [
|
| 1481 |
+
0.117,
|
| 1482 |
+
0.322,
|
| 1483 |
+
0.487,
|
| 1484 |
+
0.372
|
| 1485 |
+
],
|
| 1486 |
+
"angle": 0,
|
| 1487 |
+
"content": "Ryan Park, Rafael Rafailov, Stefano Ermon, and Chelsea Finn. 2024. Disentangling length from quality in direct preference optimization. arXiv preprint arXiv:2403.19159."
|
| 1488 |
+
},
|
| 1489 |
+
{
|
| 1490 |
+
"type": "ref_text",
|
| 1491 |
+
"bbox": [
|
| 1492 |
+
0.117,
|
| 1493 |
+
0.386,
|
| 1494 |
+
0.487,
|
| 1495 |
+
0.464
|
| 1496 |
+
],
|
| 1497 |
+
"angle": 0,
|
| 1498 |
+
"content": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32."
|
| 1499 |
+
},
|
| 1500 |
+
{
|
| 1501 |
+
"type": "ref_text",
|
| 1502 |
+
"bbox": [
|
| 1503 |
+
0.117,
|
| 1504 |
+
0.478,
|
| 1505 |
+
0.487,
|
| 1506 |
+
0.542
|
| 1507 |
+
],
|
| 1508 |
+
"angle": 0,
|
| 1509 |
+
"content": "Guilherme Penedo, Hynek Kydlieck, Loubna Ben allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, and Thomas Wolf. 2024. The fineweb datasets: Decanting the web for the finest text data at scale. Preprint, arXiv:2406.17557."
|
| 1510 |
+
},
|
| 1511 |
+
{
|
| 1512 |
+
"type": "ref_text",
|
| 1513 |
+
"bbox": [
|
| 1514 |
+
0.117,
|
| 1515 |
+
0.555,
|
| 1516 |
+
0.487,
|
| 1517 |
+
0.62
|
| 1518 |
+
],
|
| 1519 |
+
"angle": 0,
|
| 1520 |
+
"content": "Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36."
|
| 1521 |
+
},
|
| 1522 |
+
{
|
| 1523 |
+
"type": "ref_text",
|
| 1524 |
+
"bbox": [
|
| 1525 |
+
0.117,
|
| 1526 |
+
0.634,
|
| 1527 |
+
0.487,
|
| 1528 |
+
0.699
|
| 1529 |
+
],
|
| 1530 |
+
"angle": 0,
|
| 1531 |
+
"content": "Corby Rosset, Ching-An Cheng, Arindam Mitra, Michael Santacroce, Ahmed Awadallah, and Tengyang Xie. 2024. Direct nash optimization: Teaching language models to self-improve with general preferences. arXiv preprint arXiv:2404.03715."
|
| 1532 |
+
},
|
| 1533 |
+
{
|
| 1534 |
+
"type": "ref_text",
|
| 1535 |
+
"bbox": [
|
| 1536 |
+
0.117,
|
| 1537 |
+
0.712,
|
| 1538 |
+
0.487,
|
| 1539 |
+
0.763
|
| 1540 |
+
],
|
| 1541 |
+
"angle": 0,
|
| 1542 |
+
"content": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347."
|
| 1543 |
+
},
|
| 1544 |
+
{
|
| 1545 |
+
"type": "ref_text",
|
| 1546 |
+
"bbox": [
|
| 1547 |
+
0.117,
|
| 1548 |
+
0.777,
|
| 1549 |
+
0.487,
|
| 1550 |
+
0.829
|
| 1551 |
+
],
|
| 1552 |
+
"angle": 0,
|
| 1553 |
+
"content": "Joar Skalse, Nikolaus Howe, Dmitrii Krasheninnikov, and David Krueger. 2022. Defining and characterizing reward gaming. Advances in Neural Information Processing Systems, 35:9460-9471."
|
| 1554 |
+
},
|
| 1555 |
+
{
|
| 1556 |
+
"type": "ref_text",
|
| 1557 |
+
"bbox": [
|
| 1558 |
+
0.117,
|
| 1559 |
+
0.842,
|
| 1560 |
+
0.487,
|
| 1561 |
+
0.919
|
| 1562 |
+
],
|
| 1563 |
+
"angle": 0,
|
| 1564 |
+
"content": "Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008-3021."
|
| 1565 |
+
},
|
| 1566 |
+
{
|
| 1567 |
+
"type": "list",
|
| 1568 |
+
"bbox": [
|
| 1569 |
+
0.117,
|
| 1570 |
+
0.086,
|
| 1571 |
+
0.487,
|
| 1572 |
+
0.919
|
| 1573 |
+
],
|
| 1574 |
+
"angle": 0,
|
| 1575 |
+
"content": null
|
| 1576 |
+
},
|
| 1577 |
+
{
|
| 1578 |
+
"type": "ref_text",
|
| 1579 |
+
"bbox": [
|
| 1580 |
+
0.513,
|
| 1581 |
+
0.086,
|
| 1582 |
+
0.882,
|
| 1583 |
+
0.166
|
| 1584 |
+
],
|
| 1585 |
+
"angle": 0,
|
| 1586 |
+
"content": "Yunhao Tang, Zhaohan Daniel Guo, Zeyu Zheng, Daniele Calandriello, Rémi Munos, Mark Rowland, Pierre Harvey Richemond, Michal Valko, Bernardo Ávila Pires, and Bilal Piot. 2024. Generalized preference optimization: A unified approach to offline alignment. arXiv preprint arXiv:2402.05749."
|
| 1587 |
+
},
|
| 1588 |
+
{
|
| 1589 |
+
"type": "ref_text",
|
| 1590 |
+
"bbox": [
|
| 1591 |
+
0.512,
|
| 1592 |
+
0.178,
|
| 1593 |
+
0.882,
|
| 1594 |
+
0.229
|
| 1595 |
+
],
|
| 1596 |
+
"angle": 0,
|
| 1597 |
+
"content": "Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Stanford alpaca: An instruction-following llama model."
|
| 1598 |
+
},
|
| 1599 |
+
{
|
| 1600 |
+
"type": "ref_text",
|
| 1601 |
+
"bbox": [
|
| 1602 |
+
0.512,
|
| 1603 |
+
0.242,
|
| 1604 |
+
0.882,
|
| 1605 |
+
0.279
|
| 1606 |
+
],
|
| 1607 |
+
"angle": 0,
|
| 1608 |
+
"content": "InternLM Team. 2023. Internlm: A multilingual language model with progressively enhanced capabilities."
|
| 1609 |
+
},
|
| 1610 |
+
{
|
| 1611 |
+
"type": "ref_text",
|
| 1612 |
+
"bbox": [
|
| 1613 |
+
0.512,
|
| 1614 |
+
0.292,
|
| 1615 |
+
0.882,
|
| 1616 |
+
0.357
|
| 1617 |
+
],
|
| 1618 |
+
"angle": 0,
|
| 1619 |
+
"content": "Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, and Shengyi Huang. 2020. Trl: Transformer reinforcement learning. https://github.com/huggingface/trl."
|
| 1620 |
+
},
|
| 1621 |
+
{
|
| 1622 |
+
"type": "ref_text",
|
| 1623 |
+
"bbox": [
|
| 1624 |
+
0.512,
|
| 1625 |
+
0.368,
|
| 1626 |
+
0.882,
|
| 1627 |
+
0.42
|
| 1628 |
+
],
|
| 1629 |
+
"angle": 0,
|
| 1630 |
+
"content": "Haoxiang Wang, Wei Xiong, Tengyang Xie, Han Zhao, and Tong Zhang. 2024. Interpretable preferences via multi-objective reward modeling and mixture-of-experts. arXiv preprint arXiv:2406.12845."
|
| 1631 |
+
},
|
| 1632 |
+
{
|
| 1633 |
+
"type": "ref_text",
|
| 1634 |
+
"bbox": [
|
| 1635 |
+
0.512,
|
| 1636 |
+
0.432,
|
| 1637 |
+
0.882,
|
| 1638 |
+
0.588
|
| 1639 |
+
],
|
| 1640 |
+
"angle": 0,
|
| 1641 |
+
"content": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics."
|
| 1642 |
+
},
|
| 1643 |
+
{
|
| 1644 |
+
"type": "ref_text",
|
| 1645 |
+
"bbox": [
|
| 1646 |
+
0.512,
|
| 1647 |
+
0.6,
|
| 1648 |
+
0.882,
|
| 1649 |
+
0.652
|
| 1650 |
+
],
|
| 1651 |
+
"angle": 0,
|
| 1652 |
+
"content": "Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, and Danqi Chen. 2024. Less: Selecting influential data for targeted instruction tuning. arXiv preprint arXiv:2402.04333."
|
| 1653 |
+
},
|
| 1654 |
+
{
|
| 1655 |
+
"type": "ref_text",
|
| 1656 |
+
"bbox": [
|
| 1657 |
+
0.512,
|
| 1658 |
+
0.663,
|
| 1659 |
+
0.882,
|
| 1660 |
+
0.74
|
| 1661 |
+
],
|
| 1662 |
+
"angle": 0,
|
| 1663 |
+
"content": "Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. 2024a. Contrastive preference optimization: Pushing the boundaries of llm performance in machine translation. arXiv preprint arXiv:2401.08417."
|
| 1664 |
+
},
|
| 1665 |
+
{
|
| 1666 |
+
"type": "ref_text",
|
| 1667 |
+
"bbox": [
|
| 1668 |
+
0.512,
|
| 1669 |
+
0.752,
|
| 1670 |
+
0.882,
|
| 1671 |
+
0.817
|
| 1672 |
+
],
|
| 1673 |
+
"angle": 0,
|
| 1674 |
+
"content": "Shusheng Xu, Wei Fu, Jiaxuan Gao, Wenjie Ye, Weilin Liu, Zhiyu Mei, Guangju Wang, Chao Yu, and Yi Wu. 2024b. Is dpo superior to ppo for llm alignment? a comprehensive study. arXiv preprint arXiv:2404.10719."
|
| 1675 |
+
},
|
| 1676 |
+
{
|
| 1677 |
+
"type": "ref_text",
|
| 1678 |
+
"bbox": [
|
| 1679 |
+
0.512,
|
| 1680 |
+
0.829,
|
| 1681 |
+
0.882,
|
| 1682 |
+
0.921
|
| 1683 |
+
],
|
| 1684 |
+
"angle": 0,
|
| 1685 |
+
"content": "An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Hao ran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang"
|
| 1686 |
+
},
|
| 1687 |
+
{
|
| 1688 |
+
"type": "list",
|
| 1689 |
+
"bbox": [
|
| 1690 |
+
0.512,
|
| 1691 |
+
0.086,
|
| 1692 |
+
0.882,
|
| 1693 |
+
0.921
|
| 1694 |
+
],
|
| 1695 |
+
"angle": 0,
|
| 1696 |
+
"content": null
|
| 1697 |
+
},
|
| 1698 |
+
{
|
| 1699 |
+
"type": "page_number",
|
| 1700 |
+
"bbox": [
|
| 1701 |
+
0.481,
|
| 1702 |
+
0.929,
|
| 1703 |
+
0.521,
|
| 1704 |
+
0.941
|
| 1705 |
+
],
|
| 1706 |
+
"angle": 0,
|
| 1707 |
+
"content": "8034"
|
| 1708 |
+
}
|
| 1709 |
+
],
|
| 1710 |
+
[
|
| 1711 |
+
{
|
| 1712 |
+
"type": "text",
|
| 1713 |
+
"bbox": [
|
| 1714 |
+
0.132,
|
| 1715 |
+
0.086,
|
| 1716 |
+
0.488,
|
| 1717 |
+
0.204
|
| 1718 |
+
],
|
| 1719 |
+
"angle": 0,
|
| 1720 |
+
"content": "Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zhihao Fan. 2024. Qwen2 technical report. arXiv preprint arXiv:2407.10671."
|
| 1721 |
+
},
|
| 1722 |
+
{
|
| 1723 |
+
"type": "text",
|
| 1724 |
+
"bbox": [
|
| 1725 |
+
0.115,
|
| 1726 |
+
0.213,
|
| 1727 |
+
0.49,
|
| 1728 |
+
0.281
|
| 1729 |
+
],
|
| 1730 |
+
"angle": 0,
|
| 1731 |
+
"content": "Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2024. Lima: Less is more for alignment. Advances in Neural Information Processing Systems, 36."
|
| 1732 |
+
},
|
| 1733 |
+
{
|
| 1734 |
+
"type": "title",
|
| 1735 |
+
"bbox": [
|
| 1736 |
+
0.114,
|
| 1737 |
+
0.292,
|
| 1738 |
+
0.362,
|
| 1739 |
+
0.308
|
| 1740 |
+
],
|
| 1741 |
+
"angle": 0,
|
| 1742 |
+
"content": "A Extended Related Work"
|
| 1743 |
+
},
|
| 1744 |
+
{
|
| 1745 |
+
"type": "text",
|
| 1746 |
+
"bbox": [
|
| 1747 |
+
0.117,
|
| 1748 |
+
0.318,
|
| 1749 |
+
0.49,
|
| 1750 |
+
0.688
|
| 1751 |
+
],
|
| 1752 |
+
"angle": 0,
|
| 1753 |
+
"content": "Online Algorithms. Reinforcement learning from human/AI feedback (RLHF/RLAIF) is among the common approaches for aligning LLMs to human preferences (Christiano et al., 2017; Bai et al., 2022a; Stiannon et al., 2020; Bai et al., 2022b), and has been used to train models such as GPT-4 (Achiam et al., 2023) and Llama-3 (AI@Meta, 2024). In most cases, these approaches are comprised of three stages: 1) supervised finetuning (Taori et al., 2023; Zhou et al., 2024; Xia et al., 2024), 2) reward modeling (Gao et al., 2023; Chen et al., 2024; Lightman et al., 2023), and 3) policy optimization (Schulman et al., 2017). The prominent method for policy optimization is Proximal Policy Optimization (PPO), an online on-policy approach (Schulman et al., 2017). While PPO has shown promising performances (Stiannon et al., 2020; Ouyang et al., 2022; Achiam et al., 2023), it suffers from problems such as having too many subtle details for reproducibility (Huang et al., 2024b), 2) taking a long time for training (Huang et al., 2024a), and 3) reward over-optimization (Skalse et al., 2022)."
|
| 1754 |
+
},
|
| 1755 |
+
{
|
| 1756 |
+
"type": "text",
|
| 1757 |
+
"bbox": [
|
| 1758 |
+
0.113,
|
| 1759 |
+
0.697,
|
| 1760 |
+
0.49,
|
| 1761 |
+
0.922
|
| 1762 |
+
],
|
| 1763 |
+
"angle": 0,
|
| 1764 |
+
"content": "Offline Algorithms. To address the drawbacks of RLHF/RLAIF, recent works have proposed simpler and more efficient offline algorithms, particularly Direct Preference Optimization (DPO) (Rafailov et al., 2024), which is based on the Bradley-Terry model (Bradley and Terry, 1952). These offline algorithms directly optimize an objective on the preference data with an implicit reward model without needing to have separate stages. Some recent works have focused on making a broad comparison between PPO and DPO. Specifically, they showcase the potential for PPO with a gold reward model \\((\\sim +10\\%)\\) while underlying the similarity to DPO \\((\\sim +1\\%\\) averaged across benchmarks)"
|
| 1765 |
+
},
|
| 1766 |
+
{
|
| 1767 |
+
"type": "text",
|
| 1768 |
+
"bbox": [
|
| 1769 |
+
0.508,
|
| 1770 |
+
0.085,
|
| 1771 |
+
0.883,
|
| 1772 |
+
0.117
|
| 1773 |
+
],
|
| 1774 |
+
"angle": 0,
|
| 1775 |
+
"content": "when trained on the same data (Ivison et al., 2024; Xu et al., 2024b)."
|
| 1776 |
+
},
|
| 1777 |
+
{
|
| 1778 |
+
"type": "title",
|
| 1779 |
+
"bbox": [
|
| 1780 |
+
0.51,
|
| 1781 |
+
0.129,
|
| 1782 |
+
0.762,
|
| 1783 |
+
0.161
|
| 1784 |
+
],
|
| 1785 |
+
"angle": 0,
|
| 1786 |
+
"content": "B Hyperparameter Tuning Considerations"
|
| 1787 |
+
},
|
| 1788 |
+
{
|
| 1789 |
+
"type": "text",
|
| 1790 |
+
"bbox": [
|
| 1791 |
+
0.508,
|
| 1792 |
+
0.171,
|
| 1793 |
+
0.885,
|
| 1794 |
+
0.251
|
| 1795 |
+
],
|
| 1796 |
+
"angle": 0,
|
| 1797 |
+
"content": "DPO. As presented in Figure 4, lower \\(\\beta\\) leads to higher performances; however, as \\(\\beta\\) decreases, the performance variance increases, which showcases the method's instability. Overall, \\(\\beta = 0.05\\) provides the best balance of stability and performance."
|
| 1798 |
+
},
|
| 1799 |
+
{
|
| 1800 |
+
"type": "text",
|
| 1801 |
+
"bbox": [
|
| 1802 |
+
0.508,
|
| 1803 |
+
0.259,
|
| 1804 |
+
0.885,
|
| 1805 |
+
0.405
|
| 1806 |
+
],
|
| 1807 |
+
"angle": 0,
|
| 1808 |
+
"content": "LN-DPO. While we initially borrowed \\(\\beta\\)'s range from SimPO (Meng et al., 2024), more experiments showed benefits in further decreasing its value. Figure 5 presents the performance spread across different runs. From these experiments, \\(\\beta \\in [1.0, 2.0]\\) contains most of the best-performing models. Moreover, we observe the relatively low (compared to DPO) variance across the performances, showcasing another benefit of LN-DPO."
|
| 1809 |
+
},
|
| 1810 |
+
{
|
| 1811 |
+
"type": "text",
|
| 1812 |
+
"bbox": [
|
| 1813 |
+
0.508,
|
| 1814 |
+
0.414,
|
| 1815 |
+
0.884,
|
| 1816 |
+
0.606
|
| 1817 |
+
],
|
| 1818 |
+
"angle": 0,
|
| 1819 |
+
"content": "SimPO. In contrast to the other two methods, SimPO has two method-specific hyperparameters: \\(\\beta\\) and \\(\\gamma\\). As illustrated in Figure 6, on average, lower \\(\\beta\\) values lead to better performance. We believe the performance uptick in the lower range is due to a difference in the average length of this work's and the original work's training sets. Moreover, as showcased in Figure 7, the best performing models have a \\(\\gamma \\in [1.0, 1.4]\\), in line with the suggestion by Meng et al. (2024). Notably, \\(\\beta\\) and \\(\\gamma\\) have a relatively low variance across experiments, another upside of SimPO."
|
| 1820 |
+
},
|
| 1821 |
+
{
|
| 1822 |
+
"type": "title",
|
| 1823 |
+
"bbox": [
|
| 1824 |
+
0.509,
|
| 1825 |
+
0.618,
|
| 1826 |
+
0.871,
|
| 1827 |
+
0.636
|
| 1828 |
+
],
|
| 1829 |
+
"angle": 0,
|
| 1830 |
+
"content": "C The Answer to the Ultimate Question"
|
| 1831 |
+
},
|
| 1832 |
+
{
|
| 1833 |
+
"type": "text",
|
| 1834 |
+
"bbox": [
|
| 1835 |
+
0.508,
|
| 1836 |
+
0.644,
|
| 1837 |
+
0.885,
|
| 1838 |
+
0.788
|
| 1839 |
+
],
|
| 1840 |
+
"angle": 0,
|
| 1841 |
+
"content": "Based on our collective empirical results, we believe SimPO to be the best starting point among the three methods, mainly due to its robustness toward hyperparameter variations and effective length reduction. As for SimPO's hyperparameters, we recommend \\(\\beta \\in \\{1.0, 1.5\\}\\) and \\(\\gamma \\approx 1.2\\). Moreover, while LN-DPO is consistently second-best in most of our experiments, we discuss scenarios for choosing it over SimPO in Section 8."
},
{
  "type": "page_number",
  "bbox": [0.481, 0.929, 0.521, 0.941],
  "angle": 0,
  "content": "8035"
}
],
[
  {
    "type": "image",
    "bbox": [0.12, 0.107, 0.368, 0.237],
    "angle": 0,
    "content": null
  },
  {
    "type": "image",
    "bbox": [0.373, 0.106, 0.621, 0.234],
    "angle": 0,
    "content": null
  },
  {
    "type": "image",
    "bbox": [0.624, 0.108, 0.882, 0.237],
    "angle": 0,
    "content": null
  },
  {
    "type": "image_caption",
    "bbox": [0.237, 0.248, 0.76, 0.264],
    "angle": 0,
    "content": "Figure 4: DPO \\(\\beta\\). Each point indicates a run with the corresponding \\(\\beta\\) value."
  },
  {
    "type": "image",
    "bbox": [0.12, 0.317, 0.372, 0.446],
    "angle": 0,
    "content": null
  },
  {
    "type": "image",
    "bbox": [0.375, 0.318, 0.625, 0.445],
    "angle": 0,
    "content": null
  },
  {
    "type": "image",
    "bbox": [0.627, 0.318, 0.879, 0.446],
    "angle": 0,
    "content": null
  },
  {
    "type": "image_caption",
    "bbox": [0.222, 0.459, 0.776, 0.475],
    "angle": 0,
    "content": "Figure 5: LN-DPO \\(\\beta\\). Each point indicates a run with the corresponding \\(\\beta\\) value."
  },
  {
    "type": "image",
    "bbox": [0.12, 0.53, 0.368, 0.655],
    "angle": 0,
    "content": null
  },
  {
    "type": "image",
    "bbox": [0.371, 0.53, 0.622, 0.655],
    "angle": 0,
    "content": null
  },
  {
    "type": "image",
    "bbox": [0.626, 0.53, 0.877, 0.655],
    "angle": 0,
    "content": null
  },
  {
    "type": "image_caption",
    "bbox": [0.228, 0.67, 0.768, 0.685],
    "angle": 0,
    "content": "Figure 6: SimPO \\(\\beta\\). Each point indicates a run with the corresponding \\(\\beta\\) value."
  },
  {
    "type": "image",
    "bbox": [0.12, 0.741, 0.368, 0.866],
    "angle": 0,
    "content": null
  },
  {
    "type": "image",
    "bbox": [0.371, 0.741, 0.622, 0.865],
    "angle": 0,
    "content": null
  },
  {
    "type": "image",
    "bbox": [0.626, 0.74, 0.877, 0.866],
    "angle": 0,
    "content": null
  },
  {
    "type": "image_caption",
    "bbox": [0.229, 0.881, 0.767, 0.896],
    "angle": 0,
    "content": "Figure 7: SimPO \\(\\gamma\\). Each point indicates a run with the corresponding \\(\\gamma\\) value."
  },
  {
    "type": "page_number",
    "bbox": [0.481, 0.929, 0.522, 0.941],
    "angle": 0,
    "content": "8036"
  }
]
]
2025/A Practical Analysis of Human Alignment with _PO/2cd002da-3dfa-4ffb-9fba-c06d76c990ec_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:263ff7bddff8d1a941f31c0a3b262e06a9b1062e99ea7b21b6ea5cc70fbc0c91
size 1807707
2025/A Practical Analysis of Human Alignment with _PO/full.md
ADDED
@@ -0,0 +1,276 @@
# A Practical Analysis of Human Alignment with *PO

Kian Ahrabian\*, Xihui Lin, Barun Patra, Vishrav Chaudhary², Alon Benhaim, Jay Pujara¹, Xia Song²

¹University of Southern California, Information Sciences Institute
²Microsoft

ahrabian@usc.edu, {xihlin, barun.patra}@microsoft.com, {vchaudhary, alonbenhaim}@microsoft.com, jpujara@isi.edu, xiao@microsoft.com

# Abstract

At the forefront of state-of-the-art human alignment methods are preference optimization methods (*PO). Prior research has often concentrated on identifying the best-performing method, typically involving a grid search over hyperparameters, which can be impractical for general practitioners. In this paper, we examine the robustness of existing state-of-the-art methods to varying hyperparameters in a realistic out-of-distribution (OOD) scenario that mirrors real-world applications of human alignment. Our goal is to empirically identify the method most likely to achieve good results, through the lens of metrics such as KL divergence and response length. We also introduce LN-DPO, a simple length-normalized version of DPO that is more stable across hyperparameters, effectively reduces the average response length, and improves performance. Our analysis of state-of-the-art reference-free (i.e., SimPO) and reference-dependent (i.e., DPO and LN-DPO) methods reveals that they perform similarly at their peak (i.e., in the best possible scenario). However, we uncover that the pattern of performance change varies greatly as we move away from that peak.

# 1 Introduction

In recent years, the quality of large language models (LLMs) has been constantly increasing (Chiang et al., 2024), achieving impressive results across tasks and benchmarks (Abdin et al., 2024; AI@Meta, 2024; Achiam et al., 2023; Team, 2023; Yang et al., 2024). However, even with the most rigorous filtering heuristics, the training data (Computer, 2023; Penedo et al., 2024) is typically contaminated with undesirable content that can lead to unacceptable behaviors (Bender et al., 2021; Gehman et al., 2020). To improve a model's alignment with human preferences, the de facto approach has been to learn from human- or AI-generated preference data (e.g., a chosen and a rejected response for each prompt). In particular, off-policy preference optimization methods (*PO) have been prevalent given their good performance and ease of implementation (Rafailov et al., 2024; Hong et al., 2024; Meng et al., 2024).

<table><tr><td></td><td>DPO</td><td>LN-DPO</td><td>SimPO</td></tr><tr><td>Mean Score</td><td>1.6</td><td>+0.3%</td><td>+2.7%</td></tr><tr><td>Mean Length</td><td>119.8</td><td>-15.9%</td><td>-22.9%</td></tr><tr><td>KL Divergence</td><td>55.0</td><td>-26.0%</td><td>-20.7%</td></tr><tr><td>Win vs. Chosen</td><td>77.1%</td><td>+0.8%</td><td>+3.1%</td></tr><tr><td>Win vs. SFT</td><td>60.7%</td><td>+2.1%</td><td>+5.0%</td></tr></table>

Table 1: Best *PO Performance. The metrics are normalized by the respective DPO performance. The underlined values indicate the best performance.

One commonly occurring practice when reporting the performance of new methods is to compare their best-performing variant (after a hyperparameter grid search) to a default baseline with a fixed set of hyperparameters. However, from a practical perspective, these comparisons do not answer which method is expected to achieve higher performance given a fixed budget for hyperparameter search, as broad grid searches are often computationally infeasible for many practitioners. To this end, in this work, we aim to empirically identify the method that is more robust to hyperparameter variations while remaining competitive in performance.


We set up our experiments in a realistic out-of-distribution (OOD) setting, focused on the safety and helpfulness domains, where the train and test datasets share a common core goal but their samples are generated from different distributions (e.g., AI and human experts). This setting resembles real-world scenarios, as it simulates the release of large generative models for public use. Moreover, to better understand the behavior of state-of-the-art models, we take the best-performing reference-free and reference-dependent methods (as reported by Meng et al. (2024)) and analyze them through the lens of standard metrics such as KL divergence, response length, and win rate. We also introduce an embarrassingly simple length-normalized extension of vanilla Direct Preference Optimization (DPO) (Rafailov et al., 2024), LN-DPO, that effectively mitigates the issue of lengthy generations without any apparent performance degradation<sup>1</sup>. In summary, our contributions are as follows:

- We examine state-of-the-art reference-free and reference-dependent preference optimization methods across a wide range of hyperparameters in a real-world setup.
- We analyze the performance of these methods on critical metrics such as mean response length, mean score on a gold reward model, win rate vs. chosen and SFT, and KL divergence vs. SFT.
- We introduce and examine LN-DPO, a simple length-normalized version of DPO that is more stable across hyperparameters, effectively reduces the average response length, and improves performance.

<table><tr><td>Method</td><td>Objective</td><td>Hyperparameters</td></tr><tr><td>DPO</td><td>$-\log \sigma\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)$</td><td>$\beta \in \{0.01, 0.05, 0.1, 0.3, 0.5\}$</td></tr><tr><td>SimPO</td><td>$-\log \sigma\left(\frac{\beta}{|y_w|} \log \pi_\theta(y_w \mid x) - \frac{\beta}{|y_l|} \log \pi_\theta(y_l \mid x) - \gamma\right)$</td><td>$\beta \in \{1.0, 1.5, 2.0, 2.5\}$; $\gamma \in \{0.5, 0.8, 1.0, 1.2, 1.4, 1.6\}$</td></tr><tr><td>LN-DPO</td><td>$-\log \sigma\left(\frac{\beta}{|y_w|} \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \frac{\beta}{|y_l|} \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)$</td><td>$\beta \in \{1.0, 1.5, 2.0, 2.5, 3.0, 3.5\}$</td></tr></table>

Table 2: *PO Optimization Objectives. The preference data is formulated as $D = (x, y_w, y_l)$, where $x$ is the prompt and $y_w$ and $y_l$ are the chosen and rejected responses.

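The three objectives in Table 2 can be sketched as plain functions of summed response log-probabilities. This is a minimal illustration, not the authors' code: `pi_w`/`pi_l` denote the policy's summed log-probability of the chosen/rejected response, `ref_w`/`ref_l` the reference model's, and `len_w`/`len_l` the token counts.

```python
import math

def logsigmoid(x: float) -> float:
    """Numerically stable log(sigmoid(x))."""
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def dpo_loss(pi_w, pi_l, ref_w, ref_l, beta=0.1):
    """Vanilla DPO: the reward is the beta-scaled log-ratio vs. the reference."""
    return -logsigmoid(beta * (pi_w - ref_w) - beta * (pi_l - ref_l))

def simpo_loss(pi_w, pi_l, len_w, len_l, beta=1.0, gamma=1.2):
    """SimPO: reference-free, length-normalized reward minus a fixed margin."""
    return -logsigmoid(beta / len_w * pi_w - beta / len_l * pi_l - gamma)

def ln_dpo_loss(pi_w, pi_l, ref_w, ref_l, len_w, len_l, beta=1.0):
    """LN-DPO: DPO's log-ratio reward, additionally normalized by length."""
    return -logsigmoid(beta / len_w * (pi_w - ref_w) - beta / len_l * (pi_l - ref_l))
```

Note that for $\beta = 1$, `ln_dpo_loss` coincides with `simpo_loss` when `gamma` is set to the adaptive margin of Eq. (1), i.e., `ref_w / len_w - ref_l / len_l`.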
# 2 Related Work

Since the introduction of DPO (Rafailov et al., 2024), there has been a body of works with new optimization objectives improving the performance and efficiency (Azar et al., 2024; Tang et al., 2024; Hong et al., 2024; Rosset et al., 2024; Meng et al., 2024; Xu et al., 2024a; Ethayarajh et al., 2024). These methods can be partitioned into two groups: reference-free (Meng et al., 2024; Hong et al., 2024) and reference-dependent (Rafailov et al., 2024; Park et al., 2024). Reference-free methods generally benefit from fast training runs, while reference-dependent methods have terms baked into their objective to control divergence from the reference model. In this work, we compare SimPO (Meng et al., 2024), a recent state-of-the-art reference-free method, with DPO and LN-DPO as reference-dependent methods (see Appendix A for extended related work).

# 3 Experimental Setup
# 3.1 Datasets

For our datasets, we follow the setup introduced by Xu et al. (2024b). Specifically, we use the double safe/unsafe filtered train subset of SafeRLHF (Dai et al., 2024) for training and the test subset of HH-RLHF (Ganguli et al., 2022) for evaluation. This setup closely resembles real-world scenarios where, even though models are trained on various domains (e.g., safety and helpfulness in our experiments), they have to generalize to similar unseen queries while interacting with users.

# 3.2 Models

For all our experiments, we chose the Phi-3 Medium model (Abdin et al., 2024) due to its high performance across benchmarks and small size, ensuring computational tractability. To evaluate the trained models, we use the OpenAssistant reward model (Köpf et al., 2024) to score the quality of their generated responses. We chose this model due to its small size and use in prior works (Xu et al., 2024b), ensuring fast and correct evaluations.

# 3.3 Optimization Objectives

Considering the performances reported by Meng et al. (2024), we choose DPO as our reference-dependent method and SimPO as our reference-free method. While DPO has an implicit length normalization through the reference model, the variance of the reward (i.e., $\log \frac{\pi_{\theta}}{\pi_{\mathrm{ref}}}$) increases with response length. As such, inspired by the explicit length regularization in SimPO and R-DPO (Park et al., 2024), we further normalize the reward by the response length, similar to SimPO; we call the resulting method LN-DPO (see Section 3.4 for more details).



Figure 1: *PO Performance Distribution. Each sample in the distribution represents the performance of one set of hyperparameters on the denoted metric. The dashed line indicates the performance of the initial SFT model.



Figure 2: Response Length. The top $k\%$ ($k \in \{1, 10, 25\}$) denotes the percentage of best-performing hyperparameters taken from each method's runs.

# 3.4 Connection between LN-DPO and SimPO

LN-DPO is similar to an adaptive-margin version of SimPO, with the per-sample margin defined as

$$
\gamma_{w,l} = \frac{\log \pi_{\mathrm{ref}}\left(y_w \mid x\right)}{\left|y_w\right|} - \frac{\log \pi_{\mathrm{ref}}\left(y_l \mid x\right)}{\left|y_l\right|}. \tag{1}
$$

Essentially, this adaptive margin encourages larger margins for pairs that already have large margins under the reference policy. Depending on the quality of the reference model and the labels, this change could be beneficial compared to SimPO's constant margin. The adaptive margin focuses more on "easier" pairs (i.e., pairs with some prior evidence of being different) and less on "harder" pairs (i.e., pairs that are closer), which means that LN-DPO is potentially less prone to overfitting and less sensitive to wrong labels.

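To make the connection explicit, expanding the LN-DPO logit from Table 2 and regrouping the reference-model terms gives

$$
\frac{\beta}{\left|y_w\right|} \log \frac{\pi_\theta\left(y_w \mid x\right)}{\pi_{\mathrm{ref}}\left(y_w \mid x\right)} - \frac{\beta}{\left|y_l\right|} \log \frac{\pi_\theta\left(y_l \mid x\right)}{\pi_{\mathrm{ref}}\left(y_l \mid x\right)} = \frac{\beta}{\left|y_w\right|} \log \pi_\theta\left(y_w \mid x\right) - \frac{\beta}{\left|y_l\right|} \log \pi_\theta\left(y_l \mid x\right) - \beta \gamma_{w,l},
$$

with $\gamma_{w,l}$ as in Eq. (1); that is, LN-DPO is exactly SimPO's logit with the constant margin $\gamma$ replaced by the reference-dependent quantity $\beta \gamma_{w,l}$.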
# 4 Training Regimen

Following common practice, we perform a supervised fine-tuning (SFT) step before the preference optimization step. Specifically, we first run a grid search over the following hyperparameters: epochs $\in \{1, 3\}$ and learning rate $\in \{1\mathrm{e}{-}6, 3\mathrm{e}{-}6, 1\mathrm{e}{-}5, 2\mathrm{e}{-}5\}$. We then evaluate the final checkpoints against the test set and choose the one with the highest performance. This procedure ensures that the preference optimization methods are initialized from a good checkpoint. For the preference optimization methods, we run a grid search using 1) the same ranges as SFT for epochs and learning rate, and 2) common values for method-specific hyperparameters as used in prior works (Meng et al., 2024; Rafailov et al., 2024; Hong et al., 2024). Table 2 presents the method-specific ranges used in our experiments. In all of our experiments, the batch size is set to 256.

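The grid described above (shared epoch/learning-rate ranges plus the method-specific values from Table 2) can be enumerated as follows; the dictionary names are illustrative, not from the paper's code:

```python
from itertools import product

# Shared SFT-style ranges (Section 4) and method-specific ranges (Table 2).
sft_grid = {"epochs": [1, 3], "lr": [1e-6, 3e-6, 1e-5, 2e-5]}
method_grids = {
    "DPO": {"beta": [0.01, 0.05, 0.1, 0.3, 0.5]},
    "SimPO": {"beta": [1.0, 1.5, 2.0, 2.5],
              "gamma": [0.5, 0.8, 1.0, 1.2, 1.4, 1.6]},
    "LN-DPO": {"beta": [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]},
}

def expand(space):
    """Enumerate every hyperparameter combination in a grid."""
    keys = list(space)
    return [dict(zip(keys, combo)) for combo in product(*space.values())]

# One run configuration per (shared, method-specific) combination.
runs = {method: [{**base, **extra}
                 for base in expand(sft_grid)
                 for extra in expand(grid)]
        for method, grid in method_grids.items()}
```

Under this grid, DPO has 40 configurations, LN-DPO 48, and SimPO 192, which illustrates why exhaustive searches are costly for many practitioners.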
# 5 Metrics

Our analysis focuses on the following five metrics:

- Mean Score: The average score of the generated responses, as judged by the gold reward model.
- Win vs. Chosen: The fraction of samples where the gold reward model assigns a higher score to the generated response than to the chosen response in the dataset.
- Win vs. SFT: The fraction of samples where the gold reward model scores the generated response higher than the initial SFT model's response.
- KL Divergence: The summed difference of log probabilities between the SFT and the trained models over the samples.
- Response Length: The number of tokens in the generated response under the tokenization space of the base model.



Figure 3: KL Divergence. The top $k\%$ ($k \in \{1, 10, 25\}$) denotes the percentage of best-performing hyperparameters taken from each method's runs.

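The win-rate and KL metrics above reduce to simple per-sample computations. A minimal sketch (assuming per-response reward scores and per-token log-probabilities are already available; function names are ours, not the paper's):

```python
def win_rate(scores_a, scores_b):
    """Fraction of samples where response A outscores response B
    under the gold reward model (e.g., Win vs. Chosen, Win vs. SFT)."""
    assert len(scores_a) == len(scores_b)
    return sum(a > b for a, b in zip(scores_a, scores_b)) / len(scores_a)

def sequence_kl(logp_policy, logp_sft):
    """Per-sample KL estimate: the summed per-token log-probability
    difference between the trained policy and the SFT model over the
    tokens of a sampled response."""
    return sum(p - s for p, s in zip(logp_policy, logp_sft))
```

For example, `win_rate([2.0, 3.0, 1.0], [1.0, 1.0, 2.0])` counts two wins out of three samples.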
# 6 Implementation Details

We generate all responses by sampling with temperature $= 0.7$ and top_p $= 0.95$. Moreover, max_generation_length is set to 256 across all experiments, following the setup of Xu et al. (2024b). All our experiments are carried out on a cluster with $256\times$ A100 80GB GPUs. Finally, we implemented our code using the Transformers (Wolf et al., 2020), TRL (von Werra et al., 2020), and PyTorch (Paszke et al., 2019) libraries.

# 7 Experimental Results
# 7.1 Hyperparameter Robustness

Best Performance. Following common practice, we compare the best performance achieved by each method in Table 1. As evident, at their peaks, SimPO, LN-DPO, and DPO score similarly (within 0.05 points on average). However, SimPO and LN-DPO show an edge on the rest of the metrics. Specifically, we can observe the effectiveness of the length-normalization term. We also notice a significant decrease in KL divergence; however, KL for SimPO decreases less than for LN-DPO, showcasing a more significant divergence from SFT. For more details on tuning these models, see Appendix B.

<table><tr><td>%</td><td>DPO</td><td>LN-DPO</td><td>SimPO</td></tr><tr><td>DPO</td><td>-</td><td>49.04</td><td>47.51</td></tr><tr><td>LN-DPO</td><td>49.47</td><td>-</td><td>46.43</td></tr><tr><td>SimPO</td><td>51.12</td><td>51.09</td><td>-</td></tr></table>

(a) Best

<table><tr><td>%</td><td>DPO</td><td>LN-DPO</td><td>SimPO</td></tr><tr><td>DPO</td><td>-</td><td>45.72</td><td>44.33</td></tr><tr><td>LN-DPO</td><td>51.77</td><td>-</td><td>47.28</td></tr><tr><td>SimPO</td><td>54.34</td><td>50.13</td><td>-</td></tr></table>

(b) 75th Percentile

Table 3: Head-to-head *PO Comparison. Each cell represents the win rate of the row method over the column method. The underlined values indicate the row method beating the column method.

Head-to-head Performance. While comparing the raw performance achieved on the desired metrics is usually good enough to contrast different methods, there are potential cases where the averaging could be exploited (e.g., outliers with high rewards). Hence, it is also essential to do a head-to-head per-sample comparison, which provides more fine-grained insights. Table 3 compares each method's best and 75th-percentile performance. Notably, we observe a sharp performance drop in DPO from the best to the top $25\%$ model, in contrast to the other two. This occurrence highlights the practical flaw in comparing only the best performances.

Expected Performance. Given the limited resources that most users have, it is extremely difficult to run broad hyperparameter searches to find the best-performing combination. As such, it becomes crucial to analyze hyperparameter robustness, which provides insight into the likelihood of finding a good hyperparameter set from a limited search. Figure 1 presents the performance distribution of *PO methods following a grid search over the hyperparameters denoted in Table 2 and Section 4. As evident, SimPO and LN-DPO effectively increase the average performance (i.e., shifting the distributions to the right) across hyperparameters, showcasing their superiority. Note that we stretched the range of hyperparameters until a plateau or an extreme variance was observed.

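The "top $k\%$" selection used in the following analyses can be expressed as a one-liner over per-run mean scores (an illustrative helper, not the authors' code):

```python
def top_k_percent(scores, k):
    """Return the mean-score values of the best-performing k% of runs,
    keeping at least one run even for very small k."""
    n = max(1, round(len(scores) * k / 100))
    return sorted(scores, reverse=True)[:n]
```

For instance, with 100 runs and $k = 10$, this keeps the 10 highest-scoring configurations, whose generations are then pooled for the length and KL comparisons.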
# 7.2 Response Length

Since length exploitation is a critical issue (Park et al., 2024), we compare the response lengths across samples generated by the top $k\%$ ($k \in \{1, 10, 25\}$) of each method's best-performing hyperparameters. As illustrated in Figure 2, on the best set of hyperparameters (i.e., top $1\%$), the non-DPO methods show a left shift in the length distribution (compared to DPO), which is a desired effect. However, this phenomenon starts to diminish as we include worse-performing hyperparameters. For example, LN-DPO has a higher rate than DPO in the tail end of the top $25\%$ distribution. Overall, we observed that both length-normalized methods outperform DPO, with SimPO producing the shortest responses across the distribution.

# 7.3 KL Divergence (vs. SFT)

Since reference-free methods are not normalized against a reference policy (e.g., the SFT model), reward hacking might occur (i.e., lower loss with degraded performance). Therefore, we compare the KL divergence in Figure 3 across samples generated by the top $k\%$ ($k \in \{1, 10, 25\}$) of each method's best-performing hyperparameters. As evident, both SimPO and LN-DPO achieve lower KLs at their peak. However, as we move toward worse-performing models, DPO achieves lower KL (at the top $10\%$). This phenomenon is due to many DPO runs failing to learn beyond the SFT model.

# 8 When to use LN-DPO over SimPO?

While SimPO achieves superior performance on most metrics compared to LN-DPO, the lack of reference-policy regularization could lead to drastic divergence from the initial checkpoint, as also shown in our experiments. This issue could then cause a degradation of performance on other benchmarks, which is a critical pitfall (as also observed by Korbak et al. (2022)). As such, we believe there are various scenarios where LN-DPO should be preferred to SimPO. We leave further experiments in this direction to future work.

# 9 Conclusion

In this work, we introduce LN-DPO, a length-normalized variation of DPO that reduces the average response length while staying reference-dependent. Moreover, we present a thorough analysis of LN-DPO and two state-of-the-art reference-dependent and reference-free preference optimization methods in a simulated real-world scenario covering the safety and helpfulness domains. Specifically, we cover the behavior of these methods across a wide range of hyperparameters under metrics such as mean response length, KL divergence (vs. SFT), and win rate (vs. chosen and SFT). Our experiments showcase the strengths and weaknesses of state-of-the-art methods and provide insights for other practitioners.

# Limitations

Due to the extremely high cost of running such experiments (roughly 86,000 GPU-hours for the current experiments), in this work we only experimented with a small set of models, methods, and datasets. While this might limit generalizability, we believe such analyses are critical to help practitioners save costs. Moreover, since the conclusion of our experiments, new reward models with higher performance have been released (e.g., ArmoRM (Wang et al., 2024)); however, we still rely on older, smaller models to keep the evaluation tractable given the high number of runs.

# Acknowledgements

This work was partially funded by the Defense Advanced Research Projects Agency under award HR00112220046.

# References

Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. 2024. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219.

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.

AI@Meta. 2024. Llama 3 model card.

Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello. 2024. A general theoretical paradigm to understand learning from human preferences. In International Conference on Artificial Intelligence and Statistics, pages 4447-4455. PMLR.

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073.

Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 610-623.

Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324-345.

Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, and Bryan Catanzaro. 2024. Odin: Disentangled reward mitigates hacking in rlhf. arXiv preprint arXiv:2402.07319.

Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E Gonzalez, et al. 2024. Chatbot arena: An open platform for evaluating llms by human preference. arXiv preprint arXiv:2403.04132.

Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30.

Together Computer. 2023. Redpajama: An open source recipe to reproduce llama training dataset.

Josef Dai, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, and Yaodong Yang. 2024. Safe rlhf: Safe reinforcement learning from human feedback. In The Twelfth International Conference on Learning Representations.

Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 2024. Kto: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306.

Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, et al. 2022. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858.

Leo Gao, John Schulman, and Jacob Hilton. 2023. Scaling laws for reward model overoptimization. In International Conference on Machine Learning, pages 10835-10866. PMLR.

Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020. Realtoxicityprompts: Evaluating neural toxic degeneration in language models. arXiv preprint arXiv:2009.11462.

Jiwoo Hong, Noah Lee, and James Thorne. 2024. Reference-free monolithic preference optimization with odds ratio. arXiv preprint arXiv:2403.07691.

Shengyi Huang, Michael Noukhovitch, Arian Hosseini, Kashif Rasul, Weixun Wang, and Lewis Tunstall. 2024a. The n+ implementation details of rlhf with ppo: A case study on tl;dr summarization. arXiv preprint arXiv:2403.17031.

Shengyi Costa Huang, Tianlin Liu, and Leandro von Werra. 2024b. The n implementation details of rlhf with ppo. In *ICLR Blogposts* 2024. https://d2jud02ci9yv69.cloudfront.net/2024-05-07-the-n-implementation-details-of-rlhf-with-ppo130/blog/the-n-implementation-details-of-rlhf-with-ppo/.

Hamish Ivison, Yizhong Wang, Jiacheng Liu, Zeqiu Wu, Valentina Pyatkin, Nathan Lambert, Noah A Smith, Yejin Choi, and Hannaneh Hajishirzi. 2024. Unpacking dpo and ppo: Disentangling best practices for learning from preference feedback. arXiv preprint arXiv:2406.09279.

Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi Rui Tam, Keith Stevens, Abdullah Barhoum, Duc Nguyen, Oliver Stanley, Richard Nagyfi, et al. 2024. Openassistant conversations: Democratizing large language model alignment. Advances in Neural Information Processing Systems, 36.

Tomasz Korbak, Ethan Perez, and Christopher Buckley. 2022. RL with KL penalties is better viewed as Bayesian inference. In *Findings of the Association for Computational Linguistics: EMNLP* 2022, pages 1083-1091, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step. arXiv preprint arXiv:2305.20050.

Yu Meng, Mengzhou Xia, and Danqi Chen. 2024. Simpo: Simple preference optimization with a reference-free reward. arXiv preprint arXiv:2405.14734.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730-27744.

Ryan Park, Rafael Rafailov, Stefano Ermon, and Chelsea Finn. 2024. Disentangling length from quality in direct preference optimization. arXiv preprint arXiv:2403.19159.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32.

Guilherme Penedo, Hynek Kydlíček, Loubna Ben Allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, and Thomas Wolf. 2024. The fineweb datasets: Decanting the web for the finest text data at scale. Preprint, arXiv:2406.17557.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2024. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36.

Corby Rosset, Ching-An Cheng, Arindam Mitra, Michael Santacroce, Ahmed Awadallah, and Tengyang Xie. 2024. Direct nash optimization: Teaching language models to self-improve with general preferences. arXiv preprint arXiv:2404.03715.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.

Joar Skalse, Nikolaus Howe, Dmitrii Krasheninnikov, and David Krueger. 2022. Defining and characterizing reward gaming. Advances in Neural Information Processing Systems, 35:9460-9471.

Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008-3021.

Yunhao Tang, Zhaohan Daniel Guo, Zeyu Zheng, Daniele Calandriello, Rémi Munos, Mark Rowland, Pierre Harvey Richemond, Michal Valko, Bernardo Ávila Pires, and Bilal Piot. 2024. Generalized preference optimization: A unified approach to offline alignment. arXiv preprint arXiv:2402.05749.

Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Stanford alpaca: An instruction-following llama model.

InternLM Team. 2023. Internlm: A multilingual language model with progressively enhanced capabilities.

Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, and Shengyi Huang. 2020. Trl: Transformer reinforcement learning. https://github.com/huggingface/trl.

Haoxiang Wang, Wei Xiong, Tengyang Xie, Han Zhao, and Tong Zhang. 2024. Interpretable preferences via multi-objective reward modeling and mixture-of-experts. arXiv preprint arXiv:2406.12845.
|
| 220 |
+
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
|
| 221 |
+
Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, and Danqi Chen. 2024. Less: Selecting influential data for targeted instruction tuning. arXiv preprint arXiv:2402.04333.
|
| 222 |
+
Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. 2024a. Contrastive preference optimization: Pushing the boundaries of llm performance in machine translation. arXiv preprint arXiv:2401.08417.
|
| 223 |
+
Shusheng Xu, Wei Fu, Jiaxuan Gao, Wenjie Ye, Weilin Liu, Zhiyu Mei, Guangju Wang, Chao Yu, and Yi Wu. 2024b. Is dpo superior to ppo for llm alignment? a comprehensive study. arXiv preprint arXiv:2404.10719.
|
| 224 |
+
An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Hao ran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang
|
| 225 |
+
|
| 226 |
+
Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zhihao Fan. 2024. Qwen2 technical report. arXiv preprint arXiv:2407.10671.
|
| 227 |
+
|
| 228 |
+
Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2024. Lima: Less is more for alignment. Advances in Neural Information Processing Systems, 36.
|
| 229 |
+
|
| 230 |
+
# A Extended Related Work
Online Algorithms. Reinforcement learning from human/AI feedback (RLHF/RLAIF) is among the most common approaches for aligning LLMs with human preferences (Christiano et al., 2017; Bai et al., 2022a; Stiennon et al., 2020; Bai et al., 2022b), and has been used to train models such as GPT-4 (Achiam et al., 2023) and Llama-3 (AI@Meta, 2024). In most cases, these approaches comprise three stages: 1) supervised finetuning (Taori et al., 2023; Zhou et al., 2024; Xia et al., 2024), 2) reward modeling (Gao et al., 2023; Chen et al., 2024; Lightman et al., 2023), and 3) policy optimization (Schulman et al., 2017). The prominent method for policy optimization is Proximal Policy Optimization (PPO), an online on-policy approach (Schulman et al., 2017). While PPO has shown promising performance (Stiennon et al., 2020; Ouyang et al., 2022; Achiam et al., 2023), it suffers from 1) having too many subtle implementation details to reproduce reliably (Huang et al., 2024b), 2) long training times (Huang et al., 2024a), and 3) reward over-optimization (Skalse et al., 2022).
Offline Algorithms. To address the drawbacks of RLHF/RLAIF, recent works have proposed simpler and more efficient offline algorithms, most notably Direct Preference Optimization (DPO) (Rafailov et al., 2024), which is based on the Bradley-Terry model (Bradley and Terry, 1952). These offline algorithms directly optimize an objective on the preference data with an implicit reward model, without requiring separate stages. Some recent works have made broad comparisons between PPO and DPO. Specifically, they showcase the potential of PPO with a gold reward model $(\sim +10\%)$ while underlining its similarity to DPO $(\sim +1\%$ averaged across benchmarks) when trained on the same data (Ivison et al., 2024; Xu et al., 2024b).
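Concretely, DPO's implicit reward is the $\beta$-scaled log-ratio between policy and reference probabilities, plugged into a Bradley-Terry logistic loss. A minimal pure-Python sketch of the per-example loss (function and argument names are illustrative; a real implementation operates on batched token log-probabilities):

```python
import math

def neg_log_sigmoid(x):
    # Numerically stable -log(sigmoid(x)) = softplus(-x).
    return max(-x, 0.0) + math.log1p(math.exp(-abs(x)))

def dpo_loss(pi_w, pi_l, ref_w, ref_l, beta=0.05):
    """Per-example DPO loss.

    pi_w / pi_l  : summed log-prob of the chosen / rejected response under the policy
    ref_w / ref_l: the same log-probs under the frozen reference model
    """
    # Implicit rewards are the beta-scaled policy/reference log-ratios.
    margin = beta * ((pi_w - ref_w) - (pi_l - ref_l))
    return neg_log_sigmoid(margin)
```

When the policy and the reference agree (zero margin), the loss is $\log 2$; it decreases as the policy puts relatively more mass on the chosen response than on the rejected one.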
# B Hyperparameter Tuning Considerations
DPO. As presented in Figure 4, lower $\beta$ leads to higher performance; however, as $\beta$ decreases, the performance variance increases, showcasing the method's instability. Overall, $\beta = 0.05$ provides the best balance of stability and performance.
LN-DPO. While we initially borrowed the range for $\beta$ from SimPO (Meng et al., 2024), further experiments showed benefits in decreasing its value further. Figure 5 presents the performance spread across runs. From these experiments, $\beta \in [1.0, 2.0]$ contains most of the best-performing models. Moreover, we observe a relatively low variance (compared to DPO) across performances, showcasing another benefit of LN-DPO.
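LN-DPO keeps DPO's policy/reference log-ratios but normalizes each by its response length, removing length from the implicit reward. A hedged sketch under that formulation (names are illustrative, not the paper's code):

```python
import math

def neg_log_sigmoid(x):
    # Numerically stable -log(sigmoid(x)) = softplus(-x).
    return max(-x, 0.0) + math.log1p(math.exp(-abs(x)))

def ln_dpo_loss(pi_w, pi_l, ref_w, ref_l, len_w, len_l, beta=1.5):
    # Each log-ratio is divided by its response's token count, so longer
    # responses cannot inflate the implicit reward.
    margin = beta * ((pi_w - ref_w) / len_w - (pi_l - ref_l) / len_l)
    return neg_log_sigmoid(margin)
```

Because the rewards are per-token averages, $\beta$ lives on a different scale than in DPO, which is consistent with the much larger $\beta \in [1.0, 2.0]$ range found above.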
SimPO. In contrast to the other two methods, SimPO has two method-specific hyperparameters: $\beta$ and $\gamma$. As illustrated in Figure 6, lower $\beta$ values lead to better performance on average. We believe the performance uptick in the lower range is due to a difference in average length between this work's training set and the original work's. Moreover, as showcased in Figure 7, the best-performing models have $\gamma \in [1.0, 1.4]$, in line with the suggestion by Meng et al. (2024). Notably, performance varies relatively little across $\beta$ and $\gamma$ settings, another upside of SimPO.
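SimPO drops the reference model entirely: the implicit reward is the length-normalized policy log-probability scaled by $\beta$, and $\gamma$ is a target reward margin subtracted inside the sigmoid (Meng et al., 2024). A minimal sketch (names are illustrative):

```python
import math

def neg_log_sigmoid(x):
    # Numerically stable -log(sigmoid(x)) = softplus(-x).
    return max(-x, 0.0) + math.log1p(math.exp(-abs(x)))

def simpo_loss(pi_w, pi_l, len_w, len_l, beta=1.0, gamma=1.2):
    # Reference-free, length-normalized rewards; gamma enforces a minimum
    # reward gap between the chosen and rejected responses.
    margin = beta * (pi_w / len_w - pi_l / len_l) - gamma
    return neg_log_sigmoid(margin)
```

With $\beta \in \{1.0, 1.5\}$ and $\gamma \approx 1.2$, a pair only yields a small loss once the chosen response beats the rejected one by more than $\gamma / \beta$ in average per-token log-probability.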
# C The Answer to the Ultimate Question
Based on our collective empirical results, we believe SimPO to be the best starting point among the three methods, mainly due to its robustness toward hyperparameter variations and effective length reduction. As for SimPO's hyperparameters, we recommend $\beta \in \{1.0, 1.5\}$ and $\gamma \approx 1.2$ . Moreover, while LN-DPO is consistently second-best in most of our experiments, we discuss scenarios for choosing it over SimPO in Section 8.

Figure 4: DPO $\beta$ . Each point indicates a run with the corresponding $\beta$ value.


Figure 5: LN-DPO $\beta$ . Each point indicates a run with the corresponding $\beta$ value.


Figure 6: SimPO $\beta$ . Each point indicates a run with the corresponding $\beta$ value.


Figure 7: SimPO $\gamma$ . Each point indicates a run with the corresponding $\gamma$ value.
2025/A Practical Analysis of Human Alignment with _PO/images.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e1ca577e99f1e82d31443ae5108d6f317b3e3eddded77e27661fb87e762c5b76
size 399071

2025/A Practical Analysis of Human Alignment with _PO/layout.json ADDED
The diff for this file is too large to render. See raw diff.

2025/A Practical Examination of AI-Generated Text Detectors for Large Language Models/b14cf0fa-1fb0-49d3-8f53-f8604a27965e_content_list.json ADDED
The diff for this file is too large to render. See raw diff.